## Chapter 1 Introduction and overview

### 1.1 Some history

Chaotic dynamics may be said to have started with the work of the French mathematician Henri Poincaré at about the turn of the century. Poincaré's motivation was partly provided by the problem of the orbits of three celestial bodies experiencing mutual gravitational attraction (e.g., a star and two planets). By considering the behavior of orbits arising from _sets_ of initial points (rather than focusing on _individual_ orbits), Poincaré was able to show that very complicated (now called chaotic) orbits were possible. Subsequent noteworthy early mathematical work on chaotic dynamics includes that of G. Birkhoff in the 1920s, M. L. Cartwright and J. E. Littlewood in the 1940s, S. Smale in the 1960s, and Soviet mathematicians, notably A. N. Kolmogorov and his coworkers. In spite of this work, however, the possibility of chaos in real physical systems was not widely appreciated until relatively recently. The reasons for this were, first, that the mathematical papers are difficult for workers in other fields to read and, second, that the theorems proven were often not strong enough to convince researchers in these other fields that this type of behavior would be important in their systems. The situation has now changed drastically, and much of the credit for this can be ascribed to the extensive numerical solution of dynamical systems on digital computers. Using such solutions, the chaotic character of the time evolutions in situations of practical importance has become dramatically clear. Furthermore, in a numerical solution the complexity of the dynamics cannot be blamed on unknown extraneous experimental effects, as might be the case when dealing with an actual physical system. In this chapter, we shall provide some of the phenomenology of chaos and will introduce some of the more basic concepts. The aim is to provide a motivating overview[1] in preparation for the more detailed treatments to be pursued in the rest of this book.
### 1.2 Examples of chaotic behavior

Most students of science or engineering have seen examples of dynamical behavior which can be fully analyzed mathematically and in which the system eventually (after some transient period) settles either into periodic motion (a limit cycle) or into a steady state (i.e., a situation in which the system ceases its motion). When one relies on being able to specify an orbit analytically, these two cases will typically (and falsely) appear to be the only important motions. The point is that chaotic orbits are also very common but cannot be represented using standard analytical functions. Chaotic motions are neither steady nor periodic. Indeed, they appear to be very complex, and, when viewing such motions, adjectives like wild, turbulent, and random come to mind. In spite of the complexity of these motions, they commonly occur in systems which themselves are not complex and are even surprisingly simple. (In addition to steady state, periodic and chaotic motion, there is a fourth common type of motion, namely quasiperiodic motion. We defer our discussion of quasiperiodicity to Chapter 6.) Before giving a definition of chaos we first present some examples and background material. As a first example of chaotic motion, we consider an experiment of Moon and Holmes (1979). The apparatus is shown in Figure 1.1. When the apparatus is at rest, the steel beam has two stable steady state equilibria: either the tip of the beam is deflected toward the left magnet or toward the right magnet. In the experiment, the horizontal position of the apparatus was oscillated sinusoidally with time. Under certain conditions, when this was done, the tip of the steel beam was observed to oscillate in a very irregular manner. As an indication of this very irregular behavior, Figure 1.2(_a_) shows the output signal of a strain gauge attached to the beam (Figure 1.1). 
Although the apparatus appears to be very simple, one might attribute the observed complicated motion to complexities in the physical situation, such as the excitation of higher order vibrational modes in the beam, possible noise in the sinusoidal shaking device, etc. To show that it is not necessary to invoke such effects, Moon and Holmes considered a simple model for their experiment, namely, the forced Duffing equation in the following form, \[\frac{\mathrm{d}^{2}y}{\mathrm{d}t^{2}}+\nu\frac{\mathrm{d}y}{\mathrm{d}t}+(y^{3}-y)=g\sin t. \tag{1.1}\] In Eq. (1.1), the first two terms represent the inertia of the beam and dissipative effects, while the third term represents the effects of the magnets and the elastic force. The sinusoidal term on the right hand side represents the shaking of the apparatus. In the absence of shaking (\(g=0\)), Eq. (1.1) possesses two stable steady states, \(y=1\) and \(y=-1\), corresponding to the two previously mentioned stable steady states of the beam. (There is also an unstable steady state \(y=0\).) Figure 1.2(\(b\)) shows the results of a numerical solution of Eq. (1.1) on a digital computer for a particular choice of \(\nu\) and \(g\). We observe that the results of the physical experiment are qualitatively similar to those of the numerical solution. Figure 1.1: The apparatus of Moon and Holmes (1979). Figure 1.2: (\(a\)) Signal from the strain gauge. (\(b\)) Numerical solution of Eq. (1.1) (Moon and Holmes, 1979). Thus, it is unnecessary to invoke complicated physical processes to explain the observed complicated motion. As a second example, we consider the experiment of Shaw (1984) illustrated schematically in Figure 1.3. In this experiment, a slow steady inflow of water to a 'faucet' was maintained. Water drops fall from the faucet, and the times at which successive drops pass a sensing device are recorded. Thus, the data consist of the discrete set of times \(t_{1}\), \(t_{2}\), ..., \(t_{n}\), ... 
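Equation (1.1) is easy to explore numerically. The following is a minimal sketch using a fixed-step fourth-order Runge-Kutta integrator; the parameter values \(\nu=0.25\) and \(g=0.3\), the initial condition, and the step size are illustrative assumptions, not necessarily the values used by Moon and Holmes.

```python
import math

NU, G = 0.25, 0.3  # assumed damping and forcing amplitude (illustrative only)

def rhs(t, y):
    # Duffing equation (1.1) as a first-order system: y = (position, velocity)
    pos, vel = y
    return (vel, G * math.sin(t) - NU * vel - (pos ** 3 - pos))

def rk4_step(t, y, h):
    # one step of the classical fourth-order Runge-Kutta scheme
    k1 = rhs(t, y)
    k2 = rhs(t + h / 2, tuple(a + h / 2 * b for a, b in zip(y, k1)))
    k3 = rhs(t + h / 2, tuple(a + h / 2 * b for a, b in zip(y, k2)))
    k4 = rhs(t + h, tuple(a + h * b for a, b in zip(y, k3)))
    return tuple(a + h / 6 * (p + 2 * q + 2 * r + s)
                 for a, p, q, r, s in zip(y, k1, k2, k3, k4))

h, t, y = 0.01, 0.0, (0.1, 0.0)
positions = []
for _ in range(20_000):  # integrate to t = 200
    y = rk4_step(t, y, h)
    t += h
    positions.append(y[0])
```

Plotting `positions` against time gives an irregular trace qualitatively like Figure 1.2(_b_); since the system is dissipative, the orbit remains bounded for all time.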
at which drops were observed by the sensor. From these data, the time intervals between successive drops can be formed, \(\Delta t_{n}\equiv t_{n+1}-t_{n}\). When the inflow rate to the faucet is sufficiently small, the time intervals \(\Delta t_{n}\) are all equal. As the inflow rate is increased, the time interval sequence becomes periodic, with a short interval \(\Delta t_{a}\) followed by a longer interval \(\Delta t_{b}\), so that the sequence of time intervals is of the form ..., \(\Delta t_{a}\), \(\Delta t_{b}\), \(\Delta t_{a}\), \(\Delta t_{b}\), \(\Delta t_{a}\), .... We call this a period two sequence since \(\Delta t_{n}=\Delta t_{n+2}\). As the inflow rate was further increased, periodic sequences of longer and longer periods were observed, until, at sufficiently large inflow rate, the sequence \(\Delta t_{1}\), \(\Delta t_{2}\), \(\Delta t_{3}\), ... apparently has no regularity. This irregular sequence is argued to be due to chaotic dynamics. As a third example, we consider the problem of chaotic Rayleigh-Bénard convection, originally studied theoretically and computationally in the seminal paper of Lorenz (1963) and experimentally by, for example, Ahlers and Behringer (1978), Gollub and Benson (1980), Bergé _et al._ (1980) and Libchaber and Maurer (1980). In Rayleigh-Bénard convection, one considers a fluid contained between two rigid plates and subjected to gravity, as shown in Figure 1.4. The bottom plate is maintained at a higher temperature \(T_{0}+\Delta T\) than the temperature \(T_{0}\) of the top plate. As a result, the fluid near the warmer lower plate expands, and buoyancy creates a tendency for this fluid to rise. Similarly, the cooler, more dense fluid near the top plate has a tendency to fall. Figure 1.3: Schematic illustration of the experiment of Shaw (1984). 
While Lorenz's equations are too idealized a model to describe the experiments accurately, in the case where the experiments were done with vertical bounding side walls situated at a spacing of two to three times the distance between the horizontal walls, there was a degree of qualitative correspondence between the model and the experiments. In particular, in this case, for some range of values of the temperature difference \(\Delta T\), the experiments show that the fluid will execute a _steady_ convective cellular flow, as shown in the figure. At a somewhat larger value of the temperature difference, the flow becomes time dependent, and this time dependence is chaotic. This general behavior is also predicted by Lorenz's paper. From these simple examples, it is clear that chaos should be expected to be a very common basic dynamical state in a wide variety of systems. Indeed, chaotic dynamics has by now been shown to be of potential importance in many different fields including fluids [2], plasmas [3], solid state devices [4], circuits [5], lasers [6], mechanical devices [7], biology [8], chemistry [9], acoustics [10], celestial mechanics [11], etc. In both the dripping faucet example and the Rayleigh-Bénard convection example, our discussions indicated a situation as shown schematically in Figure 1.5. Namely, there was a system parameter, labeled \(p\) in Figure 1.5, such that, at a value \(p=p_{1}\), the motion is observed to be nonchaotic, and at another value \(p=p_{2}\), the motion is chaotic. (For the faucet example, \(p\) is the inflow rate, while for the example of Rayleigh-Bénard convection, \(p\) is the temperature difference \(\Delta T\).) The natural question raised by Figure 1.5 is _how does chaos come about as the parameter \(p\) is varied continuously from \(p_{1}\) to \(p_{2}\)_? That is, how do the dynamical motions of the system evolve with continuous variation of \(p\) from \(p_{1}\) to \(p_{2}\)? 
This question of the _routes to chaos_ [12] will be considered in detail in Chapter 8. Figure 1.5: Schematic illustration of the question of the transition to chaos with variation of a system parameter.

### 1.3 Dynamical systems

A _dynamical system_ may be defined as a deterministic mathematical prescription for evolving the state of a system forward in time. Time here either may be a continuous variable, or else it may be a discrete integer valued variable. An example of a dynamical system in which time (denoted \(t\)) is a continuous variable is a system of \(N\) first order, autonomous, ordinary differential equations, \[\left.\begin{array}{l}\mathrm{d}x^{(1)}/\mathrm{d}t=F_{1}(x^{(1)},\,x^{(2)},\,\ldots,\,x^{(N)}),\\ \mathrm{d}x^{(2)}/\mathrm{d}t=F_{2}(x^{(1)},\,x^{(2)},\,\ldots,\,x^{(N)}),\\ \vdots\\ \mathrm{d}x^{(N)}/\mathrm{d}t=F_{N}(x^{(1)},\,x^{(2)},\,\ldots,\,x^{(N)}),\end{array}\right\} \tag{1.2}\] which we shall often write in vector form as \[\mathrm{d}\mathbf{x}(t)/\mathrm{d}t=\mathbf{F}[\mathbf{x}(t)], \tag{1.3}\] where \(\mathbf{x}\) is an \(N\) dimensional vector. This is a dynamical system because, for any initial state of the system \(\mathbf{x}(0)\), we can in principle solve the equations to obtain the future system state \(\mathbf{x}(t)\) for \(t>0\). Figure 1.6 shows the path followed by the system state as it evolves with time in a case where \(N=3\). The space (\(x^{(1)}\), \(x^{(2)}\), \(x^{(3)}\)) in the figure is referred to as _phase space_, and the path in phase space followed by the system as it evolves with time is referred to as an _orbit_ or _trajectory_. Also, it is common to refer to a continuous time dynamical system as a _flow_. (This latter terminology is apparently motivated by considering the trajectories generated by _all_ the initial conditions in the phase space as roughly analogous to the paths followed by the particles of a flowing fluid.) Figure 1.6: An orbit in a three dimensional (\(N=3\)) phase space. 
In the case of discrete, integer valued time (with \(n\) denoting the time variable, \(n=0\), \(1\), \(2\), \(\ldots\)), an example of a dynamical system is a map, which we write in vector form as \[\mathbf{x}_{n+1}=\mathbf{M}(\mathbf{x}_{n}), \tag{1.4}\] where \(\mathbf{x}_{n}\) is \(N\) dimensional, \(\mathbf{x}_{n}=(x_{n}^{(1)}\), \(x_{n}^{(2)}\), \(\ldots\), \(x_{n}^{(N)})\). Given an initial state \(\mathbf{x}_{0}\), we obtain the state at time \(n=1\) by \(\mathbf{x}_{1}=\mathbf{M}(\mathbf{x}_{0})\). Having determined \(\mathbf{x}_{1}\), we can then determine the state at \(n=2\) by \(\mathbf{x}_{2}=\mathbf{M}(\mathbf{x}_{1})\), and so on. Thus, given an initial condition \(\mathbf{x}_{0}\), we generate an orbit (or trajectory) of the discrete time system: \(\mathbf{x}_{0}\), \(\mathbf{x}_{1}\), \(\mathbf{x}_{2}\), \(\ldots\). As we shall see, a continuous time system of dimensionality \(N\) can often profitably be reduced to a discrete time map of dimensionality \(N-1\) via the Poincaré surface of section technique. It is reasonable to conjecture that the complexity of the possible structure of orbits can be greater for larger system dimensionality. Thus, a natural question is _how large does \(N\) have to be in order for chaos to be possible_? For the case of \(N\) first order autonomous ordinary differential equations, the answer is that \[N\geq 3 \tag{1.5}\] is sufficient.[13] Thus, if one is given an autonomous first order system with \(N=2\), chaos can be ruled out immediately. Example: Consider the forced damped pendulum equation (cf. Figure 1.7) \[\frac{\mathrm{d}^{2}\theta}{\mathrm{d}t^{2}}+\nu\frac{\mathrm{d}\theta}{\mathrm{d}t}+\sin\theta=T\sin(2\pi ft), \tag{1.6a}\] where the first term represents inertia, the second, friction at the pivot, the third, gravity, and the term on the right hand side represents a sinusoidal torque applied at the pivot. (This equation also describes the behavior of a simple Josephson junction circuit.) 
We ask: is chaos ruled out for the driven damped pendulum equation? To answer this question, we put the equation (which is second order and nonautonomous) into first order autonomous form by the substitution \[x^{(1)}=\mathrm{d}\theta/\mathrm{d}t,\qquad x^{(2)}=\theta,\qquad x^{(3)}=2\pi ft.\] (Note that, since both \(x^{(2)}\) and \(x^{(3)}\) appear in Eq. (1.6a) as the argument of a sine function, they can be regarded as angles and may, if desired, be defined to lie between \(0\) and \(2\pi\).) The driven damped pendulum equation then yields the following first order autonomous system, \[\begin{split}\mathrm{d}x^{(1)}/\mathrm{d}t&=T\sin x^{(3)}-\sin x^{(2)}-\nu x^{(1)},\\ \mathrm{d}x^{(2)}/\mathrm{d}t&=x^{(1)},\\ \mathrm{d}x^{(3)}/\mathrm{d}t&=2\pi f.\end{split} \tag{1.6b}\] Figure 1.7: Forced, damped pendulum. Since \(N=3\), chaos is not ruled out. Indeed, numerical solutions show that both chaotic and periodic solutions of the driven damped pendulum equation are possible, depending on the particular choice of system parameters \(\nu\), \(T\) and \(f\). We now consider the question of the required dimensionality for chaos for the case of maps. In this case, we must distinguish between invertible and noninvertible maps. We say the map \(\mathbf{M}\) is invertible if, given \(\mathbf{x}_{n+1}\), we can solve \(\mathbf{x}_{n+1}=\mathbf{M}(\mathbf{x}_{n})\) uniquely for \(\mathbf{x}_{n}\). If this is so, we denote the solution for \(\mathbf{x}_{n}\) as \[\mathbf{x}_{n}=\mathbf{M}^{-1}(\mathbf{x}_{n+1}), \tag{1.7}\] and we call \(\mathbf{M}^{-1}\) the inverse of \(\mathbf{M}\). For example, consider the one dimensional (\(N=1\)) map [14], \[M(x)=rx(1-x), \tag{1.8}\] which is commonly called the 'logistic map.' As shown in Figure 1.8, this map is not invertible because for a given \(x_{n+1}\) there are two possible values of \(x_{n}\) from which it could have come. 
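The noninvertibility of the logistic map can be checked directly: solving \(rx(1-x)=x_{n+1}\) for \(x_{n}\) gives two roots symmetric about \(x=1/2\). A small sketch, with the illustrative choice \(r=4\):

```python
import math

def logistic(x, r=4.0):
    # the logistic map, Eq. (1.8); r = 4 is an illustrative choice
    return r * x * (1.0 - x)

def preimages(y, r=4.0):
    # solve r*x*(1 - x) = y for x: two roots symmetric about x = 1/2
    s = math.sqrt(1.0 - 4.0 * y / r) / 2.0
    return (0.5 - s, 0.5 + s)

# two distinct points, x = 0.25 and x = 0.75, that both map to 0.75
xa, xb = preimages(0.75)
```

Since two different states `xa` and `xb` have the same image, no single-valued inverse map exists.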
On the other hand, consider the two dimensional map, \[\begin{split}x^{(1)}_{n+1}&=f(x^{(1)}_{n})-Jx^{(2)}_{n},\\ x^{(2)}_{n+1}&=x^{(1)}_{n}.\end{split} \tag{1.9}\] This map is clearly invertible as long as \(J\neq 0\), \[\begin{split}x^{(1)}_{n}&=x^{(2)}_{n+1},\\ x^{(2)}_{n}&=J^{-1}[f(x^{(2)}_{n+1})-x^{(1)}_{n+1}].\end{split} \tag{1.10}\] Figure 1.8: Noninvertibility of the logistic map. We can now state the dimensionality requirements on maps. If the map is invertible, then there can be no chaos unless \[N\geq 2. \tag{1.11}\] If the map is noninvertible, chaos is possible even in one dimensional maps. Indeed, the logistic map, Eq. (1.8), exhibits chaos for large enough \(r\). It is often useful to reduce a continuous time system (or 'flow') to a discrete time map by a technique called the Poincaré surface of section method. We consider \(N\) first order autonomous ordinary differential equations (Eq. (1.2)). The 'Poincaré map' represents a reduction of the \(N\) dimensional flow to an \((N-1)\) dimensional map. For illustrative purposes, we take \(N=3\) and illustrate the construction in Figure 1.9. Consider a solution of (1.2). Now, choose some appropriate \((N-1)\) dimensional surface (the 'surface of section') in the \(N\) dimensional phase space, and observe the intersections of the orbit with the surface. In Figure 1.9, the surface of section is the plane \(x^{(3)}=K\), but we emphasize that in general the choice of the surface can be tailored in a convenient way to the particular problem. Points \(A\) and \(B\) represent two successive crossings of the surface of section. Point \(A\) uniquely determines point \(B\), because \(A\) can be used as an initial condition in (1.2) to determine \(B\). Likewise, \(B\) uniquely determines \(A\) by reversing time in (1.2) and using \(B\) as the initial condition. 
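The invertibility expressed by Eqs. (1.9) and (1.10) can be verified concretely: applying the inverse after the forward map recovers the original point whenever \(J\neq 0\). A sketch with the illustrative (assumed) choices \(f(x)=1.4-x^{2}\) and \(J=0.3\):

```python
def forward(x1, x2, f, J):
    # Eq. (1.9): (x1, x2) -> (f(x1) - J*x2, x1)
    return (f(x1) - J * x2, x1)

def inverse(x1p, x2p, f, J):
    # Eq. (1.10): recover the preimage, assuming J != 0
    return (x2p, (f(x2p) - x1p) / J)

f = lambda x: 1.4 - x * x   # an illustrative choice of f
J = 0.3
pt = (0.3, -0.2)
roundtrip = inverse(*forward(*pt, f, J), f, J)  # recovers pt
```

The round trip returns the starting point to within floating-point error, confirming that Eq. (1.10) is the inverse of Eq. (1.9).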
Thus, the Poincaré map in this illustration represents an invertible two dimensional map transforming the coordinates \((x^{(1)}_{n},\,x^{(2)}_{n})\) of the \(n\)th piercing of the surface of section to the coordinates \((x^{(1)}_{n+1},\,x^{(2)}_{n+1})\) at piercing \(n+1\). Figure 1.9: A Poincaré surface of section. This equivalence of an \(N\) dimensional flow with an \((N-1)\) dimensional invertible map shows that the requirement Eq. (1.11) for chaos in a map follows from the requirement Eq. (1.5) for chaos in a flow. Another way to create a map from the flow generated by the system of autonomous differential equations (1.3) is to sample the flow at discrete times \(t_{n}=t_{0}+nT\) (\(n=0\), \(1\), \(2\), \(\ldots\)), where the sampling interval \(T\) can be chosen on the basis of convenience. Thus, a continuous time trajectory \(\mathbf{x}(t)\) yields a discrete time trajectory \(\mathbf{x}_{n}\equiv\mathbf{x}(t_{n})\). The quantity \(\mathbf{x}_{n+1}\) is uniquely determined from \(\mathbf{x}_{n}\), since we can use \(\mathbf{x}_{n}\) as an initial condition in Eq. (1.3) and integrate the equations forward for an amount of time \(T\) to determine \(\mathbf{x}_{n+1}\). Thus, in principle, we have a map \(\mathbf{x}_{n+1}=\mathbf{M}(\mathbf{x}_{n})\). We call this map the time \(T\) map. The time \(T\) map is invertible (like the Poincaré map), since the differential equations (1.3) can be integrated backward in time. Unlike the Poincaré map, the dimensionality of the time \(T\) map is the same as that of the flow.

### 1.4 Attractors

In Hamiltonian systems (cf. Chapter 7), such as arise in Newton's equations for the motion of particles without friction, there are choices of the phase space variables (e.g., the canonically conjugate position and momentum variables) such that phase space volumes are preserved under the time evolution. 
That is, if we choose an initial (\(t=0\)) closed (\(N-1\)) dimensional surface \(S_{0}\) in the \(N\) dimensional \(\mathbf{x}\) phase space, and then evolve each point on the surface \(S_{0}\) forward in time by using them as initial conditions in Eq. (1.3), then the closed surface \(S_{0}\) evolves to a closed surface \(S_{t}\) at some later time \(t\), and the \(N\) dimensional volumes \(V(0)\) of the region enclosed by \(S_{0}\) and \(V(t)\) of the region enclosed by \(S_{t}\) are the same, \(V(t)=V(0)\). We call such a volume preserving system _conservative_. On the other hand, if the flow does not preserve volumes, and cannot be made to do so by a change of variables, then we say that the system is _nonconservative_. By the divergence theorem, we have that \[\mathrm{d}V(t)/\mathrm{d}t=\int_{S_{t}}\nabla\cdot\mathbf{F}\,\mathrm{d}^{N}x, \tag{1.12}\] where \(\int_{S_{t}}\) signifies the integral over the volume interior to the surface \(S_{t}\), and \(\nabla\cdot\mathbf{F}\equiv\sum_{i=1}^{N}\partial F_{i}(x^{(1)},\,\ldots,\,x^{(N)})/\partial x^{(i)}\). For example, for the forced damped pendulum equation written in first order autonomous form, Eq. (1.6b), we have that \(\nabla\cdot\mathbf{F}=-\nu\), which is independent of the phase space position \(\mathbf{x}\) and is negative. From (1.12), we have \(\mathrm{d}V(t)/\mathrm{d}t=-\nu V(t)\), so that \(V\) decreases exponentially with time, \(V(t)=\exp(-\nu t)V(0)\). In general, \(\nabla\cdot\mathbf{F}\) will be a function of phase space position \(\mathbf{x}\). If \(\nabla\cdot\mathbf{F}<0\) in some region of phase space (signifying volume contraction in that region), then we shall refer to the system as a _dissipative_ system. It is an important concept in dynamics that dissipative systems typically are characterized by the presence of attracting sets or _attractors_ in the phase space. These are bounded subsets to which regions of initial conditions of nonzero phase space volume asymptote as time increases. 
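The exponential contraction \(V(t)=\exp(-\nu t)V(0)\) can be checked numerically. The sketch below uses a damped linear oscillator, for which \(\nabla\cdot\mathbf{F}=-\nu\) and, because the flow is linear, triangles of initial conditions evolve exactly into triangles; the values \(\nu=0.5\), \(\omega^{2}=1\), and the triangle itself are arbitrary assumed choices.

```python
NU, OMEGA2 = 0.5, 1.0   # assumed damping and squared frequency

def rhs(y):
    # damped harmonic oscillator as a flow; the divergence of F is -NU everywhere
    x1, x2 = y
    return (x2, -NU * x2 - OMEGA2 * x1)

def evolve(y, h, steps):
    # fixed-step fourth-order Runge-Kutta integration
    for _ in range(steps):
        k1 = rhs(y)
        k2 = rhs(tuple(a + h / 2 * b for a, b in zip(y, k1)))
        k3 = rhs(tuple(a + h / 2 * b for a, b in zip(y, k2)))
        k4 = rhs(tuple(a + h * b for a, b in zip(y, k3)))
        y = tuple(a + h / 6 * (p + 2 * q + 2 * r + s)
                  for a, p, q, r, s in zip(y, k1, k2, k3, k4))
    return y

def area(p, q, r):
    # area of the triangle with vertices p, q, r
    return 0.5 * abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))

tri0 = [(1.0, 0.0), (1.1, 0.0), (1.0, 0.1)]      # a small triangle of initial conditions
t, h = 2.0, 0.001
tri_t = [evolve(p, h, int(t / h)) for p in tri0]
ratio = area(*tri_t) / area(*tri0)   # should be close to exp(-NU * t) = exp(-1)
```

The measured area ratio agrees with \(\exp(-\nu t)\) to integration accuracy, directly illustrating Eq. (1.12) for this constant-divergence flow.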
(Conservative dynamical systems do not have attractors; see the discussion of the Poincaré recurrence theorem in Chapter 7.) As an example of an attractor, consider the damped harmonic oscillator, \(\mathrm{d}^{2}y/\mathrm{d}t^{2}+r\,\mathrm{d}y/\mathrm{d}t+\omega^{2}y=0\). A typical trajectory in the phase space (\(x^{(1)}=y\), \(x^{(2)}=\mathrm{d}y/\mathrm{d}t\)) is shown in Figure 1.10(_a_). We see that, as time goes on, the orbit spirals into the origin, and this is true for any initial condition. Thus, in this case the origin, \(x^{(1)}=x^{(2)}=0\), is said to be the 'attractor' of the dynamical system. As a second example, Figure 1.10(_b_) shows the case of a limit cycle (the dashed curve). The initial condition (labeled \(\alpha\)) outside the limit cycle yields an orbit which, with time, spirals into the closed dashed curve, on which it circulates in periodic motion in the \(t\to+\infty\) limit. Similarly, the initial condition (labeled \(\beta\)) inside the limit cycle yields an orbit which spirals outward, asymptotically approaching the dashed curve. Thus, in this case, the dashed closed curve is the attractor. An example of an equation displaying a limit cycle attractor as illustrated in Figure 1.10(_b_) is the van der Pol equation, \[\frac{\mathrm{d}^{2}y}{\mathrm{d}t^{2}}+(y^{2}-\eta)\frac{\mathrm{d}y}{\mathrm{d}t}+\omega^{2}y=0. \tag{1.13}\] This equation was introduced in the 1920s as a model for a simple vacuum tube oscillator circuit. One can speak of conservative and dissipative _maps_. A conservative \(N\) dimensional map is one which preserves \(N\) dimensional phase space volumes on each iterate (or else can be made to do so by a suitable change of variables). Figure 1.10: (_a_) The attractor is the point at the origin. (_b_) The attractor is the closed dashed curve. 
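The convergence of orbits onto the van der Pol limit cycle, Eq. (1.13), is easy to observe numerically: initial conditions started inside and outside the cycle both settle onto oscillations of the same amplitude. A sketch with the assumed values \(\eta=1\) and \(\omega^{2}=1\):

```python
ETA, OMEGA2 = 1.0, 1.0   # assumed parameter values

def rhs(y):
    # van der Pol equation (1.13) as a first-order system
    pos, vel = y
    return (vel, -(pos * pos - ETA) * vel - OMEGA2 * pos)

def settled_amplitude(y, h, steps, record_from):
    # RK4 integration; record max |position| after a transient
    amp = 0.0
    for n in range(steps):
        k1 = rhs(y)
        k2 = rhs(tuple(a + h / 2 * b for a, b in zip(y, k1)))
        k3 = rhs(tuple(a + h / 2 * b for a, b in zip(y, k2)))
        k4 = rhs(tuple(a + h * b for a, b in zip(y, k3)))
        y = tuple(a + h / 6 * (p + 2 * q + 2 * r + s)
                  for a, p, q, r, s in zip(y, k1, k2, k3, k4))
        if n >= record_from:
            amp = max(amp, abs(y[0]))
    return amp

h, steps = 0.01, 8_000                                      # integrate to t = 80
amp_inside = settled_amplitude((0.1, 0.0), h, steps, 6_000)  # starts inside the cycle
amp_outside = settled_amplitude((3.0, 0.0), h, steps, 6_000) # starts outside the cycle
```

Both runs end up with essentially the same oscillation amplitude (close to 2 for these parameters), as expected when both orbits have been attracted onto the same limit cycle.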
A map is volume preserving if the magnitude of the determinant of its Jacobian matrix of partial derivatives is one, \[J(\mathbf{x})\equiv|\det[\partial\mathbf{M}(\mathbf{x})/\partial\mathbf{x}]|=1.\] For example, for a continuous time Hamiltonian system, a surface of section formed by setting one of the \(N\) canonically conjugate variables equal to a constant can be shown to yield a volume preserving map in the remaining \(N-1\) canonically conjugate variables (Chapter 7). On the other hand, if \(J(\mathbf{x})<1\) in some regions, then we say the map is dissipative and, as for flows, typically it can have attractors. For example, Figure 1.11 illustrates the Poincaré surface of section map for a three dimensional flow with a limit cycle. We see that, _for the map_, the two points \(A_{1}\) and \(A_{2}\) together constitute the attractor. That is, the orbit of the two dimensional surface of section map \(\mathbf{x}_{n+1}=\mathbf{M}(\mathbf{x}_{n})\) yields a sequence \(\mathbf{x}_{1}\), \(\mathbf{x}_{2}\), \(\ldots\) which converges to the set consisting of the two points \(A_{1}\) and \(A_{2}\), between which the map orbit sequentially alternates in the limit \(n\to+\infty\). In Figure 1.10, we have two examples, one in which the attractor of a continuous time system is a set of dimension zero (a single point) and one in which the attractor is a set of dimension one (a closed curve). In Figure 1.11, the attractor of the map has dimension zero (it is the two points, \(A_{1}\) and \(A_{2}\)). It is a characteristic of chaotic dynamics that the resulting attractors often have a much more intricate geometrical structure in the phase space than do the examples of attractors cited above. In fact, according to a standard definition of dimension (Section 3.1), these attractors commonly have a value for this dimension which is not an integer. Figure 1.11: Surface of section for a three dimensional flow with a limit cycle. 
In the terminology of Mandelbrot, such geometrical objects are _fractals_. When an attractor is fractal, it is called a _strange attractor_. As an example of a strange attractor, consider the attractor obtained for the two dimensional Hénon map, \[\left.\begin{array}{l}x_{n+1}^{(1)}=A-(x_{n}^{(1)})^{2}+Bx_{n}^{(2)},\\ x_{n+1}^{(2)}=x_{n}^{(1)},\end{array}\right\} \tag{1.14}\] for \(A=1.4\) and \(B=0.3\). See Hénon (1976). (Note that Eq. (1.14) is in the form of Eq. (1.9).) Figure 1.12(\(a\)) shows the results of plotting \(10^{4}\) successive points obtained by iterating Eqs. (1.14) (with the initial transient before the orbit settles into the attractor deleted). The result is essentially a picture of the attractor. Figure 1.12: (\(a\)) The Hénon attractor. (\(b\)) Enlargement of region defined by the rectangle in (\(a\)). (\(c\)) Enlargement of region defined by the rectangle in (\(b\)) (Grebogi _et al._, 1987d). A blow up of the rectangle in Figure 1.12(\(a\)), shown in Figure 1.12(\(b\)), reveals that the attractor apparently has a local small scale structure consisting of a number of parallel lines. A blow up of the rectangle in Figure 1.12(\(b\)) is shown in Figure 1.12(\(c\)) and reveals more lines. Continuation of this blow up procedure would show that the attractor has similar structure on _arbitrarily small scales_. In fact, roughly speaking, we can regard the attractor in Figure 1.12(\(b\)) as consisting of an _uncountable_ infinity of lines. Numerical computations show that the fractal dimension \(D_{0}\) of the attractor in Figure 1.12 is a number between one and two, \(D_{0}\simeq 1.26\). Hence, this appears to be an example of a strange attractor. Figure 1.13: The attractor of the forced damped pendulum equation in the surface of section \(x^{(3)}\) modulo \(2\pi=0\) (Grebogi _et al._, 1987d). As another example of a strange attractor, consider the forced damped pendulum (Eqs. 
(1.6) and Figure 1.7) with \(\nu=0.22\), \(T=2.7\), and \(f=1/2\pi\). Treating \(x^{(3)}\) as an angle in phase space, we define \[\bar{x}^{(3)}=x^{(3)}\text{ modulo }2\pi\] and choose a surface of section \(\bar{x}^{(3)}=0\). The modulo operation is defined as \[y\text{ modulo }K\equiv y+pK,\] where \(p\) is the integer chosen to make \(0\leq y+pK<K\). The surface of section \(\bar{x}^{(3)}=0\) is crossed at the times \(t=0\), \(2\pi\), \(4\pi\), \(6\pi\), \(\ldots\). (This type of surface of section for a periodically forced system is often referred to as a _stroboscopic_ surface of section, since it shows the system state at successive 'snapshots' taken at evenly spaced time intervals.) As seen in Figure 1.13(_a_) and in the blow up of the rectangle (Figure 1.13(_b_)), the attractor again apparently consists of a number of parallel curves. The fractal dimension of the intersection of the attractor with the surface of section in this case is approximately 1.38. Correspondingly, if one considers the attracting set in the full three dimensional phase space, it has a dimension 2.38 (i.e., one greater than its intersection with the surface of section).

### 1.5 Sensitive dependence on initial conditions

A defining attribute of an attractor on which the dynamics is _chaotic_ is that it displays exponentially sensitive dependence on initial conditions. Consider two nearby initial conditions \(\mathbf{x}_{1}(0)\) and \(\mathbf{x}_{2}(0)=\mathbf{x}_{1}(0)+\Delta(0)\), and imagine that they are evolved forward in time by a continuous time dynamical system yielding orbits \(\mathbf{x}_{1}(t)\) and \(\mathbf{x}_{2}(t)\), as shown in Figure 1.14. Figure 1.14: Evolution of two nearby orbits in phase space. At time \(t\), the separation between the two orbits is \(\Delta(t)=\mathbf{x}_{2}(t)-\mathbf{x}_{1}(t)\). 
If, in the limit \(|\Delta(0)|\to 0\) and large \(t\), orbits remain bounded and the difference between the solutions \(|\Delta(t)|\) grows exponentially for typical orientation of the vector \(\Delta(0)\) (i.e., \(|\Delta(t)|/|\Delta(0)|\sim\exp(ht)\), \(h>0\)), then we say that the system displays sensitive dependence on initial conditions and is chaotic. By bounded solutions, we mean that there is some ball in phase space, \(|\mathbf{x}|<R<\infty\), which solutions never leave.[15] (Thus, if the motion is on an attractor, then the attractor lies in \(|\mathbf{x}|<R\).) The reason we have imposed the restriction that orbits remain bounded is that, if orbits go to infinity, it is relatively simple for their distances to diverge exponentially. An example is the single, autonomous, linear, first order differential equation \(\mathrm{d}x/\mathrm{d}t=x\). This yields \(\mathrm{d}[x_{2}(t)-x_{1}(t)]/\mathrm{d}t=[x_{2}(t)-x_{1}(t)]\) and hence \(\Delta(t)\sim\exp(t)\). Our requirement of bounded solutions eliminates such trivial cases.[16] For the case of the driven damped pendulum equation, we defined three phase space variables, one of which was \(x^{(3)}=2\pi ft\). As defined, \(x^{(3)}\) is unbounded since it is proportional to \(t\). The reason we can speak of the driven damped pendulum as being chaotic is that, as previously mentioned, \(x^{(3)}\) only occurs as the argument of a sine, and hence it (as well as \(x^{(2)}=\theta\)) can be regarded as an angle. Thus, the phase space coordinates can be taken as \(x^{(1)}\), \(\bar{x}^{(2)}\), \(\bar{x}^{(3)}\), where \(\bar{x}^{(2,3)}\equiv x^{(2,3)}\) modulo \(2\pi\). Since the variables \(\bar{x}^{(2)}\) and \(\bar{x}^{(3)}\) lie between \(0\) and \(2\pi\), they are necessarily bounded. The exponential sensitivity of chaotic solutions means that, as time goes on, small errors in the solution can grow very rapidly (i.e., exponentially) with time. 
Hence, after some time, effects such as noise and computer roundoff can totally change the solution from what it would be in the absence of these effects. As an illustration of this, Figure 1.15 shows the results of a computer experiment on the Hénon map, Eq. (1.14), with \(A=1.4\) and \(B=0.3\). In this figure, we show a picture of the attractor (as in Figure 1.12(\(a\))), superposed on which are two computations of iterates 32-36 of an orbit originating from the single initial condition \((x_{0}^{(1)},\,x_{0}^{(2)})=(0,\,0)\) (labeled as an asterisk in the figure). The two computations of the orbits are done identically, but one uses single precision and the other double precision. The roundoff error in the single precision computation is about \(10^{-14}\). The orbit computed using single precision is shown as open diamonds, while the orbit using double precision is shown as asterisks. A straight line joins the two orbit locations at each iterate. We see that the difference in the two computations has become as large as the variables themselves. Thus, we cannot meaningfully compute the orbit on the Hénon attractor using a computer with \(10^{-14}\) roundoff for more than of the order of 30-40 iterates. Hence, given the state of a chaotic system, its future becomes difficult to predict after a certain point. Returning to the Hénon map example, we note that, after the first iterate, the two solutions differ by of the order of \(10^{-14}\) (the roundoff). If the subsequent computations were made _without error_, and the error doubled on each iterate (i.e., an exponential increase of \(2^{n}=\exp(n\ln 2)\)), then the orbits would be separated by an amount of the order of the attractor size at a time roughly determined by \(2^{n}\times 10^{-14}\simeq 1\), or \(n\simeq 45\). If errors double on each iterate, it becomes almost impossible to improve prediction. 
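This growth of small errors is easy to observe directly: iterate the Hénon map from two initial conditions a distance \(10^{-8}\) apart and track their separation (a sketch; the perturbation size and iterate counts are illustrative choices).

```python
import math

def henon(x1, x2, A=1.4, B=0.3):
    # the Henon map, Eq. (1.14)
    return (A - x1 * x1 + B * x2, x1)

a, b = (0.0, 0.0), (1e-8, 0.0)   # two nearby initial conditions
seps = []
for _ in range(80):
    a, b = henon(*a), henon(*b)
    seps.append(math.hypot(a[0] - b[0], a[1] - b[1]))
```

The separation stays tiny for the first few iterates, grows roughly exponentially, and then saturates at the size of the attractor after a few tens of iterates; a semilog plot of `seps` against the iterate number makes the exponential stage apparent.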
Say we can compute exactly, but our initial measurement of the system state is only accurate to within \(10^{-14}\). The above shows that we cannot predict the state of the system past \(n\sim 45\). Suppose that we wish to predict to a longer time, say, twice as long, i.e., to \(n\sim 90\). Then we must improve the accuracy of our initial measurement from \(10^{-14}\) to \(10^{-28}\). That is, we must improve our accuracy by a tremendous amount, namely, 14 orders of magnitude! In any practical situation, this is likely to be impossible. Thus, the relatively modest goal of an improvement of prediction time by a factor of two is not feasible. The fact that chaos may make prediction past a certain time difficult, and essentially impossible in a practical sense, has important consequences. Indeed, the work of Lorenz was motivated by the problem of weather prediction. Lorenz was concerned with whether it is possible to do long range prediction of atmospheric conditions. His demonstration that thermally driven convection could result in chaos raises the possibility that the atmosphere is chaotic. Thus, even the smallest perturbation, such as a butterfly flapping its wings, _eventually_ has a large effect. Long term prediction becomes impossible. Given the difficulty of accurate computation, illustrated in Figure 1.15, one might question the validity of pictures such as Figures 1.12 and 1.13 which show thousands of iterates of the map. Is the figure real, or is it merely an artifact of chaos-amplified computer roundoff? A partial answer to this question comes from rigorous mathematical proofs of the _shadowing_ property for certain chaotic systems.

Figure 1.15: After a relatively small number of iterates, two trajectories, one computed using single precision, the other computed using double precision, both originating from the same initial condition, are far apart. (This figure courtesy of Y. Du.)
Although a numerical trajectory diverges exponentially from the true trajectory with the same initial condition, there exists a true (i.e., errorless) trajectory with a slightly different initial condition (Figure 1.16) that stays near (shadows) the numerical trajectory (Anosov, 1967; Bowen, 1970; Hammel, Yorke and Grebogi, 1987; and Problem 3 of Chapter 2). Thus, there is good reason to believe that the apparent fractal structure seen in pictures like Figures 1.12 and 1.13 is real.

Figure 1.16: Given a noisy trajectory from the initial condition \(x_{0}\), it is possible to find a slightly different initial condition \(x_{0}^{\prime}\), such that the true (i.e., noiseless) trajectory from \(x_{0}^{\prime}\) shadows the noisy trajectory from \(x_{0}\).

We emphasize that the nonchaotic cases, shown in Figures 1.10(_a_) and 1.10(_b_), do not yield long term exponential divergence of solutions. For the damped harmonic oscillator example (Figure 1.10(_a_)), two initially nearby points approach the point attractor and their energies decrease exponentially to zero with time. Hence, orbits _converge_ exponentially for large time. For the case of a limit cycle (Figure 1.10(_b_)), orbits initially separated by an amount \(\Delta(0)\) typically eventually wind up on the limit cycle attractor separated by an amount of order \(|\Delta(0)|\) and maintain a separation of this order forever. Thus, a small initial error leads to small errors _for all time_. As another example, consider the motion of a particle in a one dimensional anharmonic potential well in the absence of friction (a conservative system). The total particle energy (potential energy plus kinetic energy) is constant with time on an orbit. Each orbit is periodic and the period depends on the particle energy. Two nearby initial conditions, in general, will have slightly different energies and hence slightly different orbit frequencies.
This leads to divergence of these orbits, but the divergence is only linear with time rather than exponential; \(|\Delta(t)|\sim(\Delta\omega)t\), where \(\Delta\omega\) is the difference of the orbital frequencies. Thus, if \(|\Delta(0)|\) is reduced by a factor of two (reducing \(\Delta\omega\) by a factor of two), then \(t\) can be doubled, and the same error will be produced. This is in contrast with our chaotic example above where errors doubled on each iterate. In that case, to increase the time by a factor of two, \(|\Delta(0)|\) had to be reduced by a factor of order \(10^{14}\). The dynamics on an attractor is said to be chaotic if there is exponential sensitivity to initial conditions. We will say that an attractor is strange if it is fractal (this definition of strange is often used but is not universally accepted). Thus, 'chaotic' describes the dynamics on the attractor, while 'strange' refers to the geometry of the attractor. It is possible for chaotic attractors not to be strange (typically the case for one dimensional maps (see the next chapter)), and it is also possible for attractors to be strange but not chaotic (Grebogi _et al._, 1984; Romeiras and Ott, 1987; and Section 6.4). For most cases involving differential equations, strangeness and chaos commonly occur together.

### 1.6 Delay coordinates

In experiments one cannot always measure all the components of the vector \(\mathbf{x}(t)\) giving the state of the system. Let us suppose that we can only measure one component, or, more generally, one scalar function of the state vector, \[g(t)=G(\mathbf{x}(t)). \tag{1.15}\] Given such a situation, can we obtain phase space information on the geometry of the attractor? For example, can we somehow make a surface of section revealing fractal structure as in Figures 1.12 and 1.13? The answer is yes.
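As a preview of the construction defined next, here is a minimal sketch: sample a scalar signal \(g(t)\) (an assumed sinusoid standing in for a measured limit-cycle quantity; the sampling interval and lag are illustrative choices) and form vectors of time-delayed samples. With the lag chosen as a quarter period, the pairs \((g(t),\ g(t-\tau))\) trace out a closed curve, as expected for a limit cycle.

```python
# Sketch of delay coordinates: vectors of time-delayed samples of a single
# measured quantity. The signal is an assumed sinusoid, not experimental data.
import math

dt = 0.01
g = [math.sin(2.0 * math.pi * n * dt) for n in range(2000)]  # period = 100 samples

def delay_vector(g, i, M, lag):
    # y = (g[i], g[i - lag], ..., g[i - (M - 1) * lag])
    return tuple(g[i - m * lag] for m in range(M))

M, lag = 2, 25               # lag = quarter period of this signal
ys = [delay_vector(g, i, M, lag) for i in range(lag, len(g))]
# For this signal, (g(t), g(t - quarter period)) = (sin, -cos): a circle.
radius_err = max(abs(y[0] ** 2 + y[1] ** 2 - 1.0) for y in ys)
print(radius_err)
```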
To see that this is so, define the so called delay coordinate vector (Takens, 1980), \(\mathbf{y}=(y^{(1)},\ y^{(2)},\ \ldots,\ y^{(M)})\), by \[\begin{array}{l}y^{(1)}(t)=g(t),\\ y^{(2)}(t)=g(t-\tau),\\ y^{(3)}(t)=g(t-2\tau),\\ \vdots\\ y^{(M)}(t)=g[t-(M-1)\tau],\end{array} \tag{1.16}\] where \(\tau\) is some fixed time interval, which should be chosen to be of the order of the characteristic time over which \(g(t)\) varies. Given \(\mathbf{x}\) at a specific time \(t_{0}\), one could, in principle, obtain \(\mathbf{x}(t_{0}-m\tau)\) by integrating Eq. (1.3) backwards in time by an amount \(m\tau\). Thus, \(\mathbf{x}(t_{0}-m\tau)\) is uniquely determined by \(\mathbf{x}(t_{0})\) and can hence be regarded as a function of \(\mathbf{x}(t_{0})\), \[\mathbf{x}(t-m\tau)=\mathbf{L}_{m}(\mathbf{x}(t)).\] Hence, \(g(t-m\tau)=G(\mathbf{L}_{m}(\mathbf{x}(t)))\), and we may thus regard the vector \(\mathbf{y}(t)\) as a function of \(\mathbf{x}(t)\), \[\mathbf{y}=\mathbf{H}(\mathbf{x}).\] We can now imagine making a surface of section in the \(\mathbf{y}\) space. It can be shown (Section 3.9) that, if the number of delays \(M\) is sufficiently large, then we will typically see a qualitatively similar structure to that which would be seen had we made our surface of section in the original phase space \(\mathbf{x}\). Alternatively, we might simply examine the continuous time trajectory in \(\mathbf{y}\). For example, Figure 1.17 shows a result for an experiment involving chemical reactions (cf. Section 2.4.3). The vertical axis is the measured concentration \(g(t)\) of one chemical constituent at time \(t\), and the horizontal axis is the same quantity evaluated at \(t-8.8\) seconds. We see that the delay coordinates \(\mathbf{y}=(g(t),\ g(t-8.8))\) trace out a closed curve indicating a limit cycle.

Figure 1.17: Experimental delay coordinate plot showing a closed curve corresponding to a limit cycle attractor.

## Problems

1.
Consider the following systems and specify (i) whether chaos can or cannot be ruled out for these systems, and (ii) whether the system is conservative or dissipative. Justify your answers.

(\(a\)) \(\theta_{n+1}=[\theta_{n}+\Omega+1.5\sin\theta_{n}]\) modulo \(2\pi\),
(\(b\)) \(\theta_{n+1}=[\theta_{n}+\Omega+0.5\sin\theta_{n}]\) modulo \(2\pi\),
(\(c\)) \(x_{n+1}=[2x_{n}-x_{n-1}+k\sin x_{n}]\) modulo \(2\pi\),
(\(d\)) \(x_{n+1}=x_{n}+k(x_{n}-y_{n})^{2}\), \(y_{n+1}=y_{n}+k(x_{n}-y_{n})^{2}\),
(\(e\)) \(\mathrm{d}x/\mathrm{d}t=v\), \(\mathrm{d}v/\mathrm{d}t=-\alpha v+C\sin(\omega t-kx)\),
(\(f\)) \(\mathrm{d}x/\mathrm{d}t=B\cos y+C\sin z\), \(\mathrm{d}y/\mathrm{d}t=C\cos z+A\sin x\), \(\mathrm{d}z/\mathrm{d}t=A\cos x+B\sin y\).

2. Consider the one-dimensional motion of a free particle which bounces elastically between a stationary wall located at \(x=0\) and a wall whose position oscillates with time and is given by \(x=L+\Delta\sin(\omega t)\). Derive a map relating the time \(T_{n}\) of the \(n\)th bounce off the oscillating wall and the particle speed \(\upsilon_{n}\) between the \(n\)th bounce and the \((n+1)\)th bounce off the oscillating wall to \(T_{n+1}\) and \(\upsilon_{n+1}\). Assume that \(L\gg\Delta\) so that \(\upsilon_{n}(T_{n+1}-T_{n})\approx 2L\). Is the map relating \((T_{n},\ \upsilon_{n})\) to \((T_{n+1},\ \upsilon_{n+1})\) conservative? Show that a new variable can be introduced in place of \(T_{n}\), such that the new variable is bounded and results in a map which yields the same \(\upsilon_{n}\) as for the original map for all \(n\).

3. Write a computer program to take iterates of the Henon map. Considering the case \(A=1.4\), \(B=0.3\) and starting from the initial condition \((x_{0},\ y_{0})=(0,\ 0)\), iterate the map 20 times and then plot the next 1000 iterates to get a picture of the attractor.

4. Plot the first 25 iterates of the map given by Eq.
(1.8) starting from \(x_{0}=1/2\): (\(a\)) for \(r=3.8\) (chaotic attractor), (\(b\)) for \(r=2.5\) (period one attractor), and (\(c\)) for \(r=3.1\) (period two attractor).

5. For the map (1.8) with \(r=3.8\), plot the iterates of the two orbits originating from the initial conditions \(x_{0}=0.2\) and \(x_{0}=0.2+10^{-5}\) versus iterate number. When does the separation between the two orbits first exceed 0.2?

6. A wheel of radius \(R\) and moment of inertia \(I\) is mounted on an axle \(A\). The wheel has a massless peg \(P\) attached to its periphery as shown in Figure 1.18. Every \(\tau\) seconds the peg is struck by a hammer in the downward direction. The hammer imparts an impulsive force to the peg, \(F(t)=f_{0}\sum_{n}\delta(t-n\tau)\), where \(\delta(t)\) is the delta function and the sum is over integers \(n=0,\ 1,\ 2,\ \ldots\). There is viscous friction between the wheel and the axle such that the axle exerts a frictional torque, \(T_{f}(t)=-\nu I\omega(t)\), where \(\omega(t)=\mathrm{d}\theta(t)/\mathrm{d}t\). Let \(\theta_{n}\) and \(\omega_{n}\) denote the values of \(\theta(t)\) and \(\omega(t)\) just before the \(n\)th hammer strike.

(\(a\)) Show that the map expressing \((\theta_{n+1},\ \omega_{n+1})\) in terms of \((\theta_{n},\ \omega_{n})\) is \(\omega_{n+1}=(\omega_{n}+k\sin\theta_{n})\mathrm{e}^{-\nu\tau}\); \(\theta_{n+1}=\theta_{n}+\nu^{-1}(1-\mathrm{e}^{-\nu\tau})(\omega_{n}+k\sin\theta_{n})\), where \(k=f_{0}R/I\).
(\(b\)) By numerical iteration from a suitable initial condition (e.g., \(\theta_{0}=1\), \(\omega_{0}=0\)) make a picture of the attractor for the map in (\(a\)) for \(\nu=1.5\), \(\tau=1\), \(k=9\).
(\(c\)) Show that this map is conservative if \(\nu=0\), and find the factor by which areas in (\(\theta\), \(\omega\))-space are contracted on each iterate if \(\nu>0\).
(\(d\)) Show explicitly from the map expression in (\(a\)) that the map is invertible.
(\(e\))
Show that, in the limit of large \(\tau\), the map in (\(a\)) approximately reduces to a one-dimensional map. Under what condition is this one-dimensional map invertible?
(\(f\)) For \(2>(2\pi\nu/k)>1\), how many period one orbits are there for the map in (\(a\))? (Regard \(\theta\) and \(\theta+2m\pi\) as physically equivalent for \(m\) a positive or negative integer.)

## Notes

1. Some review articles giving compact overviews of chaotic dynamics are those of Helleman (1980), Ott (1981), Shaw (1981) and Grebogi (1987d).
2. Some experiments on chaos in fluids are those of Libchaber and Maurer (1980), Gollub and Benson (1980), Berge (1980), Brandstater (1983) and Sreenivasan (1986).
3. Applications of chaos to plasmas as well as many other topics are dealt with in the book by Sagdeev (1990).
4. For example, Bryant and Jefferies (1984), Iansiti (1985), Carroll (1987), Roukes and Alerhand (1990) and Ditto (1990b).
5. For example, Linsay (1981), Testa (1982) and Rollins and Hunt (1984).
6. For example, Arecchi (1982), Gioggia and Abraham (1984) and Mork (1990).
7. The book by Moon (1987) on chaos contains outlines of results from a number of mechanical applications.
8. Chaotic phenomena and nonlinear dynamics in biology are dealt with in the book by Glass and Mackey (1988).
9. For example, Rossler (1976), Roux (1980), Hudson and Mankin (1981) and Simoyi (1982).
10. For example, Lauterborn (1981).
11. For example, Wisdom (1987) and Petit and Henon (1986).
12. A review on the topic of routes to chaos is that of Eckmann (1981).
13. For example, the Poincare-Bendixson theorem (e.g., see Hirsch and Smale (1974)) states that a two-dimensional flow confined to a bounded region of the plane can only have periodic attractors if there are no fixed points of the flow. In particular, chaotic attractors are necessarily absent.
14. See May (1976) for an early discussion of the dynamics of this map.

Figure 1.18: Diagram of the setup for Problem 6.

15.
Note that, if we take \(|\Delta(0)|\) to be a small constant value (rather than examining \(|\Delta(t)|/|\Delta(0)|\) in the limit that \(|\Delta(0)|\to 0\)), then the growth of \(|\Delta(t)|\) cannot be exponential forever. In particular, \(|\Delta(t)|<2R\), and hence exponential growth must cease when \(|\Delta(t)|\) becomes of the order of the attractor size. Thus, later on (in Chapters 2 and 4) we shall be defining sensitive dependence on initial conditions in terms of the exponential growth of _differential_ separations between orbits.
16. The definition of chaos given here is for chaotic _attractors_. When dealing with _nonattracting_ chaotic sets (treated in Chapter 5) a more general definition of chaos is called for. Such a more general definition, which seems suitable very broadly, equates chaos with the condition of positive topological entropy. Topological entropy is defined in Chapter 4.

## Chapter 2 One-dimensional maps

One dimensional noninvertible maps are the simplest systems capable of chaotic motion.\({}^{1}\) As such, they serve as a convenient starting point for the study of chaos. Indeed, we shall find that a surprisingly large proportion of the phenomena encountered in higher dimensional systems is already present, in some form, in one dimensional maps.

### 2.1 Piecewise linear one-dimensional maps

As a first example, we consider the _tent map_, \[x_{n+1}=1-2|x_{n}-\tfrac{1}{2}|. \tag{2.1}\] This map is illustrated in Figure 2.1(_a_). For \(x_{n}<\tfrac{1}{2}\), Eq. (2.1) is \(x_{n+1}=2x_{n}\). Hence initial conditions that are negative remain negative and move off to \(-\infty\), doubling their distance from the origin on each iterate. For \(x_{n}>\tfrac{1}{2}\), \(x_{n+1}=2(1-x_{n})\). Hence, if \(x_{0}>1\), then \(x_{1}<0\), and the subsequent orbit points again move off to \(-\infty\).
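For orbits that stay in the interval [0, 1] (taken up next), the slope of magnitude two shows up as a doubling of small separations between nearby orbits on each iterate. A short numerical sketch (kept deliberately short: over very long runs, binary roundoff artificially collapses tent map orbits toward 0 on a computer, an artifact of the exactly representable slope):

```python
# Sketch: factor-of-two separation growth for the tent map, Eq. (2.1).

def tent(x):
    return 1.0 - 2.0 * abs(x - 0.5)

x, y = 0.2, 0.2 + 1e-6       # two nearby initial conditions in [0, 1]
seps = []
for _ in range(10):
    seps.append(abs(x - y))
    assert 0.0 <= x <= 1.0   # orbits starting in [0, 1] stay in [0, 1]
    x, y = tent(x), tent(y)
ratios = [seps[k + 1] / seps[k] for k in range(len(seps) - 1)]
print(ratios)
```

The ratios stay at two only while both points sit on the same linear branch of the map; once the separation grows to the size of the interval, the fold mixes the orbits.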
For \(x_{n}\) in the interval \([0,\,1]\), we have \(0\leq 1-2|x_{n}-\tfrac{1}{2}|\leq 1\), and so the subsequent iterate, \(x_{n+1}\), is also in \([0,\,1]\); hence, if \(0\leq x_{0}\leq 1\), the orbit remains bounded and confined to \([0,\,1]\) for all \(n\geq 0\). We henceforth focus on the dynamics of orbits in \([0,\,1]\). Figure 2.1(_b_) illustrates the action of the map on the interval \([0,\,1]\) as consisting of two steps. In the first step, the interval is uniformly stretched to twice its original length. In the second step, the stretched interval is folded in half, so that the folded line segment is now contained in the original interval. Following a point on the original line segment \([0,\,1]\) through this stretching and folding process, its final location is given in terms of its location before stretching and folding by Eq. (2.1). The stretching leads to exponential divergence of nearby trajectories (by a factor of two on each iterate). The folding process keeps the orbit bounded. Note, also, that the folding process causes the map to be noninvertible, since it results in two different values of \(x_{n}\) mapping to the same \(x_{n+1}\). This example illustrates a general result for one dimensional maps mapping an interval into itself (here the interval is [0, 1]). That is, in order for there to be chaos, the map must, on average, be stretching. On the other hand, for the orbit to remain bounded in the presence of stretching, there must also be folding. Hence, for a one dimensional map to be chaotic, it must be noninvertible. To further illustrate the sensitive dependence on initial conditions for the tent map, consider composing the map with itself \(m\) times to obtain \(M^{m}\). Here, \(M^{m}\) is defined as \[M^{m}(x)=M(M^{m-1}(x))=\underbrace{M(M(M(\ldots(M(x))\ldots)))}_{m\text{ times}},\] \[M^{1}(x)=M(x).\]

Figure 2.1: The tent map.

Thus, \[x_{n+m}=M^{m}(x_{n}).
\tag{2.2}\] Figure 2.2(\(a\)) shows \(x_{n+2}=M(M(x_{n}))=M^{2}(x_{n})\) versus \(x_{n}\) for the tent map. To obtain Figure 2.2(\(a\)), we note that, if \(x_{n}\) is equal to 0, \(\frac{1}{2}\) or 1, then two applications of (2.1) yield \(x_{n+2}=0\), while, if \(x_{n}\) is either \(\frac{1}{4}\) or \(\frac{3}{4}\), then two applications of (2.1) yield \(x_{n+2}=1\). Noting that the variation of \(x_{n+2}\) with \(x_{n}\) between these points is linear, Figure 2.2(\(a\)) follows. Figure 2.2(\(b\)) shows \(x_{n+m}=M^{m}(x_{n})\) for arbitrary \(m\). Thus, given the knowledge that an initial condition \(x_{0}\) lies within \(\pm 2^{-m}\) of some point, Figure 2.2(\(b\)) shows that \(x_{m}\) can lie anywhere in the interval [0, 1]. Hence, the knowledge we have of the small range in which the initial condition falls leads to absolutely no knowledge of the location of orbit points \(x_{n}\) for times \(n\geq m\). This is a consequence of the exponential sensitivity of chaotic orbits to small changes in initial conditions. Another simple map, closely related to the tent map, is \(M(x)=2x\) modulo 1, \[x_{n+1}=2x_{n}\text{ modulo }1. \tag{2.3}\] Figure 2.3 shows the map and its \(m\)th iterate. This map can be regarded as a map on a circle, since the modulo 1 in Eq. (2.3) makes \(x\) like an angle variable, where \(x\) increasing from 0 to 1 corresponds to one circuit around the circle. Viewed in this way, the action of the \(2x\) modulo 1 map on the circle may be thought of as the stretch twist fold operation illustrated in Figure 2.4. First, the circle is uniformly stretched so that its circumference is twice its original length. Then it is twisted into a figure 8, the upper and lower lobes of which are circles of the original length. The upper circle is then folded down on to the lower circle, and the two circles are pressed together.
By following this operation, a point on the original circle (Figure 2.4(\(a\))) is mapped to a point on the final pressed together circle in such a way that its \(x\) coordinate transforms as in Eq. (2.3). Both Figures 2.3(\(b\)) and 2.4 illustrate the chaotic separation of nearby points (by a factor of two) on each iterate of this map.

Figure 2.3: (\(a\)) The \(2x\) modulo 1 map, and (\(b\)) its \(m\)th iterate.

An alternate way of viewing the \(2x\) modulo 1 map is as an example of a _Bernoulli shift_. Say we represent the initial condition \(x_{0}\) as a binary decimal \[x_{0}=0.a_{1}a_{2}a_{3}\ldots\equiv\sum_{j=1}^{\infty}2^{-j}a_{j}, \tag{2.4}\] where each of the digits \(a_{j}\) is either 0 or 1. Then, the next iterate is obtained by setting the first digit to zero and then moving the decimal point one space to the right, \[x_{1}=0.a_{2}a_{3}a_{4}\ldots\,,\qquad x_{2}=0.a_{3}a_{4}a_{5}\ldots\,,\] and so on. Thus, digits that are initially far to the right of the decimal point, and hence have only a very slight influence on the initial value of \(x\), eventually become the first digit. Thus, a small change of the initial condition, such as changing \(a_{40}\) from a zero to a one (a change of \(x_{0}\) by \(2^{-40}\)), eventually, at time \(n=39\), makes a large change in \(x_{n}\). We define a periodic orbit of a map to have period \(p\) if the orbit successively cycles through \(p\) _distinct_ points \(\bar{x}_{0}\), \(\bar{x}_{1}\), ..., \(\bar{x}_{p-1}\). (These points are 'distinct' if \(\bar{x}_{i}\neq\bar{x}_{j}\) unless \(i=j\).) Thus, for each such point \(\bar{x}_{j}\), we have \(\bar{x}_{j}=M^{p}(\bar{x}_{j})\) for \(j=0\), \(1\), ..., \(p-1\).

Figure 2.4: Stretch twist fold operation.

We can now use the binary representation (2.4) to construct periodic orbits of the map (2.3).
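The shift description can be checked directly with exact arithmetic (an illustrative sketch): take a dyadic \(x_{0}\) whose binary digits are known, apply \(2x\) modulo 1 to the number, and compare with shifting its digit string.

```python
# Sketch: one iterate of 2x modulo 1 acts on the binary digits as a left shift.

bits = [1, 0, 1, 1, 0, 0, 1, 0]     # digits a1 a2 ... a8 of a dyadic x0

def to_number(bits):
    # x = sum_j a_j * 2^(-j); dyadic, hence exact in binary floating point
    return sum(b * 2.0 ** -(j + 1) for j, b in enumerate(bits))

x0 = to_number(bits)
x1_map = (2.0 * x0) % 1.0           # the map acting on the number itself
x1_shift = to_number(bits[1:])      # drop a1: the Bernoulli shift on digits
print(x0, x1_map, x1_shift)
```

Because \(x_{0}\) is dyadic, both routes are exact and agree to the last bit.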
In fact, any infinite sequence of zeros and ones which is made up of an identically repeating finite sequence segment produces the binary digit expansion of an initial condition for a periodic orbit. For example, the initial condition \(x_{0}=0.10101010\ldots=\frac{2}{3}\), which is made by repeating the two digit sequence \(10\) _ad infinitum_, is the initial condition for a period 2 orbit. Applying (2.3) to \(x_{0}\) yields \(x_{1}=0.010101\ldots=\frac{1}{3}\). Applying (2.3) to \(\frac{1}{3}\) reproduces \(\frac{2}{3}\). Thus, we produce a period 2 orbit (\(\frac{2}{3}\), \(\frac{1}{3}\), \(\frac{2}{3}\), \(\frac{1}{3}\), \(\ldots\)) which repeats after every second iterate. Similarly, orbits of any arbitrarily large period \(p\) arise from initial conditions of the form \(0.a_{1}a_{2}\ldots a_{p}a_{1}a_{2}\ldots\). We can ask, how many different initial conditions are there that return to themselves after \(p\) iterations? Since there are \(2^{p}\) distinct sequences \(a_{1}\), \(a_{2}\), ..., \(a_{p}\), we conclude that there are \(2^{p}-1\) such initial conditions. The minus one arises because the two sequences \((a_{1},\ a_{2},\ \ldots,\ a_{p})=(0,\ 0,\ \ldots,\ 0)\) and \((a_{1},\ a_{2},\ \ldots,\ a_{p})=(1,\ 1,\ \ldots,\ 1)\) yield the same results, namely \(0=0.000\ldots\) and \(1=0.111\ldots\), which are the same value modulo 1. A point \(y\) on a period \(p\) orbit is also a fixed point (i.e., a period one point) of the \(p\) times composed map, \[y=M^{p}(y).\] We illustrate this for period \(p=2\) and the \(2x\) modulo 1 map in Figure 2.5. We note that there are \(2^{p}=2^{2}=4\) intersections of the diagonal line \(x_{n+2}=x_{n}\) with the two times composed map function \(M^{2}(x_{n})\). These intersections are \(0\), \(\frac{1}{3}\), \(\frac{2}{3}\) and \(1\). Since \(0\) and \(1\) are equivalent, there are \(2^{2}-1=3\) distinct initial conditions which repeat after two iterates.

Figure 2.5: Fixed points of \(M^{2}\).
The point \(0\) is a fixed point of the original map. The points \(\frac{1}{3}\) and \(\frac{2}{3}\) are on the period two orbit (\(\frac{1}{3}\), \(\frac{2}{3}\), \(\frac{1}{3}\), \(\frac{2}{3}\), \(\ldots\)). Example: How many period four orbits are there for the \(2x\) modulo 1 map? There are \(2^{4}-1=15\) fixed points of \(M^{4}\). Of these 15, one is \(0\), and two are \(\frac{1}{3}\) and \(\frac{2}{3}\) (since fixed points of \(M\) and \(M^{2}\) are necessarily fixed points of \(M^{4}=(M^{2})^{2}\)). This leaves 12 fixed points of \(M^{4}\) which are not fixed points of \(M^{p}\) for any \(p<4\). These must lie on orbits of period four. Thus, there are \(12/4=3\) distinct period four orbits. Note that the number of fixed points of \(M^{p}\) for the tent map is \(2^{p}\), since, as is evident from Figure 2.2, the graph of \(x_{n+p}=M^{p}(x_{n})\) will have \(2^{p}\) intersections with \(x_{n+p}=x_{n}\). Unlike the \(2x\) modulo 1 map, the tent map has two distinct fixed points, \(x=0\) and \(x=\frac{2}{3}\). For both the tent map and the \(2x\) modulo 1 map, the number of fixed points of \(M^{p}\) that are not fixed points of \(M\) is \(2^{p}-2\). If \(p\) is a prime number, then all of these \(2^{p}-2\) fixed points must lie on periodic orbits of period \(p\). (If \(p\) is not prime and has integer factors \(p_{1}\), \(p_{2}\), \(\ldots\) (\(p=p_{1}^{n_{1}}p_{2}^{n_{2}}\ldots\)), then some of these \(2^{p}-2\) points will be on orbits of the lower periods \(p_{1}\), \(p_{2}\), \(\ldots\).) Hence, if \(p\) is prime, the number of periodic orbits of period \(p\) is\({}^{2}\) \[N_{p}=(2^{p}-2)/p \tag{2.5}\] for both the tent map and the \(2x\) modulo 1 map. This number gets large rapidly; for example, \(N_{11}=186\), \(N_{13}=630\), and \(N_{17}=7710\). For \(p\) not prime, the number \(N_{p}\) of periodic orbits satisfies \(N_{p}<(2^{p}-2)/p\) and is more difficult to obtain, as our example for \(p=4\) above demonstrates.
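This counting can be verified by brute force (a sketch using exact rationals): for the \(2x\) modulo 1 map, the fixed points of \(M^{p}\) are the \(2^{p}-1\) rationals \(k/(2^{p}-1)\); grouping them into cycles and keeping those of exact period \(p\) reproduces Eq. (2.5) for prime \(p\), and the value 3 found above for \(p=4\).

```python
# Sketch: count the period-p orbits of the 2x modulo 1 map exactly.
from fractions import Fraction

def count_period_p_orbits(p):
    q = 2 ** p - 1
    points = [Fraction(k, q) for k in range(q)]   # all fixed points of M^p
    seen = set()
    n_orbits = 0
    for x0 in points:
        if x0 in seen:
            continue
        orbit = [x0]
        x = (2 * x0) % 1                          # exact rational arithmetic
        while x != x0:
            orbit.append(x)
            x = (2 * x) % 1
        seen.update(orbit)
        if len(orbit) == p:                       # exact period p, not lower
            n_orbits += 1
    return n_orbits

print(count_period_p_orbits(4), count_period_p_orbits(11))
```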
Nevertheless, for _large_ \(p\), we always have that \(N_{p}\simeq 2^{p}/p\) with a correction which is small compared to \(N_{p}\). We now address the question of the stability of periodic orbits of a one dimensional map. Say we have a periodic orbit of period \(p\): \(\bar{x}_{0}\), \(\bar{x}_{1}\), \(\ldots\), \(\bar{x}_{p-1}\), \(\bar{x}_{p}\), \(\ldots\), where \(\bar{x}_{p}=\bar{x}_{0}\). Then, for each of these \(x\) values, \(\bar{x}_{j}=M^{p}(\bar{x}_{j})\) for \(j=0\), \(1\), \(\ldots\), \(p-1\). Say we take an initial condition slightly different from \(\bar{x}_{j}\). We denote this initial condition \(x_{0}=\bar{x}_{j}+\delta_{0}\). As a result of the deviation \(\delta_{0}\), the \(p\)th iterate of \(x_{0}\) is slightly different from \(\bar{x}_{j}\), and we denote it \(x_{p}=\bar{x}_{j}+\delta_{p}\). Thus, \[\bar{x}_{j}+\delta_{p}=M^{p}(\bar{x}_{j}+\delta_{0}).\] Since \(\delta_{0}\) is small, we Taylor expand to first order in \(\delta_{0}\) to obtain \[\delta_{p}=\lambda_{p}\delta_{0}, \tag{2.6}\] where \[\lambda_{p}=\frac{{\rm d}M^{p}(x)}{{\rm d}x}\bigg{|}_{x=\bar{x}_{j}}=\frac{{\rm d}x_{n+p}}{{\rm d}x_{n}}\bigg{|}_{x_{n}=\bar{x}_{j}}.\] Using \[\frac{{\rm d}x_{n+p}}{{\rm d}x_{n}}=\frac{{\rm d}x_{n+1}}{{\rm d}x_{n}}\frac{{\rm d}x_{n+2}}{{\rm d}x_{n+1}}\ldots\frac{{\rm d}x_{n+p}}{{\rm d}x_{n+p-1}}=M^{\prime}(x_{n})M^{\prime}(x_{n+1})\ldots M^{\prime}(x_{n+p-1}),\] where \(M^{\prime}(x)\equiv{\rm d}M(x)/{\rm d}x\), we find that \(\lambda_{p}={\rm d}M^{p}(x)/{\rm d}x|_{x=\bar{x}_{j}}\) is the same for all points \(\bar{x}_{j}\) on the periodic orbit, \[\lambda_{p}=M^{\prime}(\bar{x}_{0})M^{\prime}(\bar{x}_{1})\ldots M^{\prime}(\bar{x}_{p-1}). \tag{2.7}\] (For any \(\bar{x}_{j}=x_{n}\), the points \(x_{n}\), \(x_{n+1}\), ..., \(x_{n+p-1}\) cycle through each point on the given periodic orbit, leading to the above product of terms \(M^{\prime}\) at every point in the cycle.)
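Equations (2.6) and (2.7) can be illustrated numerically. The following sketch uses the logistic map \(x_{n+1}=rx_{n}(1-x_{n})\) of the next section, which at \(r=3.2\) has a stable period two orbit: it computes \(\lambda_{2}\) as the product of derivatives around the cycle and checks that a small deviation returns reduced by that factor after one circuit.

```python
# Sketch: stability coefficient of a period-2 orbit of the logistic map.
r = 3.2

def M(x):
    return r * x * (1.0 - x)

def Mprime(x):
    return r * (1.0 - 2.0 * x)       # dM/dx for the logistic map

x = 0.4
for _ in range(2000):                # relax onto the period-2 attractor
    x = M(x)
x0, x1 = x, M(x)                     # the two points of the cycle
lam2 = Mprime(x0) * Mprime(x1)       # Eq. (2.7) with p = 2

d0 = 1e-8                            # small deviation from the orbit
d2 = M(M(x0 + d0)) - x0              # deviation after one circuit (p iterates)
print(lam2, d2 / d0)
```

Since \(|\lambda_{2}|<1\) here, the deviation shrinks on each circuit and the orbit attracts nearby points.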
What happens if we follow the point \(\bar{x}_{j}+\delta_{p}=\bar{x}_{j}+\lambda_{p}\delta_{0}\) another \(p\) iterates around the periodic cycle? If we do this, it maps to \(\bar{x}_{j}+\delta_{2p}=\bar{x}_{j}+\lambda_{p}\delta_{p}=\bar{x}_{j}+\lambda_{p}^{2}\delta_{0}\), yielding the deviation \(\delta_{2p}=\lambda_{p}^{2}\delta_{0}\). In general, \[\delta_{mp}=\lambda_{p}^{m}\delta_{0}. \tag{2.8}\] Thus, the deviation from the periodic orbit grows (if \(|\lambda_{p}|>1\)) or shrinks (if \(|\lambda_{p}|<1\)) by a factor \(|\lambda_{p}|\) on each circuit around the periodic cycle. If \(|\lambda_{p}|>1\), the periodic orbit is said to be _unstable_. This is the case for all the periodic orbits of the tent map and the \(2x\) modulo 1 map. This follows from the fact that \(|M^{\prime}(x)|=2\) for these maps for all \(x\) (except at the single point \(x=\frac{1}{2}\), where the derivative is not defined, and which, in any case, does not lie on any periodic orbit). Thus, by Eq. (2.7), for these maps, \(|\lambda_{p}|=2^{p}>1\), and all of the periodic orbits are unstable. On the other hand, in the next section we shall deal with situations where periodic orbits are stable, \(|\lambda_{p}|<1\). In this case, initial conditions near the periodic orbit asymptote to it, and the periodic orbit is an attractor. We say that the periodic orbit is _stable_ if \(|\lambda_{p}|<1\) and _superstable_ if \(\lambda_{p}=0\). We call \(\lambda_{p}\) the _stability coefficient_ for the periodic orbit. From our previous discussion of the fixed points of \(M^{p}\) for the tent map and the \(2x\) modulo 1 map, it is clear that, for these maps, points on periodic orbits are _dense_ in the interval [0, 1]. That is, for any \(x\) in [0, 1] and any \(\varepsilon\), _no matter how small \(\varepsilon\) is_, there is at least one point on a periodic orbit (actually an infinite number of such points) in \([x-\varepsilon,\,x+\varepsilon]\).
For example, from Figure 2.3(\(b\)), there is one fixed point of \(M^{p}\) in each interval \([2^{-p}(m-1),\,2^{-p}m]\) for \(m=1\), \(2\), ..., \(2^{p}\). Thus, there is at least one fixed point of \(M^{p}\), and hence at least one periodic point of \(M\), in \([x-\varepsilon,\,x+\varepsilon]\) for \(p>\ln(1/\varepsilon)/\ln 2\) (i.e., \(2^{-p}<\varepsilon\)). The fact that periodic points are dense is very significant. It is important to note, however, that the periodic points are a _countably_ infinite set, while the set of _all_ points in the interval [0, 1] is _uncountable_. In this sense, the periodic points, while dense, are still a much smaller set than all the points in [0, 1]. This implies, in particular, that, if one were to choose an initial condition \(x_{0}\) at random in [0, 1] according to a uniform probability distribution in [0, 1], then the probability that \(x_{0}\) lies on a periodic point of the map is zero. Hence, randomly chosen initial conditions do not produce periodic orbits for the tent and \(2x\) modulo 1 maps. Thus, we say that nonperiodic orbits are _typical_ for these maps, and periodic orbits are not typical. If we make a histogram of the fraction of times a finite length orbit originating from a typical initial condition falls in bins of equal size along the \(x\) axis, \([(m-1)/N,\ m/N]\) for \(m=1,\ 2,\ \ldots,\ N\), then the fraction of time spent in a bin approaches \(1/N\) for each bin as the length of the orbit is allowed to increase to infinity. Thus, defining a function \(\rho(x)\) such that, for any interval \([a,\ b]\) in [0, 1], the fraction of the time typical orbits spend in \([a,\ b]\) is \(\int_{a}^{b}\rho(x)\mathrm{d}x\), we have that, for the tent map and the \(2x\) modulo 1 map, \[\rho(x)=1\ \mathrm{in}\ [0,\ 1]. \tag{2.9}\] We call \(\rho(x)\) the _natural invariant density_. Of course, if \(x_{0}\) is chosen to lie exactly on an unstable periodic point, the orbit does not generate a uniform density.
We emphasize, however, that such points have zero probability when \(x_{0}\) is chosen randomly. The uniform invariant density generated by orbits from typical initial conditions for the tent and \(2x\) modulo 1 maps is an example of a _natural measure_, a concept which we discuss in Section 2.3.3.

### 2.2 The logistic map

In this section, we consider the logistic map, Eq. (1.8), \[x_{n+1}=rx_{n}(1-x_{n}). \tag{2.10}\] As pointed out by May (1976), this map may be thought of as a simple idealized ecological model for the yearly variations in the population of an insect species. Imagine that every spring these insects hatch out of eggs laid the previous fall; they eat, grow, mature, mate, lay eggs, and then die. Assuming constant conditions each year (same weather, predator population, etc.), the population at year \(n\) uniquely determines the population at year \(n+1\). Thus a one dimensional map applies. Say that the number \(z_{n}\) of insects hatching out of eggs is not too large. Then we can imagine that for each insect, on average, there will be \(r\) eggs laid, each of which hatches at year \(n+1\). This yields a population at year \(n+1\) of \(z_{n+1}=rz_{n}\). Assuming \(r>1\), this yields an exponentially increasing population \(z_{n}=r^{n}z_{0}\). However, if the population is too large, the insects may begin to exhaust their food supply as they eat and grow. Thus some insects may die before they reach maturity. Hence the average number of eggs laid per hatched insect will become less than \(r\) as \(z_{n}\) is increased. The simplest possible assumption incorporating this overcrowding effect would be to say that the number of eggs laid per insect decreases linearly with the insect population, \(r[1-(z_{n}/\overline{z})]\), where \(\overline{z}\) is the insect population at which the insects exhaust all their food supply such that none of them reach maturity and lay eggs. This yields the one dimensional map \(z_{n+1}=rz_{n}[1-(z_{n}/\overline{z})]\).
Dividing through by \(\overline{z}\) and letting \(x=z/\overline{z}\), we obtain the logistic map Eq. (10). We now examine the dynamics of the logistic map. In particular, we shall be concerned with the question of how the character of the orbits originating from typically chosen initial conditions changes as the parameter \(r\) is varied. The map function is shown in Figure 2.8. The maximum of \(M(x)\) occurs at \(x=\frac{1}{2}\) and is \(M(\frac{1}{2})=r/4\). Thus, for \(0\le r\le 4\), if \(x_{n}\) is in \([0,\,1]\), then so is \(x_{n+1}\), and the orbit remains in \([0,\,1]\) for all subsequent time. If \(r>1\), then \(M^{\prime}(0)>1\) and the fixed point at \(x=0\) is unstable. Also, if \(r>1\), then \(M(x)<x\) for negative \(x\), implying that (as for the tent map) negative initial conditions and initial conditions in \(x>1\) generate orbits which tend to \(-\infty\) with increasing time. In this section, we restrict our considerations to \(4\ge r\ge 1\) and \(x\) in the interval \([0,\,1]\). First, consider the case \(r=4\), \[x_{n+1}=4x_{n}(1-x_{n}). \tag{11}\] In this case, a change of variables transforms the logistic map into the tent map, Eq. (1). For \(x\) in the interval \([0,\,1]\), define \(y\), also in \([0,\,1]\), by \[x=\sin^{2}\biggl{(}\frac{\pi y}{2}\biggr{)}=\frac{1}{2}[1-\cos(\pi y)]. \tag{12}\] (See Figure 2.6.) Substituting in (11), we obtain \(\sin^{2}(\pi y_{n+1}/2)=1-\cos^{2}(\pi y_{n})=\sin^{2}(\pi y_{n})\). This yields \((\pi y_{n+1}/2)=\pm(\pi y_{n})+s\pi\), where \(s\) is an integer. Recalling that \(y\) is defined to lie in \([0,\,1]\) determines the choice of \(s\) and the sign of the term \(\pi y_{n}\). Thus, we obtain \(y_{n+1}=2y_{n}\) (i.e., the plus sign and \(s=0\)) for \(0\le y_{n}\le\frac{1}{2}\), and \(y_{n+1}=2-2y_{n}\) (i.e., the minus sign and \(s=1\)) for \(\frac{1}{2}\le y_{n}\le 1\). This is just the tent map. Since the tent map is chaotic, so too must be the logistic map at \(r=4\).
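The change of variables (12) can be checked numerically: applying the \(r=4\) logistic map to \(x(y)\) must give the same result as first applying the tent map to \(y\) and then changing variables. A minimal sketch (the function names are ours):

```python
import math

def logistic4(x):
    # The r = 4 logistic map, Eq. (11)
    return 4 * x * (1 - x)

def tent(y):
    # The tent map: y -> 2y for y < 1/2, y -> 2 - 2y for y >= 1/2
    return 2 * y if y < 0.5 else 2 - 2 * y

# Conjugacy check: logistic4(x(y)) == x(tent(y)) with x = sin^2(pi y / 2)
for y in [0.1, 0.237, 0.5, 0.7, 0.9]:
    x = math.sin(math.pi * y / 2) ** 2
    assert abs(logistic4(x) - math.sin(math.pi * tent(y) / 2) ** 2) < 1e-12
```

Since the change of variables is smooth and invertible, statements about typical orbits of the tent map carry over directly to the logistic map at \(r=4\).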
Also, the logistic map at \(r=4\) must have the same number of unstable periodic orbits as does the tent map. Since periodic orbits are dense in \([0,\,1]\) for the tent map, they are also dense in \([0,\,1]\) for the logistic map at \(r=4\). In addition, for \(r=4\) typical (e.g., randomly chosen) initial conditions \(x_{0}\) in \([0,\,1]\) yield orbits which generate a smooth invariant density of points as for the tent map. Let \(\rho(x)\) denote the natural invariant density for the logistic map with \(r=4\), and let \(\tilde{\rho}(y)\) be the natural invariant density for the tent map. According to Eq. (9), \(\tilde{\rho}(y)=1\) for \(0\le y\le 1\). To find \(\rho(x)\), we make use of the fact that the interval in \(y\), \([y,\,y+\mathrm{d}y]\), and the corresponding interval in \(x\), \([x,\,x+\mathrm{d}x]\), obtained by applying the change of variables, must be visited by typical orbits from their respective maps with the same frequency. Thus, \(\rho(x)|\mathrm{d}x|=\tilde{\rho}(y)|\mathrm{d}y|\) or \[\rho(x)=\left|\frac{\mathrm{d}y(x)}{\mathrm{d}x}\right|\tilde{\rho}(y(x)).\] Using (9) for \(\tilde{\rho}(y)\) and (12) for \(y(x)\), we obtain the natural invariant density for the logistic map at \(r=4\), \[\rho(x)=\pi^{-1}/[x(1-x)]^{1/2}. \tag{13}\] This density is graphed in Figure 2.7. Note the singularities at \(x=0\) and \(x=1\); \(\rho\sim 1/x^{1/2}\) near \(x=0\) and \(\rho\sim 1/(1-x)^{1/2}\) near \(x=1\). Having determined that there is chaos at \(r=4\), let us now examine smaller values of \(r\). For \(r\neq 0\), the logistic map has two fixed points given by the two solutions of \(x=rx(1-x)\), namely, \(x=0\) and \(x=1-1/r\). As already mentioned, the \(x=0\) fixed point is unstable for \(r>1\). Noting that \(M^{\prime}(1-1/r)=2-r\), we see that \(x=1-1/r\) is stable (\(|2-r|<1\)) for \(3>r>1\). Hence, \(x=1-1/r\) is a fixed point attractor in this range.
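The prediction (13) can be tested by comparing orbit histograms against the analytic fraction \(\int_{a}^{b}\rho(x)\,\mathrm{d}x=(2/\pi)\bigl[\sin^{-1}\sqrt{b}-\sin^{-1}\sqrt{a}\bigr]\). A minimal numerical sketch (the orbit lengths, bins, and transient cutoff are our own choices):

```python
import math, random

# Fraction of iterates of the r = 4 logistic map falling in a few intervals
# [a, b], compared with the analytic prediction from Eq. (13).
random.seed(1)
samples = []
for _ in range(100):                 # average over several typical orbits
    x = random.random()
    for i in range(3000):
        x = 4 * x * (1 - x)
        if i >= 100:                 # discard the transient
            samples.append(x)

for a, b in [(0.0, 0.1), (0.45, 0.55), (0.9, 1.0)]:
    predicted = (2 / math.pi) * (math.asin(math.sqrt(b)) - math.asin(math.sqrt(a)))
    observed = sum(a <= s < b for s in samples) / len(samples)
    assert abs(observed - predicted) < 0.01
```

Note that the density piles up near the endpoints: the intervals \([0,\,0.1]\) and \([0.9,\,1]\) are each visited roughly twice as often as \([0.45,\,0.55]\), reflecting the singularities of \(\rho(x)\).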
Furthermore, it may be shown that there are no periodic orbits with periods \(p>1\) for \(3>r\), and that, in the range \(3>r>1\), any initial condition \(x_{0}\) which satisfies \(1>x_{0}>0\) approaches the attractor at \(x=1-1/r\). We say that \([0,\,1]\) is the _basin of attraction_ of the attractor \(x=1-1/r\). We have seen that there is chaos and an infinite number of unstable periodic orbits which are dense in \([0,\,1]\) at \(r=4\). For \(1<r<3\), there are only two periodic orbits of period one and no chaos.

Figure 2.6: The change of variables (12).

How is the infinite number of periodic orbits at \(r=4\) and the accompanying chaotic dynamics created as \(r\) is increased continuously from \(r=3\) to \(r=4\)? This is a question we shall be addressing in some detail. To begin, it is instructive to consider the two times iterated logistic map \(M^{2}\) as shown in Figure 2.8. The fixed point of \(M\) at \(x=x_{*}\equiv 1-1/r\) is also a fixed point of \(M^{2}\), and the slope of \(M^{2}(x)\) at \(x=x_{*}\) is \([M^{\prime}(x_{*})]^{2}=(2-r)^{2}\). Thus, as \(r\) increases through \(r=3\), the orbit \(x=x_{*}\) becomes unstable (\(M^{\prime}(x_{*})\) decreases through \(-1\)), and, simultaneously, the slope of \(M^{2}(x)\) at \(x_{*}\) increases from below one to above one. As shown in Figure 2.8, this leads to the creation of two new fixed points of \(M^{2}\). Since these two new fixed points of \(M^{2}\) are not fixed points of \(M\), they must lie on a period two orbit. Thus, at precisely the point where the period one fixed point \(x=x_{*}\) becomes unstable, a period two orbit is created. Furthermore, this period two orbit is stable when it is created. This can be seen from the fact that, for \(r\) slightly larger than 3, the magnitude of the slope of \(M^{2}(x)\) at the period two points is necessarily less than 1.
The situation is schematically illustrated in Figure 2.8(\(c\)) which shows the solutions of \(M^{2}(x)=x\) as a function of \(r\), with the solutions corresponding to stable orbits shown as solid lines, and the unstable solution (\(x=x_{*}\) for \(r>3\)) shown as a dashed line. The change in the orbit structure, illustrated in Figure 2.8, is called a _period doubling bifurcation_. Another way to visualize the occurrence of the period doubling bifurcation is shown in Figure 2.9. First, we note that the orbit starting from \(x_{n}\) can be obtained by the graphical interpretation of the map relation, \(x_{n+1}=M(x_{n})\), shown in Figure 2.9(_a_). Starting at some value \(x_{n}\) on the horizontal axis, the vertical dashed line in Figure 2.9(_a_) locates the point (\(x_{n}\), \(x_{n+1}\)). Then, going along the horizontal line from the point (\(x_{n}\), \(x_{n+1}\)) to the \(45^{\circ}\) line \(M(x)=x\), we locate the point (\(x_{n+1}\), \(x_{n+1}\)). Going vertically from this point to the curve \(M(x)\), we locate the point (\(x_{n+1}\), \(x_{n+2}\)). Again, going horizontally to the \(45^{\circ}\) line, we come to the point (\(x_{n+2}\), \(x_{n+2}\)). Proceeding in this way we can generate a sequence of orbit points. Using this type of construction, Figure 2.9(_b_) shows convergence to the period one fixed point \(x=x_{*}\) for \(1<r<3\), and Figure 2.9(_c_) shows convergence to the period two orbit for \(r\) a little larger than 3. In fact, when the stable period two orbit exists, it attracts typical (in the sense of randomly chosen) initial conditions in [0, 1]. Another way of saying this is that it attracts all points in [0, 1] except for a set of Lebesgue measure zero (roughly, the nonattracted set has zero length).

Figure 2.7: The invariant density \(\rho(x)\) generated by typical orbits of the logistic map for \(r=4\).
(A compact set \(A\) is of Lebesgue measure zero if, for any \(\varepsilon>0\), we can cover \(A\) with a finite number of intervals whose total length is less than \(\varepsilon\).) The set of points in [0, 1] not attracted to the period two orbit is just the three points \(x=0\) and \(x=x_{*}\) (which are unstable fixed points) and \(x=1\) (which maps to \(x=0\) on one iterate). We have seen that the period one orbit has a stability coefficient \(\lambda_{1}=M^{\prime}(x_{*})=2-r\) which is 1 at \(r=1\) and decreases to \(\lambda_{1}=-1\) at \(r=3\), the point of the period doubling bifurcation. The period two orbit has a stability coefficient \(\lambda_{2}=M^{\prime}(e)M^{\prime}(f)=(M^{2})^{\prime}(e)=(M^{2})^{\prime}(f)\) (where \(x=e\), \(f\) are the points on the period two orbit), and \(\lambda_{2}=1\) for \(r=3\). As \(r\) increases past \(r=r_{0}=3\), the stability coefficient \(\lambda_{2}\) decreases from \(\lambda_{2}=1\), eventually becoming negative (e.g., the slope of \(M^{2}(x)\) at \(e\) and \(f\) has decreased to negative values in Figure 2.8(\(b\))). At some value \(r=r_{1}\), the quantity \(\lambda_{2}\) becomes \(-1\), and for \(r>r_{1}\), we have \(\lambda_{2}<-1\); hence, the period two orbit is unstable (\(|\lambda_{2}|>1\)). Thus, \(x=x_{*}\) is stable in a range \(1<r<r_{0}=3\), while the period two orbit is stable in a range \(r_{0}<r<r_{1}\). As \(r\) increases through \(r_{1}\), the period two orbit period doubles to a period four orbit. In doing this, the picture is essentially the same as for the period doubling at \(r=r_{0}\), except that the roles of \(M\) and \(M^{2}\) in our interpretation of the original period doubling are now played by \(M^{2}\) and \(M^{4}\). Increasing \(r\) from \(r_{1}\) to a value \(r_{2}\), the stability coefficient of the period four orbit decreases from \(\lambda_{4}=1\) to \(\lambda_{4}=-1\), past which point a stable period eight orbit appears, and remains stable in a range of \(r\) above \(r_{2}\).
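These stability statements can be made concrete. Factoring the fixed points out of \(M^{2}(x)=x\) leaves a quadratic whose roots are the period two points, \(e,f=[(r+1)\mp\sqrt{(r+1)(r-3)}]/(2r)\), and a short calculation gives \(\lambda_{2}=-r^{2}+2r+4\), so that \(\lambda_{2}=-1\) at \(r_{1}=1+\sqrt{6}\approx 3.449\). A sketch checking this numerically (the helper names are ours):

```python
import math

def period_two_points(r):
    """Period two points of the logistic map (valid for r > 3)."""
    s = math.sqrt((r + 1) * (r - 3))
    return (r + 1 - s) / (2 * r), (r + 1 + s) / (2 * r)

def lam2(r):
    # lambda_2 = M'(e) M'(f) with M'(x) = r (1 - 2x)
    e, f = period_two_points(r)
    return r * (1 - 2 * e) * r * (1 - 2 * f)

# e and f map to each other under M
r = 3.2
e, f = period_two_points(r)
assert abs(r * e * (1 - e) - f) < 1e-12
assert abs(r * f * (1 - f) - e) < 1e-12
# lambda_2 = 1 at r_0 = 3 and lambda_2 = -1 at r_1 = 1 + sqrt(6)
assert abs(lam2(3.0) - 1.0) < 1e-9
assert abs(lam2(1 + math.sqrt(6)) + 1.0) < 1e-9
```

At \(r=3\) the two period two points coincide with \(x_{*}=2/3\), consistent with the period doubling picture above.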
This process of period doublings continues, successively producing an infinite cascade of period doublings with ranges \(r_{m-1}<r<r_{m}\) in which period \(2^{m}\) orbits are stable. The length in \(r\) of the range of stability for an orbit of period \(2^{m}\) decreases approximately geometrically with \(m\). In particular (Feigenbaum, 1978, 1980a) \[\frac{r_{m}-r_{m-1}}{r_{m+1}-r_{m}}\to 4.669201\,\ldots\,\equiv\delta \tag{14}\] as \(m\to\infty\). Also, there is an accumulation point of an infinite number of period doubling bifurcations at a finite \(r\) value denoted \(r_{\infty}\), \[r_{\infty}\equiv\lim_{m\to\infty}r_{m}=3.57\,\ldots\,. \tag{15}\] Equation (14) implies that for large \(m\) \[|r_{\infty}-r_{m}|\simeq(\mathrm{const.})\,\delta^{-m}. \tag{16}\] Figure 2.10(\(a\)) shows a schematic plot of the points on the stable \(2^{m}\) cycle as a function of \(r\). (Orbits of period larger than eight are not shown in this figure because their stability ranges become so tiny.) Figure 2.10(\(b\)) shows a similar plot, but now the horizontal coordinate is replaced by \(-\log(r_{\infty}-r)\). On this latter plot, the horizontal distance between successive period doublings approaches a constant as \(r\) approaches \(r_{\infty}\); namely, it approaches \(\log\delta\). There is another scaling ratio in addition to \(\delta\) that one can define. For this purpose, it is useful to define a _superstable_ period \(2^{m}\) orbit as occurring at that value of \(r\) (denoted \(\hat{r}_{m}\)) at which the stability coefficient for the period \(2^{m}\) orbit is zero. Recall that the stability coefficient decreases from \(1\) to \(-1\) as \(r\) goes from \(r_{m-1}\) to \(r_{m}\), so that \(\hat{r}_{m}\) may, in some sense, be regarded as the middle of the range of the stable period \(2^{m}\) orbit. Since the stability coefficient is zero at \(\hat{r}_{m}\), we see from Eq.
(7) that it must be the case that the _critical point_ (defined as the point at the maximum of \(M\), \(M^{\prime}(x)=0\)), \(x=\frac{1}{2}\), is a point on the superstable orbit. Let \(\Delta_{m}\) be the distance between this point and the nearest to it of the other \(2^{m}-1\) points in the cycle. This nearest point turns out to be the point which is one half period displaced from the critical point, \[\Delta_{m}=M^{2^{m-1}}(\tfrac{1}{2})-\tfrac{1}{2}. \tag{17}\] It is found that for large \(m\), the quantity \(\Delta_{m}\) decreases geometrically (Feigenbaum, 1978, 1980a) \[\Delta_{m}/\Delta_{m+1}\to-2.5029\,\ldots\,\equiv-\alpha. \tag{18}\] (The minus sign in Eq. (18) signifies that the nearest orbit point to \(x=\frac{1}{2}\) switches between below \(\frac{1}{2}\) and above \(\frac{1}{2}\) each time \(m\) is increased by 1.) As we shall discuss later (Chapter 8), the scaling numbers \(\delta\) and \(\alpha\) appearing in Eqs. (14) and (18) were shown by Feigenbaum to be _universal_ in the sense that they apply not only to the logistic map, but to any typical dissipative system which undergoes a period doubling cascade. Furthermore, these scaling numbers have been verified in experiments on a variety of physical systems, including ones where the describing dynamical system is infinite dimensional (e.g., fluid flows). Thus, the result for a one dimensional map is found to apply to systems with arbitrarily high dimensionality. What happens beyond \(r=r_{\infty}\) (i.e., in the range \(r_{\infty}\le r\le 4\))? To answer this question, Figure 2.11(\(a\)) shows the _bifurcation diagram_ for the logistic map for \(r\) values up to \(r=4\). This diagram is computed in the following way: for each of many closely spaced values of \(r\), an orbit is generated from a typical initial condition, the first 500 iterates are discarded, and the subsequent iterates are plotted vertically above the corresponding \(r\) value. This procedure is followed until \(r=4\) is reached. The reason for not plotting the first 500 iterates is that we wish our plot to show the orbit on (or very close to) the attractor and not the transient motion leading to it.
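The recipe just described is easy to implement. The sketch below follows it for a single \(r\) value and counts the distinct attractor points; the transient length matches the text, while the tolerance and initial condition are our own choices:

```python
def attractor_samples(r, n_transient=500, n_keep=100, x0=0.4):
    """Iterate the logistic map, discard the first n_transient iterates,
    and keep the next n_keep as (approximate) attractor points."""
    x = x0
    for _ in range(n_transient):
        x = r * x * (1 - x)
    pts = []
    for _ in range(n_keep):
        x = r * x * (1 - x)
        pts.append(x)
    return pts

def distinct(pts, tol=1e-6):
    # Collapse points that agree to within tol
    vals = []
    for p in pts:
        if all(abs(p - v) > tol for v in vals):
            vals.append(p)
    return vals

assert len(distinct(attractor_samples(2.8))) == 1   # fixed point attractor
assert len(distinct(attractor_samples(3.2))) == 2   # period two
assert len(distinct(attractor_samples(3.5))) == 4   # period four
```

Sweeping \(r\) over a fine grid and plotting the returned samples against \(r\) produces a bifurcation diagram of the kind shown in Figure 2.11(\(a\)).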
Thus, the figure essentially shows the attracting set in \(x\) as a function of the parameter \(r\). For example, at \(r=4\), the orbit fills the entire interval [0, 1]. For a value of \(r\) slightly less than 4, the attractor is a single interval contained within [0, 1], as shown in Figure 2.11(\(b\)). As for \(r=4\), this orbit is apparently chaotic, but its motion is restricted to this smaller interval. As \(r\) is decreased through a value \(r_{0}^{\prime}\) labeled in Figure 2.11(\(a\)), the attractor splits into two bands, as shown in Figure 2.11(\(c\)). At the value of \(r\) corresponding to Figure 2.11(\(c\)), the orbit on the attractor alternates between the two bands on every iterate. If one were to examine the orbit on every second iterate, then the orbit would always be in the same one of the two bands and would undergo an apparently chaotic sequence restricted to that band, eventually coming arbitrarily close to every point in the band. As \(r\) is decreased from the situation shown in Figure 2.11(\(c\)), the two band attractor splits into a four band attractor at \(r=r_{1}^{\prime}\), into an eight band attractor at \(r=r_{2}^{\prime}\), and so on. The band doublings accumulate on \(r_{\infty}\) from above with the same geometric scaling as for the accumulation of period doublings on \(r_{\infty}\) from below, \[\frac{r_{m-1}^{\prime}-r_{m}^{\prime}}{r_{m}^{\prime}-r_{m+1}^{\prime}}\rightarrow\delta, \tag{19}\] where \(\delta\) is the same universal number as in Eq. (14). In addition to apparently chaotic orbits, Figure 2.11(\(a\)) also shows that there are narrow ranges within \(r_{\infty}<r<4\) in which the attracting orbit is periodic. For example, the widest such range is occupied by the period three orbit.

Figure 2.11: (\(a\)) Bifurcation diagram. (\(b\)) A single band chaotic attractor. (\(c\)) A two band chaotic attractor.
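Periodic windows such as this one can be located numerically by estimating the period of the attracting orbit: iterate past the transient and look for a near return. A minimal sketch (the specific \(r\) values tested and the tolerances are our own choices; the period three window is known to begin near \(r\approx 3.83\)):

```python
def orbit_period(r, max_period=64, n_transient=2000, tol=1e-8):
    """Estimate the period of the attracting orbit of the logistic map,
    or return None if no short period is found (apparently chaotic)."""
    x = 0.4
    for _ in range(n_transient):
        x = r * x * (1 - x)
    x0 = x
    for p in range(1, max_period + 1):
        x = r * x * (1 - x)
        if abs(x - x0) < tol:
            return p
    return None

assert orbit_period(3.83) == 3      # inside the period three window
assert orbit_period(3.82) is None   # just below the window: apparently chaotic
```

The same routine applied on a fine grid of \(r\) values picks out the narrow periodic windows interspersed through the chaotic range.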
A blow up of the bifurcation diagram in the range where the period three orbit occurs is shown in Figure 2.12(\(a\)). We see that the period three orbit is born at \(r=r_{*}\); undergoes a period doubling cascade in which orbits of period \(3\times 2^{m}\) are successively produced; becomes chaotic and undergoes a cascade of band mergings (\(3\times 2^{m}\) bands \(\rightarrow 3\times 2^{m-1}\) bands), until a range of \(r\) is reached where chaos apparently appears in three bands. Finally, at \(r=r_{\rm c3}\), the attractor abruptly widens into a single band similar in size to that before the stable period three orbit came into existence. We call the range of \(r\) values between the point where the period three orbit is born and the point where the three bands widen into one band a _period three window_. There are an infinite number of windows of arbitrarily high period within the chaotic range \(4>r>r_{\infty}\). For example, there are \((2^{p}-2)/(2p)\) windows of period \(p\) if \(p\) is prime (the reason for this will be given subsequently). Each period \(p\) window essentially contains a replication of the bifurcation diagram for the map over its whole range (e.g., Fig. 2.12(\(b\))). Thus, the windows themselves have windows, which themselves have windows, etc. For example, a window of period \(p=9=3\times 3\) is discernible in Figure 2.12. As shown by Yorke _et al._ (1985) the bifurcation diagram within typical high period windows becomes universal (map independent) as the period is increased. For example, the ratio of the width in the parameter of a window to the parameter difference between the initiation of the window and the occurrence of the first period doubling in the window universally approaches a fixed constant as the period is increased. It has been shown that the windows are dense throughout the chaotic range.
In other words, given a value of \(r\) for which the orbit is chaotic, one can always find windows in any \(\varepsilon\) neighbourhood \([r-\varepsilon,\,r+\varepsilon]\) of that \(r\) value, no matter how small \(\varepsilon\) is. We can give a heuristic argument for why windows should be dense in the chaotic range of \(r\) values, as follows. Say we have a chaotic orbit at \(r=\tilde{r}\), and we wish to argue that there is a stable periodic orbit (and hence a window) in the \(r\) interval \([\tilde{r}-\varepsilon,\,\tilde{r}+\varepsilon]\) for any \(\varepsilon\). We assume that the initial condition \(x_{0}=\frac{1}{2}\) behaves like a typical initial condition and hence generates an orbit which comes arbitrarily close to every point on the attractor. (This appears to be true except at special \(r\) values, one of which is \(r=4\), for which \(M(\frac{1}{2})=1\), \(M(1)=0\), \(M(0)=0\).) Thus, if we wait long enough, at some time \(n=\tau\), the orbit will fall very close to \(\frac{1}{2}\). Since \(x_{\tau}=M^{\tau}(\frac{1}{2})\), we can regard \(x_{\tau}\) as purely a function of \(r\). Since \(x_{\tau}\) is close to \(x=\frac{1}{2}\), only a small change of \(r\), say \(\delta r\), should be required to shift \(x_{\tau}\) to \(\frac{1}{2}\). At \(r=\tilde{r}+\delta r\) we would then necessarily have a stable (in fact, superstable) periodic orbit of period \(\tau\). The orbit is stable by Eq. (2.7) because \(M^{\prime}(\frac{1}{2})=0\), and \(x=\frac{1}{2}\) is a point on the orbit. If \(|\delta r|\) turns out to exceed \(\varepsilon\), we can wait longer and find another \(x_{\tau}\) that is much closer to \(x=\frac{1}{2}\) (indeed, arbitrarily close).

Figure 2.12: (\(a\)) The period three window. (\(b\)) A blow up of the middle section of the period three window from a value of \(r\) before the period doubling from period 12 to period 24 to \(r=r_{c}\). The details reproduce, almost exactly, the features of Figure 2.11(\(a\)) (from Yorke _et al._, 1985).
Hence, we should be able to make \(|\delta r|\) as small as we wish, and we thus believe that the windows are dense. Given that stable periodic attractors are dense in \(r\), one might question whether there is any room in \(r\) left for chaotic attractors to exist in. That is, if we choose a value of \(r\) randomly according to a uniform probability distribution in \([r_{\infty},\,4]\), is the probability zero that our choice yields chaos? From our bifurcation diagrams it certainly appears that we often see \(r\) values where the orbits are apparently not periodic. Nevertheless, one might argue that these are only periodic orbits of extremely large period. The question has been settled by the proof of Jakobson (1981), which shows that the probability of choosing a chaotic \(r\) is not zero. Hence, chaos for this map is said to be 'typical.' Nevertheless, it may still seem strange that \(r\) intervals of nonchaotic (i.e., periodic) attractors are dense, yet the set of \(r\) values yielding chaotic orbits still has positive probability. Thus it may be useful at this point to give a simple example of a set with these characteristics. The example is as follows. Say we consider the rational numbers in the interval [0, 1]. These numbers are dense in [0, 1], since any irrational can be approximated by a rational to arbitrary accuracy. The rationals are also countable, since we can arrange them in a linear ordering (such as \(\frac{1}{2},\,\frac{1}{3},\,\frac{2}{3},\,\frac{1}{4},\,\frac{3}{4},\,\frac{1}{5},\,\frac{2}{5},\,\frac{3}{5},\,\frac{4}{5},\,\frac{1}{6},\,\frac{5}{6},\,\frac{1}{7},\,\ldots\)). To the \(n\)th rational on this list (denoted \(s_{n}\)), we now associate an interval \(I_{n}=(s_{n}-(\eta/2)(\frac{1}{2})^{n},\,s_{n}+(\eta/2)(\frac{1}{2})^{n})\) of length \(2^{-n}\eta\).
We are interested in the set \(S_{*}\) formed by taking the interval [0, 1] and then successively removing \(I_{1}\), \(I_{2}\), \(I_{3}\), \(\ldots\), \(I_{n}\) in the limit \(n\rightarrow\infty\). Since the total length of all the removed intervals is \(\sum_{n=1}^{\infty}(\frac{1}{2})^{n}\eta=\eta\), we have that the Lebesgue measure ('length') of \(S_{*}\), denoted \(\mu(S_{*})\), satisfies \[\mu(S_{*})>1-\eta, \tag{2.20}\] which is positive if \(\eta<1\). (The greater than symbol, rather than an equals sign, appears in (2.20) because some of the removed intervals overlap.) Note that the Lebesgue measure of \(S_{*}\) is also the probability of randomly choosing a point in \(S_{*}\) from points in [0, 1]. Thus, for \(\eta<1\), for any point in the set \(S_{*}\), an \(\varepsilon\) neighborhood always contains intervals \(I_{n}\) (which by definition are not in \(S_{*}\)), yet \(S_{*}\) has positive Lebesgue measure. The set \(S_{*}\) is an example of a _Cantor set_ of positive (as opposed to zero) Lebesgue measure. Cantor sets, both of zero Lebesgue measure and of positive Lebesgue measure, are common in chaotic dynamics. The appendix to this chapter reviews some elementary material on sets, including Lebesgue measure and Cantor sets. We now ask, what is the mechanism by which the stable period \(p\) orbit initiating the period \(p\) window arises as \(r\) is increased? To answer this question, we consider the example of the period three window. Figure 2.13(\(a\)) shows the third iterate of the map, \(M^{3}(x)\), as a function of \(x\) for \(r\) below \(r_{*}\) (solid curve) and for \(r\) above \(r_{*}\) (dashed curve). At \(r=r_{*}\), the graph of \(M^{3}(x)\) becomes tangent to the line \(x_{n+3}=x_{n}\) at three points, near the first and second minima and the fourth maximum of \(M^{3}(x)\). For slightly larger \(r\), the graph of \(M^{3}(x)\) intersects \(x_{n+3}=x_{n}\) at two points near each of the three tangencies that occurred for \(r=r_{*}\).
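The appearance of the six new intersections can be confirmed by counting solutions of \(M^{3}(x)=x\) on either side of the tangency, which for the logistic map occurs at \(r_{*}=1+\sqrt{8}\approx 3.8284\). A sketch that counts sign changes on a grid (adequate here because, away from the tangency, the roots are simple and well separated; the grid resolution is our own choice):

```python
def m3(x, r):
    # Third iterate M^3 of the logistic map
    for _ in range(3):
        x = r * x * (1 - x)
    return x

def count_m3_fixed_points(r, n=100_000):
    """Count solutions of M^3(x) = x in [0, 1] via sign changes of
    M^3(x) - x on a grid; x = 0 (a root at the boundary) is counted by hand."""
    count = 1                          # x = 0 is always a fixed point
    prev = m3(0.5 / n, r) - 0.5 / n    # start just inside (0, 1]
    for i in range(1, n + 1):
        x = i / n
        cur = m3(x, r) - x
        if prev * cur < 0:
            count += 1
        prev = cur
    return count

assert count_m3_fixed_points(3.81) == 2   # below r_*: only the fixed points of M
assert count_m3_fixed_points(3.84) == 8   # above r_*: two period three orbits added
```

Below \(r_{*}\) the only solutions are the two fixed points of \(M\) itself; above \(r_{*}\) six new solutions appear in pairs, one pair near each tangency.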
The slope at three of the intersections is less than 1 and hence represents the stable period three orbit. The slope at the other three intersections is greater than 1 and represents an unstable period three orbit. Thus, as \(r\) increases through \(r_{*}\), we simultaneously create a stable attracting period three orbit and an unstable period three orbit. Figure 2.13(\(b\)) schematically shows the graph of \(M^{3}(x)\) near the middle minimum at \(x=\frac{1}{2}\) for five successively larger values of \(r\). The situation is illustrated in Figure 2.14 (only the \(x\) coordinates of the period three orbit points near \(x=\frac{1}{2}\) are plotted). This type of phenomenon is called a _tangent bifurcation_. As we shall see in Sections 2.3.1 and 2.3.2, the occurrence of windows as \(r\) is increased proceeds in a very regular and general order.

Figure 2.13: (\(a\)) \(x_{n+3}\) versus \(x_{n}\) for \(r\) slightly below \(r_{*}\) (solid curve) and for \(r\) slightly above \(r_{*}\) (dashed curve). (\(b\)) Schematic of \(M^{3}(x)\) versus \(x\) near \(x=\frac{1}{2}\).

### 2.3 General discussion of smooth one-dimensional maps

#### 2.3.1 General bifurcations of smooth one-dimensional maps

A qualitative change in the dynamics which occurs as a system parameter varies is called a _bifurcation_. In this subsection we shall be concerned with bifurcations of _smooth_ one dimensional maps which depend smoothly on a single parameter \(r\). (We say a function is smooth if it is continuous and several times differentiable for all values of its argument.) To emphasize the parameter dependence we write the map function as \(M(x,\,r)\), with the logistic map, Eq. (1.8), as a specific example. Without loss of generality, we shall consider bifurcations of period one orbits (i.e., fixed points). (Bifurcations of a period \(p\) orbit can be reduced to consideration of a period one orbit by shifting attention to the \(p\) times iterated map \(M^{p}\) for which each point on the period \(p\) orbit is a fixed point.)
We say a bifurcation is _generic_ if the basic character of the bifurcation cannot be altered by arbitrarily small perturbations that are smooth in \(x\) and \(r\). That is, if \(M(x,\,r)\) is replaced by \(M(x,\,r)+\varepsilon g(x,\,r)\), where \(g\) is smooth, then if \(\varepsilon\) is small enough, the qualitative bifurcation behavior is unchanged. There are three generic types of bifurcations of smooth one dimensional maps: the period doubling bifurcation, the tangent bifurcation, and the inverse period doubling bifurcation. These are illustrated in Figure 2.15, where the parameter \(r\) is taken as increasing to the right, and we have defined forward and backward senses for each bifurcation. Dashed lines are used for unstable orbits and solid lines for stable orbits. Thus, for example, in the forward inverse period doubling bifurcation, an initially unstable period one orbit bifurcates into an unstable period two and a stable period one orbit. (The forward period doubling bifurcation and forward tangent bifurcation have already been discussed in the previous section in the context of the logistic map.) Figure 2.16 shows how the three forward bifurcations can occur as the shape of the map\({}^{4}\) changes with increasing \(r\). Note that in all three cases the magnitude of the stability coefficient of the period one orbit \(|M^{\prime}|\) is 1 at the bifurcation point. In order to get a better idea of the meaning of the word generic as applied to bifurcations, we now give an example of a nongeneric bifurcation. We consider the bifurcation of the logistic map at \(r=1\). The logistic map has two fixed points, \(x=0\) and \(x=x_{*}=1-1/r\). These fixed points coincide at \(r=1\).

Figure 2.14: Tangent bifurcation. The stable branch is shown as a solid line and the unstable branch is shown as a dashed line.

Figure 2.15: Generic bifurcations of differentiable one dimensional maps. The system parameter \(r\) increases toward the right. The vertical scale represents the value of the map variable. Dashed lines represent unstable orbits. Solid lines represent stable orbits.

For \(r\) slightly less than \(1\), \(x=0\) is stable and \(x=x_{*}\) is unstable. For \(r\) slightly greater than \(1\), the stability characteristics of the two fixed points are interchanged. Thus the bifurcation diagram near \(r=1\) is as shown in Figure 2.17(_a_). Now say we perturb the logistic map by adding to it a small number \(\varepsilon\), \[M(x;\;r)=rx(1-x)+\varepsilon. \tag{2.21}\] For \(\varepsilon\) positive, we obtain the picture in Figure 2.17(_b_), in which there is no bifurcation at all, but only a continuous change of the two fixed points with increasing \(r\). For \(\varepsilon\) negative, we obtain two tangent bifurcations as shown in Figure 2.17(_c_). Nongeneric bifurcations require 'special' conditions on the map function, and one therefore expects that they would be unlikely to occur in applications. For example, the nongeneric bifurcation of the logistic map at \(r=1\) occurs because the map satisfies the special condition \(M(0,\;r)=0\) _for all_ \(r\). We now ask how the infinite number of unstable periodic orbits for the logistic map at \(r=4\) is created as \(r\) is increased to \(r=4\) from values of \(r\) in the range \(3>r>1\) (where the only periodic orbits are the fixed points (period one orbits) \(x=0\) and \(x=x_{*}\)). The key point (which we will not demonstrate) is that the logistic map can be shown to have no backward bifurcations and no inverse period doublings. Hence the answer to our question lies in the character of the forward period doubling and tangent bifurcations (the only bifurcations of the logistic map in \(r>1\)). Namely, as \(r\) is increased a tangent bifurcation _creates_ two new orbits, one stable and one unstable.
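For the perturbed map (2.21) the fixed point structure can be worked out exactly: fixed points solve the quadratic \(rx^{2}-(r-1)x-\varepsilon=0\), which has real roots only when the discriminant \((r-1)^{2}+4r\varepsilon\) is nonnegative; setting the discriminant to zero for \(\varepsilon<0\) gives the two tangent bifurcation values \(r=1+2|\varepsilon|\pm 2\sqrt{|\varepsilon|+|\varepsilon|^{2}}\). A sketch (the helper names are ours):

```python
import math

def fixed_points(r, eps):
    """Real fixed points of x -> r x (1 - x) + eps,
    i.e. real roots of r x^2 - (r - 1) x - eps = 0."""
    disc = (r - 1) ** 2 + 4 * r * eps
    if disc < 0:
        return []
    s = math.sqrt(disc)
    return [((r - 1) - s) / (2 * r), ((r - 1) + s) / (2 * r)]

eps = -0.01
e = -eps
# The two tangent bifurcations bracket an r interval with no fixed points
r_lo = 1 + 2 * e - 2 * math.sqrt(e + e * e)
r_hi = 1 + 2 * e + 2 * math.sqrt(e + e * e)
assert fixed_points(0.5 * (r_lo + r_hi), eps) == []   # between: no fixed points
assert len(fixed_points(r_lo - 0.01, eps)) == 2       # outside: a pair exists
assert len(fixed_points(r_hi + 0.01, eps)) == 2
```

This makes the picture of Figure 2.17(_c_) explicit: for \(\varepsilon<0\) the two branches of fixed points terminate at two tangent bifurcations, with a gap in \(r\) between them.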
The period doubling bifurcation takes an original stable period \(p\) orbit and replaces it by an unstable period \(p\) and a stable period \(2p\) orbit. Thus the period doubling also creates a new orbit, one of period \(2p\), as \(r\) is increased. Also, since every stable period \(p\) orbit period doubles, every stable orbit that is created eventually yields an unstable orbit of the same period. The unstable orbits remain unstable and are not destroyed as \(r\) is increased, because they can only be destroyed or rendered stable by backward bifurcations (Figure 2.15) or by the forward inverse period doubling bifurcation, and these do not occur for the logistic map. Thus every periodic orbit created at lower \(r\) must be present at \(r=4\). As an application of this discussion we can use it to deduce the number of windows that exist for a given period for the logistic map. Assuming that all tangent bifurcations initiate the start of a window (as illustrated, for example, in Figures 2.12 and 2.13 for period three), we can utilize our knowledge of the value of \(N_{p}\), the number of period \(p\) orbits at \(r=4\), to deduce the number of tangent bifurcations needed to create them, which then gives the number of windows. For example, in Section 2.1 we found that the tent map has three period four orbits. Hence the logistic map at \(r=4\) must also have three period four orbits. Of these three, one is created when the period two orbit period doubles. The remaining two must have been created by a tangent bifurcation. Hence we conclude that as \(r\) is increased from \(r=1\) to \(r=4\) there must be one period four tangent bifurcation (each tangent bifurcation creates two orbits), and hence there is one period four window.

Figure 2.17: (\(a\)) Bifurcation of the logistic map at \(r=1\). (\(b\)) There is no bifurcation when \(\varepsilon\) is a small positive number. (\(c\)) There are two tangent bifurcations (denoted by dots) when \(\varepsilon\) is a small negative number.
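This counting argument is simple enough to automate for odd prime periods: \(M^{p}\) for the tent map has \(2^{p}\) fixed points, removing the two fixed points of \(M\) leaves \((2^{p}-2)/p\) period \(p\) orbits, and halving (one tangent bifurcation per pair of orbits) gives the window count \((2^{p}-2)/(2p)\) quoted earlier. A sketch:

```python
def prime_period_window_count(p):
    """Number of period-p windows of the logistic map for an odd prime p.

    M^p for the tent map has 2^p fixed points; removing the 2 fixed
    points of M leaves (2^p - 2) points forming (2^p - 2)/p distinct
    period-p orbits.  Odd-period orbits are born in pairs at tangent
    bifurcations, so the window count is half the orbit count.
    """
    orbits = (2 ** p - 2) // p     # exact for prime p (Fermat's little theorem)
    return orbits // 2

assert prime_period_window_count(3) == 1    # the period three window
assert prime_period_window_count(5) == 3
assert prime_period_window_count(7) == 9
```

Note the rapid growth: already 9 distinct period seven windows are crowded into the chaotic range.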
If \(p\) is an odd number, then orbits of period \(p\) can only be produced by tangent bifurcations, and the number of windows is \(N_{p}/2\). In particular, if \(p>1\) is prime, we have by Eq. (2.5) that the number of period \(p\) windows of the logistic map is \((2^{p-1}-1)/p\). To conclude our discussion of generic bifurcations of one dimensional maps, we emphasize that these bifurcations have their counterparts in higher dimensional maps and flows. Figures 2.18(_a_) and (_b_) illustrate the occurrence of a period doubling bifurcation of a flow in phase space. Before the bifurcation (Figure 2.18(_a_)) there is a single periodic attracting orbit manifested as a single point in the Poincare surface of section shown in the figure. After the bifurcation there are two periodic orbits (Figure 2.18(_b_)). One of these is an unstable 'saddle' periodic orbit (the dashed curve); i.e., it repels orbits in one direction and attracts them in the other direction. This saddle is essentially the continuation of the stable periodic orbit that existed before the bifurcation and manifests itself as a single point in the surface of section. The other, period doubled, orbit shown in Figure 2.18(_b_) is stable and passes twice through the surface of section. We may view the period doubled orbit as forming the edges of a Mobius strip in which the dashed unstable orbit lies. Points in the Mobius strip are repelled by the unstable orbit and pushed toward the edge of the strip (i.e., they approach the period doubled orbit). Figure 2.18(_c_) illustrates the counterpart of the tangent bifurcation in a flow. (In the flow context this bifurcation is called a _saddle node_ bifurcation.) At the bifurcation, coincident saddle periodic and attracting periodic orbits are created. As the parameter is increased the two orbits separate as shown.

Figure 2.18: (_a_) and (_b_) A period doubling bifurcation. (_c_) A saddle node bifurcation.
In (_c_) the arrows on the surface of section indicate directions of attraction and repulsion from the saddle and the node fixed points of the surface of section map.

#### Organization of the appearance of periodic orbits

One dimensional maps are relatively more constrained in their possible dynamics than higher dimensional systems. One consequence of this is the remarkable theorem of Sarkovskii (1964). Consider the following ordering of all the positive integers \[3,\,5,\,7,\,\ldots,\,2\times 3,\,2\times 5,\,2\times 7,\,\ldots,\,2^{2}\times 3,\,2^{2}\times 5,\,2^{2}\times 7,\,\ldots,\]\[2^{3}\times 3,\,2^{3}\times 5,\,2^{3}\times 7,\,\ldots,\,\ldots,\,2^{5},\,2^{4},\,2^{3},\,2^{2},\,2,\,1. \tag{2.22}\] That is, first we list all the odd numbers except 1. Then we list two times all the odd numbers except 1. Then we list \(2^{2}\) times all the odd numbers except 1, and so on. Having done this, we have accounted for all the positive integers except for those which are a power of 2, which we add to the list in decreasing order.

**Theorem.** Suppose a continuous map \(M(x)\) of the real line (i.e., the set of points \(-\infty<x<+\infty\)) has a periodic orbit of period \(p\) (i.e., a point \(\bar{x}\) on the orbit returns to itself after \(p\) iterates and not before). If, in Sarkovskii's ordering (2.22), \(p\) occurs before another integer \(l\) in the list, then the map \(M\) also has a periodic orbit of period \(l\).

Note that, if \(M\) has a periodic orbit of period \(p\), and \(p\) is not a power of 2, then the theorem implies that \(M\) must have an infinite number of periodic orbits (in particular, all orbits of period \(2^{m}\) for \(m=0,\,1,\,2,\,3,\,\ldots\)). In addition, if \(M\) has a period three orbit, it must also have orbits of all other periods. Thus, for example, in the range of the period three window for which there is an attracting period three orbit there must be an infinite number of periodic orbits of all periods.
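Sarkovskii's ordering is easy to realize as a sorting key, which makes the theorem convenient to apply in practice (a minimal sketch; the function names are ours). Writing \(p=2^{k}m\) with \(m\) odd, the numbers with \(m>1\) come first, ordered by increasing \(k\) and then by \(m\), followed by the pure powers of 2 in decreasing order:

```python
def shark_key(p):
    """Sorting key realizing Sarkovskii's ordering (2.22):
    a smaller key means earlier in the list."""
    k = 0
    while p % 2 == 0:                 # factor p = 2^k * m with m odd
        p //= 2
        k += 1
    if p > 1:                         # m > 1: ordered by (k, m), all first
        return (0, k, p)
    return (1, -k, 0)                 # pure powers of two, decreasing order

def precedes(p, q):
    """True if p occurs before q in Sarkovskii's ordering."""
    return shark_key(p) < shark_key(q)

# Period 3 comes first, so a period-3 orbit forces orbits of all periods:
assert all(precedes(3, l) for l in range(4, 100))
# The list ends ..., 8, 4, 2, 1:
assert precedes(8, 4) and precedes(4, 2) and precedes(2, 1)
# The start of the ordering, restricted to the integers 1..20:
print(sorted(range(1, 21), key=shark_key))
# -> [3, 5, 7, 9, 11, 13, 15, 17, 19, 6, 10, 14, 18, 12, 20, 16, 8, 4, 2, 1]
```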
In this range of \(r\), however, all periodic orbits except the period three are unstable, and hence do not attract typical initial conditions. In addition to period three implying the existence of all other periods, Li and Yorke (1975) show that the existence of a period three orbit also implies the existence of an uncountable set of orbits which never settle into a periodic cycle and remain nonperiodic forever. (They introduced the term 'chaos' to describe this situation.) These nonperiodic orbits are also nonattracting when the stable period three exists. Thus, for a given value of \(r\) for which the period three orbit is attracting, there are an uncountably infinite number of initial conditions in [0, 1] for which nonperiodic, very complicated orbits result. However, these initial conditions are of zero Lebesgue measure, and the orbits they yield are unstable. Thus, starting at a typical initial condition very near to one that follows a nonperiodic orbit, the orbit initially follows the nonperiodic orbit, but eventually diverges from it and is then attracted to the period three orbit. The same situation with respect to the existence of nonperiodic orbits applies for all ranges of \(r\), above \(r_{\infty}\), for which stable periodic orbits occur. Note the fundamental difference between this situation and the situation that holds in \(3>r>1\). In that range there is an attracting period one orbit, and the only initial conditions in \([0,\,1]\) not attracted to it are \(x=0\) and \(x=1\). Still more can be said concerning the organization of periodic orbits. Metropolis _et al_. (1973) consider maps of the form \[M(x,\,r)=rf(x),\] where \(f(x)\) has a single maximum. (Actually their considerations are somewhat more general.) At the value of \(r\) at which a periodic orbit of period \(p\) is superstable (i.e., has stability coefficient \(\lambda_{p}\equiv 0\)), the orbit is labeled by a string of \(p-1\) symbols.
Since the orbit is superstable, it goes through the critical point. If the \(n\)th iterate from the critical point falls to the right of the critical point, the \(n\)th symbol in the string is an \(R\); if it falls to the left, the \(n\)th symbol in the string is an \(L\). For example, the symbol string corresponding to the period four orbit shown in Figure 2.19 is \(RLL\) or \(RL^{2}\) (no symbol label is used for the initial condition at the critical point). Metropolis _et al_. show that, as \(r\) is increased, orbits with particular symbol strings appear in accord with a well defined rule. In particular, they show how to produce an ordered list of orbits such that if, as \(r\) is increased, two different orbits on the list occur, then all orbits in between them on the list must also have occurred. For example, if a period nine with symbol sequence \(RL^{2}RLR^{2}L\) appears at some value of \(r\), and a period four with symbol sequence \(RL^{2}\) appears at some larger value of \(r\), then there must appear at some \(r\) value in between a period five with symbol sequence \(RL^{2}R\).

Figure 2.19: The symbol sequence for this period four superstable orbit is \(RLL\).

#### Measure, ergodicity and Lyapunov exponents for one-dimensional maps

We have seen that typical initial conditions for the tent map and logistic map at \(r=4\) generate orbits whose frequency of visits to any given interval is described by a function \(\rho(x)\) which we have called the _natural invariant density_. Another view of the invariant density is the following. Imagine that we start off with an infinite number of initial conditions sprinkled along the \(x\) axis with a smooth density \(\rho_{0}(x)\) such that the fraction of those initial conditions in an interval \([a,\,b]\) is \(\int_{a}^{b}\rho_{0}(x)\mathrm{d}x\). Now imagine applying the map \(M(x)\) to each initial condition. Thus a new density \(\rho_{1}(x)\) is generated.
Similarly, applying the map again, we obtain a density \(\rho_{2}(x)\), and so on. The relation evolving a density forward in time is \[\rho_{n+1}(x)=\int\rho_{n}(y)\delta[x-M(y)]\mathrm{d}y, \tag{2.23a}\] where \(\delta(x)\) is a delta function. Equation (2.23a) is called the _Frobenius Perron equation_. The _invariant_ density previously discussed satisfies the equation obtained by setting \(\rho_{n+1}(x)=\rho_{n}(x)=\rho(x)\), \[\rho(x)=\int\rho(y)\delta[x-M(y)]\mathrm{d}y. \tag{2.23b}\] In order to see how Eq. (2.23a) comes about, consider Figure 2.20. Orbit points in the range \(x\) to \(x+\mathrm{d}x\) at time \(n+1\) came from the ranges \(y^{(i)}\) to \(y^{(i)}+\mathrm{d}y^{(i)}\) at time \(n\), where \(y^{(i)}\) denote the solutions of the equation \(M(y)=x\) (for Figure 2.20 there are three such solutions). Thus, the number of orbit points in the interval \(x\) to \(x+\mathrm{d}x\) at time \(n+1\) is the sum of the numbers of orbit points in the intervals \(y^{(i)}\) to \(y^{(i)}+\mathrm{d}y^{(i)}\) at time \(n\), \[\rho_{n+1}(x)=\sum_{i}\rho_{n}(y^{(i)})\left|\frac{\mathrm{d}x}{\mathrm{d}y^{(i)}}\right|^{-1}=\sum_{i}\rho_{n}(y^{(i)})|M^{\prime}(y^{(i)})|^{-1},\] which is Eq. (2.23a) by virtue of the delta function identity \[\delta(x-M(y))=\sum_{i}\delta(y-y^{(i)})|M^{\prime}(y^{(i)})|^{-1}.\] (This can be shown by expanding \(x-M(y)\) to first order around each of its zeros.)

Figure 2.20: Illustration of the derivation of the Frobenius Perron equation.

For the logistic map at \(r=4\) we have found that the natural invariant density, given by Eq. (2.13), has singular behavior at the two points \(x=0\) and \(x=1\). The reason for this is as follows.
If we examine the fraction of time a typical orbit spends within a small interval \(I_{\varepsilon}(\overline{x})=[\overline{x}-\varepsilon,\,\overline{x}+\varepsilon]\), then this quantity, which we call the natural measure of \(I_{\varepsilon}(\overline{x})\), is \[\mu(I_{\varepsilon}(\overline{x}))=\int_{\overline{x}-\varepsilon}^{\overline{x}+\varepsilon}\rho(x)\mathrm{d}x. \tag{2.24}\] If \(\rho(x)\) is smooth and bounded, then \(\mu(I_{\varepsilon}(\overline{x}))\sim\varepsilon\) for small \(\varepsilon\). Now consider the small \(\varepsilon\) interval, centered at the critical point, \(I_{\varepsilon}(\frac{1}{2})\), for the logistic map at \(r=4\). Since \(\rho(x)\) varies smoothly at \(x=\frac{1}{2}\), we have \(\mu(I_{\varepsilon}(\frac{1}{2}))\sim\varepsilon\). Mapping the interval \(I_{\varepsilon}(\frac{1}{2})\) forward in time by one iterate, it maps to \([1-4\varepsilon^{2},\,1]\). Since every point in \(I_{\varepsilon}(\frac{1}{2})\) maps to \([1-4\varepsilon^{2},\,1]\), we have that \[\mu(I_{\varepsilon}(\tfrac{1}{2}))=\mu(I_{4\varepsilon^{2}}(1))\] (because \(\rho(x)=0\) for \(x>1\), we may add the interval \([1,\,1+4\varepsilon^{2}]\) to the interval \([1-4\varepsilon^{2},\,1]\) without changing the natural measure). Hence \(\mu(I_{4\varepsilon^{2}}(1))\sim\varepsilon\), implying that \[\mu(I_{\varepsilon}(1))\sim\varepsilon^{1/2}. \tag{2.25}\] Thus, for small \(\varepsilon\), we see that \(\mu(I_{\varepsilon}(1))\) is much larger than would be the case if \(\rho(x)\) were smooth and bounded at \(x=1\). Utilizing (2.24), we see that (2.25) implies an inverse square root singularity of \(\rho(x)\) at \(x=1\). Now applying the map to the interval \(I_{\varepsilon}(1)\) and noting that \(M(1)=0\), we find that \[\mu(I_{\varepsilon}(0))\sim\varepsilon^{1/2},\] and there is thus also an inverse square root singularity of \(\rho(x)\) at \(x=0\).
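Both the approach of a smooth initial density to the singular invariant density under the Frobenius Perron evolution and the \(\varepsilon^{1/2}\) scaling of Eq. (2.25) can be checked numerically. The sketch below (plain Python; the sample sizes, bin count, \(\varepsilon\) values and random seed are arbitrary choices) uses the closed form \((2/\pi)\arcsin\sqrt{x}\) for the cumulative invariant measure of the logistic map at \(r=4\) for comparison:

```python
import math, random

# Evolve a smooth (uniform) initial density with the logistic map at r = 4,
# in the spirit of the Frobenius-Perron equation, and compare the resulting
# histogram with the invariant density rho(x) = 1/(pi*sqrt(x(1-x))), whose
# inverse square root singularities at x = 0 and x = 1 were just derived.
random.seed(1)
pts = [random.random() for _ in range(200_000)]
for _ in range(10):                         # a few iterates suffice here
    pts = [4.0 * x * (1.0 - x) for x in pts]

nbins = 20
hist = [0] * nbins
for x in pts:
    hist[min(int(x * nbins), nbins - 1)] += 1
density = [h * nbins / len(pts) for h in hist]

def mu(a, b):
    """Exact invariant measure of [a,b]: (2/pi)[asin(sqrt(b)) - asin(sqrt(a))]."""
    return (2 / math.pi) * (math.asin(math.sqrt(b)) - math.asin(math.sqrt(a)))

print(density[0], mu(0.0, 0.05) * nbins)    # edge bin: large, near-singular
print(density[10], mu(0.5, 0.55) * nbins)   # interior bin: order one

# Scaling of the natural measure near x = 1: estimate mu(I_eps(1)) as the
# fraction of time a long orbit spends within eps of 1. For a square root
# singularity, shrinking eps by 16 shrinks the measure by about 4 (not 16,
# as a smooth bounded density would give).
def time_fraction_near(target, eps, n=400_000, x0=0.3):
    x, hits = x0, 0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        hits += abs(x - target) < eps
    return hits / n

f1 = time_fraction_near(1.0, 0.02)
f2 = time_fraction_near(1.0, 0.02 / 16)
print(f1 / f2)                              # -> close to 4
```

The edge bins of the histogram stand far above the interior ones, and the measured ratio near 4 rather than 16 is the signature of the inverse square root singularity at \(x=1\).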
The main point is that singularities are produced at iterates of the critical point \(x=\frac{1}{2}\). For \(r=4\) the logistic map maps the critical point \(x=\frac{1}{2}\) to \(x=1\) and then maps \(x=1\) to the _fixed point_ \(x=0\), and subsequent iterates remain at \(x=0\). In contrast, at lower values of \(r\), where chaotic behavior occurs, we observe numerically that the orbit starting at the critical point apparently does not fall on an unstable fixed point or periodic orbit (except for very specially chosen[6] values of \(r\)). Rather, the orbit from the critical point appears to wander throughout the attracting interval, eventually coming arbitrarily close to every point in the interval. That is, the initial condition at the critical point acts like a typically chosen initial condition. However, we have seen before that, for \(r=4\), the invariant density \(\rho(x)\) was singular at the locations of iterates of the critical point. Now these iterates are dense in the attracting interval. Thus we might expect that \(\rho(x)\) will typically exhibit very 'nasty' behavior. In particular, the function \(\rho(x)\) is expected to be discontinuous everywhere and probably has a dense countable set of \(x\) values (the iterates of the critical point) at which \(\rho(x)\) is infinite. We can imagine that a numerical histogram approximation of \(\rho(x)\) would reveal more and more bins of unusually large values as the bin size is reduced and the orbit length increased. See Figure 2.21. This type of behavior is to be expected for a typical chaotic one dimensional map with a smooth maximum.

Figure 2.21: \(\rho(x)\) versus \(x\) for the logistic map at \(r=3.8\). The figure is computed from a histogram. With finer and finer resolution and longer and longer orbit length the sharp peaks in the figure become more numerous and their heights increase without bound (after Shaw, 1981).

Rather than deal with the density \(\rho(x)\) one can equivalently deal with the corresponding measure \(\mu\). (In general, even if a density cannot be sensibly defined, a suitable measure can be. Hence measure is a more generally applicable concept.7) More specifically we shall deal with _probability measures_. A probability measure \(\mu\) for a bounded region \(R\) assigns nonnegative numbers to any set in \(R\), is countably additive, and assigns the number 1 to \(R\), \(\mu(R)=1\). By countably additive we mean that, given any countable family of disjoint (i.e., nonoverlapping) sets \(S_{i}\) in \(R\), the measure of the union of these sets is the sum of the measures of the sets, \[\mu\left(\bigcup_{i}S_{i}\right)=\sum_{i}\mu(S_{i}).\] Given a set \(S\) we define \(M^{-1}(S)\) as the set of points which map to \(S\) in one iterate. Thus \(M^{-1}(S)\) is defined even if the map \(M\) is not invertible (e.g., if \(M\) is the \(2x\) modulo 1 map and \(S=(\frac{1}{2},\,1)\), then \(M^{-1}(S)=(\frac{1}{4},\,\frac{1}{2})\cup(\frac{3}{4},\,1)\)). We say a measure \(\mu\) is _invariant_ if \[\mu(S)=\mu(M^{-1}(S)).\] (If the map is invertible this is the same as saying \(\mu(S)=\mu(M(S))\).) Say that we have a chaotic attractor of a one dimensional map and that this attractor has a _basin of attraction_ \(B\). We define the basin of attraction as the closure of the set of initial conditions that are attracted to the attractor. (For example, the logistic map for \(0<r<4\) may be thought of as having two attractors. One is the point at \(x=-\infty\) with basin of attraction \([-\infty,\,0]\cup[1,\,+\infty]\). The other is the attractor in \([0,\,1]\) with basin of attraction \([0,\,1]\).) Given an interval \(S\), let \(\mu(S,\,x_{0})\) denote the fraction of time an orbit originating from an initial condition \(x_{0}\) in \(B\) spends in the interval \(S\) in the limit that the orbit length goes to infinity.
If \(\mu(S,\,x_{0})\) has the same value for every \(x_{0}\) in the basin of attraction except for a set of \(x_{0}\) values of Lebesgue measure zero, then we say that \(\mu(S,\,x_{0})\) is the _natural measure_ of \(S\). We denote the natural measure of \(S\) by \(\mu(S)\) and say that its value is the common value assumed by \(\mu(S,\,x_{0})\) for all \(x_{0}\) in \(B\) except for a set of Lebesgue measure zero. (In our discussion above of the logistic map for \(0<r<4\) we have implicitly assumed the existence of a natural measure.) For smooth one dimensional maps with chaotic attractors natural measures and densities can be proven to exist under fairly general conditions. We note, however, that proving the existence of a natural measure in the cases of higher dimensional systems encountered in applications is an open problem,8 although numerically the existence of a natural measure in such cases seems fairly clear. (An example of a point \(x_{0}\) in the zero Lebesgue measure set for which \(\mu(S,\,x_{0})\neq\mu(S)\) is the case where \(x_{0}\) is chosen to lie on an unstable periodic orbit in \(B\).) The above discussion implies that, given a smooth function \(f(x)\) and an attractor with a natural measure \(\mu\), the time average of \(f(x)\) over an orbit originating from a _typical_ initial condition in the basin of the attractor is the same as its natural measure weighted average over \(x\), \[\lim_{T\to\infty}\frac{1}{T}\sum_{n=0}^{T-1}f(M^{n}(x_{0}))=\int f(x)\mathrm{d}\mu(x), \tag{2.26}\] where \(\mathrm{d}\mu(x)=\rho(x)\mathrm{d}x\) when a density exists[9] (as before, by typical we mean all except for a set of Lebesgue measure zero). More generally, we say that an invariant probability measure \(\mu\) (not necessarily the natural measure) is _ergodic_ if it cannot be decomposed as \[\mu=p\mu_{1}+(1-p)\mu_{2},\quad 1>p>0,\] where \(\mu_{1}\neq\mu_{2}\) are invariant probability measures.
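As a numerical illustration of Eq. (2.26), take the logistic map at \(r=4\) (a sketch; the orbit length and initial condition are arbitrary choices). With \(f(x)=x\) the natural measure weighted average is \(\frac{1}{2}\), by the symmetry of \(\rho(x)=1/(\pi\sqrt{x(1-x)})\) about \(x=\frac{1}{2}\); with \(f(x)=\ln|M^{\prime}(x)|\) the same time average yields \(\ln 2\), the Lyapunov exponent of the discussion below:

```python
import math

# Time averages along an orbit of the logistic map M(x) = 4x(1 - x),
# illustrating Eq. (2.26). From a typical initial condition the average of
# f(x) = x tends to the natural-measure average 1/2; the average of
# f(x) = ln|M'(x)| = ln|4 - 8x| tends to ln 2 (the Lyapunov exponent,
# cf. Eqs. (2.27) and (2.28)).
def orbit_average(f, x0, n=200_000):
    x, total = x0, 0.0
    for _ in range(n):
        total += f(x)
        x = 4.0 * x * (1.0 - x)
    return total / n

avg_x = orbit_average(lambda x: x, 0.123)
h = orbit_average(lambda x: math.log(abs(4.0 - 8.0 * x)), 0.123)
print(avg_x)                  # -> close to 0.5
print(h, math.log(2.0))       # -> the two nearly agree

# An atypical initial condition (here the unstable fixed point x = 0,
# part of a set of Lebesgue measure zero) gives a different time average:
print(orbit_average(lambda x: x, 0.0))    # -> 0.0
```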
The _ergodic theorem_ states that if \(f(x)\) is an integrable function and \(\mu\) is an ergodic probability measure (in the sense defined above), then the set \(A\) of \(x_{0}\) values for which the limit on the left hand side of Eq. (2.26) exists and Eq. (2.26) holds has \(\mu\) measure 1, \(\mu(A)=1\). Alternatively, the set of \(x_{0}\) values for which Eq. (2.26) does not hold has \(\mu\) measure 0. (Note that the result (2.26) for an attractor applies not only for \(x_{0}\) chosen as a typical point with respect to the natural measure but also for \(x_{0}\) chosen as a typical point with respect to Lebesgue measure in the basin of attraction.) A convenient indicator of the sensitivity to small orbit perturbations characteristic of chaotic attractors is the Lyapunov exponent (or, in higher dimensionality, exponents). The Lyapunov exponent \(h\) of a one dimensional map gives the average exponential rate of divergence of infinitesimally nearby initial conditions. That is, on average, the separation between two infinitesimally displaced initial points \(x_{0}\) and \(x_{0}+\mathrm{d}x_{0}\) typically grows exponentially as the two points are evolved by the map \(M\). Thus, for large \(n\), \[\mathrm{d}x_{n}\sim\exp(hn)\mathrm{d}x_{0},\] where \(\mathrm{d}x_{n}\) denotes the infinitesimal separation between the two points after they have both been iterated by the map \(n\) times. We define the Lyapunov exponent \(h\) as \[h=\lim_{T\to\infty}\frac{1}{T}\ln\left|\frac{\mathrm{d}x_{T}}{\mathrm{d}x_{0}}\right|.\] Noting that \[\mathrm{d}x_{T}/\mathrm{d}x_{0}=(\mathrm{d}x_{T}/\mathrm{d}x_{T-1})(\mathrm{d}x_{T-1}/\mathrm{d}x_{T-2})\ldots(\mathrm{d}x_{2}/\mathrm{d}x_{1})(\mathrm{d}x_{1}/\mathrm{d}x_{0})=M^{\prime}(x_{T-1})M^{\prime}(x_{T-2})\ldots M^{\prime}(x_{1})M^{\prime}(x_{0}),\] we have \[h=\lim_{T\to\infty}\frac{1}{T}\sum_{n=0}^{T-1}\ln|M^{\prime}(x_{n})|. \tag{2.27}\] The existence of a natural measure implies that the time average on the right hand side of (2.27) will be the same for all orbits \(x_{n}\) in the basin, except for those starting at a set of initial conditions of Lebesgue measure zero. Roughly, two nearby points initially separated by a distance \(\ell_{0}\) typically diverge from each other with time as \(\ell_{n}\sim\ell_{0}\exp(hn)\) (cf. Figure 1.14). Thus, a positive Lyapunov exponent, \(h>0\), indicates chaos. Using \(\ln|M^{\prime}(x)|\) for \(f(x)\) in (2.26), we have \[h=\int\ln|M^{\prime}(x)|\mathrm{d}\mu(x). \tag{2.28}\] Given a map \(x_{n+1}=M(x_{n})\) we might wish to make a change of variables \(y=g(x)\), where we assume the function \(g\) is continuous and invertible. For this new variable the map becomes \(y_{n+1}=\tilde{M}(y_{n})\), where \(\tilde{M}=g\circ M\circ g^{-1}\) and the symbol \(\circ\) denotes functional composition [i.e., \(\tilde{M}(z)=g(M(g^{-1}(z)))\)]. We say \(M\) and \(\tilde{M}\) are _conjugate_. An example is the conjugacy between the logistic map at \(r=4\) and the tent map (cf. Figure 2.6). If \(g\) is smooth, then \(M\) and \(\tilde{M}\) have the same Lyapunov exponent. (See Problem 16.)

### 2.4 Examples of applications of one-dimensional maps to chaotic systems of higher dimensionality

In Chapter 1 we saw that systems of differential equations necessarily yield invertible maps when the Poincare surface of section technique is applied. Since the one dimensional maps studied in the present chapter are all noninvertible, it may be somewhat puzzling as to how noninvertible one dimensional maps might be relevant to situations involving differential equations. To clarify this point, consider the two dimensional invertible map (1.9). Eliminating the variable \(x_{n}^{(2)}\), we obtain \[x_{n+1}^{(1)}=f(x_{n}^{(1)})-Jx_{n-1}^{(1)}, \tag{2.29}\] where \(J\) is the Jacobian determinant of the original two dimensional map. If we set \(J=0\), Eq.
(2.29) becomes a one dimensional map which can be noninvertible and chaotic. If \(J\) is very small, but not zero, then (1.9) is always invertible. However, for small \(J\) the first term in (2.29) is much larger than the second, and we have \(x_{n+1}^{(1)}\)\(f(x_{n}^{(1)})\). Thus for small \(J\) (implying rapid shrinking of areas as the map is iterated), the one dimensional map yields an approximation to the dynamics of the invertible system with \(J\) nonzero but small. Say we generate a chaotic orbit from some particular initial condition \((x_{0}^{(1)},x_{0}^{(2)})\) of the two dimensional map (1.9). We then record the values obtained for \(x_{n}^{(1)}\) for \(n=0\), 1, 2, \(\ldots\) for our particular orbit. Now say we plot \(x_{n+1}^{(1)}\) versus \(x_{n}^{(1)}\) using these data. From Eq. (2.29) the plotted points should fall approximately on the one dimensional curve \(x_{n+1}^{(1)}=f(x_{n}^{(1)})\). Actually, due to the term \(-Jx_{n-1}^{(1)}\), there will be some spread about this curve. To see the character of this spread we note that the Henon map, Eq. (1.14), is in the same form as Eq. (1.9). Thus, if the Jacobian for the situation plotted in Figure 1.12(_a_) were small (it is 0.3 which is not small), then we should expect to see that the points fall near the curve \(x^{(1)}=A-(x^{(2)})^{2}\). This is a parabola turned on its side. Alternatively, since for the Henon map (and Eq. (1.9)) \(x_{n+1}^{(2)}=x_{n}^{(1)}\), it is also the approximate (for small Jacobian) one dimensional map function turned on its side. Indeed, we see that the attractor in Figure 1.12(_a_) roughly follows the parabola \(x^{(1)}=A-(x^{(2)})^{2}\) (\(A=1.4\)), but has appreciable 'width' about the parabola. Within this width is the fractal like structure seen in the blow ups in Figures 1.12(_b_) and (_c_). As \(J\) is made smaller this width decreases, and eventually the attractor may look like a one dimensional curve. 
Nevertheless, as long as \(J\) is not exactly zero, magnification of such an _apparent_ curve will always reveal fractal structure. Although our discussion above has been in the context of Eq. (1.9), we expect that, in general, when very strong phase space contraction is present in higher dimensional systems, an approximate one dimensional map may apply. Basically, what can happen is that the attractor becomes highly elongated along a one dimensional unstable direction, and other directions transverse to it are so highly contracted that the dynamics in the transverse directions becomes difficult to discern. We now give some examples in which chaos in a one dimensional map has provided a key to obtaining an understanding of a specific physical model or experiment.

#### The Lorenz system

Lorenz (1963) considered the Rayleigh Benard instability discussed in Chapter 1. Assuming variations of the fluid to occur in only two spatial dimensions as shown in Figure 1.4, Saltzman (1962) had previously derived a set of first order differential equations by expanding a suitable set of fluid variables in a double spatial Fourier series with coefficients depending on time. Substituting the expansion into the fluid equations results in an infinite set of coupled first order ordinary differential equations. Truncation of this system by setting Fourier terms beyond a certain order to zero (the 'Galerkin approximation') results in a finite dimensional system which presumably yields an adequate approximation to the infinite dimensional dynamics if the truncation is at sufficiently high order. To gain insight into the types of dynamics that are possible, Lorenz considered a truncation to just three variables. While this truncation is not of high enough order to model the real fluid behavior faithfully, it was assumed that the resulting solutions would give an indication of the type of qualitative behavior of which the actual physical system was capable.
The equations Lorenz considered were the following: \[{\rm d}X/{\rm d}t=-\tilde{\sigma}X+\tilde{\sigma}Y, \tag{2.30a}\] \[{\rm d}Y/{\rm d}t=-XZ+\tilde{r}X-Y, \tag{2.30b}\] \[{\rm d}Z/{\rm d}t=XY-\tilde{b}Z, \tag{2.30c}\] where \(\tilde{\sigma}\), \(\tilde{r}\) and \(\tilde{b}\) are dimensionless parameters. Referring to Figure 1.4, the quantity \(X\) is proportional to the circulatory fluid flow velocity, \(Y\) characterizes the temperature difference between rising and falling fluid regions, and \(Z\) characterizes the distortion of the vertical temperature profile from its linear with height equilibrium variation. Lorenz numerically considered the case \(\tilde{\sigma}=10\), \(\tilde{b}=8/3\) and \(\tilde{r}=28\). Taking the divergence of the phase space flow, we find that phase space volumes contract at an exponential rate of \((1+\tilde{\sigma}+\tilde{b})=41/3\), so that \(V(t)=V(0)\exp[-(41/3)t]\). It is this relatively rapid volume contraction which leads to the applicability of one dimensional map dynamics to this problem. Figure 2.22 shows a projection of the phase space orbit obtained by Lorenz onto the \(YZ\) plane. The points labelled \(C\) and \(C^{\prime}\) represent _steady_ convective equilibria (i.e., solutions of Eqs. (2.30) with \({\rm d}X/{\rm d}t={\rm d}Y/{\rm d}t={\rm d}Z/{\rm d}t=0\)) which are unstable for the parameter values investigated by Lorenz. We see that the solution spirals outward from one of the equilibria \(C\) or \(C^{\prime}\) for some time, then switches to spiraling outward from the other equilibrium point. This pattern repeats forever with the number of circuits around an equilibrium before switching appearing to vary in an erratic manner.

Figure 2.22: Projection of the phase space orbit for Eqs. (2.30) on the \(ZY\) plane (Lorenz, 1963).

As one of the ways of analyzing this motion, Lorenz obtained the sequence \(m_{n}\) giving the \(n\)th maximum of the function \(Z(t)\). He then plotted \(m_{n+1}\) versus \(m_{n}\).
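The construction of the \(m_{n}\) sequence is straightforward to reproduce (a sketch; the fourth order Runge-Kutta scheme, step size, initial condition and transient cutoff are our own arbitrary choices): integrate Eqs. (2.30) at \(\tilde{\sigma}=10\), \(\tilde{b}=8/3\), \(\tilde{r}=28\), record the successive maxima of \(Z(t)\), and pair them up:

```python
# Integrate the Lorenz equations (2.30) and extract the successive maxima
# m_n of Z(t), whose pairs (m_n, m_{n+1}) form the return map of Fig. 2.23.
SIG, B, R = 10.0, 8.0 / 3.0, 28.0

def deriv(s):
    x, y, z = s
    return (-SIG * x + SIG * y, -x * z + R * x - y, x * y - B * z)

def rk4(s, dt):
    k1 = deriv(s)
    k2 = deriv(tuple(s[i] + 0.5 * dt * k1[i] for i in range(3)))
    k3 = deriv(tuple(s[i] + 0.5 * dt * k2[i] for i in range(3)))
    k4 = deriv(tuple(s[i] + dt * k3[i] for i in range(3)))
    return tuple(s[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(3))

dt, s = 0.005, (1.0, 1.0, 1.0)
zs = []
for n in range(40_000):                    # 200 time units in total
    s = rk4(s, dt)
    if n > 4000:                           # discard the initial transient
        zs.append(s[2])

# local maxima of Z(t):
maxima = [zs[i] for i in range(1, len(zs) - 1)
          if zs[i - 1] < zs[i] > zs[i + 1]]
pairs = list(zip(maxima, maxima[1:]))      # (m_n, m_{n+1}) return map data
print(len(pairs), min(maxima), max(maxima))
```

Plotting the pairs reproduces the nearly one dimensional, tent-like graph of Figure 2.23.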
The resulting data are shown in Figure 2.23, and they clearly tend to fall on an approximate one dimensional map function. Furthermore, we note that the magnitude of the slope \(|{\rm d}m_{n+1}/{\rm d}m_{n}|\) for this function is greater than 1 throughout the range visited by the orbit. This is very similar to the situation for the tent map (cf. Figure 2.1(\(a\))). Since \(|{\rm d}m_{n+1}/{\rm d}m_{n}|>1\) we have, by Eq. (2.28), that the Lyapunov exponent \(h\) is positive, indicating chaos. Note that the maxima of \(Z\) may be regarded as lying on the surface of section \(\tilde{b}Z=XY\) obtained by setting \({\rm d}Z/{\rm d}t=0\) in Eq. (2.30c).

Figure 2.23: \(m_{n+1}\) versus \(m_{n}\) for Eqs. (2.30) (Lorenz, 1963).

#### Instability saturation by quadratically nonlinear three wave coupling

Consider a small amplitude wave propagating in a homogeneous medium with the wave field represented as \[\{C_{1}\exp(-{\rm i}\omega_{1}t+{\rm i}{\bf k}_{1}\cdot{\bf x})+({\rm complex\ conjugate})\},\] where \(C_{1}\) is a complex number. The quantities \(\omega_{1}\) and \({\bf k}_{1}\) are the (real) frequency and the wavevector of the wave and are, in general, related by some dispersion relation, \(\omega_{1}=\omega_{1}({\bf k}_{1})\). Due to nonlinearities in the medium, this wave can couple strongly to two other linear waves, represented as \[\{C_{2,3}\exp(-{\rm i}\omega_{2,3}t+{\rm i}{\bf k}_{2,3}\cdot{\bf x})+({\rm complex\ conjugate})\},\] if the three waves are nearly in resonance, that is, if \[{\bf k}_{1}={\bf k}_{2}+{\bf k}_{3}, \tag{2.31a}\] \[\omega_{1}=\omega_{2}+\omega_{3}+\delta, \tag{2.31b}\] where \(\delta\) is small compared to \(\omega_{1,2,3}\). We assume \(\omega_{1,2,3}>0\) so that wave 1, by convention, is the wave of largest frequency.
In this case the complex wave amplitudes \(C_{1,2,3}\) (which were constant in the absence of nonlinear interaction) become slow functions of time satisfying the three wave mode coupling equations, \[{\rm d}C_{1}/{\rm d}t=C_{2}C_{3}\exp({\rm i}\delta t), \tag{2.32a}\] \[{\rm d}C_{2,3}/{\rm d}t=-C_{1}C_{3,2}^{*}\exp(-{\rm i}\delta t), \tag{2.32b}\] where \(C^{*}\) denotes the conjugate of \(C\), and \(C_{1,2,3}\) have been normalized to make the coefficient of the nonlinear term on the right hand side equal to 1. These equations apply quite generally to the case where the medium in which the waves propagate is conservative (in the sense that there is no net exchange of energy between the waves and the medium). Now let us say that we consider a case where wave 1 is unstable such that the medium is capable of transferring some of its internal energy to the wave. This can happen in systems that are not in thermodynamic equilibrium; examples include pumped lasing media, many situations in plasma physics, and stratified fluids in shear flow. In such a situation the net effect is for the amplitude \(C_{1}\) to increase exponentially in time, \(|C_{1}|\sim\exp(\gamma_{1}t)\), provided the wave amplitude is small enough that nonlinearity can be neglected. As the wave amplitude \(|C_{1}|\equiv a_{1}\) grows, nonlinear coupling to nearly resonant waves, as in Eqs. (2.32), can become a significant effect. Assuming that the two lower frequency waves, waves 2 and 3, are damped, so that, in the absence of nonlinearity, \(a_{2,3}=|C_{2,3}|\sim\exp(-\gamma_{2,3}t)\), Eqs. (2.32) are modified by the linear growth and damping and become \[{\rm d}C_{1}/{\rm d}t=\gamma_{1}C_{1}+C_{2}C_{3}\exp({\rm i}\delta t), \tag{2.33a}\] \[{\rm d}C_{2,3}/{\rm d}t=-\gamma_{2,3}C_{2,3}-C_{1}C_{3,2}^{*}\exp(-{\rm i}\delta t). \tag{2.33b}\] Thus there is the possibility that the linear exponential growth of the unstable wave can be arrested by nonlinearity coupling its energy to damped waves. Introducing \(a_{1}\exp({\rm i}\phi_{1})=C_{1}\), \(a_{2,3}\exp({\rm i}\phi_{2,3})=C_{2,3}\exp({\rm i}\delta t/2)\) and \(\phi=\phi_{1}-\phi_{2}-\phi_{3}\) (where \(a_{1,2,3}\) and \(\phi_{1,2,3}\) are real variables), and restricting consideration to the important case \(\gamma_{2}=\gamma_{3}\equiv\gamma\) and \(a_{2}=a_{3}\), Eqs. (2.33) reduce to three real first order equations, \[{\rm d}a_{1}/{\rm d}t=a_{1}+a_{2}^{2}\cos\phi, \tag{2.34a}\] \[{\rm d}a_{2}/{\rm d}t=-a_{2}(\gamma+a_{1}\cos\phi), \tag{2.34b}\] \[{\rm d}\phi/{\rm d}t=-\delta+a_{1}^{-1}(2a_{1}^{2}-a_{2}^{2})\sin\phi. \tag{2.34c}\] In Eq. (2.34a) we have set \(\gamma_{1}=1\) (this can be accomplished by proper normalization). This system has been solved numerically by Vyshkind and Rabinovich (1976) and Wersinger _et al_. (1980). Figure 2.24 shows numerical solutions of Wersinger _et al_. for \(a_{1}\) versus \(t\) for \(\delta=2\) and several values of \(\gamma\). We see that at \(\gamma=\) (Figure 2.24(_a_)) the orbit settles into a simple periodic motion (a limit cycle attractor). Utilizing a surface of section at \(\phi=\pi/2\), this periodic motion is manifested as a single fixed point. Increasing \(\gamma\) to \(\gamma=9\), the orbit shown in Figure 2.24(_b_) is obtained. The solution (after the initial transient) is still periodic, but the single fixed point in the surface of section that previously manifested the periodic attractor splits into two points that are visited alternately. Correspondingly, the single peak per period function of Figure 2.24(_a_) becomes a function with two alternating maxima (Figure 2.24(_b_)), thus doubling the period. As \(\gamma\) is further increased, more period doublings are observed, and the time evolution eventually becomes apparently chaotic.
Figure 2.24(_c_) shows the time evolution for such an apparently chaotic case at \(\gamma=15\). Figure 2.25(_a_) shows that points in the surface of section _appear_ to fall on an arc. Since this arc has no _apparent_ thickness, it is natural to try to obtain an approximate reduction to a one dimensional map. This is done in Figure 2.25(_b_), which shows \(x_{n+1}=a_{2}(t_{n+1})\) versus \(x_{n}=a_{2}(t_{n})\), where \(t_{n}\) denotes the time at the \(n\)th piercing of the surface of section. By a change of variables, \(\tilde{x}={\rm const.}-x\), the apparently one dimensional map in Figure 2.25(_b_) can be turned upside down so that it becomes a map with a smooth rounded maximum. Thus the map is similar in character to the logistic map, and, correspondingly, the observed phenomenology of the solutions as \(\gamma\) is increased (i.e., period doubling followed by chaos) is similar to that for the logistic map as \(r\) is increased.

#### Experiments: Chemical chaos and the dripping faucet

An experimental system whose chaotic dynamics has been studied by several research groups is the Belousov Zhabotinskii reaction in a well stirred chemical reactor. A reactor consists of a tank into which chemicals are pumped and an output pipe out of which the reaction product is taken. In the experiment the fluid in the tank is stirred rapidly enough that the medium within the tank may be considered homogeneous. The reactants in the Belousov Zhabotinskii reaction are Ce\({}_{2}\)(SO\({}_{4}\))\({}_{3}\), NaBrO\({}_{3}\), CH\({}_{2}\)(COOH)\({}_{2}\), and H\({}_{2}\)SO\({}_{4}\). These reactants undergo a complex sequence of reactions involving about 25 chemical species. Presumably, a description of this experiment is provided by the solution of the coupled system of rate equations giving the evolution of the concentration of each chemical species (one first order equation for each species). Not all the reaction rates and intermediate reactions are, however, accurately known.
Experimentally it is observed that the time dependence of the chemical concentrations in the reactor can be periodic or chaotic. The experiment of Simoyi _et al_. (1982) examined this reaction and demonstrated that the observed chaotic dynamics was well described by a one dimensional map in the parameter regime that they considered. They also examined in detail the sequence of the appearance of periodic orbits in windows as a flow rate parameter was varied and verified that the Metropolis Stein Stein sequence was followed. In their experiment they measured the time dependence of the concentration of one of the chemicals in the reactor, namely the bromide ion. They then used delay coordinates (Section 6) to deduce the presence of approximately one dimensional dynamics. In particular, for a chaotic case, they plot \(B(t+T)\) versus \(B(t)\), where \(T\) is a fixed delay (measured in seconds) and \(B(t)\) denotes the concentration of the bromide ion. The result is shown in Figure 2.26(_a_) and may be regarded as a particular projection of the attractor onto a plane. They then consider the value of \(B(t_{n})\equiv x_{n}\) at successive crossings of the dashed line shown in Figure 2.26(_a_). Plotting \(x_{n+1}\) versus \(x_{n}\) for this data, they observe that it appears to lie on a smooth one dimensional map function with a rounded maximum. Figure 2.26(_b_) shows the experimental data as dots and a fitted curve as a solid line. (The map function has a single maximum, as required for applicability of the theory of Metropolis, Stein and Stein.) Another experiment which is apparently described by chaotic one dimensional map dynamics is the dripping water faucet experiment of Shaw discussed in Section 2.2.

Figure 2.26: (_a_) Attractor projection and (_b_) map function for the experiment of Simoyi _et al_. (1982).
In particular when the water flow rate is such that the behavior is chaotic, a plot of experimental data for the time between drops \(t_{n+1}\) versus \(t_{n}\) appears to lie on a one dimensional map function (although there is appreciable spread of the experimental data about a fitted curve). See Figure 2.27. ## Appendix: Some elementary definitions and theorems concerning sets For simplicity we consider sets of the real numbers. An _open interval_, denoted (\(a\), \(b\)), is the set of all \(x\) such that \(a<x<b\). A _closed interval_, denoted [\(a\), \(b\)], is the set of all \(x\) such that \(a\le x\le b\), with \(b>a\). An _interior point_ \(P\) of a set \(S\) is a point such that there exists an \(\varepsilon\) neighborhood (\(P-\varepsilon\), \(P+\varepsilon\)) contained entirely in \(S\). A point \(P\) is a _boundary point_ of \(S\) if any \(\varepsilon\) neighborhood of \(P\) possesses points that are in \(S\) as well as points not in \(S\). A point \(P\) is a _limit point_ of the set \(S\) if every \(\varepsilon\) neighborhood of \(P\) contains at least one point in \(S\) (distinct from \(P\)). This can be shown to be equivalent to the statement that there exists an infinite sequence of distinct points \(x_{1}\), \(x_{2}\), \(\ldots\) all in \(S\) such that \(\lim_{n\to\infty}x_{n}=P\). If a set contains all its limit points it is called a _closed_ set. The _closure_ \(\bar{S}\) of a set \(S\) is the set \(S\) plus its limit points. A set \(S\) is _open_ if all its points are interior points. The following two theorems concerning the structure of open and closed sets will be important for our further discussions. Theorem. Every nonempty bounded open set \(S\) can be represented as the union of a finite or a countably infinite number of disjoint open intervals whose end points do not belong to \(S\).
That is, \[S=\bigcup_{k}(a_{k},\ b_{k}).\] The _Lebesgue measure of the open set_ \(S\) is \[\mu(S)=\sum_{k}(b_{k}-a_{k}).\] Theorem. A nonempty closed set \(S\) is either a closed interval or else can be obtained from a closed interval by removing a finite or countably infinite family of disjoint open intervals whose end points belong to \(S\). Thus a closed set \(S\) can be expressed as \[S=[a,\ b]-\bigcup_{k}(a_{k},\ b_{k}).\] The _Lebesgue measure of the closed set_ \(S\) is \[\mu_{L}(S)=(b-a)-\sum_{k}(b_{k}-a_{k}).\] (We shall not give the more general definition of the Lebesgue measure applicable to sets that are neither open nor closed, since, for the most part, we shall be dealing with open or closed sets.) In particular, a set has Lebesgue measure zero if for any \(\varepsilon>0\) the set can be covered by a countable union of intervals such that the sum of the lengths of the intervals is smaller than \(\varepsilon\). For sets in an \(N\) dimensional Cartesian space, we have the analogous definition of a zero Lebesgue measure set: for any \(\varepsilon>0\) the set can be covered by a countable union of \(N\) dimensional cubes whose total volume is less than \(\varepsilon\). A _Cantor set_ is a closed set which consists entirely of boundary points, each of which is a limit point of the set. Examples of Cantor sets are given in the text. In general, Cantor sets can have either zero Lebesgue measure or else positive Lebesgue measure. They are also uncountable. ## Problems 1. For the \(2x\) modulo \(1\) map and the tent map find the number of periodic orbits \(N_{p}\) for periods \(p=2\), \(3\),..., \(10\). How many distinct period four orbits are there for the map \(x_{n+1}=\ x_{n}\) modulo \(1\)? 2. Find the number of periodic orbits of period \(p\) for \(p=1\), 2, 3, 4, 5, 6 for the map shown in Figure 2.28. 3.
Consider the \(2x\) modulo 1 map with noise, \(y_{n+1}=(2y_{n}\) modulo \(1)+(\)noise). Assume that the form of the noise is such as to change randomly all the digits \(a_{j}\) in the binary representation of \(y\) for \(j\ge 50\). (Thus the noise is of the order of \(2^{-50}\approx 10^{-15}\).) Assume that you are given exact observations of the noisy orbit for a time \(T\): \(y_{0}\), \(y_{1}\),..., \(y_{T}\) (where \(T\) is much larger than 50). Show that there is an initial condition \(x_{0}\) such that the exact 'true' orbit, \(x_{0}\), \(x_{1}\),..., \(x_{T}\), followed by the noiseless map, \(x_{n+1}=2x_{n}\) modulo 1, shadows the noisy orbit, \(y_{0}\), \(y_{1}\),..., \(y_{T}\). In particular, show that \(x_{0}\) can be chosen so that \(|y_{n}-x_{n}|\le 2^{-49}\) for all \(n\) from \(n=0\) to \(n=T\). 4. Consider the one-dimensional map \[x_{n+1}=\left\{\begin{array}{ll}\mathfrak{g}x_{n}&\mbox{if }0\le x_{n}\le 1/2,\\ \mathfrak{g}(1-x_{n})&\mbox{if }1/2\le x_{n}\le 1.\end{array}\right.\] 1. Find the locations and stability coefficients for the fixed points. 2. Find the location of the orbit points and the stability coefficient for the period two orbit. 5. For the logistic map find the value of \(r\) at which the superstable period one orbit exists. Find the value of \(r\) at which the superstable period two orbit exists. 6. Show for the logistic map at the value of \(r\) at the merging of the two band attractor to form a one band attractor (i.e., at \(r=r_{0}^{\prime}\)) that the third iterate of \(x=\frac{1}{2}\) lands on the unstable fixed point \(x=1-1/r\). Figure 2.28: Plot of the map for Problem 2. 7. Consider the map \(M(x;\ r)=r-x^{2}\). Show that it has a forward tangent bifurcation at some value \(r=r_{0}\) at which a stable and an unstable fixed point are created. Find \(r_{0}\) and the locations of the stable and unstable fixed points. Find \(r_{1}\), the value of \(r\) at which the stable fixed point created at \(r_{0}\) becomes unstable. 8.
Consider the map \(x_{n+1}= x_{n}+ x_{n}\). 1. For some range of parameter values the fixed point at the origin is stable; find the endpoints of this range. 2. Describe the bifurcation that takes place as the parameter is increased through the upper endpoint. * What is the natural invariant density? (Assume \(\rho(x)\) to be constant on each of two subintervals of [0, 1].) * What is the fraction of time a typical orbit spends in the region \(0\le x\le\frac{1}{2}\)? * Find the Lyapunov exponent for a typical orbit. 15. Show that the invariant measure for the tent map is stable in the sense that a small smooth perturbation of the density from the natural invariant density decays to zero as the map is iterated. (Use the Frobenius Perron equation to do this.) 16. Show that if \(M\) and \(\tilde{M}\) are conjugate by a smooth change of variables \(g\) then they have the same Lyapunov exponent. Use Eq. (2.28). If you wish, you may assume that a natural invariant density \(\rho(x)\) exists so that \(\mathrm{d}\mu(x)=\rho(x)\mathrm{d}x\). 17. Find the natural invariant density for the map pictured in Figure 2.29. (Hint: assume that \(\rho(x)\) is constant in each of three subintervals of [0, 1].) * Find the Lyapunov exponent for a typical orbit of this map. It is probably the case that chaotic attractors for systems commonly encountered in practice (e.g., the forced damped pendulum) have embedded within them a dense set of periodic orbits. 4. The tangent bifurcation illustrated in Figure 2.16(_b_) shows the map function as concave down, with the curve approaching tangency from below the 45\({}^{\circ}\) line. In contrast, Figure 2.13(_b_) shows the map function as concave up and approaching tangency from above. These two pictures are equivalent, as a change of variables \(y=(\mbox{const.})-x\) readily shows.
Similar comments apply to Figures 2.16(_a_) and (_c_). 5. This is shown in Milnor and Thurston (1987) and is a special property of the logistic map. 6. For example, those at the upper boundary of a window. 7. A particular example of a measure \(\mu\) is when there is a density \(\rho(x)\) and the measure of a set \(A\) is defined as \(\mu(A)=\int_{A}\rho(x)\mbox{d}x\). In other naturally occurring cases that will be of interest to us, the measure will be concentrated on a Cantor set that has Lebesgue measure zero. In such a case a density function \(\rho(x)\) does not exist. 8. Sinai (1972) and Bowen and Ruelle (1975) introduced the concept of natural measure and showed that it exists for certain types of attractors (Axiom \(A\) attractors; cf. Chapter 4). 9. For a definition of the integral \(\int\ldots\mbox{d}\mu(x)\) and further discussion of measure and ergodic theory in dynamics see Ruelle (1989). ## Chapter 3 Strange attractors and fractal dimension Perhaps the most basic aspect of a set is its dimension. In Figures 1.10(_a_) and (_b_) we have given two examples of attractors; one is a steady state of a flow represented by a single point in the phase space, while the other is a limit cycle, represented by a simple closed curve. While it is clear what the dimensions of these attracting sets are (zero for the point and one for the curve), it is also the case that invariant sets arising in dynamical systems (such as chaotic attractors) often have structure on arbitrarily fine scales, and the determination of the dimension of such sets is nontrivial. Also the frequency with which orbits visit different regions of a chaotic attractor can have its own arbitrarily fine scaled structure. In such cases the assignment of a dimension value gives a much needed quantitative characterization of the geometrical structure of a complicated object.
Furthermore, experimental determination of a dimension value from data for an experimental dynamical process can provide information on the dimensionality of the phase space required of a mathematical dynamical system used to model the observations. These issues are the subjects of this chapter. ### 3.1 The box-counting dimension The _box counting dimension_1 (also called the 'capacity' of the set) provides a relatively simple and appealing way of assigning a dimension to a set in such a way that certain kinds of sets are assigned a dimension which is not an integer. Such sets are called fractals by Mandelbrot, while, in the context of dynamics, attracting sets with fractal properties have been called strange attractors. (The latter term was introduced by Ruelle and Takens (1971).) Assume that we have a set which lies in an \(N\) dimensional Cartesian space. We then imagine covering the space by a grid of \(N\) dimensional cubes of edge length \(\varepsilon\). (If \(N=2\) then the 'cubes' are squares, while if \(N=1\) the 'cubes' are intervals of length \(\varepsilon\).) We then count the number of cubes \(\tilde{N}(\varepsilon)\) needed to cover the set. We do this for successively smaller \(\varepsilon\) values. The box counting dimension is then given by2 \[D_{0}=\lim_{\varepsilon\to 0}\frac{\ln\tilde{N}(\varepsilon)}{\ln(1/ \varepsilon)}. \tag{3.1}\] As an example, consider the case of some simple sets lying in a two dimensional Cartesian space, Figure 3.1. The three geometrical sets shown are (\(a\)) a set consisting of two points, (\(b\)) a curve segment, and (\(c\)) the area inside a closed curve. The squares required to cover the sets are shown cross hatched in the figure.

Figure 3.1: Illustration of \(\tilde{N}(\varepsilon)\) for sets consisting of (\(a\)) two points, (\(b\)) a curve segment, and (\(c\)) the area inside a closed curve.

In the case of Figure 3.1(_a_), we see that \(\tilde{N}(\varepsilon)=2\) independent of \(\varepsilon\); thus Eq.
(3.1) yields \(D_{0}=0\). In the case of Figure 3.1(_b_), we have \(\tilde{N}(\varepsilon)\sim l/\varepsilon\) for small \(\varepsilon\), where \(l\) is the length of the curve; thus Eq. (3.1) yields \(D_{0}=1\). Similarly for the area (Figure 3.1(_c_)), \(\tilde{N}(\varepsilon)\sim A/\varepsilon^{2}\) where \(A\) is the area, and \(D_{0}=2\). Hence we see that the box counting dimension yields, as it should, correct dimension values for simple nonfractal sets: 0, 1, and 2 for a set of a finite number of points, a simple smooth curve, and an area. (Note that to obtain \(D_{0}\) we only require a rather crude estimate of the dependence of \(\tilde{N}(\varepsilon)\) on \(\varepsilon\). For example, plugging \(\tilde{N}(\varepsilon)=K\varepsilon^{-d}\) in Eq. (3.1) yields \(D_{0}=d\) _independent_ of the constant of proportionality \(K\). In this regard see also Problem 3 of Chapter 5.) Now let us consider a somewhat more interesting set, the middle third Cantor set. This set is defined as follows. Take the closed interval [0, 1], and remove the open middle third interval (\(\frac{1}{3}\), \(\frac{2}{3}\)), leaving the two intervals [0, \(\frac{1}{3}\)] and [\(\frac{2}{3}\), 1]. Now remove the open middle thirds of each of these two intervals leaving four closed intervals of length \(\frac{1}{9}\) each, namely the intervals [0, \(\frac{1}{9}\)], [\(\frac{2}{9}\), \(\frac{1}{3}\)], [\(\frac{2}{3}\), \(\frac{7}{9}\)] and [\(\frac{8}{9}\), 1]. Continuing in this way _ad infinitum_, the set of remaining points is the middle third Cantor set. The construction is illustrated in Figure 3.2. This set has zero Lebesgue measure since at the \(n\)th stage of the construction the total length of the remaining intervals is (\(\frac{2}{3}\))\({}^{n}\), and this length goes to zero as \(n\) goes to infinity. Although of zero length, the set is also uncountable.
To see that this is so, we make a one to one correspondence of points in the Cantor set with all the numbers in the interval [0, 1]. Each point in the Cantor set can be specified by giving its location at successive stages of the construction of the set. For example, 'at the first stage it is to the right of the removed interval (which has length \(\frac{1}{3}\)); at the second stage it is to the left of the interval of length \(\frac{1}{9}\) that is removed from the center of the interval of length \(\frac{1}{3}\) in which it fell in the first stage; at the third stage it is to the left of the removed interval; at the fourth stage it is to the right of the removed interval', etc. Associating right with 1 and left with 0 yields a representation of an element of the Cantor set as an infinite string of zeros and ones. For the example above, the string is 1001 \(\ldots\). All combinations of zeros and ones are possible and each string represents a different element of the Cantor set. Infinite strings of zeros and ones can also be used to represent all the numbers between 0 and 1 via the binary decimal representation, Eq. (2.4). Hence, by identifying similar sequences of zeros and ones in the two representations, we have a one to one correspondence between points in the Cantor set and points in the set \([0,\,1]\), and the Cantor set is therefore uncountable.

Figure 3.2: First stages of the construction of the middle third Cantor set.

To calculate the box counting dimension of the middle third Cantor set, let us consider a sequence \(\varepsilon_{n}\) of \(\varepsilon\) values converging to zero as \(n\) approaches infinity, \(\lim_{n\to\infty}\varepsilon_{n}=0\). Then by Eq. (3.1) we have \(D_{0}=\lim_{n\to\infty}\left[\ln\tilde{N}(\varepsilon_{n})\right]/\ln(1/ \varepsilon_{n})\). The most convenient choice for \(\varepsilon_{n}\) is \(\varepsilon_{n}=(\frac{1}{3})^{n}\).
By the construction of the Cantor set (Figure 3.2), we then have \(\tilde{N}(\varepsilon_{n})=2^{n}\) and \[D_{0}=\ln 2/\ln 3=0.63\,\ldots\,.\] Hence we obtain for the dimension a number between zero and one, indicating that the set is a fractal. In the examples we have given (i.e., the sets in Figure 3.1 and the middle third Cantor set), we have that \[\tilde{N}(\varepsilon)\sim\varepsilon^{-D_{0}}. \tag{3.2}\] That is, the number of 'cubes' needed to cover the set increases as \(\varepsilon\) decreases in a power law fashion with exponent \(D_{0}\). If one assumes that one only needs to resolve the location of points in the set to within an accuracy of \(\varepsilon\), then \(\tilde{N}(\varepsilon)\) tells us roughly how much information we need to do this (the information is the locations of the \(\tilde{N}(\varepsilon)\) cubes), and \(D_{0}\) tells us how rapidly the required information increases as the required accuracy increases. The middle third Cantor set is _self similar_ in the sense that smaller pieces of it reproduce the entire set upon magnification. For example, magnifying the interval \([\frac{2}{3},\,\frac{7}{9}]\) by a factor of 9 yields a picture which is identical to that for the set in the interval [0, 1]. In the case of fractal sets arising in typical dynamical systems, such as the forced damped pendulum (Figure 1.13), self similarity rarely holds.[3] In such cases the fractal nature reveals itself upon successive magnifications as structure on all scales. That is, as successive magnifications are made about any point in the set, we do not arrive at a situation where, at some sufficiently large magnification and all magnifications beyond that, we see only a single point, a line, or a flat surface. We now give a simple example of a dynamical system yielding a fractal set. We consider the one dimensional map \[M(x)=\left\{\begin{array}{ll}2\eta x,&\mbox{if}\;\;x<\frac{1}{2},\\ 2\eta(x-1)+1,&\mbox{if}\;\;x>\frac{1}{2},\end{array}\right.
\tag{3.3}\] illustrated in Figure 3.3. For \(\eta=1\) and \(x\) restricted to [0, 1] this is the \(2x\) modulo 1 map, Eq. (2.3), discussed in Section 2.1. Here we consider the case \(\eta>1\). Note that any point in \(x>1\) gets mapped toward \(x=+\infty\) on successive iterates, while any point in \(x<0\) gets mapped toward \(x=-\infty\) on successive iterates. Thus we focus on the \(x\) interval [0, 1]. We note that \(M(x)>1\) for \(x\) in [\(1/2\eta\), \(\frac{1}{2}\)], so that whenever \(x\) falls in this interval it is mapped to \(x>1\) and then toward \(x=+\infty\) on subsequent iterates. Likewise, \(M(x)<0\) for \(x\) in the interval [\(\frac{1}{2}\), \(1-1/2\eta\)], and whenever \(x\) falls in this interval it is consequently mapped to \(x<0\) and then toward \(x=-\infty\). Say we consider an initial uniform distribution of points in the interval [0, 1]: \(\rho_{0}(x)=1\) in [0, 1] and \(\rho_{0}(x)=0\) outside [0, 1]. Then a fraction, \(\Delta=1-1/\eta\), of these points will be mapped to \(x>1\) or \(x<0\) on one iterate. By the Frobenius Perron equation, Eq. (2.23a), the density \(\rho_{1}(x)\) is again uniform in [0, 1], \(\rho_{1}(x)=1-\Delta\) for \(x\) in [0, 1]. Repeating this process we obtain \[\rho_{n}(x)=(1-\Delta)^{n}\text{, for }x\text{ in }[0,\,1]. \tag{3.4}\] We see that the fraction of points remaining in [0, 1] decreases exponentially with time \[\int_{0}^{1}\!\rho_{n}(x)\mathrm{d}x=\exp(-\gamma\,n), \tag{3.5}\] where \(\gamma=\ln(1-\Delta)^{-1}=\ln\eta\). We now ask what is the character of the set of initial conditions which never leaves the interval [0, 1]. Since the fraction of a uniformly distributed initial distribution remaining in [0, 1] decreases exponentially with time, Eq. (3.5), the set of points which remain forever must have zero Lebesgue measure. To see what the character of this set is, consider those initial conditions which remain for at least one iterate.
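Before doing so, the exponential escape just derived, Eq. (3.5), is easy to check numerically. A minimal sketch (plain Python; the ensemble size is an arbitrary illustrative choice, and \(\eta=\frac{3}{2}\), i.e., \(\Delta=\frac{1}{3}\), is the case discussed below): iterating a uniformly distributed ensemble under Eq. (3.3), the surviving fraction should track \((1/\eta)^{n}=(\frac{2}{3})^{n}\).

```python
import random

def M(x, eta=1.5):
    # Eq. (3.3): two branches of slope 2*eta meeting at x = 1/2
    return 2 * eta * x if x < 0.5 else 2 * eta * (x - 1.0) + 1.0

random.seed(1)
N = 200_000
pts = [random.random() for _ in range(N)]      # uniform on [0, 1]
for n in range(1, 7):
    pts = [M(x) for x in pts]
    pts = [x for x in pts if 0.0 <= x <= 1.0]  # discard escaped points
    predicted = (2.0 / 3.0) ** n               # (1 - Delta)^n, cf. Eq. (3.4)
    print(n, len(pts) / N, predicted)
```

The measured surviving fractions agree with \((\frac{2}{3})^{n}\) to within sampling fluctuations, consistent with \(\gamma=\ln\eta\) in Eq. (3.5).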
This is clearly the set of two intervals [0, \(1/2\eta\)] and [\(1-1/2\eta\), 1]. Now we ask, what is the set which remains for at least two iterates? The action of the map on both the interval [0, \(1/2\eta\)] and the interval [\(1-1/2\eta\), 1] is to stretch each interval uniformly to a length of one and map it to the interval [0, 1]. Hence there is an interval of length \((\Delta/2)(1-\Delta)\) in the middle of each of the intervals [0, \(1/2\eta\)] and [\(1-1/2\eta\), 1] which leaves on the second iterate. Thus there are four intervals of initial conditions which remain for at least two iterates. For example, for \(\eta=\frac{3}{2}\) (corresponding to \(\Delta=1-1/\eta=1/3\)), the sets of points that remain for at least one iterate and for at least two iterates are the sets of intervals seen in Figure 3.2 at the first and second stage of construction of the middle third Cantor set. Thus we see that for \(\eta=\frac{3}{2}\) the set of points which remains forever in [0, 1] is just the middle third Cantor set. For arbitrary \(\eta>1\), the set which remains forever is also a Cantor set, but its dimension is a function of \(\eta\) (cf. Problem 2), \[D_{0}=(\ln 2)/(\ln 2\eta). \tag{3.6}\] It is interesting to consider the evolution of points on the Cantor set which remains forever in the interval [0, 1]. This set is invariant under the map, since a point which remains in [0, 1] forever is necessarily mapped to another point which remains forever. Previously we made a correspondence between points \(x\) on the middle third Cantor set and points on the interval [0, 1] expressed as a binary decimal. The same can be done for the invariant Cantor set of the map, Eq. (3.3), for arbitrary \(\eta>1\). That is, assign a 1 or a 0 according to whether the point is in the right or left interval at each stage of the construction.
Alternatively, if we iterate the point \(x\) with time and let \(a_{n}=1\) if \(M^{n}(x)>\frac{1}{2}\) and let \(a_{n}=0\) if \(M^{n}(x)<\frac{1}{2}\), then the point \(x\) has the symbol sequence representation \[a_{0}a_{1}a_{2}\,\ldots\,.\] Furthermore, operation of the map on the point \(x\) in the Cantor set transforms it to the location corresponding to the point with symbol sequence \[a_{1}a_{2}a_{3}\,\ldots\,.\] Thus, as for the case \(\eta=1\) (Section 2.1), the dynamics of those points which do not go to \(x=\pm\infty\) is fully specified by the _symbolic dynamics_ of the Bernoulli shift operation. The difference between \(\eta=1\) and \(\eta>1\) is that, in the former case, the invariant set is the entire interval \([0,\,1]\), while for the latter case it is a zero Lebesgue measure Cantor set in \([0,\,1]\). Since for \(\eta>1\) all points in \([0,\,1]\) except for a set of Lebesgue measure zero (the Cantor set) eventually leave \([0,\,1]\), the Cantor set we have found for \(\eta>1\) is not an attractor. However, if \(\Delta\) is very small (\(\eta\) close to \(1\)), many initial conditions will remain in \([0,\,1]\) for a long time before leaving. During the time that they remain in \([0,\,1]\) they undergo orbits which have all the hallmarks of chaos. We call such orbits chaotic transients, and we identify \(1/\gamma\) in Eq. (3.5) as the typical duration of such a chaotic transient. Furthermore, since the dynamics of points on the Cantor set is equivalent to the Bernoulli shift, which also describes the dynamics of the \(2x\) modulo \(1\) map, and since the latter is chaotic (its Lyapunov exponent is \(h=\ln 2>0\)), we therefore also call the dynamics of Eq. (3.3) on the invariant Cantor set chaotic. ### The generalized baker's map In this section we introduce the generalized baker's map4 and discuss some of its properties. 
It will become evident that this map is an extremely useful tool for conceptualizing many of the basic properties of strange attractors. We define the generalized baker's map as a transformation of the unit square \([0,\,1]\times[0,\,1]\), \[x_{n+1}=\left\{\begin{array}{ll}\lambda_{a}x_{n},&\mbox{if}\,\,\,y_{n}<\alpha,\\ (1-\lambda_{b})+\lambda_{b}x_{n},&\mbox{if}\,\,\,y_{n}>\alpha,\end{array}\right. \tag{3.7a}\] \[y_{n+1}=\left\{\begin{array}{ll}y_{n}/\alpha,&\mbox{if}\,\,\,y_{n}<\alpha,\\ (y_{n}-\alpha)/\beta,&\mbox{if}\,\,\,y_{n}>\alpha,\end{array}\right. \tag{3.7b}\] where \(\beta=1-\alpha\) and \(\lambda_{a}+\lambda_{b}\le 1\). Geometrically, the map may be viewed as first cutting the unit square into the piece in \(y<\alpha\) and the piece in \(y>\alpha\) (Figure 3.4(_b_)). We then stretch the lower piece vertically by a factor \(1/\alpha\) and the upper piece by a factor \(1/\beta\), so that both are of unit length, while compressing them horizontally by the factors \(\lambda_{a}\) and \(\lambda_{b}\) (Figure 3.4(_c_)). We then take the upper piece and place it back in the unit square with its right vertical edge coincident with the right vertical edge of the unit square (Figure 3.4(_d_)). Thus the map Eq. (3.7) maps the unit square into two strips within the square, one in \(0\le x\le\lambda_{a}\) and one in \(1-\lambda_{b}\le x\le 1\). Applying the map a second time maps the two strips of Figure 3.4(_d_) into four strips (Figure 3.5), one of width \(\lambda_{a}^{2}\), one of width \(\lambda_{b}^{2}\), and two of width \(\lambda_{a}\lambda_{b}\). Application of the map more times results in more strips of narrower width, and the widths approach zero as \(n\) approaches infinity. After \(n\) applications there will be \(2^{n}\) strips of varying widths, \(\lambda_{a}^{m}\lambda_{b}^{n-m}\) for \(m=0\), \(1\), \(2\),..., \(n\). The number of strips \(Z(n,\ m)\) of width \(\lambda_{a}^{m}\lambda_{b}^{n-m}\) at the \(n\)th stage is given by the binomial coefficient (Problem 14) \[Z(n,\ m)=\frac{n!}{m!(n-m)!}. \tag{3.8}\] Assuming that \(\lambda_{a}+\lambda_{b}<1\) (rather than \(\lambda_{a}+\lambda_{b}=1\)), computer generated orbits (Problem 12) show that the attractor for the generalized baker's map appears to consist of many parallel vertical lines.
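Such a computer experiment is easy to sketch (the choices \(\lambda_{a}=\lambda_{b}=\frac{1}{3}\) and \(\alpha=0.4\) below are arbitrary illustrative values): after a single application of Eq. (3.7) every orbit point already has its \(x\) coordinate in one of the two vertical strips \([0,\lambda_{a}]\) or \([1-\lambda_{b},1]\), and further iterates confine it to ever narrower strips within these.

```python
import random

def baker(x, y, lam_a=1/3, lam_b=1/3, alpha=0.4):
    # generalized baker's map, Eq. (3.7), with beta = 1 - alpha
    if y < alpha:
        return lam_a * x, y / alpha
    return (1.0 - lam_b) + lam_b * x, (y - alpha) / (1.0 - alpha)

random.seed(0)
x, y = random.random(), random.random()
xs = []
for n in range(5000):
    x, y = baker(x, y)
    if n >= 1:              # after one step x already lies in the strips
        xs.append(x)

# every recorded x falls in [0, 1/3] or [2/3, 1]
in_strips = all(t <= 1/3 + 1e-12 or t >= 2/3 - 1e-12 for t in xs)
print(in_strips)
```

A histogram of the recorded \(x\) values would show the gaps of the middle third Cantor set opening up, consistent with the discussion below of the attractor's intersection with a horizontal line.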
In fact, as we shall see, there is a Cantor set of these vertical lines. That is, the intersection of the attractor with a horizontal line is a Cantor set. (Note the apparent qualitative similarity of this structure with the blow ups of the Henon attractor seen in Figures 1.12(_b_) and (_c_).) Let \(\hat{D}_{0}\) denote the box counting dimension of the intersection of the strange attractor with a horizontal line. Then the dimension \(D_{0}\) of the attractor is \[D_{0}=1+\hat{D}_{0}. \tag{3.9}\] (This follows from the definition, Eq. (3.1).) To find \(\hat{D}_{0}\) we note a self similarity of the attractor. Namely, if we take the strip in the \(x\) interval \([0,\lambda_{a}]\) in Figure 3.5 and magnify it horizontally by a factor \(\lambda_{a}^{-1}\), then Figure 3.4(_d_) is reproduced. Likewise, if we horizontally magnify the region of Figure 3.5 in the \(x\) interval [(\(1-\lambda_{b}\)), 1] by the factor \(\lambda_{b}^{-1}\), then Figure 3.4(_d_) is again reproduced. Let us express \(\hat{N}(\varepsilon)\), the number of \(\varepsilon\) length intervals needed to cover the intersection of the attractor with a horizontal line, as \[\hat{N}(\varepsilon)=\hat{N}_{a}(\varepsilon)+\hat{N}_{b}(\varepsilon), \tag{3.10}\] where \(\hat{N}_{a}(\varepsilon)\) is the number of intervals needed to cover that part of the attractor that lies in [0, \(\lambda_{a}\)], and \(\hat{N}_{b}(\varepsilon)\) is the number needed for that part of the attractor in [(\(1-\lambda_{b}\)), 1]. By the self similarity of the attractor we have \[\hat{N}_{a}(\varepsilon)=\hat{N}(\varepsilon/\lambda_{a}),\;\hat{N}_{b}( \varepsilon)=\hat{N}(\varepsilon/\lambda_{b}). \tag{3.11}\] Assuming that \(\hat{N}(\varepsilon)\) scales like \(\hat{N}(\varepsilon)\simeq K\varepsilon^{-\hat{D}_{0}}\) (Eq. (3.2)) and substituting this and Eq. (3.11) into Eq. (3.10), we obtain a transcendental equation for \(\hat{D}_{0}\), \[\lambda_{a}^{\hat{D}_{0}}+\lambda_{b}^{\hat{D}_{0}}=1.
\tag{3.12}\] For \(\lambda_{a}+\lambda_{b}<1\), the solution for \(\hat{D}_{0}\) is between zero and one. Hence by Eq. (3.9) the attractor has a dimension between 1 and 2. For example, if \(\lambda_{a}=\lambda_{b}=\frac{1}{3}\), we obtain from Eq. (3.12) the result \(\hat{D}_{0}=(\ln 2)/(\ln 3)\), and the intersection of the attractor with a horizontal line is just the middle third Cantor set. In this case \(D_{0}=1.63\ldots\). For \(\lambda_{a}+\lambda_{b}=1\), there are no gaps between the vertical strips in Figure 3.4(\(d\)) and the solution of Eq. (3.12) is \(\hat{D}_{0}=1\) corresponding to \(D_{0}=2\) (the attractor in this case is the entire unit square). ### 3.3 Measure and the spectrum of \(D_{q}\) dimensions Say we cover a chaotic attractor with a grid of cubes (as we would do if we were interested in computing \(D_{0}\)), and then we look at the frequency with which typical orbits visit the various cubes covering the attractor in the limit that the orbit length goes to infinity. If these frequencies are the same for all initial conditions in the basin of attraction of the attractor except for a set of Lebesgue measure zero, then we say that these frequencies are the natural measures of the cubes. That is, for a typical \(\mathbf{x}_{0}\) in the basin of the attractor, the natural measure of a typical cube \(C_{i}\) is \[\mu_{i}=\lim_{T\to\infty}\frac{\eta(C_{i},\;\mathbf{x}_{0},\;T)}{T}, \tag{3.13}\] where \(\eta(C_{i},\;\mathbf{x}_{0},\;T)\) is the amount of time the orbit originating from \(\mathbf{x}_{0}\) spends in \(C_{i}\) in the time interval \(0\le t\le T\). In cases where a property holds for all points in a set except for a subset whose measure is zero, we say that the property holds for _almost every_ point in the set with respect to the particular measure.
For example, assuming the existence of a natural measure, we say that the limit in (3.13) yields the same value \(\mu_{i}\) for 'almost every point in the basin with respect to Lebesgue measure,' and we call such points _typical_. The box counting dimension gives the scaling of the number of cubes needed to cover the attractor. For strange attractors, however, it is commonly the case that the frequency with which different cubes are visited can be vastly different from cube to cube. In fact, for very small \(\varepsilon\) it is common that only a very small percentage of the cubes needed to cover the chaotic attractor contain the vast majority of the natural measure on the attractor. That is, typical orbits will spend most of their time in a small minority of those cubes that are needed to cover the attractor. The box counting dimension definition counts all cubes needed to cover the attractor equally, without regard to the fact that, in some sense, some cubes are much more important (i.e., much more frequently visited) than others. To take into account the different natural measures of the cubes it is possible to introduce another definition of dimension which generalizes the box counting dimension. This definition of dimension was formulated in the context of chaotic dynamics by Grassberger (1983) and Hentschel and Procaccia (1983). These authors define a dimension \(D_{q}\) which depends on a continuous index \(q\), \[D_{q}=\frac{1}{1-q}\lim_{\varepsilon\to 0}\frac{\ln I(q,\,\varepsilon)}{\ln(1/ \varepsilon)}\,, \tag{3.14}\] where \[I(q,\,\varepsilon)=\sum_{i=1}^{\tilde{N}(\varepsilon)}\mu_{i}^{q}\,,\] and the sum is over all the \(\tilde{N}(\varepsilon)\) cubes in a grid of edge length \(\varepsilon\) needed to cover the attractor (see also Renyi (1970)). The point is that for \(q>0\) cubes with larger \(\mu_{i}\) have a greater influence in determining the value of \(D_{q}\).
Note that for \(q=0\) we have \(I(0,\,\varepsilon)=\tilde{N}(\varepsilon)\), and we recover the box counting dimension definition. In the special case where all the \(\mu_{i}\) are equal, we have \(\mu_{i}=1/\tilde{N}(\varepsilon)\), \(\ln I(q,\,\varepsilon)=(1-q)\ln\tilde{N}(\varepsilon)\), and we recover the box counting dimension independent of \(q\). Defining \(D_{1}\) by \(D_{1}=\lim_{q\to 1}D_{q}\), we have from \(\mathrm{L}\)'Hospital's rule and Eq. (3.14) \[D_{1}=\lim_{\varepsilon\to 0}\frac{\sum_{i=1}^{\tilde{N}(\varepsilon)}\mu_{i} \ln\mu_{i}}{\ln\varepsilon}\,. \tag{3.15}\] The quantity \(D_{1}\) is, as we shall see, of particular interest (Balatoni and Renyi, 1956) and is called the _information dimension_. Another property of the \(D_{q}\) is that they generally decrease with increasing \(q\) (except for the exceptional case where the measure is fairly homogeneously spread through the attractor so that \(\mu_{i}\approx 1/\tilde{N}(\varepsilon)\) for all boxes, in which case all the \(D_{q}\) are equal, \(D_{q}=D_{0}\)). In general, it can be shown that \[D_{q_{1}}\le D_{q_{2}}\quad\text{if}\quad q_{1}>q_{2}. \tag{3.16}\] Thus, for example, \(D_{2}\) provides a lower bound for \(D_{1}\), and \(D_{1}\) provides a lower bound for \(D_{0}\). Although, in this chapter, we are primarily interested in the \(D_{q}\) for the natural measure of chaotic attractors, we emphasize that (3.14) can be applied to any measure. Determinations of the fractal dimensions of strange attractors occurring in numerical experiments have been done for a large number of systems; the earliest example is the paper by Russell _et al._ (1980) who examined the box counting dimension for a number of different systems which yield strange attractors (one of which was the Henon attractor, Figure 1.12).
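The definitions (3.14) and (3.15) can be turned into a crude numerical estimate by computing orbit-frequency box measures at several values of \(\varepsilon\) and fitting the slope of \(\ln I(q,\varepsilon)\) against \(\ln\varepsilon\). The sketch below does this for the Henon map; the orbit length, transient, and \(\varepsilon\) values are illustrative choices, and the fit is not a careful dimension determination.

```python
import math
from collections import Counter

def henon_orbit(n, a=1.4, b=0.3):
    # Henon map (cf. Figure 1.12); discard a transient so the orbit
    # samples the attractor's natural measure.
    x, y = 0.1, 0.1
    pts = []
    for i in range(n + 1000):
        x, y = a - x * x + b * y, x
        if i >= 1000:
            pts.append((x, y))
    return pts

def box_measures(pts, eps):
    # Orbit-frequency estimates mu_i for the occupied boxes of an eps grid.
    counts = Counter((math.floor(px / eps), math.floor(py / eps))
                     for px, py in pts)
    return [c / len(pts) for c in counts.values()]

def lsq_slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

def Dq(pts, q, eps_list):
    # Finite-resolution estimate of Eq. (3.14): fit ln I(q, eps) vs ln eps,
    # then divide by (q - 1); the q = 1 (information dimension) case fits
    # the sum of mu ln mu against ln eps instead, per Eq. (3.15).
    xs = [math.log(e) for e in eps_list]
    if q == 1:
        ys = [sum(m * math.log(m) for m in box_measures(pts, e))
              for e in eps_list]
        return lsq_slope(xs, ys)
    ys = [math.log(sum(m ** q for m in box_measures(pts, e)))
          for e in eps_list]
    return lsq_slope(xs, ys) / (q - 1)

pts = henon_orbit(50_000)
dims = {q: Dq(pts, q, [0.2, 0.1, 0.05, 0.025]) for q in (0, 1, 2)}
# Published values for the Henon attractor are roughly D0 ~ 1.28,
# D1 ~ 1.26, D2 ~ 1.22; this crude fit lands in that neighborhood.
```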
In doing such numerical experiments to determine \(D_{q}\) one typically generates a long orbit of length \(T\) on the attractor and examines the fraction of time the orbit spends in cubes of an \(\varepsilon\) grid. This gives an approximation to \(\mu_{i}\) for each cube from which an approximation \(I_{T}(q,\,\varepsilon)\) to \(I(q,\,\varepsilon)\) is obtained. An approximation to the dimensions \(D_{q}\) can then be obtained by plotting \(\ln\,I_{T}(q,\,\varepsilon)\) versus \(\ln\varepsilon\). If one is not too unlucky, this plot will yield points that appear to fall approximately on a straight line for some appreciable range of \(\ln\varepsilon\). One can then fit a straight line to these points and determine the slope of the line. The approximate \(D_{q}\) is then \((q-1)^{-1}\) times this slope (see Eq. (3.14)). The range of \(\varepsilon\) over which such a fitting can be meaningful is limited at large \(\varepsilon\) by the requirement that \(\varepsilon\) be sufficiently small compared to the attractor size and at small \(\varepsilon\) by statistical fluctuations in determining the \(\mu_{i}\) (due to the necessarily finite amount of data). Statistical problems at small \(\varepsilon\) can be less severe at larger \(q\) (e.g., \(D_{2}\) is, in general, easier to calculate than \(D_{0}\)), since \(D_{q}\) for larger \(q\) values is determined by higher probability cubes for which the statistics is necessarily better. As an example of a use of a fractal dimension, consider generating a chaotic orbit on a digital computer. Computers represent numbers as binary decimals of a certain limited length (the computer 'roundoff'). Thus there is only a finite set of numbers that can be represented. Hence, any computer generated orbit (on the Henon map, for example) must eventually repeat exactly, thus artificially producing a periodic orbit when, in fact, the orbit should be nonperiodic.
This is not necessarily a problem if orbits are run for a time less than the typical computer roundoff induced period. However, in situations where very long orbits are examined this can be a problem, and an estimate of the period is consequently desirable. This problem was studied in a paper by Grebogi _et al._ (1988c). They found that the period scaled as a power of the roundoff with the exponent given by the dimension \(D_{2}\) of the attractor. Specifically, they found that, if the roundoff level is \(\delta\), then the typical roundoff induced periodicity length scales as \(\delta^{-D_{2}/2}\). To show how nonuniform the natural measure on an attractor can be, consider the following example due to Sinai (1972), \[\begin{array}{l}x_{n+1}=(x_{n}+y_{n}+\Delta\cos 2\pi y_{n})\mbox{ modulo 1,}\\ y_{n+1}=(x_{n}+2y_{n})\mbox{ modulo 1.}\end{array} \tag{3.17}\] For small \(\Delta\), Sinai shows that the attractor is the entire square \([0,\,1]\times[0,\,1]\). Thus a typical orbit comes arbitrarily close to any point in the square if we wait long enough. Hence, for any grid of boxes of edge length \(\varepsilon\), all boxes are visited with some nonzero frequency, and \(\tilde{N}(\varepsilon)=\varepsilon^{-2}\). Now consider the orbit points of a typical trajectory of the map Eq. (3.17) shown in Figure 3.6_(a)_. We see that the density of points (natural measure) is highly concentrated along diagonal bands, and, if a small piece of the attractor is magnified (Figure 3.6_(b)_), similar structures of high concentration are evident. In fact, Sinai shows that, for \(\Delta\) small enough, for any \(\xi>0\) there is a collection of small squares whose total area is less than \(\xi\) such that the collection of squares contains a natural measure of \(1-\xi\). Thus it apparently takes an arbitrarily small area to cover most of the natural measure (e.g., take \(\xi=10^{-3}\)). This extreme type of behavior is not a property peculiar to Sinai's example.
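A short simulation of Eq. (3.17) makes the nonuniformity of the natural measure quantitative at a coarse grid. The grid size, iterate count, and the "half the measure" statistic below are arbitrary illustrative choices; at this coarse resolution the concentration is far milder than what Figure 3.6 reveals at fine scales.

```python
import math
from collections import Counter

def sinai_measure(n_iter, delta=0.1, m=50):
    # Iterate Eq. (3.17) and histogram the orbit on an m x m grid
    # (eps = 1/m) to estimate the natural measure of each box.
    x, y = 0.5, 0.5
    counts = Counter()
    for _ in range(n_iter):
        x, y = (x + y + delta * math.cos(2.0 * math.pi * y)) % 1.0, \
               (x + 2.0 * y) % 1.0
        counts[(int(x * m), int(y * m))] += 1
    return {box: c / n_iter for box, c in counts.items()}

mu = sinai_measure(400_000)

# Essentially every box is visited (the attractor is the whole square)...
frac_visited = len(mu) / 50 ** 2

# ...yet the measure is unevenly spread: sorting boxes by measure, the
# highest-measure boxes soak up a disproportionate share.
weights = sorted(mu.values(), reverse=True)
total, k = 0.0, 0
while total < 0.5:
    total += weights[k]
    k += 1
frac_boxes_for_half = k / 50 ** 2  # fraction of boxes holding half the measure
```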
Indeed it is typical of chaotic attractors and is, for example, present in the generalized baker's map as we shall show in the next section.
Figure 3.6: (_a_) 80 000 iterates of the map Eq. (3.17) with \(\Delta=0.1\), starting from \(x_{0}=y_{0}=0.5\). (_b_) A blow up of the strip marked in (_a_) (Farmer _et al._, 1983).
### 3.4 Dimension spectrum for the generalized baker's map
From the action of the generalized baker's map as illustrated in Figure 3.4, it is evident that an initial density distribution which has no dependence on \(y\) is mapped to one which also has no dependence on \(y\), and this holds for all subsequent iterates. In fact the natural invariant measure can be shown to be uniform in \(y\). Thus the natural measure of the attractor in \(0\le y\le\alpha\) (the lower portion of the unit square in Figure 3.4(\(a\))) is just \(\alpha\), while the natural measure in \(\alpha\le y\le 1\) is \(\beta=1-\alpha\). Mapping these regions forward in time and noting that the natural measure is invariant to application of the map, we then find that the natural measure of the strip \(0\le x\le\lambda_{a}\) is \(\alpha\), and the natural measure of the strip \((1-\lambda_{b})\le x\le 1\) is \(\beta\) (cf. Figure 3.4). Since the natural measure of the attractor is uniform in \(y\), we can express \(D_{q}\) as \[D_{q}=1+\hat{D}_{q} \tag{3.18}\] (analogous to Eq. (3.9)), where \(\hat{D}_{q}\) is the dimension in the horizontal direction and is defined as in Eq. (3.14), but with \(I(q,\,\varepsilon)\) replaced by \[\hat{I}(q,\,\varepsilon)=\sum_{i=1}^{\tilde{N}(\varepsilon)}\hat{\mu}_{i}^{q}. \tag{3.19}\] Here we assume a uniform \(\varepsilon\) spacing along the \(x\) axis. For each interval, we determine the total attractor measure in the vertical strip, \(0\le y\le 1\), in that interval. Counting only those strips for which the measure is not zero, we perform the sum Eq. (3.19), where \(\hat{\mu}_{i}\) is the measure in strip \(i\).
We now express \(\hat{I}(q,\,\varepsilon)\) as \[\hat{I}(q,\,\varepsilon)=\hat{I}_{a}(q,\,\varepsilon)+\hat{I}_{b}(q,\, \varepsilon), \tag{3.20}\] where \(\hat{I}_{a}\) is the contribution to the sum in (3.19) for \(0\le x\le\lambda_{a}\), and \(\hat{I}_{b}\) is the contribution from \((1-\lambda_{b})\le x\le 1\). If we magnify the interval \(0\le x\le\lambda_{a}\) and its \(\varepsilon\) grid of small intervals in \(x\) by the factor \(1/\lambda_{a}\), we get a picture similar to the whole attractor in \(0\le x\le 1\), with the \(x\) axis partitioned by a uniform grid of intervals of lengths \(\varepsilon/\lambda_{a}\). In addition, since the measure in \(0\le x\le\lambda_{a}\) is \(\alpha\), we have \[\hat{I}_{a}(q,\,\varepsilon)=\alpha^{q}\hat{I}(q,\,\varepsilon/\lambda_{a}). \tag{3.21a}\] Similarly, we obtain from consideration of the interval \((1-\lambda_{b})\le x\le 1\), \[\hat{I}_{b}(q,\,\varepsilon)=\beta^{q}\hat{I}(q,\,\varepsilon/\lambda_{b}). \tag{3.21b}\] From Eq. (3.14), we take \(\hat{I}(q,\,\varepsilon)\) to have the small \(\varepsilon\) dependence \[\hat{I}(q,\,\varepsilon)\approx K\varepsilon^{(q-1)\hat{D}_{q}}. \tag{3.22}\] Putting Eqs. (3.21) and (3.22) in (3.20) we obtain a transcendental equation for \(\hat{D}_{q}\) \[\alpha^{q}\lambda_{a}^{(1-q)\hat{D}_{q}}+\beta^{q}\lambda_{b}^{(1-q)\hat{D}_{q}} =1. \tag{3.23}\] For \(q=0\) this equation reduces to the box counting dimension result Eq. (3.12). Expanding Eq. (3.23) for small (\(q-1\)), we obtain an explicit expression for the information dimension \(D_{1}=1+\hat{D}_{1}\), \[D_{1}=1+\frac{\alpha\ln(1/\alpha)+\beta\ln(1/\beta)}{\alpha\ln(1/\lambda_{a})+ \beta\ln(1/\lambda_{b})}. \tag{3.24}\] Also the transcendental equation, Eq. (3.23), can be explicitly solved for the case \(\lambda_{a}=\lambda_{b}\), \[D_{q}=1+\frac{1}{q-1}\,\frac{\ln(\alpha^{q}+\beta^{q})}{\ln\lambda_{a}}. \tag{3.25}\] The information dimension \(D_{1}\) plays a key role.
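Away from the special cases (3.24) and (3.25), Eq. (3.23) has to be solved numerically. A minimal sketch, using bisection on \(0\le\hat{D}_{q}\le 1\) (the function and parameter names are our own choices):

```python
import math

def Dq_baker(q, alpha, lam_a, lam_b):
    # Solve the transcendental equation (3.23) for D^_q by bisection on
    # [0, 1] and return D_q = 1 + D^_q; the q = 1 case uses the closed
    # form Eq. (3.24) for the information dimension.
    beta = 1.0 - alpha
    if abs(q - 1.0) < 1e-12:
        num = alpha * math.log(1 / alpha) + beta * math.log(1 / beta)
        den = alpha * math.log(1 / lam_a) + beta * math.log(1 / lam_b)
        return 1.0 + num / den

    def F(d):
        return (alpha ** q * lam_a ** ((1 - q) * d)
                + beta ** q * lam_b ** ((1 - q) * d) - 1.0)

    lo, hi = 0.0, 1.0                      # D^_q lies between 0 and 1
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if F(lo) * F(mid) <= 0.0:          # root is in the left half
            hi = mid
        else:
            lo = mid
    return 1.0 + 0.5 * (lo + hi)

# For lam_a = lam_b = 1/3 and q = 0 the solver reproduces the
# middle-third Cantor result D_0 = 1 + ln 2 / ln 3 of Eq. (3.12).
```

The closed forms provide checks: for \(\lambda_{a}=\lambda_{b}\) the solver reproduces Eq. (3.25), and the values decrease with \(q\) as Eq. (3.16) requires.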
In the next section we verify, using the generalized baker's map, that \(D_{1}\) has the following remarkable property. Consider a subset of the attractor which has a fraction \(0<\theta\le 1\) of the natural measure of the attractor. We can, in principle, calculate the box counting dimension of this set. In fact there will be many ways of choosing sets which cover a given fraction \(\theta\) of the attractor measure. We choose from all these the set with the smallest box counting dimension and denote its dimension \(D_{0}(\theta)\). For \(0<\theta<1\) this set is one which is on those regions of the attractor with the greatest concentration of orbit points (e.g., the dark bands in Figure 3.6(\(a\))). The result from the next section is that \[D_{0}(\theta)=D_{1} \tag{3.26}\] for any \(0<\theta<1\) (e.g., \(\theta=0.99\)). (For \(\theta=1\) the entire attractor must be covered by the set, and \(D_{0}(1)=D_{0}\).) Thus \(D_{1}\) is essentially the dimension of the core region of high natural measure of the attractor. Sinai's result for the attractor of Eq. (3.17) arises because \(D_{0}=2\), while \(D_{1}<2\) for this attractor. That is, the core is fractal while the attractor itself is simply the (nonfractal) area \(0\le y\le 1\), \(0\le x\le 1\).
### 3.5 Character of the natural measure for the generalized baker's map
We have seen in Section 3.4 that the natural measure of the strip of width \(\lambda_{a}\) is \(\alpha\) and that of the strip of width \(\lambda_{b}\) is \(\beta\). Applying the map to this situation, we find the natural measures of the four strips in Figure 3.5 are as follows: the natural measure of the strip of width \(\lambda_{a}^{2}\) is \(\alpha^{2}\); the natural measure of the strip of width \(\lambda_{b}^{2}\) is \(\beta^{2}\); and the natural measures of the two strips of width \(\lambda_{a}\lambda_{b}\) are both \(\alpha\beta\).
As noted in Section 3.2, applying the map \(n\) times, we generate \(2^{n}\) strips, of which \(Z(n,\,m)=n!/[(n-m)!\,m!]\) have width \(\lambda_{a}^{m}\lambda_{b}^{n-m}\) for \(m=0,\,1,\,2,\,\ldots,\,n\). From the above, we see that each strip of width \(\lambda_{a}^{m}\lambda_{b}^{n-m}\) has a natural measure equal to \(\alpha^{m}\beta^{n-m}\). Thus the natural measure contained in all strips of width \(\lambda_{a}^{m}\lambda_{b}^{n-m}\) is \[W(n,\,m)=\alpha^{m}\beta^{n-m}\,Z(n,\,m). \tag{3.27}\] (Note that, as it should, the sum \(\sum_{m=0}^{n}W(n,\,m)\) is 1, since, by virtue of \(Z(n,\,m)\) being the binomial coefficient, \((\alpha+\beta)^{n}=\sum_{m}\alpha^{m}\beta^{n-m}Z(n,\,m)\), and \(\alpha+\beta\equiv 1\).) Using Stirling's approximation, \[\ln\rho!=(\rho+\tfrac{1}{2})\ln(\rho+1)-(\rho+1)+\ln(2\pi)^{1/2}+O(\rho^{-1}), \tag{3.28}\] we obtain from (3.8) \[\ln Z\approx(n+\tfrac{1}{2})\ln(n+1)-(m+\tfrac{1}{2})\ln(m+1)-(n-m+\tfrac{1}{2})\ln(n-m+1)-\ln(2\pi)^{1/2}+1. \tag{3.29}\] Expanding this expression in a Taylor series around its maximum value, \(m=n/2\), yields \[Z(n,\,m)\approx\frac{2^{n}}{(2\pi)^{1/2}}\left(\frac{4}{n}\right)^{1/2}\exp\left\{-\frac{1}{2}\left[4n\left(\frac{m}{n}-\frac{1}{2}\right)^{2}\right]\right\}. \tag{3.30}\] Similarly, from Eq. (3.27), \[W(n,\,m)\approx\frac{1}{(2\pi n\alpha\beta)^{1/2}}\exp\left[-\frac{n(m/n-\alpha)^{2}}{2\alpha\beta}\right]. \tag{3.31}\] Note that, since these expressions for \(Z\) and \(W\) are obtained by Taylor series expansions of \(\ln Z\) and \(\ln W\) about their maxima, they are only valid for \(|m/n-\tfrac{1}{2}|\ll 1\) and \(|m/n-\alpha|\ll 1\), respectively. However, since the widths in \(m/n\) of these Gaussians are \(O(n^{-1/2})\), we see that for large \(n\) Eq. (3.30) is valid for most of the strips and Eq. (3.31) is valid for most of the natural measure. Figure 3.7 shows schematic plots of \(Z\) and \(W\).
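The concentration of measure described by Eqs. (3.30) and (3.31) is easy to check numerically; in this sketch the choices \(n=400\), \(\alpha=0.3\), and the \(3\sigma\) window are arbitrary:

```python
import math

def W(n, m, alpha):
    # Eq. (3.27): total natural measure in the Z(n, m) = C(n, m) strips
    # of width lam_a^m lam_b^(n - m).
    return alpha ** m * (1 - alpha) ** (n - m) * math.comb(n, m)

def W_gauss(n, m, alpha):
    # Gaussian approximation Eq. (3.31), valid for m/n near alpha.
    beta = 1 - alpha
    return math.exp(-n * (m / n - alpha) ** 2 / (2 * alpha * beta)) \
        / math.sqrt(2 * math.pi * n * alpha * beta)

n, alpha = 400, 0.3
sigma = math.sqrt(n * alpha * (1 - alpha))      # width of W is O(sqrt(n))
lo, hi = int(alpha * n - 3 * sigma), int(alpha * n + 3 * sigma)

measure_frac = sum(W(n, m, alpha) for m in range(lo, hi + 1))
strip_frac = sum(math.comb(n, m) for m in range(lo, hi + 1)) / 2 ** n
# measure_frac is ~1 while strip_frac is tiny: almost all of the natural
# measure sits in a vanishingly small fraction of the 2^n strips.
```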
It is clear from Figure 3.7 that, for large \(n\), almost all of the natural measure is contained in a very small fraction of the total number of strips (i.e., a value \(k\) can be chosen so that \(\int_{\alpha-k}^{\alpha+k}nW\,\mathrm{d}(m/n)\) can be close to 1, while \(\int_{\alpha-k}^{\alpha+k}nZ\,\mathrm{d}(m/n)\) can be very small compared to \(2^{n}\), the number of strips). Furthermore, this situation becomes more and more accentuated as \(n\) gets larger, since the widths of the Gaussians decrease as \(n^{-1/2}\) (\(Z\) and \(W\) become delta functions for \(n\to\infty\)). These properties seem to be typical of chaotic attractors. To proceed we now take \(\lambda_{a}=\lambda_{b}\le\frac{1}{2}\) and examine coverings of the projection of the attractor onto the \(x\) axis by small intervals of length \(\varepsilon_{n}=\lambda_{a}^{n}\). In this case \(\lambda_{a}^{m}\lambda_{b}^{n-m}=\lambda_{a}^{n}\) independent of \(m\). (There is still a distinction to be made for different \(m\), however, since (although the widths are all the same) the strips have different natural measures \(\alpha^{m}\beta^{n-m}\).) As an example, let us use the result Eq. (3.31) to calculate the information dimension \(D_{1}\) in the case \(\lambda_{a}=\lambda_{b}\) and \(\varepsilon=\lambda_{a}^{n}\). We convert the sum over \(i\) in Eq. (3.15) to a sum over \(m\) by noting that there are \(Z(n,\,m)\) intervals of length \(\varepsilon=\lambda_{a}^{n}\) which each have the measure \(\alpha^{m}\beta^{n-m}\). Thus, \(\sum_{i}\hat{\mu}_{i}\ln\hat{\mu}_{i}=\sum_{m}\alpha^{m}\beta^{n-m}Z(n,\,m)\ln(\alpha^{m}\beta^{n-m})=\sum_{m}W(n,\,m)\ln(\alpha^{m}\beta^{n-m})=n\sum_{m}W(n,\,m)[(m/n)\ln\alpha+(1-m/n)\ln\beta]\). For large \(n\), we see from Figure 3.7 that \(W\) becomes sharply peaked about \(m/n=\alpha\). Hence, in the limit \(n\to\infty\) we obtain \(\sum_{i}\hat{\mu}_{i}\ln\hat{\mu}_{i}=n[\alpha\ln\alpha+\beta\ln\beta]\), and from Eq.
(3.15) the information dimension projected onto the \(x\) axis is \[\hat{D}_{1}=(\alpha\ln\alpha+\beta\ln\beta)/(\ln\lambda_{a}), \tag{3.32}\] in agreement with Eq. (3.24) for \(\lambda_{a}=\lambda_{b}\). We now wish to calculate \(\hat{D}_{0}(\theta)\), the dimension of the smallest set containing a natural measure \(\theta\), for the generalized baker's map with \(\lambda_{a}=\lambda_{b}\le\frac{1}{2}\). We will find the important result that \(D_{0}(\theta)=D_{1}\) for all \(\theta\) in \(0<\theta<1\). Assuming \(\beta>\alpha\), the larger measures (i.e., larger \(\alpha^{m}\beta^{n-m}\)) correspond to smaller \(m\). Thus the smallest number of intervals of length \(\varepsilon=\lambda_{a}^{n}\) needed to cover a fraction \(\theta\) of the measure (projected to the \(x\) axis) is \[\hat{N}(\varepsilon,\,\theta)=\sum_{m=0}^{m_{\theta}}Z(n,\,m), \tag{3.33}\] where \(m_{\theta}\) is the largest integer such that \[\sum_{m=0}^{m_{\theta}}W(n,\,m)\le\theta. \tag{3.34}\] Using (3.31) and approximating the sum by an integral, we have \[\theta\approx\frac{1}{(2\pi\alpha\beta n)^{1/2}}\int_{0}^{m_{\theta}}\exp\left[-\frac{(m-\alpha n)^{2}}{2n\alpha\beta}\right]\mathrm{d}m, \tag{3.35}\] from which we obtain \[\frac{m_{\theta}}{n}\approx\alpha+\left(\frac{\alpha\beta}{n}\right)^{1/2}\mathrm{erfc}^{-1}(\theta), \tag{3.36}\] where \(\mathrm{erfc}(x)=(2\pi)^{-1/2}\int_{-\infty}^{x}\exp(-s^{2}/2)\,\mathrm{d}s\) and we have assumed \(\theta<1\). (Because the width of the maximum of \(W\) is small for large \(n\) we can replace the lower limit of integration in (3.35) by \(-\infty\).) Now consider Eq. (3.33). For large \(n\), the principal contribution to the sum will come from \(m\) values very close to \(m_{\theta}\), since for large \(n\) the quantity \(Z\) decreases rapidly as \(m\) decreases through \(m_{\theta}\) (cf. Fig. 3.8).
Since \(|m_{\theta}/n-\alpha|\sim O(1/n^{1/2})\), we cannot use the approximation Eq. (3.30) in (3.33). Rather, we divide Eq. (3.31) by \(\alpha^{m}\beta^{n-m}\) to approximate \(Z\) near \(m=m_{\theta}\), \[Z(n,\,m)\approx\frac{\beta^{-n}(\beta/\alpha)^{m}}{(2\pi n\alpha\beta)^{1/2}}\exp\left[-\frac{(m-\alpha n)^{2}}{2n\alpha\beta}\right].\]
Figure 3.8: The principal contribution to the sum in Eq. (3.33) comes from \(m\) values near and slightly below \(m_{\theta}\).
The term \((\beta/\alpha)^{m}\) decreases as \(m\) decreases away from \(m_{\theta}\), and this decrease is much more rapid than the variation of the term \(\exp[-(m-\alpha n)^{2}/2n\alpha\beta]\). Thus, in performing the sum in Eq. (3.33) we replace \(m\) by \(m_{\theta}\) in this term. Hence the only significant \(m\) dependence in the sum is from the term \((\beta/\alpha)^{m}\). Using \(\sum_{m=0}^{m_{\theta}}(\beta/\alpha)^{m}\approx(\beta/\alpha)^{m_{\theta}}\beta/(\beta-\alpha)\), we obtain \[\hat{N}(\varepsilon,\,\theta)\sim\beta^{-(n-m_{\theta})}\alpha^{-m_{\theta}}n^{-1/2}.\] From Eq. (3.1) with \((m_{\theta}/n)=\alpha+O(n^{-1/2})\) (Eq. (3.36)) and \(\varepsilon=\lambda_{a}^{n}\) we obtain \[\hat{D}_{0}(\theta)=(\alpha\ln\alpha+\beta\ln\beta)/(\ln\lambda_{a}),\] which is the same as Eq. (3.32). Hence we see that the information dimension may be thought of as the box counting dimension of the smallest set which contains most of the attractor measure. Furthermore, since \(D_{1}\) is in general less than \(D_{0}\) (at most they can be equal and this is not typical), we see that, on any covering of the attractor by small cubes, the vast majority of them taken together have only a small part of the measure[5] (\(\tilde{N}(\varepsilon,\,\theta)\sim\varepsilon^{-D_{1}}\ll\varepsilon^{-D_{0}}\sim\tilde{N}(\varepsilon,\,1)\) for \(\theta<1\)).
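The argument above can also be checked by brute force: sum the exact \(Z(n,m)\) of the highest-measure strips until a fraction \(\theta\) of the measure is covered, and compare \(\ln\hat{N}/\ln(1/\varepsilon)\) with Eq. (3.32). A sketch (the parameter values are arbitrary):

```python
import math

def baker_D0_theta(n, theta, alpha, lam):
    # Smallest covering of a fraction theta of the measure, Eqs. (3.33)
    # and (3.34), for lam_a = lam_b = lam and strips of width eps = lam^n:
    # take strips in order of decreasing measure (increasing m, since
    # beta = 1 - alpha > alpha here) until theta is reached, then return
    # ln N^(eps, theta) / ln(1/eps).
    beta = 1.0 - alpha
    covered = 0.0
    n_strips = 0
    for m in range(n + 1):
        z = math.comb(n, m)                     # Z(n, m) strips...
        w = math.exp(math.log(z) + m * math.log(alpha)
                     + (n - m) * math.log(beta))  # ...of total measure W(n, m)
        covered += w
        n_strips += z
        if covered >= theta:
            break
    return math.log(n_strips) / (n * math.log(1.0 / lam))
```

For \(n=1000\), \(\alpha=0.3\), \(\lambda=0.4\) the estimate is near \(\hat{D}_{1}\approx 0.667\) whether \(\theta\) is 0.3, 0.5, or 0.9, illustrating the \(\theta\) independence claimed in Eq. (3.26).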
### 3.6 The pointwise dimension
Another concept of dimension which is useful for the study of strange attractors and other invariant sets is the _pointwise dimension_ \(D_{p}(\mathbf{x})\). If \(B_{\varepsilon}(\mathbf{x})\) denotes an \(N\) dimensional ball of radius \(\varepsilon\) centered at a point \(\mathbf{x}\) in an \(N\) dimensional phase space, then the pointwise dimension of a probability measure \(\mu\) at \(\mathbf{x}\) is defined as (Young, 1982) \[D_{p}(\mathbf{x})=\lim_{\varepsilon\to 0}\frac{\ln\mu(B_{\varepsilon}(\mathbf{x}))}{\ln\varepsilon}. \tag{3.37}\] We argue below that, if the measure \(\mu\) is _ergodic_, then \(D_{p}(\mathbf{x})\) assumes a single common value \(\overline{D}_{p}\) for all locations \(\mathbf{x}\) except possibly for a set of \(\mathbf{x}\) containing zero \(\mu\) measure. Basically, the \(\mathbf{x}\) values yielding this common value constitute the 'core region' of the measure, whose box counting dimension is \(D_{0}(\theta)=D_{1}\) \((0<\theta<1)\). Results of Young (1982) imply that the value \(\overline{D}_{p}\) of \(D_{p}(\mathbf{x})\) assumed for 'almost every' \(\mathbf{x}\) with respect to the measure \(\mu\) (i.e., all \(\mathbf{x}\) except for a set of \(\mu\) measure zero) is \(D_{0}(\theta)\) \((0<\theta<1)\), \[\overline{D}_{p}=D_{0}(\theta)=D_{1}. \tag{3.38}\] (This is relatively easy to show for the generalized baker's map example.[6]) To obtain the result that \(D_{p}(\mathbf{x})\) is the same value for almost every \(\mathbf{x}\) with respect to \(\mu\), recall from Section 2.3.3 that an ergodic measure is an invariant probability measure which cannot be decomposed such that \[\mu=p\mu_{1}+(1-p)\mu_{2},\quad 1>p>0,\] with \(\mu_{1}\neq\mu_{2}\) being two other invariant probability measures.
The natural measure on a chaotic attractor is of particular interest here, and we note that if it exists (and we assume it does), it is necessarily ergodic by virtue of the fact that it can be constructed from the long time limit of the frequency with which a single typical orbit visits regions of phase space. To show that \(D_{p}(\mathbf{x})\) assumes a single common value for almost every \(\mathbf{x}\) with respect to the ergodic measure \(\mu\) we first argue that \[D_{p}(\mathbf{x})=D_{p}(\mathbf{x}^{\prime}), \tag{3.39}\] where \(\mathbf{x}^{\prime}=\mathbf{M}(\mathbf{x})\); that is, the pointwise dimension at \(\mathbf{x}\) and at its first iterate under the map are the same. Since the measure is invariant, we have for invertible \(\mathbf{M}\) that \(\mu(B_{\varepsilon}(\mathbf{x}))=\mu(\mathbf{M}(B_{\varepsilon}(\mathbf{x})))\). Assuming the map to be smooth and \(\varepsilon\) to be small, the region \(B_{\varepsilon}(\mathbf{x})\) is mapped by \(\mathbf{M}\) to an ellipsoidal region about \(\mathbf{x}^{\prime}=\mathbf{M}(\mathbf{x})\). Thus, as shown in Figure 3.9, we can define constants \(r_{1}>r_{2}\) such that the ball \(B_{r_{1}\varepsilon}(\mathbf{x}^{\prime})\) contains \(\mathbf{M}(B_{\varepsilon}(\mathbf{x}))\) which contains \(B_{r_{2}\varepsilon}(\mathbf{x}^{\prime})\). Thus \[\mu(B_{r_{1}\varepsilon}(\mathbf{x}^{\prime}))\geq\mu(\mathbf{M}(B_{ \varepsilon}(\mathbf{x})))=\mu(B_{\varepsilon}(\mathbf{x}))\geq\mu(B_{r_{ 2}\varepsilon}(\mathbf{x}^{\prime})). \tag{3.40}\] Figure 3.9: Map of a small ball \(B_{\varepsilon}(\mathbf{x})\).
Since (3.37) yields \[D_{p}(\mathbf{x}^{\prime})=\lim_{\varepsilon\to 0}\,\frac{\ln[\mu(B_{r_{1,2}\varepsilon}(\mathbf{x}^{\prime}))]}{\ln(r_{1,2}\varepsilon)}=\lim_{\varepsilon\to 0}\,\frac{\ln[\mu(B_{r_{1,2}\varepsilon}(\mathbf{x}^{\prime}))]}{\ln\varepsilon},\] we immediately obtain from (3.40) that \(D_{p}(\mathbf{x}^{\prime})\le D_{p}(\mathbf{x})\le D_{p}(\mathbf{x}^{\prime})\), or \(D_{p}(\mathbf{x})=D_{p}(\mathbf{x}^{\prime})\). To show that \(D_{p}(\mathbf{x})\) is the same for almost every \(\mathbf{x}\) with respect to the ergodic measure \(\mu\), first assume that it is not. Then there is some value \(d_{p}\) such that there is a set \(S_{-}\) with \(D_{p}(\mathbf{x})\le d_{p}\) for \(\mathbf{x}\) in \(S_{-}\), and there is another disjoint set \(S_{+}\) with \(D_{p}(\mathbf{x})>d_{p}\) for \(\mathbf{x}\) in \(S_{+}\), and further the \(\mu\) measures of \(S_{+}\) and \(S_{-}\) are not zero, \(\mu(S_{\pm})>0\). By Eq. (3.39) this implies that the sets \(S_{+}\) and \(S_{-}\) are invariant. Hence the measure is divided into two parts, orbits on one part never visiting the other part. This, however, is not possible because we assume the measure is ergodic. Hence \(D_{p}(\mathbf{x})\) is the same for almost every \(\mathbf{x}\) with respect to the measure. Henceforth, we consider \(\mu\) to be the natural measure on a chaotic attractor. As we have stated above, \(D_{p}(\mathbf{x})\) assumes the value \(D_{1}=D_{0}(\theta)\) \((0<\theta<1)\) for \(\mathbf{x}\) values in the core region of the attractor. Since, however, \(D_{0}\) is typically greater than \(D_{1}\), there is a set on the attractor which is relatively large (in the sense of having a larger box counting dimension, \(D_{0}>D_{0}(\theta)\) for \(0<\theta<1\)) which is not on the core, and for which we consequently expect \(D_{p}(\mathbf{x})\neq D_{1}\). In our discussion of fractal dimension we started by defining the box counting dimension, which gives the dimension of a _set_.
We then introduced the spectrum of dimensions \(D_{q}\) which assigns 'dimension' values to a _measure_ (for each value of \(q\)). Measures for which \(D_{q}\) is not a constant with \(q\) are often called _multifractal measures_. Our statement above that there are points on the attractor for which \(D_{p}(\mathbf{x})\neq D_{1}\) is another consequence of multifractality. Indeed, there are a number of intriguing further aspects of multifractal measures that arise. However, these are of a somewhat more advanced nature. For now we drop the discussion of multifractals, but will take it up again in Chapter 9 which will be devoted entirely to the discussion of this interesting topic.
### 3.7 Implications and determination of fractal dimension in experiments
One of the important issues confronting someone who is examining an experimental dynamical process is the question of how many scalar variables are necessary to model the process. For example, if we model the process using differential equations of the form \(\mathrm{d}\mathbf{x}/\mathrm{d}t=\mathbf{F}(\mathbf{x})\), how large does the dimensionality of \(\mathbf{x}\) have to be? Clearly a lower bound to this dimensionality is the fractal dimension of the attractor \(D_{0}\). If the dimension of the vector \(\mathbf{x}\) is less than \(D_{0}\), the structure of the attractor cannot be reproduced by the model, and one anticipates that important features of the dynamics will be lost. For this reason considerable interest has attached to the problem of determining the dimension of experimental strange attractors. Typically experiments determine \(D_{2}\) or \(\overline{D}_{p}=D_{1}\) which are useful since they are lower bounds on \(D_{0}\) and hence are also lower bounds on the system dimensionality. Guckenheimer and Buzyna (1983) present measurements of the pointwise dimension of a presumed chaotic attractor in a rotating differentially heated annulus of fluid.
In their technique they first choose a number of variables which they regard as their phase space. For these variables they use the temperature readings of 27 thermistors, each located at a different point in the fluid. The number 27 was arbitrarily chosen to be large enough to represent the dimensions of the attractors they expected to find. The justification for using thermistor readings at different locations as phase space variables is that these readings are determined by the system state, and thus, like the delay coordinates discussed in Section 1.6, they may be viewed as smooth functions of any other vector variable \(\mathbf{x}\) specifying the state. The calculation of the pointwise dimension proceeds as follows. Consider the vector \(\mathbf{z}(t)=(\xi_{1}(t),\,\xi_{2}(t),\,\ldots,\,\xi_{27}(t))\), where \(\xi_{j}(t)\) is the temperature reading on the \(j\)th thermistor. Then a large number of points on the attractor are obtained by sampling \(\mathbf{z}(t)\) at discrete time intervals \(T\); \(\mathbf{z}_{0}=\mathbf{z}(t_{0})\), \(\mathbf{z}_{1}=\mathbf{z}(t_{0}+T)\), \(\mathbf{z}_{2}=\mathbf{z}(t_{0}+2T)\),..., \(\mathbf{z}_{K}=\mathbf{z}(t_{0}+KT)\). One then selects one of the \(\mathbf{z}_{j}\)s as a reference point, call it \(\mathbf{z}_{*}\), and calculates the distances \(d_{k}=|\mathbf{z}_{k}-\mathbf{z}_{*}|\) from \(\mathbf{z}_{*}\) to the \(K\) other points \(\mathbf{z}_{k}\). The \(K\) distances \(d_{k}\) are then ordered according to their size on a list with the smallest \(d_{k}\) first. The \(i\)th distance value on the list gives a value of \(\varepsilon\) (namely, \(\varepsilon\) equal to the \(i\)th smallest of the \(d_{k}\)) such that \[\mu(B_{\varepsilon}(\mathbf{z}_{*}))\approx i/K.\] The quantity \(\ln(i/K)\) is then plotted as a function of \(\ln\varepsilon\). The points are observed to lie approximately on a straight line in some range of \(\varepsilon\) values, and the dimension is estimated as the slope of a straight line fitted to the data.
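The sorted-distance procedure just described is straightforward to code. The sketch below follows the same steps but, in place of experimental data, exercises the estimator on a set of known dimension (points on a circle in the plane); the fitting range is an arbitrary choice:

```python
import math
import random

def pointwise_dimension(points, ref, i_lo=None, i_hi=None):
    # Sort the distances from the reference point to the other points;
    # then mu(B_eps(z_ref)) ~ i/K when eps equals the i-th smallest
    # distance, and D_p is the slope of ln(i/K) against ln eps.
    dists = sorted(math.dist(ref, p) for p in points if p is not ref)
    K = len(dists)
    i_lo = i_lo or K // 100                # avoid the noisiest smallest scales
    i_hi = i_hi or K // 5                  # stay well inside the set
    xs = [math.log(dists[i]) for i in range(i_lo, i_hi)]
    ys = [math.log((i + 1) / K) for i in range(i_lo, i_hi)]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / \
           sum((a - mx) ** 2 for a in xs)

random.seed(1)
# Sanity check on a set of known dimension: points spread over a circle,
# a one-dimensional set embedded in the plane, so D_p should be near 1.
circle = [(math.cos(t), math.sin(t))
          for t in (random.uniform(0, 2 * math.pi) for _ in range(4000))]
Dp = pointwise_dimension(circle, circle[0])
```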
Problems can occur due to lack of sufficient data, noise, etc., but useful results were nevertheless obtained. For the experiment, the authors found that as the fluid was driven more strongly into the unstable regime, the dimension of the attractor rose, indicating the excitation of more and more active modes of motion and a consequent transition toward turbulence. Similar results were obtained by Brandstater and Swinney (1987) for an experiment on Couette-Taylor flow. In Couette-Taylor flow one has a fluid contained between two vertical coaxial cylinders (Figure 3.10(_a_)) and the cylinders are rotated at different angular velocities. In Brandstater and Swinney's experiment the outer cylinder was stationary and the behavior of the system was examined as a function of the rotation rate \(\Omega\) of the inner cylinder. Figure 3.10(_b_) shows the computed dimension for this experiment as a function of the rotation rate. In this case improved statistics for the pointwise dimension were obtained by averaging the quantity \(\ln\mu(B_{\varepsilon}(\mathbf{z}_{*}))\) over different reference points \(\mathbf{z}_{*}\) taken from points on the orbit on the attractor. In performing the dimension computation these authors used delay coordinates (Section 1.6), \(V(t)\), \(V(t-\tau)\), \(V(t-2\tau)\),..., as their phase space variables, where \(V(t)\) is the radial component of the fluid velocity measured at a particular point midway between the inner and outer cylinders. Referring to Figure 3.10(_b_), we see that the measured dimension is apparently close to 2 in the range \(\Omega/\Omega_{\rm c}\) between 10 and 11.8. (The quantity \(\Omega_{\rm c}\) is the theoretical critical rotation rate at which the fluid first develops spatial structure in the vertical direction.) In this range the authors verify that the dynamics is nonchaotic and lies on a two dimensional toroidal surface (as discussed in Chapter 6 this corresponds to two frequency quasiperiodic motion).
As \(\Omega/\Omega_{\rm c}\) is increased past this range, the motion becomes chaotic and the dimension of the attractor steadily rises. In addition to the pointwise dimension, it has been emphasized by Grassberger and Procaccia (1983) that the 'correlation dimension' \(D_{2}\) is particularly suited for relatively easy experimental determination. To calculate the correlation dimension, one must estimate the quantity (cf. Eq. (3.14)) \[I(2,\,\varepsilon)=\sum_{i=1}^{\tilde{N}(\varepsilon)}\mu_{i}^{2} \tag{3.41}\] for different values of \(\varepsilon\). Say we have a set of orbit points on the attractor \(\mathbf{z}_{k}\) (\(k=0,\,1,\,2,\,\ldots,\,K\)). Then we compute the 'correlation integral' \[C(\varepsilon)=\lim_{K\to\infty}\frac{1}{K^{2}}\sum_{j\neq k}U(\varepsilon-|\mathbf{z}_{j}-\mathbf{z}_{k}|), \tag{3.42}\] where \(U(\cdot)\) is the unit step function. (The sum in (3.42) gives the number of point pairs that are separated by a distance less than \(\varepsilon\).) The quantity \(C(\varepsilon)\) may be shown to scale with \(\varepsilon\) in the same way as \(I(2,\,\varepsilon)\) scales with \(\varepsilon\). Thus \[D_{2}=\lim_{\varepsilon\to 0}\frac{\ln C(\varepsilon)}{\ln\varepsilon}. \tag{3.43}\] To see why \(C(\varepsilon)\) and \(I(2,\,\varepsilon)\) have the same scaling, note that (3.42) is roughly an average of \(\mu(B_{\varepsilon}(\mathbf{z}))\) over the natural measure. Now refer to Eq. (3.41) for \(I(2,\,\varepsilon)\). Noting that \(\mu(B_{\varepsilon}(\mathbf{z}))\approx\mu_{i}\) if \(\mathbf{z}\) is in cube \(i\), and replacing one of the \(\mu_{i}\) in Eq. (3.41) by \(\mu(B_{\varepsilon}(\mathbf{z}))\), we see that (3.41) is also roughly an average of \(\mu(B_{\varepsilon}(\mathbf{z}))\) over the natural measure. Hence \[I(2,\,\varepsilon)\approx C(\varepsilon),\] and Eq. (3.43) follows.
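A sketch of the resulting recipe (the orbit length and \(\varepsilon\) values are arbitrary, and the Henon map stands in for experimental data):

```python
import math

def henon_points(k, a=1.4, b=0.3):
    x, y = 0.1, 0.1
    pts = []
    for i in range(k + 500):
        x, y = a - x * x + b * y, x
        if i >= 500:                       # drop the transient
            pts.append((x, y))
    return pts

def corr_integral(pts, eps):
    # Finite-K estimate of Eq. (3.42): the fraction of point pairs
    # separated by a distance less than eps (U is the unit step function).
    k = len(pts)
    close = 0
    for i in range(k):
        xi, yi = pts[i]
        for j in range(i + 1, k):
            dx, dy = pts[j][0] - xi, pts[j][1] - yi
            if dx * dx + dy * dy < eps * eps:
                close += 1
    return 2.0 * close / (k * (k - 1))

pts = henon_points(1200)
eps_list = [0.4, 0.2, 0.1, 0.05]
xs = [math.log(e) for e in eps_list]
ys = [math.log(corr_integral(pts, e)) for e in eps_list]
mx, my = sum(xs) / 4, sum(ys) / 4
D2 = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / \
     sum((a - mx) ** 2 for a in xs)
# The Grassberger-Procaccia estimate; reported values for the Henon
# attractor are around 1.2.
```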
Equation (3.43) provides a useful means of estimating \(D_{2}\) since \(C(\varepsilon)\) from (3.42) can be estimated by using a finite but large \(K\) value. This method for calculating \(D_{2}\) and the method of calculating \(\overline{D}_{p}\) by averaging experimentally determined values of \(D_{p}(\mathbf{z}_{*})\) over many reference points \(\mathbf{z}_{*}\) on an orbit on the attractor require similar computational power and data quality, although one might expect better statistics for \(D_{2}\) since it more heavily weights higher measure regions. Brandstater and Swinney in their paper report that they have used both methods. Since \(\overline{D}_{p}\) typically is equal to \(D_{1}\), which typically exceeds \(D_{2}\) (Eq. (3.16)), one would expect that the pointwise dimension values might be larger than the correlation dimension values. Brandstater and Swinney find, however, that the accuracy of their measurements is insufficient to distinguish the two. Thus Figure 3.10(\(b\)) can be regarded as applying to both \(D_{p}\) and \(D_{2}\). In the measurement of fractal dimension in experiments it is often important to consider the effect of noise. If we assume the noise is white (i.e., it has a flat frequency power spectrum), then we can regard it as essentially fattening (or 'fuzzing') the attractor by an amount of order \(\eta\), where \(\eta\) represents the typical noise amplitude (see, for example, Ott and Hanson (1981), Ott _et al._ (1985), Jung and Hanggi (1990)). Thus, for observations of the attractor characteristics on scales \(\varepsilon\) greater than the noise level \(\eta\), the attractor appears to be fractal, while for scales \(\varepsilon<\eta\) the attractor appears to be an \(N\) dimensional volume, where \(N\) is the dimension of the space in which the attractor lies.
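The effect of noise on a dimension estimate can be illustrated with a toy example: a one dimensional set in the plane, fattened by noise of amplitude \(\eta\). The slope of \(\ln C(\varepsilon)\) comes out near 1 well above the noise level and near the embedding dimension 2 below it (all parameter values here are arbitrary illustrative choices):

```python
import math
import random

def corr_slope(pts, e1, e2):
    # Dimension as seen between two scales: the local slope of
    # ln C(eps) versus ln eps (cf. Eq. (3.43)).
    def C(eps):
        k, close = len(pts), 0
        for i in range(k):
            xi, yi = pts[i]
            for j in range(i + 1, k):
                dx, dy = pts[j][0] - xi, pts[j][1] - yi
                if dx * dx + dy * dy < eps * eps:
                    close += 1
        return 2.0 * close / (k * (k - 1))
    return (math.log(C(e1)) - math.log(C(e2))) / (math.log(e1) - math.log(e2))

random.seed(2)
eta = 2e-3                                 # noise amplitude
# A one-dimensional set (a line segment) in the plane, fuzzed by noise.
pts = [(random.random() + random.gauss(0.0, eta), random.gauss(0.0, eta))
       for _ in range(2000)]

coarse = corr_slope(pts, 0.2, 0.05)        # scales well above eta: slope ~ 1
fine = corr_slope(pts, 1e-3, 3e-4)         # scales below eta: slope ~ 2
```

This is the sense in which white noise limits the smallest usable \(\varepsilon\): below the noise level the estimate reflects the embedding space, not the set.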
Figure 3.11 shows an illustration of this effect from numerical experiments by Ben Mizrachi _et al._ (1984). The figure shows \(\log_{2}C(\varepsilon)\) versus \(\log_{2}\varepsilon\) for numerical experiments on the Henon map embedded in three dimensions (a delay coordinate vector \(\mathbf{y}_{n}=(x_{n}^{(1)},\,x_{n-1}^{(1)},\,x_{n-2}^{(1)})\) was used; see Eq. (1.16)). Curve 1 is for the map without noise and yields \(D_{2}\simeq 1.25\) (the slope of the fitted line). Curve 2 is for the map with a random amount of noise added at each iterate. Curve 3 is similar but with larger noise. For \(\varepsilon>\eta\) the results for all three agree. For \(\varepsilon<\eta\) the noise causes the slope of the fitted line to be 3, the dimension of the embedding space. Thus white noise effectively limits the smallest size \(\varepsilon\) that can be used in dimension determinations.

Figure 3.11: Log\({}_{2}\)\(C(\varepsilon)\) versus \(\log_{2}\varepsilon\) for the Henon map embedded in three dimensions without (curve 1) and with (curves 2 and 3) noise. Noise for curve 3 is greater than noise for curve 2 (Ben Mizrachi _et al._, 1984).

### A direct experimental observation of fractal attractors

Consider particles floating on a fluid surface where \(\mathbf{v}(x,\,y,\,z,\,t)\) denotes the fluid velocity at a point \((x,\,y,\,z)\) in the fluid. In most cases, to a very good approximation, the velocity of the floating particles is the fluid velocity \(\mathbf{v}\) evaluated on the surface. We denote this velocity \(\mathbf{\bar{v}}(x,\,y,\,t)\). An important point is that even though the flow \(\mathbf{v}\) may be incompressible, the two dimensional motion of floating particles in the fluid surface may be compressible.
In particular, the dynamical system \(\mathrm{d}\mathbf{\bar{x}}/\mathrm{d}t=\mathbf{\bar{v}}(\mathbf{\bar{x}},\,t)\), where \(\mathbf{\bar{x}}=(x,\,y)\), may have attractors, even though (due to incompressibility of the fluid, \(\mathbf{\nabla}\cdot\mathbf{v}=0\)) the dynamical system \(\mathrm{d}\mathbf{x}/\mathrm{d}t=\mathbf{v}(\mathbf{x},\,t)\), where \(\mathbf{x}=(x,\,y,\,z)\), cannot have attractors. For a simple illustration of this see Figure 3.12, in which the arrows indicate the direction of a steady incompressible flow. Floating particles initially distributed on the surface are compressed and tend toward point \(A\) as time increases. Thus, \(A\) is an attractor for the dynamical system \(\mathrm{d}\mathbf{\bar{x}}/\mathrm{d}t=\mathbf{\bar{v}}\). The important thing to note, however, is that, with slightly more complicated time dependent flows, the cloud can eventually coalesce on to a fractal; i.e., the attractor for \(\mathrm{d}\mathbf{\bar{x}}/\mathrm{d}t=\mathbf{\bar{v}}\) can be a fractal chaotic attractor.

Figure 3.12: Floating particles are attracted to \(A\).

Note that the fractal attractors we have dealt with so far typically exist in an abstract phase space. Here, on the other hand, the phase space is the actual physical coordinates of surface particles. Consequently, the fractal in this case is a real physical object and is, in principle, accessible to direct visual inspection. This was demonstrated in a fluid experiment (Sommerer and Ott, 1993a) with an irregular pulsatile flow. Figure 3.13 shows a schematic diagram of the setup of the experiment. Sucrose solution flows upward in an outer cylindrical region, across the annular sill, and downward in the central region. The flow is periodically pulsed, and fluid instabilities lead to strong azimuthal dependence of the flow velocity **v** (although the container is essentially rotationally symmetric).
The information dimension of the resulting particle distribution was determined by taking the intensity of light measured in a given camera pixel as proportional to the number of particles. Normalizing then gives an approximation to the natural measure on the attractor. Using boxes of varying size \(\epsilon\), the numerator of Eq. (3.15) was then plotted versus the denominator. The result was very well fitted by a straight line down to the smallest accessible value of \(\epsilon\) (namely, the pixel size), and \(D_{1}\) was then estimated as the slope of this line. For this distribution \(D_{1}\) was estimated to be 1.73 \(\pm\) 0.03.

Figure 3.13: Schematic of the experiment of Sommerer and Ott (1993a).

### Embedding

Say we use delay coordinates (as in Section 1.6) to construct a \(d\) dimensional vector \(\mathbf{y}=(g(t)\), \(g(t-\tau)\), \(g(t-2\tau)\),..., \(g[t-(d-1)\tau])\); see Eq. (1.16). In principle, if \(d\) is large enough, and the attractor is finite dimensional, then there exists some dynamical system describing the evolution of the vector \(\mathbf{y}\). The question we now wish to address is how large must \(d\) be for this to be so. We assume that an actual smooth low dimensional system which describes the dynamics exists, \[\mathrm{d}\mathbf{x}/\mathrm{d}t=\mathbf{F}(\mathbf{x}),\] where \(\mathbf{x}\) has some dimensionality \(d^{\prime}\). The observed quantity \(g(t)\) may (as discussed in Section 1.6) be regarded as a smooth function of the state variable \(\mathbf{x}\). Hence \(\mathbf{y}\) is a function of \(\mathbf{x}\), \[\mathbf{y}=\mathbf{H}(\mathbf{x}).\] The state of the system is given by \(\mathbf{x}\), and knowledge of \(\mathbf{x}\) at any time is sufficient to evolve the system into the future by \(\mathrm{d}\mathbf{x}/\mathrm{d}t=\mathbf{F}(\mathbf{x})\).
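In practice one assembles such delay vectors directly from a sampled record. A minimal sketch (the convention \(\mathbf{y}_{n}=(g_{n},\,g_{n-\tau},\,\ldots,\,g_{n-(d-1)\tau})\), with \(\tau\) in units of the sampling interval, is the discrete-time version of the vector defined above; the signal values here are an arbitrary stand-in):

```python
import numpy as np

def delay_vectors(g, d, tau):
    """Stack delay coordinate vectors y_n = (g_n, g_{n-tau}, ...,
    g_{n-(d-1)tau}), one row per valid index n = (d-1)*tau, ..., len(g)-1."""
    g = np.asarray(g)
    n0 = (d - 1) * tau
    return np.column_stack([g[n0 - j * tau : len(g) - j * tau]
                            for j in range(d)])

g = np.arange(8.0)            # a stand-in for a measured scalar signal
Y = delay_vectors(g, d=3, tau=2)
print(Y)                      # first row is y_4 = (g_4, g_2, g_0)
```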
For there to be a dynamical system that evolves delay coordinate vectors \(\mathbf{y}=\mathbf{H}(\mathbf{x})\) forward in time, it is sufficient that the function \(\mathbf{y}=\mathbf{H}(\mathbf{x})\) be such that if \(\mathbf{x}_{0}\) denotes a system state and \(\mathbf{y}_{0}=\mathbf{H}(\mathbf{x}_{0})\), then there is no other possible system state \(\mathbf{x}_{0}^{\prime}\neq\mathbf{x}_{0}\) satisfying \(\mathbf{y}_{0}=\mathbf{H}(\mathbf{x}_{0}^{\prime})\). Hence, \(\mathbf{x}_{0}\) determines \(\mathbf{y}_{0}\) and vice versa. Thus given delay coordinates \(\mathbf{y}_{0}=\mathbf{H}(\mathbf{x}_{0})\), the state \(\mathbf{x}_{0}\) is uniquely determined and can be evolved forward any amount in time by \(\mathrm{d}\mathbf{x}/\mathrm{d}t=\mathbf{F}(\mathbf{x})\) to a new state, which can then be transformed to the \(\mathbf{y}\) variable by the function \(\mathbf{H}\). This, in principle, defines a dynamical system evolving \(\mathbf{y}\) forward in time. The key point is that the function \(\mathbf{H}\) must satisfy the condition that \(\mathbf{x}\neq\mathbf{x}^{\prime}\) implies \[\mathbf{H}(\mathbf{x})\neq\mathbf{H}(\mathbf{x}^{\prime}).\] If this is so, then we say that \(\mathbf{H}\) is an _embedding_ of the \(d^{\prime}\) dimensional \(\mathbf{x}\) space into the \(d\) dimensional \(\mathbf{y}\) space. As an example, say our dynamical system in \(\mathbf{x}\) is the simple one dimensional dynamical system \(\mathrm{d}x/\mathrm{d}t=\omega\) and \(x\) is an angle variable (i.e., in our previous notation, \(x=\theta\), and \(x\) and \(x+2\pi\) are identified as equivalent). If we were to use a three dimensional embedding (\(d=3\)), then we can write \(\mathbf{y}\) as \(\mathbf{y}=[G(x(t))\), \(G(x(t-\tau))\), \(G(x(t-2\tau))]\), where \(G(x)\) is \(2\pi\) periodic in \(x\). Thus, the orbit would be expected to be a limit cycle lying on a closed curve in \(\mathbf{y}\) space as shown in Figure 3.14(\(a\)).
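Whether a given delay map is one-to-one can be probed numerically. In the sketch below (the waveform \(G(x)=\cos x+\frac{1}{2}\cos 2x\) and the delay \(\omega\tau=2\) are arbitrary illustrative choices, not taken from the text), we scan pairs of phases that are well separated on the circle and record the closest approach of the delay-coordinate curve to itself: for this particular \(G\), the two dimensional curve turns out to pass through itself (closest approach at the grid resolution, the situation of Figure 3.14(\(b\))), while the three dimensional curve stays a simple closed loop.

```python
import numpy as np

def G(x):
    # a 2*pi periodic observable (pure cos would be a nongeneric choice)
    return np.cos(x) + 0.5 * np.cos(2 * x)

def closest_self_approach(d, phi=2.0, n=2000, min_sep=0.5):
    """Minimum distance between delay vectors y(theta) = (G(theta),
    G(theta - phi), ..., G(theta - (d-1)*phi)) over phase pairs whose
    circular separation exceeds min_sep; near zero means the delay
    curve intersects itself."""
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    dist2 = np.zeros((n, n))
    for j in range(d):                      # accumulate squared distances
        c = G(theta - j * phi)
        dist2 += (c[:, None] - c[None, :]) ** 2
    sep = np.abs(theta[:, None] - theta[None, :])
    sep = np.minimum(sep, 2 * np.pi - sep)  # separation on the circle
    return np.sqrt(dist2[sep > min_sep].min())

m2 = closest_self_approach(d=2)
m3 = closest_self_approach(d=3)
print(f"closest self-approach: d=2 -> {m2:.4f},  d=3 -> {m3:.4f}")
```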
Now say we use a two dimensional embedding, \(\mathbf{y}=[G(x(t))\), \(G(x(t-\tau))]\). The picture might look something like that shown in Figure 3.14(\(b\)). That is, the mapping of \(\mathbf{x}\) values to \(\mathbf{y}\) space might produce a curve which intersects itself. We cannot now have a dynamical system in \(\mathbf{y}\), because at \(\mathbf{y}\) values at these intersections there are two possible \(\mathbf{x}\) values. Hence, specification of \(\mathbf{y}\) does not, even in principle, allow us to determine the future system evolution. Thus the question is how large does \(d\) typically have to be to ensure that we avoid self intersections of the \(d^{\prime}\) dimensional \(\mathbf{x}\) space when we attempt to embed it in a \(d\) dimensional delay coordinate \(\mathbf{y}\) space. Takens (1980) addressed this question and obtained the result that generically \[d\geq 2d^{\prime}+1 \tag{3.45}\] is sufficient. For our example above \(d^{\prime}=1\), and Eq. (3.45) says that intersections are generically absent if \(d=3\) (as in Figure 3.14(\(a\))) or larger. We now give a heuristic discussion and justification for (3.45), following which we discuss some applications of embedding. Say we have a smooth surface of dimension \(d_{1}\) and another of dimension \(d_{2}\), both lying in an \(N\) dimensional Cartesian space. If these surfaces intersect in a _generic intersection_, then the dimension \(d_{0}\) of the intersection set is \[d_{0}=d_{1}+d_{2}-N. \tag{3.46}\] If Eq. (3.46) yields \(d_{0}<0\), then the two sets do not generically intersect. Figure 3.15 illustrates this equation for several cases. Figure 3.15(\(a\)) shows a generic intersection of two curves in a two dimensional space. In this case \(d_{1}=d_{2}=1\), \(N=2\) and hence by Eq. (3.46) the dimension of the intersection is zero; the intersection set is two points. Figure 3.15(\(b\)) shows a nongeneric intersection of two curves in a two dimensional space. Here the intersection is one dimensional.
It is nongeneric because it can be destroyed by an arbitrarily small smooth perturbation. For example, rigidly shifting one of the curves by an arbitrarily small amount can either convert the one dimensional intersection to a zero dimensional generic intersection (Figure 3.15(\(c\))), or, if the curve is shifted in the other direction, then there is no intersection at all. In contrast, the generic intersections of Figure 3.15(_a_) cannot be qualitatively altered by small smooth changes in the curves. Figure 3.15(_d_) shows a case where Eq. (3.46) yields a negative value (\(d_{1}=d_{2}=1\), \(N=3\)) indicating that two one dimensional curves do not generically intersect in spaces of dimension \(N\geq 3\). Figure 3.15(_e_) shows a curve (\(d_{1}=1\)) and a two dimensional surface (\(d_{2}=2\)) in a three dimensional space (\(N=3\)) generically intersecting at a point with \(d_{0}=0\) as predicted by Eq. (3.46). Figure 3.15(_f_) shows two two dimensional surfaces (\(d_{1}=d_{2}=2\)) in a three dimensional space (\(N=3\)) generically intersecting in a curve so that \(d_{0}=1\), again as predicted by Eq. (3.46).

Figure 3.14: Embedding of a limit cycle in (\(a\)) a three dimensional delay coordinate space and (\(b\)) a two dimensional delay coordinate space.

In Takens' result Eq. (3.45) we were concerned with self intersections (e.g., Figure 3.14). Thus Eq. (3.45) follows from (3.46) by requiring \(d_{0}<0\) and setting \(d_{1}=d_{2}=d^{\prime}\) and \(N=d\), with the delay coordinate transformation function \({\bf H}\) regarded as being a typical function (so that only generic intersections are expected). Recently, methods have been proposed whereby one can determine a map from experimental data which may be noisy. In principle, this can be done without any knowledge of the physical processes determining the evolution.
The only knowledge necessary is that the experimentally observed time evolution results from a low dimensional dynamical system. (Information on the system dimensionality can be obtained by measurement of the fractal dimension as discussed in Section 3.7.) One way to proceed is as follows. First one forms a delay coordinate vector \({\bf y}\) of sufficient dimensionality. Then a surface of section is used to obtain a large amount of data giving a discrete trajectory \(\chi_{1}\), \(\chi_{2}\), \(\chi_{3}\),..., for points in the surface of section. (Alternatively \({\bf y}(t)\) can be sampled at finite time intervals to give an orbit for a time \(T\) map as discussed at the end of Section 1.3.) Next one attempts to fit this data to a map, \(\chi_{n+1}={\bf F}(\chi_{n})\). That is, one attempts to find the function \({\bf F}\). This may not be possible unless the dimensionality \(d\) of the original delay coordinate vector \({\bf y}\) is large enough (e.g., Eq. (3.45) is a sufficient condition for this to be so). One method is to approximate \({\bf F}\) as a locally linear function. That is, in a small region of \(\chi\) space one can approximate \({\bf F}\) as \(\chi_{n+1}={\bf A}\cdot\chi_{n}+{\bf b}\), where \({\bf A}\) and \({\bf b}\) are a matrix and a vector. The matrix \({\bf A}\) and the vector \({\bf b}\) are obtained by least squares fitting to all the experimental observations \(\chi_{i}\) and \(\chi_{i+1}\) such that \(\chi_{i}\) falls near \(\chi_{n}\). By using many data pairs, \(\chi_{i}\) and \(\chi_{i+1}\), the least squares fitting has the effect of averaging out random noise contamination of the data. By piecing together results from many such small regions, we get a global approximation to \({\bf F}\). Hence, we obtain a dynamical system for the experimental process.
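A minimal sketch of the locally linear fit follows, with a trajectory of the Henon map standing in for experimental surface-of-section data (the map parameters, trajectory length, and neighbor count are arbitrary choices, and no noise is added here, although with noisy data the same least squares step performs the averaging described above):

```python
import numpy as np

def henon(p, a=1.4, b=0.3):
    x, y = p
    return np.array([a - x**2 + b * y, x])

# a long 'experimental' trajectory (transient discarded)
traj = [np.array([0.1, 0.1])]
for _ in range(3100):
    traj.append(henon(traj[-1]))
traj = np.array(traj[100:])
X, Xnext = traj[:-1], traj[1:]

def predict(q, k=30):
    """Locally linear prediction: least squares fit chi_{n+1} = A chi_n + b
    over the k nearest neighbors of the query point q."""
    idx = np.argsort(np.linalg.norm(X - q, axis=1))[:k]
    design = np.column_stack([X[idx], np.ones(k)])   # columns for A and b
    coef, *_ = np.linalg.lstsq(design, Xnext[idx], rcond=None)
    return np.append(q, 1.0) @ coef

q = traj[1500]
err = np.linalg.norm(predict(q) - henon(q))
print(f"prediction error: {err:.2e}")
```

Piecing such local fits together over many reference points gives the global approximation to \({\bf F}\) described in the text.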
Possible uses for such an approach include the prediction of the future evolution of the system (Farmer and Sidorowich, 1987; Casdagli, 1989; Abarbanel _et al._, 1990; Poggio and Girosi, 1990; Linsay, 1991), removing noise from chaotic data (Kostelich and Yorke, 1988; Hammel, 1990), obtaining Lyapunov exponents from experimental data (Eckmann and Ruelle, 1985; Eckmann _et al._, 1986; Sano and Sawada, 1985; Wolf _et al._, 1985; Bryant _et al._, 1990; cf. Chapter 4), finding unstable periodic orbits embedded in a chaotic attractor (Gunaratne _et al._, 1989; Lathrop and Kostelich, 1989; Sommerer _et al._, 1991a), and controlling chaotic dynamical systems by application of small controls (Ott _et al._, 1990a,b; Shinbrot _et al._, 1990; Ditto _et al._, 1990a; Dressler and Nitsche, 1992; cf. Chapter 4).

### Fat fractals

In this chapter we have primarily been discussing the fractal dimension of sets of zero Lebesgue measure in the phase space. There are, however, Cantor sets with nonzero Lebesgue measure, and such sets often appear in nonlinear dynamics, as we shall see. Farmer (1985) has called these kinds of sets _fat fractals_. Grebogi _et al._ (1985b) define a set lying in an \(N\) dimensional Euclidean space to be a fat fractal if, for every point **x** in the set and every \(\varepsilon>0\), a ball of radius \(\varepsilon\) centered at the point **x** contains a nonzero volume (Lebesgue measure) of points in the set, as well as a nonzero volume outside the set. (If \(N=1\) the 'ball of radius \(\varepsilon\)' is the interval \([x-\varepsilon\), \(x+\varepsilon]\).) We have already seen examples of fat fractals in Chapter 2, namely the set \(S^{*}\) discussed in Section 2.2 and the set of \(r\) values for which the attractor of the logistic map is chaotic.
Other examples will be the set of parameter values yielding quasiperiodic motions (see Chapter 6), and the regions of phase space of a nonintegrable Hamiltonian system on which there is nonchaotic motion on KAM tori (see Chapter 7). Since fat fractals have positive Lebesgue measure, their box counting dimension is the same as the dimension of the space in which they lie, \(D_{0}=N\). Thus the box counting dimension of these sets says nothing about the infinitely fine scaled structure that they possess. One would like to have a quantitative way of characterizing this structure analogous to the box counting dimension of fractals with zero Lebesgue measure ('skinny fractals'). One way of doing this is by the _exterior dimension_ definition of Grebogi _et al._ (1985b). These authors begin by noting that, given an ordinary skinny fractal set \(S_{0}\), the box counting definition Eq. (3.1) is equivalent to \[D_{0}=N-\lim_{\varepsilon\to 0}\ln V[S(\varepsilon)]/\ln\varepsilon, \tag{3.47}\] where \(S(\varepsilon)\) is obtained by fattening the original set \(S_{0}\) by an amount \(\varepsilon\) (i.e., the set \(S(\varepsilon)\) consists of the original set \(S_{0}\) plus all points within a distance \(\varepsilon\) from \(S_{0}\), and so \(S_{0}\equiv S(0)\)), and \(V[S(\varepsilon)]\) is the \(N\) dimensional volume of this set. To see how Eq. (3.47) arises, say we cover \(S_{0}\) with \(\tilde{N}(\varepsilon)\) cubes from an \(N\) dimensional grid. The volume of all these cubes is \(\varepsilon^{N}\tilde{N}(\varepsilon)\), and this volume scales in roughly the same way with \(\varepsilon\) as \(V[S(\varepsilon)]\). That is, \[V[S(\varepsilon)]\sim\varepsilon^{N}\tilde{N}(\varepsilon).\] Putting this estimate in Eq. (3.47) immediately reproduces the box counting dimension definition Eq. (3.1).
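Equation (3.47) can be tried directly on a skinny fractal. The sketch below (the construction stage and the range of \(\varepsilon\) are arbitrary choices, with \(\varepsilon\) kept much larger than the finest scale of the finite approximation) fattens a finite-stage approximation of the middle-third Cantor set, merges the fattened intervals, and recovers \(D_{0}\approx\ln 2/\ln 3\) from the scaling of the total length \(V[S(\varepsilon)]\) with \(N=1\).

```python
import numpy as np

def cantor_intervals(m):
    """Left endpoints of the 2^m intervals (each of length 3^-m) of the
    stage-m middle-third Cantor construction."""
    left = np.array([0.0])
    for k in range(m):
        left = np.concatenate([left, left + 2 / 3 ** (k + 1)])
    return np.sort(left)

def fattened_length(left, width, eps):
    """Total length of the union of the intervals fattened by eps."""
    a, b = left - eps, left + width + eps
    total, cur_a, cur_b = 0.0, a[0], b[0]
    for ai, bi in zip(a[1:], b[1:]):      # merge overlapping intervals
        if ai <= cur_b:
            cur_b = max(cur_b, bi)
        else:
            total += cur_b - cur_a
            cur_a, cur_b = ai, bi
    return total + (cur_b - cur_a)

m = 10
left = cantor_intervals(m)
eps = np.logspace(-3.5, -1.5, 15)         # all >> 3**-m, the finest scale
V = np.array([fattened_length(left, 3.0 ** -m, e) for e in eps])
slope, _ = np.polyfit(np.log(eps), np.log(V), 1)
D0 = 1.0 - slope                          # Eq. (3.47) with N = 1
print(f"estimated D0 = {D0:.3f}  (ln 2 / ln 3 = {np.log(2)/np.log(3):.3f})")
```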
We now define the exterior dimension of \(S_{0}\) as \[D_{x}\equiv N-\lim_{\varepsilon\to 0}\ln V[\bar{S}(\varepsilon)]/\ln\varepsilon, \tag{3.48}\] where \(\bar{S}(\varepsilon)=S(\varepsilon)-S_{0}\) is what remains if the original set is deleted from the fattened set (hence the name _exterior_ dimension). (We assume the set \(S_{0}\) is closed.) For skinny fractals \(V[S_{0}]=0\), and we thus have \(V[\bar{S}(\varepsilon)]=V[S(\varepsilon)]\) so that the exterior dimension reduces to the box counting dimension, \(D_{x}=D_{0}\). However, unlike the box counting dimension, \(D_{x}\) gives nontrivial results for fat fractals. Note from the definition (3.48) that \[V[\bar{S}(\varepsilon)]\sim\varepsilon^{N-D_{x}}. \tag{3.49}\] One appealing way of interpreting the exterior dimension of a fat fractal is in terms of an _uncertainty exponent_. Say we consider some point \(\mathbf{z}\) in a bounded region of space \(R\) that also contains the fat fractal set \(S_{0}\). We are asked to determine whether or not \(\mathbf{z}\) is in \(S_{0}\), but we are also told that the values given for the coordinates of \(\mathbf{z}\) have an uncertainty \(\varepsilon\). Thus we do not know \(\mathbf{z}\) precisely; rather we only know that it lies somewhere in the ball of radius \(\varepsilon\) centered at the coordinate values (call them \(\mathbf{y}\)) that we have been given. We can evaluate whether the point \(\mathbf{y}\) lies in \(S_{0}\) or does not lie in \(S_{0}\). If we say that \(\mathbf{z}\) lies in \(S_{0}\) because we examine \(\mathbf{y}\) and find that it lies in \(S_{0}\), we may be wrong. Specifically, if \(\mathbf{z}\) lies in \(\bar{S}(\varepsilon)\), then we will commit an error. Now say we choose \(\mathbf{z}\) at random in the bounded region \(R\) containing \(S_{0}\). What is the probability that we commit an error by saying \(\mathbf{z}\) lies in \(S_{0}\) when \(\mathbf{y}\) is determined to lie in \(S_{0}\)?
This probability is proportional to \(V[\bar{S}(\varepsilon)]\) which, according to Eq. (3.49), scales as \(\varepsilon^{\overline{\alpha}}\), where \[\overline{\alpha}=N-D_{x}. \tag{3.50}\] We call \(\overline{\alpha}\) the uncertainty exponent. If \(\overline{\alpha}\) is small, then a large improvement in accuracy (i.e., reduction in \(\varepsilon\)) leads to only a relatively small improvement in the ability to determine whether \(\mathbf{z}\) lies in \(S_{0}\). Thus, it becomes difficult to improve accuracy in the determination of whether \(\mathbf{z}\) lies in \(S_{0}\) by improving the accuracy of the coordinates if \(D_{x}\) is close to the dimension of the space \(N\). (For further discussion of uncertainty exponents see Chapter 5.) The above discussion provides a means of evaluating \(\overline{\alpha}\) and hence \(D_{x}\) in certain cases. As an example, we consider the measure of the parameter \(A\) for which the quadratic map \(x_{n+1}=A-x_{n}^{2}\) is chaotic. (Since the quadratic map and the logistic map may be related by a simple change of variables, \(D_{x}\) for the set of values of the parameter yielding chaos for the two maps is the same.) The procedure used by Grebogi _et al._ (1985b) to estimate \(D_{x}\) for this set is as follows. First they choose a value of \(A\) at random in the chaotic range using a random number generator. They then perturb this value to \(A-\varepsilon\) and \(A+\varepsilon\). If all three choices yield the same kind of attractor (i.e., all three periodic as indicated by all three having negative Lyapunov exponents, or all three chaotic as indicated by all three having positive Lyapunov exponents), then they say that the \(A\) value is 'certain' for this value of \(\varepsilon\). If not (i.e., one of the exponents has a different sign from the other two), then they say it is 'uncertain'.
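The certain/uncertain sampling just described is straightforward to sketch, though clean scaling takes far more samples and iterates than is practical here. Below (the sample count, iterate counts, \(\varepsilon\) range, and the parameter window \(A\in[1.5,\,1.79]\) are illustrative choices), the Lyapunov exponent \(h=\lim_{n\to\infty}(1/n)\sum\ln|-2x_{n}|\) of the quadratic map classifies each of \(A-\varepsilon\), \(A\), \(A+\varepsilon\) as chaotic (\(h>0\)) or periodic (\(h<0\)); the uncertain fraction \(f(\varepsilon)\) is then fit to a power law.

```python
import numpy as np

def lyapunov(A, n_transient=200, n=1000):
    """Lyapunov exponent of x -> A - x^2 for an array of A values,
    estimated from a finite orbit of the critical point x = 0."""
    x = np.zeros_like(A)
    for _ in range(n_transient):
        x = A - x * x
    h = np.zeros_like(A)
    for _ in range(n):
        x = A - x * x
        h += np.log(np.abs(2 * x) + 1e-300)   # guard against x = 0
    return h / n

rng = np.random.default_rng(1)
A = rng.uniform(1.5, 1.79, size=1000)
eps_list = np.logspace(-4.5, -2, 4)
f = []
for eps in eps_list:
    signs = [np.sign(lyapunov(A + s)) for s in (-eps, 0.0, eps)]
    uncertain = (signs[0] != signs[1]) | (signs[1] != signs[2])
    f.append(uncertain.mean())
f = np.array(f)
slope, _ = np.polyfit(np.log(eps_list), np.log(f), 1)
print("f(eps):", f, " estimated uncertainty exponent:", round(slope, 2))
```

With far longer orbits and many more samples the fitted exponent should approach the published value near 0.41; the short orbits used here only roughen the classification near window edges.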
They repeat this procedure for a large number of \(A\) values and evaluate \(f(\varepsilon)\), the fraction of these random choices that are uncertain at the given value of \(\varepsilon\). They then vary \(\varepsilon\) over a large range, and determine \(f(\varepsilon)\) at several chosen values of \(\varepsilon\). Figure 3.16 shows results of numerical experiments on the scaling of \(f(\varepsilon)\) with \(\varepsilon\). The data on the log log plot are well fit by a straight line of slope \(0.413\pm 0.005\), indicating a power law behavior \(f\sim K\varepsilon^{\overline{\alpha}}\) with an uncertainty exponent of \(\overline{\alpha}=0.413\pm 0.005\) corresponding to \(D_{x}=0.587\pm 0.005\). To within numerical accuracy the same value was obtained using a different procedure by Farmer (1985) for a quantity related to, but somewhat different from, the exterior dimension definition given here. Farmer conjectured that the fat fractal dimension of the set of chaotic parameter values is a universal number independent of the functional form of the map for a wide class of one dimensional maps.

Figure 3.16: Log log plot of \(f(\varepsilon)\) versus \(\varepsilon\) for the quadratic map (Grebogi _et al._, 1985b).

## Appendix: Hausdorff dimension

In this appendix we shall define and discuss the definition of dimension originally given by Hausdorff (1918) and called the _Hausdorff dimension_. This dimension definition is somewhat more involved than the box counting dimension definition, Eq. (3.1), but it has some notable advantages. For example, the set whose elements are the infinite sequence of points on the real line, \(1\), \(\frac{1}{2}\), \(\frac{1}{3}\),..., has a positive box counting dimension (see Problem 4). It may be viewed as a deficiency of the box counting dimension definition that it does not yield \(D_{0}=0\) for this example, which is just a discrete set of points. The Hausdorff dimension, however, yields zero for this set (Problem 19).
For typical invariant sets encountered in practice in chaotic dynamics, the box counting and Hausdorff dimensions are commonly thought to be equal. In order to define the Hausdorff dimension we first introduce the Hausdorff measure. Let \(A\) be a set in an \(N\) dimensional Cartesian space. We define the _diameter_ of \(A\), denoted \(|A|\), as the largest distance between any two points \(\mathbf{x}\) and \(\mathbf{y}\) in \(A\), \[|A|=\sup_{\mathbf{x},\mathbf{y}\in A}|\mathbf{x}-\mathbf{y}|.\] Let \(S_{i}\) denote a countable collection of subsets of the Cartesian space such that the diameters \(\varepsilon_{i}\) of the \(S_{i}\) are all less than or equal to \(\delta\), \[0<\varepsilon_{i}\leq\delta,\] and such that the \(S_{i}\) are a covering of \(A\), \(A\subset\cup_{i}S_{i}\). Then we define the quantity \(\Gamma_{\mathrm{H}}^{d}(\delta)\), \[\Gamma_{\mathrm{H}}^{d}(\delta)=\inf_{\{S_{i}\}}\sum_{i}\varepsilon_{i}^{d}. \tag{3.51a}\] That is, we look for that collection of covering sets \(S_{i}\) with diameters less than or equal to \(\delta\) which minimizes the sum in (3.51a), and we denote that minimized sum \(\Gamma_{\mathrm{H}}^{d}(\delta)\). The \(d\) dimensional Hausdorff measure is then defined as \[\Gamma_{\mathrm{H}}^{d}=\lim_{\delta\to 0}\Gamma_{\mathrm{H}}^{d}(\delta). \tag{3.51b}\] The Hausdorff measure generalizes the usual notions of the total length, area and volume of simple sets. For example, if the set \(A\) is a smooth surface of finite area situated in a three dimensional Cartesian space, then \(\Gamma_{\mathrm{H}}^{2}\) is just the area of the set, while \(\Gamma_{\mathrm{H}}^{d}\) for \(d<2\) is \(+\infty\), and \(\Gamma_{\mathrm{H}}^{d}\) for \(d>2\) is zero. In general, it can be shown that \(\Gamma_{\mathrm{H}}^{d}\) is \(+\infty\) if \(d\) is less than some critical value and is zero if \(d\) is greater than that critical value. We denote that critical value \(D_{\mathrm{H}}\) and call it the Hausdorff dimension of the set; see Figure 3.17.
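As a concrete illustration of how the critical value arises (a standard calculation, sketched here only in the direction that a simple covering provides), consider the middle third Cantor set of Figure 3.2. Covering it by the \(2^{k}\) stage-\(k\) construction intervals, each of diameter \(3^{-k}\), gives \[\Gamma_{\mathrm{H}}^{d}(\delta)\leq\sum_{i}\varepsilon_{i}^{d}=2^{k}3^{-kd}=\exp[k(\ln 2-d\ln 3)],\qquad\delta=3^{-k},\] which tends to zero as \(k\to\infty\) whenever \(d>\ln 2/\ln 3\). Hence \(\Gamma_{\mathrm{H}}^{d}=0\) for all \(d>\ln 2/\ln 3\), so that \(D_{\mathrm{H}}\leq\ln 2/\ln 3\). Showing that no cleverer covering does better (so that \(D_{\mathrm{H}}=\ln 2/\ln 3\), in agreement with the box counting value) requires a matching lower bound on the sums in (3.51a).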
The value of \(\Gamma_{\rm H}^{d}\) at \(d=D_{\rm H}\) can be zero, \(+\infty\) or a finite positive number. For instance, in our example above of the smooth surface \(D_{\rm H}=2\), and \(\Gamma_{\rm H}^{D_{\rm H}}\) is a finite positive number, the area of the surface. Now let us consider the relationship of the box counting dimension to the Hausdorff dimension. We cover the set \(A\) with a particular covering consisting of cubes of edge length \(\varepsilon\) from a grid in the \(N\) dimensional space. Denoting the cubes \(\widetilde{S}_{i}\) we have \(|\widetilde{S}_{i}|=\varepsilon_{i}=\varepsilon\surd N\). Using these cubes in the sum in (3.51a) (and consequently not carrying out the minimization prescribed by the inf in (3.51a)), we have \[\sum_{i}\varepsilon_{i}^{d}=\widetilde{N}(\varepsilon)\varepsilon^{d}N^{d/2}\equiv\overline{\Gamma}_{\rm H}^{d}(\delta),\] with \(\delta=\varepsilon N^{1/2}\). From (3.1) we assume that \(\widetilde{N}(\varepsilon)\sim\varepsilon^{-D_{0}}\) for small \(\varepsilon\). We then have \[\overline{\Gamma}_{\rm H}^{d}(\delta)\sim\varepsilon^{d-D_{0}}. \tag{3.52}\] Thus, \(\overline{\Gamma}_{\rm H}^{d}\equiv\lim_{\delta\to 0}\overline{\Gamma}_{\rm H}^{d}(\delta)\) is \(+\infty\) if \(d<D_{0}\) and is zero if \(d>D_{0}\). Since, in calculating \(\overline{\Gamma}_{\rm H}^{d}\), we do not carry out the minimization over all possible coverings, \[\overline{\Gamma}_{\rm H}^{d}(\delta)\geq\Gamma_{\rm H}^{d}(\delta).\] Thus, \(\overline{\Gamma}_{\rm H}^{d}\) must be as shown schematically by the dashed line in Figure 3.17. That is, \(D_{0}\) is an upper bound on \(D_{\rm H}\), \[D_{0}\geq D_{\rm H}. \tag{3.53}\] As an example, we now calculate the Hausdorff dimension of the chaotic attractor of the generalized baker's map, Figure 3.4. From the definition of the Hausdorff dimension it can be shown that (3.9) also holds for \(D_{\rm H}\).
Thus, we need only calculate the Hausdorff dimension \(\hat{D}_{\rm H}\) of the intersection of the attractor with the \(x\) axis. We denote this intersection \(\hat{A}\) and divide it into two disjoint pieces \[\hat{A}=\hat{A}_{a}\cup\hat{A}_{b},\] where \(\hat{A}_{a}\) is in the interval \([0,\,\lambda_{a}]\), and \(\hat{A}_{b}\) is in the interval \([(1-\lambda_{b}),\,1]\). Thus, \(\Gamma^{d}_{\rm H}(\delta)\) for the set \(\hat{A}\) can be written \[\Gamma^{d}_{\rm H}(\delta)=\Gamma^{d}_{\rm Ha}(\delta)+\Gamma^{d}_{\rm Hb}(\delta), \tag{3.54}\] where \(\Gamma^{d}_{\rm Ha}(\delta)\) and \(\Gamma^{d}_{\rm Hb}(\delta)\) denote terms in the sum (3.51a) used to cover \(\hat{A}_{a}\) and \(\hat{A}_{b}\) respectively. Noting the similarity property of the attractor we have \[\Gamma^{d}_{\rm Ha}(\delta)=\lambda_{a}^{d}\Gamma^{d}_{\rm H}(\delta/\lambda_{a}), \tag{3.55}\] with a similar result holding for \(\Gamma^{d}_{\rm Hb}(\delta)\). On the basis of the limiting behavior, Figure 3.17, we can assume \(\Gamma^{d}_{\rm H}(\delta)\) to have the following behavior for small \(\delta\) \[\Gamma^{d}_{\rm H}(\delta)\simeq K\delta^{-(\hat{D}_{\rm H}-d)}. \tag{3.56}\] Combining (3.54)-(3.56) we obtain \[1=\lambda_{a}^{\hat{D}_{\rm H}}+\lambda_{b}^{\hat{D}_{\rm H}}. \tag{3.57}\] Comparing (3.57) with (3.12) we see that \(\hat{D}_{0}=\hat{D}_{\rm H}\). Hence, the Hausdorff dimension and the box counting dimension are identical for the case of the attractor of the generalized baker's map. Thus, Eq. (3.53) holds with the equality applying. It has been widely conjectured that this is the case for the dimensions of typical chaotic attractors. (Although sets for which \(D_{0}\neq D_{\rm H}\) can be easily constructed (e.g., see Problems 4 and 19), sets for which \(D_{0}\neq D_{\rm H}\) do not seem to arise among invariant sets of typical dynamical systems.)

## Problems

1.
What is the box-counting dimension of the Cantor set obtained by removing the middle interval of length one half (instead of one third as in Figure 3.2) of the intervals on the previous stage of the construction?

2. Derive Eq. (3.6).

3. What are the box-counting dimensions of the sets, the first few stages of whose constructions are illustrated
   (a) in Figure 3.18,
   (b) in Figure 3.19,
   (c) in Figure 3.20?

4. Consider the set whose elements are the infinite sequence of points \(1,\frac{1}{2},\frac{1}{3},\frac{1}{4},\ldots\). What is the box-counting dimension of this set?

5. What is the box-counting dimension of the invariant set in \([0,1]\) for the one-dimensional map given by \[x_{n+1}=\begin{array}{ll}4x_{n}&\text{for }-\infty<x_{n}\leq\frac{1}{2},\\ 2(x_{n}-\frac{1}{2})&\text{for }\frac{1}{2}<x_{n}<+\infty?\end{array}\]

6. What is the box-counting dimension of the invariant set in \([0,1]\) for the one-dimensional map illustrated in Figure 3.21?

7. Consider the Cantor set constructed by the following infinitely iterated procedure. Start with the interval \([0,1]\). Remove the open middle \(\frac{1}{3}\) of this interval (i.e., remove \((\frac{1}{3},\frac{2}{3})\)). In each of the remaining two intervals remove from its middle an open interval \(\frac{1}{4}\) the length of the interval. At the next stage remove from the middles of the \(2^{2}=4\) remaining intervals an interval of length \(\frac{1}{5}\) of the interval. And so on.
   (a) Show that the length (Lebesgue measure) of this Cantor set is zero.
   (b) What is the capacity dimension \(D_{0}\) of the Cantor set?

Figure 3.18: Construction of the fractal for Problem 3(\(a\)).

8. For the map shown in Figure 2.29 find the capacity dimension \(D_{0}\) of the invariant set in \([0,\frac{1}{3}]\cup[\frac{2}{3},\,1]\) that never visits \((\frac{1}{3},\frac{2}{3})\).

9.
The numbers in the interval \([0,\ 1]\) can be represented as a ternary decimal \[x=\sum_{i=1}^{\infty}3^{-i}a_{i}=0.a_{1}a_{2}a_{3}\ \ldots\] where \(a_{i}=0,\ 1\) or \(2\). Show that the middle third Cantor set is the subset of numbers in \([0,\ 1]\) such that \(a_{i}=1\) never appears in their ternary decimal representation (i.e., only zeros and twos appear).

10. Consider the fractal of Problem 3(_a_) whose construction is illustrated in Figure 3.18. We put a measure on this fractal as follows. Let \(\alpha\), \(\beta\), \(\gamma\), \(\delta\) be positive numbers such that \(\alpha+\beta+\gamma+\delta=1\). At the first stage of construction the box at the upper right has \(\alpha\) of the measure, the box at the lower right has \(\beta\) of the measure, the box at the lower left has \(\gamma\) of the measure, and the box at the upper left has \(\delta\) of the measure. At the next stage of construction each of the four boxes at the first stage splits into four smaller boxes. The sum of the measures in the four smaller boxes is equal to the measure of the larger box containing them at the previous stage. We apportion this measure as before. That is, the fraction of the measure of the box on the previous stage that is assigned to the upper right smaller box which it contains is \(\alpha\), to the lower right smaller box \(\beta\), to the lower left smaller box \(\gamma\), and to the upper left smaller box \(\delta\). The same prescription is followed on subsequent stages. What is \(D_{q}\) for this measure?

Figure 3.19: First stages of the construction of the fractal of Problem 3(_b_). This fractal is called a ‘Koch curve.’

11. Consider the fractal of Problem 3(\(c\)) whose construction is illustrated in Figure 3.20. We put a measure on this fractal in a manner similar to that described in Problem 10. Let \(\alpha\) and \(\beta\) be positive numbers satisfying \(\alpha+4\beta=1\).
At the first stage of construction we assign a measure \(\alpha\) to the middle box of edge length \(\frac{1}{2}\) and we assign a measure \(\beta\) to each of the four boxes of edge length \(\frac{1}{4}\). On subsequent stages of the construction we apportion the measure of the box on the previous stage amongst the five smaller boxes it contains in a similar way (cf. Problem 10). What is \(D_{q}\) for this measure?

Figure 3.20: First two stages of the construction for the fractal of Problem 3(c).

12. Write a computer program to take iterates of the generalized baker's map. Choose \(\lambda_{a}=\lambda_{b}=\frac{1}{b}\), \(\alpha=0.4\), initial condition \((x_{0},y_{0})=(1/\sqrt{2},1/\sqrt{2})\), iterate the map 20 times, and then plot the next 1000 iterates to get a picture of the attractor.

13. Derive Eq. (3.24) from Eq. (3.23).

14. Show that the number of strips of width \(\lambda_{a}^{m}\lambda_{b}^{n-m}\) for the generalized baker's map is given by the binomial coefficient, Eq. (3.8).

15. Derive Eq. (3.16). (Hint: Show that \(\mathrm{d}/\mathrm{d}q[(1-q)^{-1}\ln\sum_{i}\mu_{i}^{q}]\le 0\). In doing this the following general result may be of use: if \(\mathrm{d}^{2}F(x)/\mathrm{d}x^{2}\ge 0\), then for any set of numbers \(p_{i}\ge 0\) which satisfies \(\sum_{i}p_{i}=1\) and any other set of numbers \(x_{i}\), we have \(\langle F(x)\rangle\ge F(\langle x\rangle)\), where \(\langle F(x)\rangle\equiv\sum_{i}p_{i}F(x_{i})\) and \(\langle x\rangle\equiv\sum_{i}p_{i}x_{i}\).)

16. Derive Eqs. (3.29)–(3.31).

17. Consider an attractor lying in an \(N\)-dimensional Cartesian space. Let \(C_{\varepsilon}(\mathbf{x})\) denote an \(N\)-dimensional cube of edge length \(2\varepsilon\) centered at the point \(\mathbf{x}\).
Show that \[\lim_{\varepsilon\to 0}\frac{\ln\mu(B_{\varepsilon}(\mathbf{x}))}{\ln\varepsilon}=\lim_{\varepsilon\to 0}\frac{\ln\mu(C_{\varepsilon}(\mathbf{x}))}{\ln\varepsilon}.\] (Hint: Consider the ball of radius \(\varepsilon\), \(B_{\varepsilon}(\mathbf{x})\), contained in \(C_{\varepsilon}(\mathbf{x})\) and the ball of radius \(\varepsilon N^{1/2}\) which contains \(C_{\varepsilon}(\mathbf{x})\).) Thus, \(D_{\mu}(\mathbf{x})\) can be defined using a cube \(C_{\varepsilon}(\mathbf{x})\) rather than a ball \(B_{\varepsilon}(\mathbf{x})\).

18. Using the result of Problem 17, calculate \(D_{p}\) for the generalized baker's map at the point \((x,y)=(0,\frac{1}{2})\). Is the pointwise dimension at this point the same as the information dimension, and what does this imply about the point \((0,\frac{1}{2})\)? (Hint: In taking the limit as \(\varepsilon\) goes to zero use a subsequence, \(\varepsilon_{k}=\lambda_{a}^{k}\), and let the integer \(k\) go to infinity.)

19. Show that the Hausdorff dimension of the set in Problem 4 is zero.

20. Find the Hausdorff dimension for the set in Problem 5.

21. Consider the set the first two stages of whose construction are shown in Figure 3.22. (a) Show that this set is uncountable and that its Lebesgue measure in the plane (roughly its area) is zero. (b) Find its box-counting dimension. (c) If we put a probability measure on the set by equally dividing the measure in a shaded box at any stage between the four boxes that it contains on the next stage (regardless of their sizes), what is \(D_{q}\) for this measure? Plot it for \(0\le q\le 2\) and verify that it decreases with increasing \(q\). (d) For the measure in (c) what is the pointwise dimension at \((x,y)=(\frac{1}{2},\frac{1}{2})\)? What is it at \((x,y)=(1,1)\)?

22.
The generalized baker's map (Figure 3.4) with \(\alpha=\frac{1}{2}\), \(\lambda_{a}=\lambda_{b}=\frac{1}{3}\) has a period two orbit alternately visiting the \((x,y)\) points \((\frac{1}{2},\frac{2}{3})\), \((\frac{3}{7},\frac{1}{3})\). What is the pointwise dimension of the natural measure at the point \((x,y)=(\frac{1}{7},\frac{1}{3})\)?

Figure 3.22: First two stages of the construction for the fractal of Problem 21.

## Notes

1. The box-counting dimension may be thought of as a simplified version of the Hausdorff dimension (Hausdorff, 1918), a notion which we will discuss in the appendix to this chapter.

2. Henceforth, whenever we write a limit, as in Eq. (3.1), it is to be understood that we are making the assumption that the limit exists.

3. The blow-ups of the Henon attractor in Figures 1.12(\(b\)) and (\(c\)) show apparent self-similarity. We emphasize, however, that this is only because these blow-ups are made about a fixed point of the map that lies on the attractor. If we chose a more typical point on the attractor about which to magnify, successive magnifications would always reveal a structure looking like many parallel lines as in Figure 1.12, but the picture would not repeat on successive magnifications as it does in Figures 1.12(\(b\)) and (\(c\)).

4. This map was introduced by Farmer _et al._ (1983) as a model for the study of the dimension of strange attractors. The baker's map (as opposed to the generalized baker's map) is an area preserving map of the unit square corresponding to \(\alpha=\beta=\lambda_{a}=\lambda_{b}=\frac{1}{2}\) in Eqs. (3.7).

5. Sinai's example, Figure 3.6, corresponds to \(D_{0}=2\) and \(D_{1}<2\). This occurs for the generalized baker's map when \(\lambda_{a}=\lambda_{b}=\frac{1}{2}\) and \(\alpha\neq\frac{1}{2}\) (if \(\lambda_{a}=\lambda_{b}=\alpha=\frac{1}{2}\), then \(D_{1}=2\)).

6. See Farmer _et al._ (1983).

7.
In calculating \(C(\varepsilon)\) for finite \(k\) one should restrict the sum in (3.42) by requiring that \(|i-j|\) exceed some minimum value dependent on the data set. This is necessary to eliminate dynamical correlations, thus only leaving the geometric correlations that \(D_{2}\) attempts to characterize.

8. While the dimension obtained for a white noise process is the dimension of the embedding space, this need not be true for other noise processes. In particular, Osborne and Provenzale (1989) have emphasized that 'colored noise' (i.e., noise with a power law frequency power spectrum) can yield a finite (fractal) correlation dimension under some circumstances.

9. In infinite-dimensional systems or systems with very large dimension it is often the case that one can show that there exists a low-dimensional manifold (the so-called _inertial manifold_) to which the orbit tends and on which the attractor (or attractors) lies. In this case we can regard \(\mathbf{y}\) as specifying points on the inertial manifold and the equation \(\mathrm{d}\mathbf{x}/\mathrm{d}t=\mathbf{F}(\mathbf{x})\) as giving the dynamics on the inertial manifold. For material on inertial manifolds see, for example, Constantin _et al._ (1989) and references therein.

10. For other work on fat fractals see also Umberger and Farmer (1985), Hanson (1987) and Eykholt and Umberger (1988).

11. The use of the scaling ansatz (3.56) is a 'quick and dirty' way of getting the result for \(D_{\rm H}\). A similar comment applies for our use of (3.22) to obtain \(D_{q}\). See Farmer _et al._ (1983) for a more rigorous treatment.

12. It may be viewed as a deficiency of the box-counting dimension definition that it does not yield \(D_{0}=0\) for this example, which is just a discrete set of points. The Hausdorff dimension, defined in the appendix, yields zero for this set. For typical fractal sets encountered in chaotic dynamics, the box-counting and Hausdorff dimensions are commonly thought to be equal.
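The pair-counting restriction described in note 7 can be sketched in a few lines. This is our own illustrative sketch, not code from the text: the function name and the Theiler-window parameter `w` are assumptions, and the window value is arbitrary.

```python
import numpy as np

def correlation_sum(x, eps, w=10):
    """Correlation sum C(eps): the fraction of pairs of points closer
    than eps, counting only pairs with j - i >= w (a minimum separation
    in time) so that dynamically correlated pairs are excluded."""
    x = np.asarray(x, dtype=float)
    d = np.abs(x[:, None] - x[None, :])      # pairwise distances
    i, j = np.triu_indices(len(x), k=w)      # index pairs with j - i >= w
    return np.count_nonzero(d[i, j] < eps) / len(i)

# Iterates of the logistic map x -> 4x(1 - x) as sample data.
x = np.empty(1000)
x[0] = 0.4
for k in range(len(x) - 1):
    x[k + 1] = 4.0 * x[k] * (1.0 - x[k])

# The slope of ln C(eps) versus ln eps over small eps estimates D2
# (known to be 1 for this invariant measure, up to logarithmic
# corrections at finite eps).
eps_vals = np.array([0.01, 0.02, 0.05, 0.1])
C = np.array([correlation_sum(x, e) for e in eps_vals])
slope = np.polyfit(np.log(eps_vals), np.log(C), 1)[0]
```

The window `w` should be chosen large compared with the correlation time of the orbit; for map data a small fixed value such as the one above is usually adequate.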
## Chapter 4 Dynamical properties of chaotic systems

In Chapter 3 we have concentrated on geometric aspects of chaos. In particular, we have discussed the fractal dimension characterization of strange attractors and their natural invariant measures, as well as issues concerning phase space dimensionality and embedding. In this chapter we concentrate on the time evolution dynamics of chaotic orbits. We begin with a discussion of the horseshoe map and symbolic dynamics.

### 4.1 The horseshoe map and symbolic dynamics

The horseshoe map was introduced by Smale (1967) as a motivating example in his development of _symbolic dynamics_ as a basis for understanding a large class of dynamical systems. The horseshoe map \(\mathbf{M}_{h}\) is specified geometrically in Figure 4.1. The map takes the square \(S\) (Figure 4.1(_a_)), uniformly stretches it vertically by a factor greater than 2 and uniformly compresses it horizontally by a factor less than \(\frac{1}{2}\) (Figure 4.1(_b_)). Then the long thin strip is bent into a horseshoe shape with all the bending deformations taking place in the cross-hatched regions of Figures 4.1(_b_) and (_c_). Then the horseshoe is placed on top of the original square, as shown in Figure 4.1(_d_). Note that a certain fraction, which we denote \(1-f\), of the original area of the square \(S\) is mapped to the region outside the square. If initial conditions are spread over the square with a distribution which is uniform in the vertical direction, then the fraction of initial conditions that generate orbits that do not leave \(S\) during \(n\) applications of the map is just \(f^{\,n}\). This is because a vertically uniform distribution in \(S\) remains vertically uniform on application of \(\mathbf{M}_{h}\). Since \(f^{\,n}\to 0\) as \(n\to\infty\), almost every initial condition with respect to Lebesgue measure eventually leaves the square.
(Thus there is no attractor contained in the square.[1]) We are interested in characterizing the invariant set \(\Lambda\) (which is of Lebesgue measure zero) of points which never leave the square. The intersection of the horseshoe with the square represents the regions that points in the square map to if they return to the square on one iterate. These regions are the two cross-hatched vertical strips labeled \(V_{0}\) and \(V_{1}\) in Figure 4.2(\(a\)). We now ask, where did these strips come from? To answer this question we follow the horseshoe construction in Figures 4.1(\(a\))–(\(d\)) backward in time (i.e., from (\(d\)) to (\(c\)) to (\(b\)) to (\(a\)) in Figure 4.1). Thus, we find that the two vertical strips \(V_{0}\) and \(V_{1}\) are the images of two horizontal strips \(H_{0}=\mathbf{M}_{h}^{-1}(V_{0})\) and \(H_{1}=\mathbf{M}_{h}^{-1}(V_{1})\), as shown in Figure 4.2(\(b\)). Figure 4.2(\(c\)) shows what happens if we apply the horseshoe map to the vertical strips \(V_{0}\) and \(V_{1}\). Thus, taking the intersection of \(\mathbf{M}_{h}(V_{0})\) and \(\mathbf{M}_{h}(V_{1})\) with \(S\) (Figure 4.2(\(d\))), we see that points originating in the square which remain in the square for two iterates of \(\mathbf{M}_{h}\) are mapped to the four vertical strips labeled \(V_{00}\), \(V_{01}\), \(V_{10}\), \(V_{11}\) in Figure 4.2(\(d\)). The subscripts on these strips \(V_{ij}\) are such that \(V_{ij}\) is contained in \(V_{j}\) and \(\mathbf{M}_{h}^{-1}(V_{ij})\) is contained in \(V_{i}\). Figure 4.2(\(e\)) shows the four horizontal strips that the vertical strips \(V_{ij}\) came from two iterates previous, \(H_{ij}=\mathbf{M}_{h}^{-2}(V_{ij})\). Now consider the invariant set \(\Lambda\) and the horizontal and vertical strips, \(H_{0}\), \(H_{1}\), \(V_{0}\), \(V_{1}\). Since points in \(\Lambda\) never leave \(S\), the forward iterate of \(\Lambda\) must be in the square. Hence \(\Lambda\) is contained in \(H_{0}\cup H_{1}\) and is also contained in \(V_{0}\cup V_{1}\).
Thus, \(\Lambda\) is contained in the intersection, \[(H_{0}\cup H_{1})\cap(V_{0}\cup V_{1}).\] This intersection consists of four squares as shown in Figure 4.3(_a_). Similarly, \(\Lambda\) must also lie in the intersection \[(H_{00}\cup H_{01}\cup H_{11}\cup H_{10})\cap(V_{00}\cup V_{01}\cup V_{11}\cup V_{10})\] shown in Figure 4.3(_b_). This intersection consists of 16 squares, four of which are contained in each of the four squares of Figure 4.3(_a_). Proceeding in stages of this type, at each successive stage, each square is replaced by four smaller squares that it contains. Taking the limit of repeating this construction an infinite number of times, we obtain the invariant set \(\Lambda\). This set is the intersection of a Cantor set of vertical lines (the \(V\)s in the limit of an infinite number of iterations) with a Cantor set of horizontal lines (the \(H\)s in the limit of an infinite number of iterations). Let \(\mathbf{x}\) be a point in the invariant set \(\Lambda\). Then we claim that we can specify it by a bi-infinite symbol sequence \(\mathbf{a}\), \[\mathbf{a}=\ldots\,a_{-3}a_{-2}a_{-1}\cdot a_{0}a_{1}a_{2}\,\ldots \tag{4.1}\] and each symbol \(a_{i}\) is a function of \(\mathbf{x}\) specified by \[a_{i}=\left\{\begin{array}{ll}0&\mbox{if $\mathbf{M}^{i}_{h}(\mathbf{x})$ is in $H_{0}$,}\\ 1&\mbox{if $\mathbf{M}^{i}_{h}(\mathbf{x})$ is in $H_{1}$.}\end{array}\right. \tag{4.2}\] The above represents a correspondence between bi-infinite symbol sequences \(\mathbf{a}\) and points \(\mathbf{x}\) in \(\Lambda\). We denote this correspondence \[\mathbf{a}=\boldsymbol{\phi}(\mathbf{x}). \tag{4.3}\] In Figure 4.3(\(b\)) we label the 16 rectangles by the symbols \(a_{-2}a_{-1}\cdot a_{0}a_{1}\) that correspond to the four middle symbols in (4.1) that all points in \(\Lambda\) that fall in that rectangle must have. The correspondence given by Eqs.
(4.1)–(4.3) may be shown to be one to one and continuous (with a suitable definition of a metric on the space of bi-infinite symbol sequences). Define the _shift_ operation, \[\mathbf{a}^{\prime}=\sigma(\mathbf{a}),\] where \(a^{\prime}_{i}=a_{i+1}\). That is, \(\mathbf{a}^{\prime}\) is obtained from \(\mathbf{a}\) by moving the decimal point in Eq. (4.1) one place to the right. From Eq. (4.2) we have \[a^{\prime}_{i}=\left\{\begin{array}{ll}0&\mbox{if $\mathbf{M}^{i+1}_{h}(\mathbf{x})=\mathbf{M}^{i}_{h}(\mathbf{M}_{h}(\mathbf{x}))$ is in $H_{0}$,}\\ 1&\mbox{if $\mathbf{M}^{i+1}_{h}(\mathbf{x})=\mathbf{M}^{i}_{h}(\mathbf{M}_{h}(\mathbf{x}))$ is in $H_{1}$.}\end{array}\right.\] Hence, \(\mathbf{a}^{\prime}\) is the symbol sequence corresponding to \(\mathbf{M}_{h}(\mathbf{x})\), or \(\sigma(\mathbf{a})=\boldsymbol{\phi}(\mathbf{M}_{h}(\mathbf{x}))\). We represent the situation schematically in Figure 4.4. Thus, the shift on the bi-infinite symbol space is equivalent to the horseshoe map applied to the invariant set \(\Lambda\), \[\mathbf{M}_{h|\Lambda}=\boldsymbol{\phi}^{-1}\cdot\sigma\cdot\boldsymbol{\phi}, \tag{4.4}\] where \(\mathbf{M}_{h|\Lambda}\) symbolizes the restriction of \(\mathbf{M}_{h}\) to the invariant set \(\Lambda\).

Figure 4.4: Equivalence of the shift operation \(\sigma\) and the horseshoe map.

Thus, to obtain \(\mathbf{x}_{n+1}\) from \(\mathbf{x}_{n}\) we can either apply \(\mathbf{M}_{h}\) to \(\mathbf{x}_{n}\), or else we can obtain \(\mathbf{a}_{n}=\boldsymbol{\phi}(\mathbf{x}_{n})\), shift the decimal point one place to the right to get \(\mathbf{a}_{n+1}=\sigma(\mathbf{a}_{n})\), and then obtain \(\mathbf{x}_{n+1}\) from \(\mathbf{x}_{n+1}=\boldsymbol{\phi}^{-1}(\mathbf{a}_{n+1})\). For the modified map \(\tilde{\mathbf{M}}\) of Figure 4.5, which has three horizontal strips \(H_{0}\), \(H_{1}\) and \(H_{2}\), points in \(H_{2}\) are always mapped by \(\tilde{\mathbf{M}}\) to \(H_{0}\), and not to \(H_{1}\) or \(H_{2}\) (\(\tilde{\mathbf{M}}(H_{2})\) intersects \(H_{0}\) but does not intersect either \(H_{1}\) or \(H_{2}\)). Thus, the possible allowable transitions are as shown in Figure 4.5(\(c\)).
This means that whenever a 2 appears in our symbol sequence it is immediately followed by a zero (i.e., \(a_{i}=2\) implies \(a_{i+1}=0\)). We call the symbolic dynamics corresponding to \(\tilde{\mathbf{M}}\) a _shift of finite type_ on three symbols (the phrase shift of finite type signifies a restriction on the allowed sequences), while we call the symbolic dynamics corresponding to the horseshoe map a _full shift_ on two symbols (the word full signifying that there is no restriction on the allowed sequences). As an application of symbolic dynamics, we mention the work of Levi (1981), who has analyzed a model periodically forced van der Pol equation (i.e., Eq. (1.13) with a periodic function of time on the right-hand side). (Levi modifies the equation to facilitate his analysis.) Levi shows that the map obtained from the stroboscopic surface of section obtained by sampling at the forcing period (cf. Chapter 1) possesses an invariant set on which the dynamics is described by a shift of finite type on four symbols. Figure 4.5(\(d\)) shows the allowed transitions for Levi's problem.

### 4.2 Linear stability of steady states and periodic orbits

Consider a system of real first order differential equations \(\mathrm{d}\mathbf{x}/\mathrm{d}t=\mathbf{F}(\mathbf{x})\). A steady state for this system is a point \(\mathbf{x}=\mathbf{x}_{*}\) at which \[\mathbf{F}(\mathbf{x}_{*})=0.\] We wish to examine the behavior of orbits near \(\mathbf{x}_{*}\). Thus we set \[\mathbf{x}(t)=\mathbf{x}_{*}+\boldsymbol{\eta}(t),\] where we assume \(\boldsymbol{\eta}(t)\) is small. Substituting this into \(\mathrm{d}\mathbf{x}/\mathrm{d}t=\mathbf{F}(\mathbf{x})\), we expand \(\mathbf{F}(\mathbf{x})\) to first order in \(\boldsymbol{\eta}(t)\), \[\mathbf{F}(\mathbf{x}_{*}+\boldsymbol{\eta})=\mathbf{F}(\mathbf{x}_{*})+\mathbf{D}\mathbf{F}(\mathbf{x}_{*})\boldsymbol{\eta}+O(\boldsymbol{\eta}^{2}),\] where, since \(\mathbf{x}_{*}\) is a steady state, \(\mathbf{F}(\mathbf{x}_{*})=0\), and \(\mathbf{D}\mathbf{F}\) denotes the Jacobian matrix of partial derivatives of \(\mathbf{F}\).
That is, if we write \[\mathbf{x}=\left[\begin{array}{c}x^{(1)}\\ x^{(2)}\\ \vdots\\ x^{(N)}\end{array}\right],\quad\mathbf{F}(\mathbf{x})=\left[\begin{array}{c}F^{(1)}(x^{(1)},x^{(2)},\ldots,x^{(N)})\\ F^{(2)}(x^{(1)},x^{(2)},\ldots,x^{(N)})\\ \vdots\\ F^{(N)}(x^{(1)},x^{(2)},\ldots,x^{(N)})\end{array}\right],\] then \[\mathbf{D}\mathbf{F}(\mathbf{x})=\left[\begin{array}{cccc}\partial F^{(1)}/\partial x^{(1)}&\partial F^{(1)}/\partial x^{(2)}&\cdots&\partial F^{(1)}/\partial x^{(N)}\\ \partial F^{(2)}/\partial x^{(1)}&\partial F^{(2)}/\partial x^{(2)}&\cdots&\partial F^{(2)}/\partial x^{(N)}\\ \vdots&\vdots&\ddots&\vdots\\ \partial F^{(N)}/\partial x^{(1)}&\partial F^{(N)}/\partial x^{(2)}&\cdots&\partial F^{(N)}/\partial x^{(N)}\end{array}\right].\] We obtain the following equation for the time dependence of the perturbation of \(\mathbf{x}\) from the steady state, \[\mathrm{d}\boldsymbol{\eta}/\mathrm{d}t=\mathbf{D}\mathbf{F}(\mathbf{x}_{*})\boldsymbol{\eta}+O(\boldsymbol{\eta}^{2}). \tag{4.5}\] The linearized stability problem is obtained by neglecting terms of order \(\boldsymbol{\eta}^{2}\) in (4.5) and is of the general form \[\mathrm{d}\mathbf{y}/\mathrm{d}t=\mathbf{A}\mathbf{y}, \tag{4.6}\] where \(\mathbf{y}\) is a real \(N\)-dimensional vector and \(\mathbf{A}\) is a real time-independent \(N\times N\) matrix. If we seek solutions of Eq. (4.6) of the form \(\mathbf{y}(t)=\mathbf{e}\exp(st)\), then (4.6) becomes the eigenvalue equation \[\mathbf{A}\mathbf{e}=s\mathbf{e}, \tag{4.7}\] which has nontrivial solutions for values of \(s\) satisfying the \(N\)th order polynomial equation \[D(s)=\det[\mathbf{A}-s\mathbf{I}]=0, \tag{4.8}\] where \(\mathbf{I}\) denotes the \(N\times N\) identity matrix. For our purposes it suffices to consider only the case where \(D(s)=0\) has \(N\) _distinct_ roots \(s=s_{k}\) for \(k=1,2,\ldots,N\) (i.e., \(s_{k}\neq s_{j}\) if \(k\neq j\)).
For each such root there is an eigenvector \(\mathbf{e}_{k}\), and any time evolution can be represented as \[\mathbf{y}(t)=\sum_{k=1}^{N}A_{k}\mathbf{e}_{k}\exp(s_{k}t), \tag{4.9}\] where the \(A_{k}\) are constant coefficients (that may be complex) determined from the initial condition \(\mathbf{y}(0)=\sum_{k=1}^{N}A_{k}\mathbf{e}_{k}\). Since the coefficients of the polynomial \(D(s)\) are real, the eigenvalues \(s_{k}\) are either real or else occur in complex conjugate pairs. In the case of complex conjugate pairs of eigenvalues \(s_{j}=s_{j+1}^{*}=\sigma_{j}-\mathrm{i}\omega_{j}\), we can also take \(\mathbf{e}_{j}=\mathbf{e}_{j+1}^{*}=\mathbf{e}_{j}^{\mathrm{R}}+\mathrm{i}\mathbf{e}_{j}^{\mathrm{I}}\), where the * denotes complex conjugate, and \(\sigma_{j}\), \(\omega_{j}\), \(\mathbf{e}_{j}^{\mathrm{R}}\) and \(\mathbf{e}_{j}^{\mathrm{I}}\) are all real. Combining the two solutions, \(j\) and \(j+1\), we obtain two linearly independent _real_ solutions, \[\mathbf{g}_{j}(t)=\tfrac{1}{2}[\mathbf{e}_{j}\exp(s_{j}t)+\mathbf{e}_{j+1}\exp(s_{j+1}t)]=\mathbf{e}_{j}^{\mathrm{R}}\exp(\sigma_{j}t)\cos(\omega_{j}t)+\mathbf{e}_{j}^{\mathrm{I}}\exp(\sigma_{j}t)\sin(\omega_{j}t), \tag{4.10a}\] \[\mathbf{g}_{j+1}(t)=\frac{1}{2\mathrm{i}}[\mathbf{e}_{j}\exp(s_{j}t)-\mathbf{e}_{j+1}\exp(s_{j+1}t)]=\mathbf{e}_{j}^{\mathrm{I}}\exp(\sigma_{j}t)\cos(\omega_{j}t)-\mathbf{e}_{j}^{\mathrm{R}}\exp(\sigma_{j}t)\sin(\omega_{j}t). \tag{4.10b}\] If \(s_{j}\) is real (\(s_{j}=\sigma_{j}\)), then we write \(\mathbf{g}_{j}(t)=\mathbf{e}_{j}\exp(\sigma_{j}t)\) (where \(\mathbf{e}_{j}\) is real). Equation (4.9) thus becomes \[\mathbf{y}(t)=\sum_{j=1}^{N}B_{j}\mathbf{g}_{j}(t), \tag{4.11}\] where (in contrast with the coefficients \(A_{k}\) of Eq. (4.9)) all the \(B_{j}\) are _real_. By use of a similarity transformation \[\mathbf{z}(t)=\mathbf{T}\mathbf{y}(t), \tag{4.12}\] where \(\mathbf{T}\) is a real \(N\times N\) matrix, we can recast Eq.
(4.6) as \[\mathrm{d}\mathbf{z}/\mathrm{d}t=\mathbf{C}\mathbf{z},\quad\mathbf{C}=\mathbf{T}\mathbf{A}\mathbf{T}^{-1}, \tag{4.13}\] where, if there are \(K\) real eigenvalues, \(\sigma_{1},\sigma_{2},\ldots,\sigma_{K}\), and \(N-K\) complex conjugate eigenvalues, then the real \(N\times N\) matrix \(\mathbf{C}\) has the following _canonical form_, \[\mathbf{C}=\begin{bmatrix}\mathbf{\Sigma}&\mathbf{O}\\ \mathbf{O}&\mathbf{\Lambda}\end{bmatrix}, \tag{4.14}\] where \(\mathbf{\Sigma}\) is a \(K\times K\) diagonal matrix, \[\mathbf{\Sigma}=\begin{bmatrix}\sigma_{1}&0&\cdots&0\\ 0&\sigma_{2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&\sigma_{K}\end{bmatrix}, \tag{4.15}\] \(\mathbf{\Lambda}\) is a real matrix of \(2\times 2\) blocks along its diagonal, \[\mathbf{\Lambda}=\begin{bmatrix}\mathbf{\Lambda}_{1}&&\\ &\mathbf{\Lambda}_{2}&\\ &&\ddots\end{bmatrix}, \tag{4.16}\] with the \(2\times 2\) matrix block \(\mathbf{\Lambda}_{m}\) having the form \[\mathbf{\Lambda}_{m}=\begin{bmatrix}\sigma_{m}&\omega_{m}\\ -\omega_{m}&\sigma_{m}\end{bmatrix}, \tag{4.17}\] and the blocks \(\mathbf{O}\) being matrices of zeros. For \(\mathrm{Re}(s_{j})=\sigma_{j}<0\), the corresponding solution \(\mathbf{g}_{j}(t)\) approaches the origin asymptotically in time, either spiraling in the plane spanned by \(\mathbf{e}_{j}^{\mathrm{R}}\) and \(\mathbf{e}_{j}^{\mathrm{I}}\) if \(\mathrm{Im}(s_{j})\neq 0\), or else by moving along the line through the origin in the direction of the real eigenvector \(\mathbf{e}_{j}\) if \(\mathrm{Im}(s_{j})=0\). For \(\mathrm{Re}(s_{j})=\sigma_{j}>0\), the corresponding solution \(\mathbf{g}_{j}(t)\) diverges from the origin exponentially in time, either (as for \(\sigma_{j}<0\)) by spiraling or by moving linearly. We call solutions which move away from the origin exponentially with time _unstable_ and those that move exponentially toward the origin _stable_. The situation is as illustrated in Figure 4.6.
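As a concrete numerical illustration of Eqs. (4.6)–(4.8) (our own example, not one from the text), consider a damped pendulum, \(\mathrm{d}x/\mathrm{d}t=y\), \(\mathrm{d}y/\mathrm{d}t=-\sin x-\gamma y\); its steady states \((0,0)\) and \((\pi,0)\) can be classified by the eigenvalues of \(\mathbf{DF}\):

```python
import numpy as np

gamma = 0.5  # damping coefficient; an arbitrary illustrative value

def F(x):
    # dx/dt = F(x) for the damped pendulum
    return np.array([x[1], -np.sin(x[0]) - gamma * x[1]])

def DF(x):
    # Jacobian matrix of partial derivatives of F
    return np.array([[0.0, 1.0],
                     [-np.cos(x[0]), -gamma]])

for x_star in (np.array([0.0, 0.0]), np.array([np.pi, 0.0])):
    assert np.allclose(F(x_star), 0.0)   # F(x*) = 0: a steady state
    s = np.linalg.eigvals(DF(x_star))    # roots of det[A - sI] = 0
    n_u = int(np.sum(s.real > 0))        # number of unstable directions
    n_s = int(np.sum(s.real < 0))        # number of stable directions
    print(x_star, np.round(s, 3), n_u, n_s)
```

At \((0,0)\) the eigenvalues form a complex conjugate pair with negative real part (orbits spiral in), while at \((\pi,0)\) one eigenvalue is positive and one is negative, the saddle configuration of Figure 4.6.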
Note that for the case \(\omega_{j}\neq 0\) any initial condition purely in the subspace spanned by the vectors \(\mathbf{e}_{j}^{\rm R}\) and \(\mathbf{e}_{j}^{\rm I}\) remains in that subspace for all time. Hence, that subspace is invariant under the flow \(\mathrm{d}\mathbf{y}/\mathrm{d}t=\mathbf{A}\mathbf{y}\). Similarly, if the eigenvalue is real (\(s_{j}=\sigma_{j}\)), then an initial condition on the ray from the origin along \(\mathbf{e}_{j}\) remains on that ray, and hence the set of vectors that are scalar multiples of \(\mathbf{e}_{j}\) is an invariant subspace. We collect all the independent vectors spanning invariant subspaces corresponding to unstable solutions (\(\sigma_{j}>0\)) and denote them \[\mathbf{u}_{1},\ \mathbf{u}_{2},\ \ldots,\ \mathbf{u}_{n_{\rm u}}.\] Similarly, we collect all the independent vectors spanning invariant subspaces corresponding to stable solutions (\(\sigma_{j}<0\)) and denote them \[\mathbf{v}_{1},\ \mathbf{v}_{2},\ \ldots,\ \mathbf{v}_{n_{\rm s}}.\] If there are eigenvalues whose real parts are zero (\(\sigma_{j}=0\)), we denote the corresponding set of independent vectors spanning this subspace \[\mathbf{w}_{1},\ \mathbf{w}_{2},\ \ldots,\ \mathbf{w}_{n_{\rm c}}.\] All of the \(\mathbf{u}\)s, \(\mathbf{v}\)s and \(\mathbf{w}\)s, taken together, span the whole phase space.
Thus \[n_{\rm u}+n_{\rm s}+n_{\rm c}=N.\] We define the _unstable subspace_ as \[E^{\rm u}={\rm span}[\mathbf{u}_{1},\ \mathbf{u}_{2},\ \ldots,\ \mathbf{u}_{n_{\rm u}}]\] (i.e., the space spanned by the vectors \(\mathbf{u}_{1},\mathbf{u}_{2},\ldots,\mathbf{u}_{n_{\rm u}}\)), the _stable subspace_ as \[E^{\rm s}={\rm span}[\mathbf{v}_{1},\ \mathbf{v}_{2},\ \ldots,\ \mathbf{v}_{n_{\rm s}}],\] and the _center subspace_ as \[E^{\rm c}={\rm span}[\mathbf{w}_{1},\ \mathbf{w}_{2},\ \ldots,\ \mathbf{w}_{n_{\rm c}}].\] Figure 4.7 illustrates some cases of stable and unstable subspaces and the corresponding orbits (\(n_{\rm c}=0\) in Figure 4.7: (\(a\)) \(N=2\), \(n_{\rm u}=n_{\rm s}=1\); (\(b\)) \(N=3\), \(n_{\rm u}=1\) and \(n_{\rm s}=2\), where the stable subspace corresponds to two real eigenvalues; (\(c\)) \(N=3\), \(n_{\rm u}=1\) and \(n_{\rm s}=2\), where the stable subspace corresponds to a pair of complex conjugate eigenvalues; and (\(d\)) \(N=3\), \(n_{\rm s}=1\) and \(n_{\rm u}=2\), where the unstable subspace corresponds to a pair of complex conjugate eigenvalues). We now turn from the study of the linear stability of a steady state \(\mathbf{x}=\mathbf{x}_{*}\) to the study of the stability of a periodic orbit, \[\mathbf{x}(t)=\mathbf{X}_{*}(t)=\mathbf{X}_{*}(t+T),\] where \(T\) denotes the period. As for the case of the steady state, we write \[\mathbf{x}(t)=\mathbf{X}_{*}(t)+\boldsymbol{\eta}(t)\] and expand for small \(\boldsymbol{\eta}(t)\). We obtain \[\mathrm{d}\boldsymbol{\eta}/\mathrm{d}t=\mathbf{D}\mathbf{F}(\mathbf{X}_{*}(t))\boldsymbol{\eta}+O(\boldsymbol{\eta}^{2}), \tag{4.18}\] which is similar to (4.5) except that now the matrix \(\mathbf{D}\mathbf{F}(\mathbf{X}_{*}(t))\) varies periodically in time, whereas \(\mathbf{D}\mathbf{F}(\mathbf{x}_{*})\) in (4.5) is independent of time.
The linearized stability problem is of the form \[\mathrm{d}\mathbf{y}/\mathrm{d}t=\mathbf{A}(t)\mathbf{y}, \tag{4.19}\] where \(\mathbf{y}\) is a real \(N\)-dimensional vector and \(\mathbf{A}(t)\) is a real time-periodic \(N\times N\) matrix, \[\mathbf{A}(t)=\mathbf{A}(t+T).\] Solutions of (4.19) can be sought in the Floquet form, \[\mathbf{e}(t)\exp(st),\] where \(\mathbf{e}(t)\) is periodic in time, \(\mathbf{e}(t)=\mathbf{e}(t+T)\). This defines an eigenvalue problem for eigenvalues \(s_{j}\) and vector eigenfunctions \(\mathbf{e}_{j}(t)\). A development parallel to that for Eq. (4.6) goes through, and stable, unstable and center subspaces can be analogously defined, although the solution of the Floquet problem is much more difficult.

Figure 4.7: Stable and unstable subspaces.

One result for the system (4.18) is immediate, however. Namely, Eq. (4.18) has a solution corresponding to a zero eigenvalue (\(s=0\)). To see this, differentiate the equation \(\mathrm{d}\mathbf{X}_{*}(t)/\mathrm{d}t=\mathbf{F}(\mathbf{X}_{*}(t))\) with respect to time. This gives an equation of the form of Eq. (4.19), \(\mathrm{d}\mathbf{e}_{0}(t)/\mathrm{d}t=\mathbf{D}\mathbf{F}(\mathbf{X}_{*}(t))\mathbf{e}_{0}(t)\), where \(\mathbf{e}_{0}(t)\equiv\mathrm{d}\mathbf{X}_{*}(t)/\mathrm{d}t\). This zero eigenvalue solution can be interpreted as saying that, if the perturbation \(\boldsymbol{\eta}(t)\) puts the perturbed orbit on the closed phase space curve followed by \(\mathbf{X}_{*}(t)\) but slightly displaced from \(\mathbf{X}_{*}(t)\), then \(\boldsymbol{\eta}(t)\) varies periodically in time (\(s=0\)). This is illustrated in Figure 4.8(\(a\)). Instead of pursuing the Floquet solutions further, we employ a surface of section to reduce the problem \(\mathrm{d}\mathbf{x}/\mathrm{d}t=\mathbf{F}(\mathbf{x})\) to a map \(\hat{\mathbf{x}}_{n+1}=\mathbf{M}(\hat{\mathbf{x}}_{n})\), where \(\hat{\mathbf{x}}_{n}\) has lower dimensionality than \(\mathbf{x}\) by one.
As shown in Figure 4.8(\(b\)), we assume that the periodic solution \(\mathbf{X}_{*}(t)\) results in a fixed point \(\hat{\mathbf{x}}_{*}\) of the map. Linearizing the map around \(\hat{\mathbf{x}}_{*}\) by writing \(\hat{\mathbf{x}}_{n}=\hat{\mathbf{x}}_{*}+\hat{\boldsymbol{\eta}}_{n}\) with \(\hat{\boldsymbol{\eta}}_{n}\) small, we obtain \[\hat{\boldsymbol{\eta}}_{n+1}=\mathbf{D}\mathbf{M}(\hat{\mathbf{x}}_{*})\hat{\boldsymbol{\eta}}_{n}+O(\hat{\boldsymbol{\eta}}_{n}^{2}), \tag{4.20}\] which yields a linearized problem of the form \[\hat{\mathbf{y}}_{n+1}=\hat{\mathbf{A}}\hat{\mathbf{y}}_{n}. \tag{4.21}\] Seeking solutions \(\hat{\mathbf{y}}_{n}=\lambda^{n}\hat{\mathbf{e}}\), we obtain the eigenvalue equation \[\hat{\mathbf{A}}\hat{\mathbf{e}}=\lambda\hat{\mathbf{e}}. \tag{4.22}\]

Figure 4.8: (\(a\)) The zero eigenvalue solution of (4.18). (\(b\)) Surface of section of a periodic orbit.

Again we assume eigenvalue solutions \(\lambda_{j}\) of the determinantal equation, \[\hat{D}(\lambda)=\det[\hat{\mathbf{A}}-\lambda\mathbf{I}]=0, \tag{4.23}\] and denote the corresponding eigenvectors \(\hat{\mathbf{e}}_{j}\). Directions corresponding to \(|\lambda_{j}|>1\) are unstable; directions corresponding to \(|\lambda_{j}|<1\) are stable. Again, we can identify unstable, stable and center subspaces, \(E^{\rm u}\), \(E^{\rm s}\) and \(E^{\rm c}\), for the map (e.g., the stable subspace is spanned by the real and imaginary parts of all those vectors \(\hat{\mathbf{e}}_{j}\) for which \(|\lambda_{j}|<1\)). The map eigenvalues and the Floquet eigenvalues are related by \[\lambda_{j}=\exp(s_{j}T), \tag{4.24}\] and all the \(s_{j}\) of the Floquet problem are included except for the zero eigenvalue illustrated in Figure 4.8(\(a\)). The zero eigenvalue is not included because a perturbation \(\boldsymbol{\eta}\) that displaces the orbit along the closed curve path followed by \(\mathbf{X}_{*}(t)\) results in no perturbation of the orbit's surface of section piercing at \(\hat{\mathbf{x}}=\hat{\mathbf{x}}_{*}\) (cf. Figure 4.8).
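The map analysis of Eqs. (4.20)–(4.23) is easy to carry out numerically. As a sketch (the parameter values \(a=1.4\), \(b=0.3\) are a standard illustrative choice, and the closed-form fixed point below is our own computation; cf. Problem 6 on the Henon map):

```python
import numpy as np

a, b = 1.4, 0.3   # standard Henon parameters (illustrative choice)

def M(z):
    # Henon map: (x, y) -> (a - x^2 + b*y, x)
    x, y = z
    return np.array([a - x**2 + b * y, x])

def DM(z):
    # Jacobian of the Henon map
    x, y = z
    return np.array([[-2.0 * x, b],
                     [1.0, 0.0]])

# The fixed point with x* = y* solves x = a - x^2 + b*x.
x_star = 0.5 * ((b - 1.0) + np.sqrt((1.0 - b) ** 2 + 4.0 * a))
z_star = np.array([x_star, x_star])
assert np.allclose(M(z_star), z_star)      # it is indeed a fixed point

lam = np.linalg.eigvals(DM(z_star))        # eigenvalues of Eq. (4.22)
print(np.sort(np.abs(lam)))
```

One eigenvalue has \(|\lambda|>1\) and the other \(|\lambda|<1\), so this fixed point is a saddle with one-dimensional unstable and stable subspaces.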
We remark that we have assumed in the above that the periodic orbit results in a fixed point of the surface of section map. If instead it results in a period \(p\) orbit (as shown in Figure 4.9 for \(p=3\)), \(\hat{\mathbf{x}}^{*}_{0}\rightarrow\hat{\mathbf{x}}^{*}_{1}\rightarrow\cdots\rightarrow\hat{\mathbf{x}}^{*}_{p}=\hat{\mathbf{x}}^{*}_{0}\), then we select one of the \(p\) points \(\hat{\mathbf{x}}^{*}_{j}\) on the map orbit and examine the \(p\)th iterate of the map \(\mathbf{M}^{p}\). For the map \(\mathbf{M}^{p}\) the point \(\hat{\mathbf{x}}^{*}_{j}\) is a fixed point, \[\hat{\mathbf{x}}^{*}_{j}=\mathbf{M}^{p}(\hat{\mathbf{x}}^{*}_{j}). \tag{4.25}\]

Figure 4.9: Period three orbit of the surface of section map.

Linearizing about this point, we again have a problem of the form of (4.21), but now \(\hat{\mathbf{A}}\) is identified with \(\mathbf{DM}^{p}(\hat{\mathbf{x}}^{*}_{j})\). We note that the chain rule for differentiation yields \[\mathbf{DM}^{p}(\hat{\mathbf{x}}^{*}_{j})=\mathbf{DM}(\hat{\mathbf{x}}^{*}_{j-1})\mathbf{DM}(\hat{\mathbf{x}}^{*}_{j-2})\cdots\mathbf{DM}(\hat{\mathbf{x}}^{*}_{0})\mathbf{DM}(\hat{\mathbf{x}}^{*}_{p-1})\cdots\mathbf{DM}(\hat{\mathbf{x}}^{*}_{j}). \tag{4.26}\] This is a matrix version of the one-dimensional map result, Eq. (2.7). Finally, we wish to stress that, although we have been discussing the map \(\mathbf{M}\) as arising from a surface of section of a continuous time system, our discussion connected with Eqs. (4.20)–(4.23) and (4.25)–(4.26) applies to maps in general, whether they arise via a surface of section or not. (As an example of the latter, Problem 6 deals with the stability of a periodic orbit of the Henon map.)

### 4.3 Stable and unstable manifolds

We define stable and unstable manifolds of steady states and periodic orbits of smooth dynamical systems as follows.
The _stable manifold_ of a steady state or periodic orbit is the set of points \(\mathbf{x}\) such that the forward orbit starting from \(\mathbf{x}\) approaches the steady state or the closed curve traced out by the periodic orbit. Similarly, the _unstable manifold_ of a steady state or periodic orbit is the set of points \(\mathbf{x}\) such that the orbit going backward in time starting from \(\mathbf{x}\) approaches the steady state or the closed curve traced out by the periodic orbit (this assumes invertibility if we are dealing with a map). The existence and smoothness of these manifolds can be proven under very general conditions. Furthermore, the stable and unstable manifolds, \(W^{\rm s}\) and \(W^{\rm u}\), of a steady state or periodic orbit have the same dimensionality as the linear subspaces \(E^{\rm s}\) and \(E^{\rm u}\) and are tangent to them, \[\dim(W^{\rm s})=n_{\rm s},\qquad\dim(W^{\rm u})=n_{\rm u}.\] Figure 4.10(\(a\)) illustrates the situation for a two-dimensional map with a fixed point \(\gamma\) that has one stable and one unstable direction. Figure 4.10(\(b\)) applies for a situation where \(n_{\rm s}=2\) and \(n_{\rm u}=1\) for a fixed point \(\gamma\) of a flow. Also, in Figure 4.10(\(b\)), we show an orbit in \(W^{\rm s}(\gamma)\) spiraling into the fixed point \(\gamma\) for the case where the two stable eigenvalues are complex conjugates. For specificity, in what follows we will only be considering the case of periodic orbits of an invertible map where the orbit period is 1 (i.e., fixed points of the map). We now show that stable manifolds cannot intersect stable manifolds and that unstable manifolds cannot intersect unstable manifolds. For the case of self-intersections of an unstable manifold, this follows from the following considerations. Very near the fixed point \(\gamma\), say within a distance \(\varepsilon\), the unstable manifold is a small section of an \(n_{\rm u}\)-dimensional surface tangent to \(E^{\rm u}\). Call this small piece of the unstable manifold \(W^{\rm u}_{\varepsilon}(\gamma)\).
Since \(W^{\rm u}_{\varepsilon}(\gamma)\) lies close to the \(n_{\rm u}\) dimensional plane \(E^{\rm u}\), it does not intersect itself. Now, continually mapping the small surface \(W^{\rm u}_{\varepsilon}(\gamma)\) forward in time, it expands in all its \(n_{\rm u}\) unstable directions, filling out the whole unstable manifold of \(\gamma\), \(W^{\rm u}(\gamma)\). Since we assume the map is invertible, two distinct points cannot be mapped to the same point. Thus, \(W^{\rm u}(\gamma)\) cannot intersect itself. Now consider two distinct fixed points \(\gamma_{1}\) and \(\gamma_{2}\) with unstable manifolds \(W^{\rm u}(\gamma_{1})\) and \(W^{\rm u}(\gamma_{2})\). These cannot intersect each other because, if they did, then a backward orbit starting at an intersection point would have to approach both \(\gamma_{1}\) and \(\gamma_{2}\) in the limit of an infinite number of backward iterates. However, \(\gamma_{1}\neq\gamma_{2}\); so this is impossible. Hence there can be no intersections of unstable manifolds, and, applying a similar argument with the direction of time reversed, there can be no intersections of stable manifolds.

Figure 4.10: Stable and unstable manifolds.

We note, however, that stable and unstable manifolds can intersect each other. In Figure 4.10(_c_) we show an intersection of stable and unstable manifolds of a fixed point \(\gamma\) of a two dimensional map. This is called a _homoclinic intersection_. In Figure 4.10(_d_) we show intersections of the stable and unstable manifolds of one fixed point \(\gamma_{1}\) with those of another fixed point \(\gamma_{2}\). This is called a _heteroclinic intersection_. The complexity of these diagrams stems from the fact that, if a stable and unstable manifold intersect once, then they must intersect an infinite number of times.
To see this, we have labeled one of the intersections \(O\) in Figure 4.10(_c_). Since \(O\) is on \(W^{s}(\gamma)\) and \(W^{u}(\gamma)\), its subsequent iterates, both forward and backward in time, must also be on \(W^{s}(\gamma)\) and \(W^{u}(\gamma)\), because \(W^{s}(\gamma)\) and \(W^{u}(\gamma)\) are invariant sets by their construction. Thus intersection points map into intersection points. Iterating the point \(O\) forward in time, it approaches \(\gamma\) along the stable manifold, successively mapping to the points labeled \(1\), \(2\), and \(3\) in Figure 4.10(_c_). Iterating the point \(O\) backward in time, it approaches \(\gamma\) along the unstable manifold, successively mapping to the points labeled \(-1\), \(-2\) and \(-3\). The complicated nature of Figures 4.10(_c_) and (_d_) suggests complicated dynamics when homoclinic or heteroclinic intersections are present. Indeed this is so. Smale (1967) shows that a homoclinic intersection implies horseshoe type dynamics for some sufficiently high iterate of the map. To see this, consider the homoclinic intersection for the fixed point \(\gamma\) shown in Figure 4.11(_a_). The manifolds \(W^{s}(\gamma)\) and \(W^{u}(\gamma)\) intersect at point \(\xi\). Choosing a small rectangle \(J\) about the point \(\gamma\) and mapping it forward in time a sufficient number of iterates \(q_{+}\), we obtain \(\mathbf{M}^{q_{+}}(J)\). Similarly, mapping \(J\) backward \(q_{-}\) iterates, we obtain the region \(\mathbf{M}^{-q_{-}}(J)\equiv S\). (See Figure 4.11(_b_).) Thus we have the picture shown in Figure 4.11(_c_), which shows that \(\mathbf{M}^{q}\), where \(q=q_{+}+q_{-}\), maps \(S\) to a horseshoe as in Figure 4.1. Hence, \(\mathbf{M}^{q}\) is a horseshoe map on the long thin rectangle \(S\) and has an invariant set in \(S\) on which the dynamics is equivalent to a full shift on two symbols.
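The geometry of these manifolds is easy to explore numerically. The following Python sketch traces out a piece of the unstable manifold of a fixed point of the Hénon map by iterating a tiny segment seeded along the unstable eigenvector. All specifics here (the map in the form \(x_{n+1}=A-x_{n}^{2}+By_{n}\), \(y_{n+1}=x_{n}\) with the common parameter values \(A=1.4\), \(B=0.3\), the seed size, and the iteration counts) are illustrative assumptions, not taken from the text.

```python
import math

A, B = 1.4, 0.3  # common Henon-map parameter values (illustrative choice)

def henon(p):
    x, y = p
    return (A - x * x + B * y, x)

# The fixed point has y* = x*, where x* solves x = A - x^2 + B x.
xs = (-(1.0 - B) + math.sqrt((1.0 - B) ** 2 + 4.0 * A)) / 2.0

# The Jacobian at the fixed point is [[-2x*, B], [1, 0]];
# its eigenvalues solve lam^2 + 2 x* lam - B = 0.
lam_u = -xs - math.sqrt(xs * xs + B)   # |lam_u| > 1: unstable direction
lam_s = -xs + math.sqrt(xs * xs + B)   # |lam_s| < 1: stable direction
assert abs(lam_u) > 1.0 > abs(lam_s)

# Seed a tiny segment along the unstable eigenvector (lam_u, 1) and iterate it
# forward; its images trace out part of the unstable manifold W^u.
norm = math.hypot(lam_u, 1.0)
eu = (lam_u / norm, 1.0 / norm)
seg = [(xs + 1e-8 * t * eu[0], xs + 1e-8 * t * eu[1])
       for t in (i / 200.0 for i in range(-100, 101))]
for _ in range(20):
    seg = [henon(p) for p in seg]

# Points of W^u stay near the (bounded) attractor; they never run off to infinity.
assert all(abs(px) < 2.0 and abs(py) < 2.0 for px, py in seg)
```

With more iterates the segment stretches out along the attractor; here only a few iterates are taken so that the linear approximation near the fixed point remains visibly accurate.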
We note that, although we have drawn the shapes of \(W^{s}(\gamma)\) and \(W^{u}(\gamma)\) in Figure 4.11(_a_) to make the horseshoe shape obvious (Figure 4.11(_c_)), the result stated above and proved by Smale depends only on the existence of a homoclinic intersection. Furthermore, a similar result applies in the heteroclinic case, Figure 4.10(_d_). In Section 4.2 we have discussed the linearized map \(\mathbf{y}_{n+1}=\mathbf{A}\mathbf{y}_{n}\) about a fixed point \(\mathbf{x}_{*}\), and the splitting of the vector space in which \(\mathbf{y}\) lies into subspaces \(E^{\mathrm{s}}\), \(E^{\mathrm{u}}\) and \(E^{\mathrm{c}}\) that are invariant under the matrix \(\mathbf{A}\). We call the vectors \(\mathbf{y}\) _tangent vectors_. We call the space in which they lie the _tangent space_ of the map at \(\mathbf{x}=\mathbf{x}_{*}\), and we denote this space \(T_{\mathbf{x}_{*}}\). We say the fixed point \(\mathbf{x}_{*}\) is _hyperbolic_ if there is no center subspace \(E^{\mathrm{c}}\); that is, if all the magnitudes of the eigenvalues \(\lambda_{j}\) are either greater than 1 or less than 1, and \(n_{\mathrm{u}}+n_{\mathrm{s}}\) is the dimension of \(\mathbf{y}\). In this case, we say that the tangent space \(T_{\mathbf{x}_{*}}\) has a direct sum decomposition into \(E^{\mathrm{s}}\) and \(E^{\mathrm{u}}\), \(T_{\mathbf{x}_{*}}=E^{\mathrm{s}}\oplus E^{\mathrm{u}}\). That is, vectors in the space \(T_{\mathbf{x}_{*}}\) can be uniquely specified as the sum of two component vectors, one in the subspace \(E^{\mathrm{s}}\) and one in the subspace \(E^{\mathrm{u}}\).

Figure 4.11: Construction of a horseshoe from a homoclinic intersection.

There is a notion of hyperbolicity not only for fixed points, but also for more general invariant sets of a map. Such an invariant set might be, for
example, a strange attractor (the strange attractor of the generalized baker's map is hyperbolic), or it may not be an attractor (like the invariant set of the horseshoe map). We say that an invariant set \(\Sigma\) is _hyperbolic_ if there is a direct sum decomposition of \(T_{\mathbf{x}}\) into stable and unstable subspaces, \(T_{\mathbf{x}}=E^{\mathrm{s}}_{\mathbf{x}}\oplus E^{\mathrm{u}}_{\mathbf{x}}\) for all \(\mathbf{x}\) in \(\Sigma\), such that the splitting into \(E^{\mathrm{s}}_{\mathbf{x}}\) and \(E^{\mathrm{u}}_{\mathbf{x}}\) varies continuously with \(\mathbf{x}\) in \(\Sigma\) and is invariant in the sense that \(\mathbf{DM}(E^{\mathrm{s,u}}_{\mathbf{x}})=E^{\mathrm{s,u}}_{\mathbf{M}(\mathbf{x})}\), and there are some numbers \(K>0\) and \(0<\rho<1\) such that the following hold. (\(a\)) If \(\mathbf{y}\) is in \(E^{\mathrm{s}}_{\mathbf{x}}\) then \[\|\mathbf{DM}^{n}(\mathbf{x})\mathbf{y}\|<K\rho^{n}\|\mathbf{y}\|. \tag{4.27a}\] (\(b\)) If \(\mathbf{y}\) is in \(E^{\mathrm{u}}_{\mathbf{x}}\) then \[\|\mathbf{DM}^{-n}(\mathbf{x})\mathbf{y}\|<K\rho^{n}\|\mathbf{y}\|. \tag{4.27b}\] Consider \(\mathbf{x}\) as an initial condition. Conditions (\(a\)) and (\(b\)) basically say that the orbit originating from another initial condition infinitesimally displaced from \(\mathbf{x}\) exponentially approaches the orbit \(\mathbf{M}^{n}(\mathbf{x})\) or exponentially diverges from it if the infinitesimal displacement is in \(E^{\mathrm{s}}_{\mathbf{x}}\) or \(E^{\mathrm{u}}_{\mathbf{x}}\), respectively. Hyperbolic invariant sets are mainly of interest because the property of hyperbolicity allows many interesting mathematically rigorous results to be obtained. Much of what is rigorously known about the structure and dynamics of chaos is only known for cases which satisfy the hyperbolicity conditions. (See, for example, the text by Guckenheimer and Holmes (1983).)
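The contraction condition (4.27a) can be checked directly for the generalized baker's map, whose Jacobian is diagonal. The Python sketch below follows a tangent vector in the stable (\(x\)) direction and verifies that it contracts at least as fast as \(\rho^{n}\) with \(\rho=\max(\lambda_{a},\lambda_{b})\) and \(K=1\); the parameter values \(\alpha=0.4\), \(\lambda_{a}=0.3\), \(\lambda_{b}=0.25\), the starting \(y\), and the two-branch form of the \(y\) dynamics used here are illustrative assumptions.

```python
import math

# Illustrative parameters for the generalized baker's map (cf. Eqs. (3.7)):
# alpha + beta = 1; lambda_a, lambda_b < 1 are the x contraction factors.
alpha, beta = 0.4, 0.6
lam_a, lam_b = 0.3, 0.25

rho = max(lam_a, lam_b)   # any rho with max(lam_a, lam_b) <= rho < 1 works in (4.27a)
y = 0.1234
log_factor = 0.0          # ln(||DM^n(x) y|| / ||y||) for a tangent vector y in E^s_x
n = 2000
for k in range(1, n + 1):
    if y < alpha:
        log_factor += math.log(lam_a)   # x direction contracts by lambda_a below y = alpha
        y = y / alpha
    else:
        log_factor += math.log(lam_b)   # and by lambda_b above it
        y = (y - alpha) / beta
    # Condition (4.27a) with K = 1, checked in logarithmic form to avoid underflow:
    # stable tangent vectors shrink at least as fast as rho^n.
    assert log_factor <= k * math.log(rho) + 1e-9

# The average contraction rate lies between ln(lambda_b) and ln(lambda_a).
rate = log_factor / n
assert math.log(lam_b) <= rate <= math.log(lam_a)
```

Because the Jacobian is diagonal, the stable stretching factor after \(n\) steps is exactly \(\lambda_{a}^{n_{1}}\lambda_{b}^{n_{2}}\), where \(n_{1}+n_{2}=n\) counts the visits to the two branches, so the bound holds with \(K=1\).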
Some of these results are the following (some restrictions in addition to hyperbolicity are also required for some of these statements). (1) Stable and unstable manifolds at \(\mathbf{x}\) in \(\Sigma\), denoted \(W^{\mathrm{s}}(\mathbf{x})\) and \(W^{\mathrm{u}}(\mathbf{x})\), can be locally defined. Two points on the same stable manifold, for example, approach each other exponentially in time as illustrated in Figure 4.12. Note that \(W^{\mathrm{s,u}}(\mathbf{x})\) is tangent to \(E^{\mathrm{s,u}}_{\mathbf{x}}\) at \(\mathbf{x}\). (2) If small noise is added to a hyperbolic system with a chaotic attractor, then a resulting noisy orbit perturbed from the chaotic attractor can be 'shadowed' by a 'true' orbit of the noiseless system such that the true orbit closely follows the noisy orbit (see Section 1.5 and Problem 3 of Chapter 2 for other discussions of shadowing). (3) The dynamics on the invariant set can be represented via symbolic dynamics as a full shift or a shift of finite type on a bi infinite symbol sequence (as illustrated in Figure 4.4). (4) If the invariant hyperbolic set is an attractor, then a natural measure (as defined in Chapter 2) exists. (5) The invariant set and its dynamics are _structurally stable_ in the sense that small smooth perturbations of the map preserve the dynamics. In particular, if **m**(**x**) is a smooth function of **x**, then there exists some positive number \(\varepsilon_{0}\) such that the perturbed map, \(\mathbf{M}(\mathbf{x})+\varepsilon\mathbf{m}(\mathbf{x})\), can be transformed to the original map **M** by a one to one change of variables for all \(\varepsilon\) satisfying \(|\varepsilon|<\varepsilon_{0}\). (This change of variables is continuous but may not be differentiable.) In particular, in the range \(|\varepsilon|<\varepsilon_{0}\), the perturbed map and the original map have the same number of periodic orbits for any period, and have the same symbolic dynamics.
An example of chaotic attractors that are apparently not structurally stable are those occurring for the logistic map \(x_{n+1}=rx_{n}(1-x_{n})\). In this case, we saw in Section 2.2 that \(r\) values yielding attracting periodic orbits are thought to be dense in \(r\). Thus, for the case where \(r\) is such that there is a chaotic attractor, an arbitrarily small change of \(r\) (which can also be said to produce an arbitrarily small change in the map) can completely change the character of the attractor\({}^{3}\) (i.e., from chaotic to periodic). As mentioned previously, the generalized baker's map and the horseshoe map yield examples of hyperbolic sets. For the generalized baker's map, Eq. (3.7), the Jacobian matrix is \[\mathbf{DM}(\mathbf{x})=\left[\begin{array}{cc}\lambda_{x}(y)&0\\ 0&\lambda_{y}(y)\end{array}\right], \tag{4.28}\] where \(\lambda_{x}(y)=\lambda_{a}\) or \(\lambda_{b}\) for \(y<\alpha\) and \(y>\alpha\), respectively, and \(\lambda_{y}(y)=\alpha^{-1}\) or \(\beta^{-1}\) for \(y<\alpha\) and \(y>\alpha\), respectively. Since \(\lambda_{x}(y)<1\) and \(\lambda_{y}(y)>1\), the unstable manifolds are vertical lines and the stable manifolds are horizontal lines. Similar considerations apply for the horseshoe, where the stable and unstable manifolds are also horizontal and vertical lines. (In fact they are Cantor sets of horizontal and vertical lines whose intersection is the invariant set.)

Figure 4.12: Illustration of the stable manifold.

Another example is the Anosov map, \[\left[\begin{array}{c}x_{n+1}\\ y_{n+1}\end{array}\right]=\left[\begin{array}{cc}1&1\\ 1&2\end{array}\right]\left[\begin{array}{c}x_{n}\\ y_{n}\end{array}\right]\quad\mbox{modulo 1}. \tag{4.29}\] Since \(x\) and \(y\) are taken modulo 1, they may be viewed as angle variables, and this map is a map acting on the two dimensional surface of a torus.
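A short numerical experiment illustrates the hyperbolicity and mixing of the map (4.29). In the Python sketch below, the eigenvalue identities are exact, while the cell-count check is heuristic (floating point roundoff makes a long orbit behave pseudo-randomly rather than following the exact-arithmetic orbit); the initial condition, grid size, and iteration count are arbitrary illustrative choices.

```python
import math

def cat(x, y):
    # Map (4.29): (x, y) -> (x + y, x + 2y) modulo 1
    return ((x + y) % 1.0, (x + 2.0 * y) % 1.0)

# Eigenvalues of the matrix [[1, 1], [1, 2]]: lambda = (3 +/- sqrt(5))/2.
lam1 = (3.0 + math.sqrt(5.0)) / 2.0
lam2 = (3.0 - math.sqrt(5.0)) / 2.0
assert abs(lam1 * lam2 - 1.0) < 1e-12   # product = determinant = 1 (area preserving)
assert lam1 > 1.0 > lam2 > 0.0          # one expanding, one contracting direction

# A typical orbit spreads over the torus: count visits to a 10 x 10 grid of cells.
x, y = 0.1234, 0.5678
counts = [[0] * 10 for _ in range(10)]
n = 100000
for _ in range(n):
    x, y = cat(x, y)
    counts[min(9, int(10.0 * x))][min(9, int(10.0 * y))] += 1
flat = [c for row in counts for c in row]
assert min(flat) > 0                    # every cell is visited
assert max(flat) < 2 * min(flat)        # visit frequencies are roughly uniform
```

In exact arithmetic the orbit of a typical initial condition equidistributes with respect to the uniform measure on the torus; the finite-precision orbit only approximates this behavior.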
The coordinates specifying points on this surface are the two angles \(x\) and \(y\), one giving the location the long way around the torus, the other giving the location the short way round the torus, as shown in Figure 4.13\((a)\). (Here one circuit around is signified by increasing the corresponding angle variable by one, rather than by \(2\pi\).) The map is continuous (i.e., two points near each other on the toroidal surface are mapped to two other points that are near each other) by virtue of the fact that the entries of the matrix are integers (note the modulo 1 in (4.29)). This map is hyperbolic and structurally stable. To see this, we note that, by virtue of the linearity of (4.29), the Jacobian matrix \(\mathbf{DM}(\mathbf{x})\) is the same as the matrix in (4.29) specifying the map. The eigenvalues of the matrix (4.29) are \(\lambda_{1}=(3+\sqrt{5})/2>1\) and \(\lambda_{2}=(3-\sqrt{5})/2<1\). Thus, there are one dimensional stable and unstable directions that are just the directions parallel to the eigenvectors of the matrix, which are (1, \(\lambda_{1,2}-1\)). For typical initial conditions the map (4.29) generates orbits which eventually come arbitrarily close to any point on the toroidal surface. Furthermore, the typical orbit visits equal areas with equal frequency, and hence the natural invariant measure is uniform on the toroidal surface. Note that this map is area preserving, \[\det\left[\begin{array}{cc}1&1\\ 1&2\end{array}\right]=1.\]

Figure 4.13: The cat map. Note the mixing action of the map.

The book by Arnold and Avez (1968) contains the illustration of the action of the map (4.29) which we reproduce in Figure 4.13(_b_). A picture of the face of a cat is shown on the surface before the map is applied. Neglecting the modulo 1 operations, the square is mapped to a stretched out parallelogram which is returned to the square when the modulo 1 is taken. Because of this picture, (4.29) has been called the 'cat map.' The map Eq.
(3.17) considered by Sinai is a perturbation of Eq. (4.29) (the perturbation is the term \(\Delta\cos(2\pi y_{n})\)). Thus, by the structural stability of (4.29), if \(\Delta\) is not too large, the attractor for (3.17) is also hyperbolic and structurally stable (we do not know whether this is so for the value \(\Delta=0.1\) used for the plot in Figure 3.6). While hyperbolic sets are very convenient mathematically, it is unfortunately the case that much of the chaotic phenomena seen in systems occurring in practice is nonhyperbolic and apparently not structurally stable. This seems to be the case for almost all practically interesting chaotic attractors examined to date. On the other hand, in cases of _nonattracting_ chaotic sets, such as those arising in problems of chaotic scattering and fractal basin boundaries (see Chapter 5), hyperbolicity seems to be fairly common.

Figure 4.14: The Hénon attractor is shown together with a numerically calculated finite length segment of the stable manifold of a point on the attractor (You, 1991). Since almost every point in the basin of attraction goes to the attractor, the stable manifold segment will come arbitrarily close to any point in the basin area as the segment length is increased. Also, as the segment length is increased, more and more near tangencies with the unstable manifold are produced. Since other stable manifold segments are locally parallel to the calculated segment (they cannot cross), a near tangency for the calculated segment generally indicates an exact tangency for some other segment.

As an example of a nonhyperbolic chaotic attractor we mention the Henon attractor (Figure 1.12). The reason why the Henon attractor fails to be hyperbolic is that there are points **x** on the attractor at which the stable and unstable manifolds \(W^{\rm s}(\mathbf{x})\) and \(W^{\rm u}(\mathbf{x})\) are tangent.
We can regard the attractor itself as being the closure of the unstable manifold of points on the attractor. Numerical calculations of stable manifolds of the attractor reveal the structure shown in Figure 4.14, which, according to the discussion in the caption, shows that there are tangencies of stable and unstable manifolds. We require for hyperbolicity that \(E^{\rm s}_{\mathbf{x}}\oplus E^{\rm u}_{\mathbf{x}}\) span the tangent space at every point **x** on the attractor. Since the tangents to \(W^{\rm s}(\mathbf{x})\) and \(W^{\rm u}(\mathbf{x})\) coincide for **x** at such points, \(E^{\rm s}_{\mathbf{x}}\) and \(E^{\rm u}_{\mathbf{x}}\) are the same at tangency points, and thus they do not span the two dimensional space of tangent vectors \(T_{\mathbf{x}}\). Hence, the Henon attractor is not hyperbolic.

### Lyapunov exponents

Lyapunov exponents give a means of characterizing the stretching and contracting characteristics of attractors and other invariant sets. First consider the case of a map **M**. Let \(\mathbf{x}_{0}\) be an initial condition and \(\mathbf{x}_{n}\) (\(n=0,1,2,\ldots\)) the corresponding orbit. If we consider an infinitesimal displacement from \(\mathbf{x}_{0}\) in the direction of a tangent vector \(\mathbf{y}_{0}\), then the evolution of the tangent vector, given by \[\mathbf{y}_{n+1}=\mathbf{DM}(\mathbf{x}_{n})\mathbf{y}_{n}, \tag{4.30}\] determines the evolution of the infinitesimal displacement of the orbit from the unperturbed orbit \(\mathbf{x}_{n}\). In particular, \(\mathbf{y}_{n}/\|\mathbf{y}_{n}\|\) gives the direction of the infinitesimal displacement of the orbit from \(\mathbf{x}_{n}\), and \(\|\mathbf{y}_{n}\|/\|\mathbf{y}_{0}\|\) is the factor by which the infinitesimal displacement grows (\(\|\mathbf{y}_{n}\|>\|\mathbf{y}_{0}\|\)) or shrinks (\(\|\mathbf{y}_{n}\|<\|\mathbf{y}_{0}\|\)).
From (4.30), we have \(\mathbf{y}_{n}=\mathbf{DM}^{n}(\mathbf{x}_{0})\mathbf{y}_{0}\), where \[\mathbf{DM}^{n}(\mathbf{x}_{0})=\mathbf{DM}(\mathbf{x}_{n-1})\mathbf{DM}(\mathbf{x}_{n-2})\cdots\mathbf{DM}(\mathbf{x}_{0}).\] We define the Lyapunov exponent for initial condition \(\mathbf{x}_{0}\) and initial orientation of the infinitesimal displacement given by \(\mathbf{u}_{0}=\mathbf{y}_{0}/\|\mathbf{y}_{0}\|\) as \[h(\mathbf{x}_{0},\mathbf{u}_{0})=\lim_{n\rightarrow\infty}\frac{1}{n}\ln(\|\mathbf{y}_{n}\|/\|\mathbf{y}_{0}\|)=\lim_{n\rightarrow\infty}\frac{1}{n}\ln\|\mathbf{DM}^{n}(\mathbf{x}_{0})\mathbf{u}_{0}\|. \tag{4.31}\] If the dimension of the map is \(N\), then there will be \(N\) or fewer distinct Lyapunov exponents for a given \(\mathbf{x}_{0}\), and which one of these exponent values applies depends on the initial orientation \(\mathbf{u}_{0}\). (In Chapter 2 we have already discussed Lyapunov exponents for one dimensional maps (\(N=1\)), in which case there is only one exponent.) To see why there are several possible values of the Lyapunov exponent depending on the orientation of \(\mathbf{u}_{0}\), say \(n\) is large and approximate \(h(\mathbf{x}_{0},\mathbf{u}_{0})\) as \[h(\mathbf{x}_{0},\mathbf{u}_{0})\simeq\overline{h}_{n}(\mathbf{x}_{0},\mathbf{u}_{0})\equiv\frac{1}{n}\ln\|\mathbf{DM}^{n}(\mathbf{x}_{0})\mathbf{u}_{0}\|=\frac{1}{2n}\ln[\mathbf{u}_{0}^{\dagger}\mathbf{H}_{n}(\mathbf{x}_{0})\mathbf{u}_{0}], \tag{4.32}\] where \(\mathbf{H}_{n}(\mathbf{x}_{0})=[\mathbf{DM}^{n}(\mathbf{x}_{0})]^{\dagger}\mathbf{DM}^{n}(\mathbf{x}_{0})\), and \(\dagger\) denotes the transpose. Since \(\mathbf{H}_{n}(\mathbf{x}_{0})\) is a real nonnegative Hermitian matrix, its eigenvalues are real and nonnegative, and its eigenvectors may be taken to be real. Choosing \(\mathbf{u}_{0}\) to lie in the direction of an eigenvector of \(\mathbf{H}_{n}(\mathbf{x}_{0})\), we obtain values of the approximate Lyapunov exponent corresponding to each eigenvector.
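These approximate (finite \(n\)) exponents are easy to compute for a two dimensional example. The Python sketch below (an illustrative check, not from the text: the Hénon map with the common parameter values \(A=1.4\), \(B=0.3\), an arbitrary initial condition, and \(n=100\)) accumulates the product \(\mathbf{DM}^{n}\), forms \(\mathbf{H}_{n}\), and extracts both finite-time exponents from its eigenvalues. The small eigenvalue is obtained from the analytically known determinant rather than from the quadratic formula, to avoid catastrophic cancellation.

```python
import math

A, B = 1.4, 0.3   # illustrative Henon-map parameter values

def henon(x, y):
    return A - x * x + B * y, x

def matmul(p, q):
    # 2 x 2 matrix product
    return [[p[0][0] * q[0][0] + p[0][1] * q[1][0], p[0][0] * q[0][1] + p[0][1] * q[1][1]],
            [p[1][0] * q[0][0] + p[1][1] * q[1][0], p[1][0] * q[0][1] + p[1][1] * q[1][1]]]

x, y = 0.1, 0.1
for _ in range(100):               # discard a transient so the orbit is on the attractor
    x, y = henon(x, y)

n = 100
DMn = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(n):
    # left-multiply by the Jacobian DM(x_k) = [[-2x, B], [1, 0]],
    # building the chain-rule product for DM^n(x_0)
    DMn = matmul([[-2.0 * x, B], [1.0, 0.0]], DMn)
    x, y = henon(x, y)

# H_n = (DM^n)^T DM^n is symmetric and nonnegative.
DMnT = [[DMn[0][0], DMn[1][0]], [DMn[0][1], DMn[1][1]]]
H = matmul(DMnT, DMn)
tr = H[0][0] + H[1][1]
det_H = B ** (2 * n)               # det DM = -B each step, so det H_n = B^(2n)
H_big = 0.5 * (tr + math.sqrt(tr * tr - 4.0 * det_H))
H_small = det_H / H_big            # avoids cancellation for the tiny eigenvalue

h1_bar = math.log(H_big) / (2 * n)     # finite-n exponents, cf. Eq. (4.32)
h2_bar = math.log(H_small) / (2 * n)
assert h1_bar > 0.0 > h2_bar
assert abs(h1_bar + h2_bar - math.log(B)) < 1e-9
```

For this map the two finite-time exponents sum to \(\ln B\) because the Jacobian determinant is constant, a fact exploited again later in the section.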
We denote these values \(\overline{h}_{jn}(\mathbf{x}_{0})=(2n)^{-1}\ln H_{jn}\), where \(H_{jn}\) denotes an eigenvalue of \(\mathbf{H}_{n}(\mathbf{x}_{0})\), and we order the subscript labeling of the \(\overline{h}_{jn}(\mathbf{x}_{0})\) such that \(\overline{h}_{1n}(\mathbf{x}_{0})\geq\overline{h}_{2n}(\mathbf{x}_{0})\geq\cdots\geq\overline{h}_{Nn}(\mathbf{x}_{0})\). Thus \(\overline{h}_{1n}\) is the largest exponent and \(\overline{h}_{Nn}\) is the smallest (if \(\overline{h}_{Nn}<0\), then \(\overline{h}_{Nn}\) is the most negative exponent). Letting \(n\) approach infinity, the approximations \(\overline{h}_{jn}(\mathbf{x}_{0})\) approach the Lyapunov exponents, which we denote \[h_{j}(\mathbf{x}_{0})=\lim_{n\rightarrow\infty}\overline{h}_{jn}(\mathbf{x}_{0}),\qquad h_{1}(\mathbf{x}_{0})\geq h_{2}(\mathbf{x}_{0})\geq\cdots\geq h_{N}(\mathbf{x}_{0}). \tag{4.33}\] Expanding \(\mathbf{u}_{0}\) in the eigenvectors of \(\mathbf{H}_{n}(\mathbf{x}_{0})\) with coefficients \(a_{j}\), a typical choice of \(\mathbf{u}_{0}\) has \(a_{1}\neq 0\) and yields \(\overline{h}_{1n}(\mathbf{x}_{0})\), while a choice with \(a_{1}=0\) gives \(\mathbf{u}_{0}^{\dagger}\mathbf{H}_{n}(\mathbf{x}_{0})\mathbf{u}_{0}\approx a_{2}^{2}H_{2n}\) (provided \(n\) is sufficiently large and \(a_{2}\neq 0\)), which when placed in (4.32) yields \(\overline{h}_{2n}(\mathbf{x}_{0})\). Proceeding in this way one can imagine, at least in principle, obtaining all the Lyapunov exponents. In practice, when attempting to calculate Lyapunov exponents numerically, special techniques are called for (Benettin _et al._, 1980). We shall discuss these at the end of this section. Say we sprinkle initial conditions in a small ball around \(\mathbf{x}_{0}\), and then evolve each initial condition under the map \(\mathbf{M}\) for \(n\) iterates. Considering the initial ball radius to be infinitesimal, the initial ball evolves into an ellipsoid. This is illustrated in Figure 4.15 for the case \(N=2\) and \(h_{1}(\mathbf{x}_{0})>0>h_{2}(\mathbf{x}_{0})\). In the limit of large time the Lyapunov exponents give the time rate of exponential growth or shrinking of the principal axes of the evolving ellipsoid. Oseledec's multiplicative ergodic theorem (1968) guarantees the existence of the limits used in defining the Lyapunov exponents under very general circumstances.
In particular, if \(\mu\) is an ergodic measure (Section 2.3), the Lyapunov exponent values \(h_{i}(\mathbf{x}_{0})\) obtained from (4.30) and (4.31) are the same set of values for almost every \(\mathbf{x}_{0}\) with respect to the measure \(\mu\) (see, for example, Ruelle (1989)). For the case of the natural measure on an attractor, this implies that the Lyapunov exponents with respect to that measure are also the same set of values for all \(\mathbf{x}_{0}\) in the basin of attraction of the attractor, except for a set of Lebesgue measure zero. Henceforth, we will often drop the \(\mathbf{x}_{0}\) dependence of \(h_{i}(\mathbf{x}_{0})\) and write \(h_{i}\), with the understanding that the \(h_{i}\) are the Lyapunov exponents that apply for almost every \(\mathbf{x}_{0}\) with respect to Lebesgue measure in the basin of attraction of the attractor. Thus, we can speak of the Lyapunov exponents of an attractor without reference to a specific initial condition. We define the attractor to be _chaotic_ if it has a positive Lyapunov exponent (i.e., \(h_{1}>0\)). In this case typical, infinitesimally displaced, initial conditions separate from each other exponentially in time, with the infinitesimal distance between them on average growing as \(\exp(nh_{1})\). Hence, we also refer to the condition \(h_{1}>0\) as implying _exponential sensitivity to initial conditions_ for the attractor. (Two initial conditions separated by a small distance \(\varepsilon>0\) (not infinitesimal) will initially separate exponentially, but (assuming the initial conditions lie on a bounded attractor) exponential separation only holds for distances small compared to the attractor size (as in Figure 1.15).)

Figure 4.15: Evolution of an initial infinitesimal ball after \(n\) iterations of the map.

Consider a periodic orbit of a map \(\mathbf{M}\), \(\mathbf{x}_{0}^{*}\to\mathbf{x}_{1}^{*}\to\cdots\to\mathbf{x}_{p}^{*}=\mathbf{x}_{0}^{*}\).
The Lyapunov exponents for the periodic orbit are \[h_{i}=(1/p)\ln|\lambda_{i}|,\] where \(\lambda_{i}\) are the eigenvalues of the matrix \(\mathbf{DM}^{p}\) evaluated at one of the points \(\mathbf{x}=\mathbf{x}_{j}^{*}\). For a chaotic attractor we have seen that there can be infinitely many unstable periodic orbits embedded within the attractor (Section 2.1). Each of these unstable periodic orbits typically yields a set of Lyapunov exponents, \(h_{i}=(1/p)\ln|\lambda_{i}|\), that are different from those which apply for almost every initial condition with respect to Lebesgue measure in the basin. Thus, they (together with their stable manifolds) are part of the 'exceptional' zero Lebesgue measure set which does not yield the Lyapunov exponents of the chaotic attractor (assuming the chaotic attractor has a natural measure). As an example, let us calculate the Lyapunov exponents for the generalized baker's map, Eqs. (3.7). The Jacobian matrix of the map is diagonal (Eq. (4.28)) and hence so is \(\mathbf{H}_{n}(\mathbf{x})\), \[\mathbf{H}_{n}(\mathbf{x})=\left[\begin{array}{cc}H_{x}(\mathbf{x})&0\\ 0&H_{y}(\mathbf{x})\end{array}\right],\] \[H_{x}(\mathbf{x})=(\lambda_{a}^{n_{1}}\lambda_{b}^{n_{2}})^{2}<1,\] \[H_{y}(\mathbf{x})=(\alpha^{-n_{1}}\beta^{-n_{2}})^{2}>1,\] where \(n_{1}\) and \(n_{2}\) are the number of times the orbit of length \(n=n_{1}+n_{2}\) which starts at \(\mathbf{x}\) falls below and above the horizontal line \(y=\alpha\). From (4.31), choosing \(\mathbf{u}_{0}\) purely in the \(y\) direction and purely in the \(x\) direction, we have \[h_{1}=\lim_{n\to\infty}\left(\frac{n_{1}}{n}\ln\frac{1}{\alpha}+\frac{n_{2}}{n}\ln\frac{1}{\beta}\right),\] \[h_{2}=\lim_{n\to\infty}\left(\frac{n_{1}}{n}\ln\lambda_{a}+\frac{n_{2}}{n}\ln\lambda_{b}\right).\] For a typical \(\mathbf{x}\) the quantity \(\lim_{n\to\infty}(n_{1}/n)\) is just the natural measure of the attractor in \(y<\alpha\). Thus, \(\lim_{n\to\infty}(n_{1}/n)=\alpha\).
Similarly, \(\lim_{n\to\infty}(n_{2}/n)\) is the natural measure in \(y>\alpha\), and thus \(\lim_{n\to\infty}(n_{2}/n)=\beta\). The Lyapunov exponents for the generalized baker's map are thus \[h_{1}=\alpha\ln\frac{1}{\alpha}+\beta\ln\frac{1}{\beta}>0, \tag{4.35a}\] \[h_{2}=\alpha\ln\lambda_{a}+\beta\ln\lambda_{b}<0. \tag{4.35b}\] On the other hand, the point \(x=y=0\) is a fixed point on the attractor, and if we use this for the initial condition of our orbit, we obtain \(n_{1}/n=1\), \(n_{2}/n=0\). Thus, this 'exceptional' initial condition yields values of the exponents that are different from those in (4.35); namely, we obtain \(h_{1}=\ln\alpha^{-1}\) and \(h_{2}=\ln\lambda_{a}\). It has been conjectured by Kaplan and Yorke (1979b) that there is a relationship giving the fractal dimension of a typical chaotic attractor in terms of Lyapunov exponents (see also Farmer _et al._, 1983). Let \(K\) be the largest integer such that (recall our ordering Eq. (4.33)) \[\sum_{j=1}^{K}h_{j}\geq 0.\] Define the quantity \(D_{\rm L}\) called the _Lyapunov dimension_, \[D_{\rm L}=K+\frac{1}{|h_{K+1}|}\sum_{j=1}^{K}h_{j}. \tag{4.36}\] The conjecture is that the Lyapunov dimension is the same as the information dimension of the attractor for 'typical attractors,' \[D_{1}=D_{\rm L}. \tag{4.37}\] In the case of a two dimensional map with \(h_{1}>0>h_{2}\) and \(h_{1}+h_{2}<0\) (e.g., the Henon map), \[D_{\rm L}=1+h_{1}/|h_{2}|. \tag{4.38}\] (The condition \(h_{1}+h_{2}<0\) says that on average areas are contracted by the map.) We can motivate (4.37) for this case as follows. Say we cover some fraction \(\theta<1\) of the attractor natural measure with \(N(\varepsilon,\theta)\) small boxes of edge length \(\varepsilon\), where we choose the boxes so as to minimize \(N(\varepsilon,\theta)\). Now consider one of these boxes and iterate it forward in time by \(n\) iterates.
In the linear approximation, typically valid for \(\varepsilon\exp(nh_{1})\) much less than the size of the attractor, the box will be stretched in length by an amount of order \(\exp(nh_{1})\), and will be contracted in width by an amount of order \(\exp(nh_{2})\), becoming a long thin parallelogram as shown in Figure 4.16. Now consider covering the parallelogram with smaller boxes of edge \(\varepsilon\exp(nh_{2})\). This requires of the order of \(\exp[n(h_{1}-h_{2})]=\exp[n(h_{1}+|h_{2}|)]\) such boxes. Since the natural measure is invariant, the measures in the original square and in the parallelogram are the same. Thus, we obtain an estimate for the number of boxes of edge length \(\varepsilon\exp(nh_{2})\) needed to cover the fraction \(\theta\) of the natural measure, \[N(\varepsilon\exp(nh_{2}),\theta)\sim\exp[n(h_{1}+|h_{2}|)]N(\varepsilon,\theta).\] Assuming \(D_{0}(\theta)=D_{1}\) and \(N(\varepsilon,\theta)\sim\varepsilon^{-D_{1}}\), the above gives \[[\varepsilon\exp(nh_{2})]^{-D_{1}}\sim\exp[n(h_{1}+|h_{2}|)]\varepsilon^{-D_{1}},\] from which, by taking logarithms, we immediately obtain (4.37). Note that (4.37) also holds for the generalized baker's map, as can be checked by substituting \(h_{1}\) and \(h_{2}\) from (4.35) in (4.38) and comparing with the result Eq. (3.24) for the information dimension of the generalized baker's map. No rigorous proof of the Kaplan-Yorke conjecture exists, but some rigorous results related to it have been obtained by Young (1982) and by Ledrappier (1981). Numerical evidence in the case of two dimensional maps (Russell _et al._, 1980) also supports it. One question that naturally arises in the above heuristic derivation of the conjecture \(D_{\rm L}=D_{1}\) is why not take \(\theta=1\), in which case we would have \(D_{\rm L}=D_{0}\). The point is that the Lyapunov exponents \(h_{1}\) and \(h_{2}\) of the attractor apply for almost every point with respect to the natural measure on the attractor.
On the other hand, there is typically a natural measure zero set on a chaotic attractor which yields larger stretching than \(h_{1}\). For small \(\varepsilon\) this natural measure zero set makes \(N(\varepsilon,1)\) much larger than \(N(\varepsilon,\theta)\), \(\theta<1\). Thus points on the attractor with nontypical stretching dominate in determining \(D_{0}\). In Figure 4.16 we have assumed that the stretching experienced by a box is given by the 'typical' values \(h_{1}\) and \(h_{2}\), and this is appropriate for \(D_{1}\) but not \(D_{0}\). Indeed \(D_{\rm L}=D_{0}\) is contradicted by the example of the generalized baker's map.

Figure 4.16: Box stretched into a long thin parallelogram.

In all of the above we have been discussing the Lyapunov exponents of maps. For the case of a continuous time system in the form of an autonomous system of first order ordinary differential equations, \(\mathrm{d}\mathbf{x}/\mathrm{d}t=\mathbf{F}(\mathbf{x})\), all of the above considerations carry through with Eq. (4.31) replaced by \[h(\mathbf{x}_{0},\mathbf{u}_{0})=\lim_{t\to\infty}\frac{1}{t}\ln(\|\mathbf{y}(t)\|/\|\mathbf{y}_{0}\|)=\lim_{t\to\infty}\frac{1}{t}\ln\|\hat{\mathbf{O}}(\mathbf{x}_{0},t)\mathbf{u}_{0}\|, \tag{4.39}\] where \(\mathrm{d}\mathbf{y}(t)/\mathrm{d}t=\mathbf{DF}(\mathbf{x}(t))\mathbf{y}(t)\), \(\mathbf{x}_{0}=\mathbf{x}(0)\), \(\mathbf{y}_{0}=\mathbf{y}(0)\), \(\mathbf{u}_{0}=\mathbf{y}_{0}/\|\mathbf{y}_{0}\|\), and \(\hat{\mathbf{O}}(\mathbf{x}_{0},t)\) is the matrix solution of the equation \[\mathrm{d}\hat{\mathbf{O}}/\mathrm{d}t=\mathbf{DF}(\mathbf{x}(t))\hat{\mathbf{O}}\] subject to the initial condition \[\hat{\mathbf{O}}(\mathbf{x}_{0},0)=\mathbf{1}.\] From the above we have \(\mathbf{y}(t)=\hat{\mathbf{O}}(\mathbf{x}_{0},t)\mathbf{y}_{0}\), and \(\hat{\mathbf{O}}\) plays the role of \(\mathbf{DM}^{n}\) in our treatment of maps.
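The matrix equation for the fundamental solution can be checked numerically. In the Python sketch below (illustrative assumptions throughout: the standard Lorenz equations with \(\sigma=10\), \(r=28\), \(b=8/3\) rather than the rescaled form of Eqs. (2.30), a fourth order Runge-Kutta step, and arbitrary initial data), the fundamental matrix is integrated from the identity, and its determinant is compared with \(\exp\bigl(\int_{0}^{t}\mathrm{tr}\,\mathbf{DF}\,\mathrm{d}t'\bigr)=\mathrm{e}^{-(\sigma+1+b)t}\), which follows from Liouville's formula because the trace of the Lorenz Jacobian is constant.

```python
import math

SIGMA, R, BVAL = 10.0, 28.0, 8.0 / 3.0   # standard Lorenz parameters (illustrative)

def F(v):
    x, y, z = v
    return [SIGMA * (y - x), R * x - y - x * z, x * y - BVAL * z]

def DF(v):
    x, y, z = v
    return [[-SIGMA, SIGMA, 0.0],
            [R - z, -1.0, -x],
            [y, x, -BVAL]]               # trace = -(SIGMA + 1 + BVAL), a constant

def deriv(state):
    # Joint derivative of the orbit x(t) and fundamental matrix O(t):
    # dx/dt = F(x), dO/dt = DF(x(t)) O.
    v, O = state
    J = DF(v)
    dO = [[sum(J[i][k] * O[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    return F(v), dO

def add(state, dstate, h):
    v, O = state
    dv, dO = dstate
    return ([v[i] + h * dv[i] for i in range(3)],
            [[O[i][j] + h * dO[i][j] for j in range(3)] for i in range(3)])

def rk4_step(state, h):
    k1 = deriv(state)
    k2 = deriv(add(state, k1, h / 2.0))
    k3 = deriv(add(state, k2, h / 2.0))
    k4 = deriv(add(state, k3, h))
    s = add(state, k1, h / 6.0)
    s = add(s, k2, h / 3.0)
    s = add(s, k3, h / 3.0)
    return add(s, k4, h / 6.0)

def det3(O):
    return (O[0][0] * (O[1][1] * O[2][2] - O[1][2] * O[2][1])
            - O[0][1] * (O[1][0] * O[2][2] - O[1][2] * O[2][0])
            + O[0][2] * (O[1][0] * O[2][1] - O[1][1] * O[2][0]))

identity = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
state = ([1.0, 1.0, 20.0], identity)     # arbitrary initial condition
steps, h = 2000, 0.0005
T = steps * h
for _ in range(steps):
    state = rk4_step(state, h)

# Liouville's formula: det O(t) = exp(-(sigma + 1 + b) t) here.
expected = math.exp(-(SIGMA + 1.0 + BVAL) * T)
assert abs(det3(state[1]) / expected - 1.0) < 1e-2
```

The same jointly integrated tangent flow, combined with the renormalization discussed next, is the standard route to Lyapunov exponents of flows.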
For a chaotic attractor of a flow there is one Lyapunov exponent which is zero, corresponding to an infinitesimal displacement along the flow (Section 4.2). Thus, for example, the Lorenz attractor (Eqs. (2.30)) has \(h_{1}>0\), \(h_{2}=0\), \(h_{3}<0\) and \(h_{1}+h_{2}+h_{3}<0\), yielding \[D_{\rm L}=2+h_{1}/|h_{3}|.\] A useful fact to keep in mind is that, in cases where there is uniform phase space contraction, there is a relationship among the Lyapunov exponents. For example, for the Henon map Eq. (1.14), the determinant of the Jacobian is the constant \(-B\), independent of the orbit position. Thus \(\mathbf{H}_{n}(\mathbf{x})\) has determinant \(B^{2n}\). Since the determinant of \(\mathbf{H}_{n}(\mathbf{x})\) is the product of its eigenvalues, Eq. (4.32) yields \[h_{1}+h_{2}=\ln B\] for the Henon map. Thus a numerical calculation of \(h_{1}\) immediately yields \(h_{2}\) by the above formula without need for an additional numerical calculation of \(h_{2}\). A similar situation arises for the Lorenz attractor Eqs. (2.30), for which phase space volumes contract at the exponential rate \(-(1+\tilde{\sigma}+\tilde{b})\) (cf. Section 2.4.1). In this case we therefore have \[h_{1}+h_{3}=-(1+\tilde{\sigma}+\tilde{b})\] with \(h_{2}=0\). A technique for numerically calculating Lyapunov exponents of chaotic orbits has been given by Benettin _et al._ (1980). First consider calculating the largest exponent \(h_{1}\). Choosing \(\mathbf{y}_{0}\) arbitrarily (so that it has a component in the direction of maximum exponential growth), and iterating (4.30) for a long time, \(\mathbf{y}_{n}\) typically becomes so large that one encounters computer overflow if \(h_{1}>0\). This problem can be overcome by renormalizing **y** to 1 periodically.
That is, at every time \(\tau_{j}=j\tau\) (\(j=1,\,2,\,3,\,\ldots\)), where \(\tau\) is some arbitrarily chosen time interval (not too large), we divide the tangent vector by its magnitude \(\alpha_{j}\) to renormalize it to a vector of magnitude 1. Storing the \(\alpha_{j}\), we obtain the largest Lyapunov exponent as (cf. Eq. (4.31)) \[h_{1}=\lim_{l\to\infty}\frac{1}{l\tau}\sum_{j=1}^{l}\ln\alpha_{j}, \tag{4.40}\] and we approximate \(h_{1}\) by \[h_{1}\cong\frac{1}{l\tau}\sum_{j=1}^{l}\ln\alpha_{j}\] for some sufficiently large \(l\) such that numerically the result appears to have converged to within some acceptable tolerance. To calculate the second Lyapunov exponent we choose _two_ independent arbitrary starting vectors \(\mathbf{y}_{0}^{(1)}\) and \(\mathbf{y}_{0}^{(2)}\). Two such vectors define the two dimensional area \(A_{0}\) of a parallelogram lying in the \(N\) dimensional phase space. Iterating these two vectors \(n\) times, we obtain \(\mathbf{y}_{n}^{(1)}\) and \(\mathbf{y}_{n}^{(2)}\), which define a parallelogram of area \(A_{n}\). Since \(\mathbf{y}_{0}^{(1)}\) and \(\mathbf{y}_{0}^{(2)}\) are assumed to have components in the directions \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\), the original parallelogram will be distorted as in Figure 4.17 so that \(A_{n}\sim\exp[n(h_{1}+h_{2})]A_{0}\). Thus we have \[h_{1}+h_{2}=\lim_{n\to\infty}\frac{1}{n}\ln(A_{n}/A_{0}). \tag{4.41}\] Hence, if we have calculated an estimate of \(h_{1}\), then numerical calculation of an estimate of the right hand side of Eq. (4.41) yields an estimate of \(h_{2}\). There are two difficulties: (1) as before, \(\mathbf{y}_{n}^{(1)}\) and \(\mathbf{y}_{n}^{(2)}\) tend to become very large as \(n\) is increased, and (2) their orientations become more and more coincident (since, for large \(n\), their predominant growth is in the direction corresponding to \(h_{1}\)).
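As a concrete illustration, the renormalization procedure just described can be sketched in a few lines of code. This sketch is not part of the text: it applies the method to the Henon map of Eq. (1.14) with the assumed parameter values \(A=1.4\), \(B=0.3\) and an arbitrary initial condition, and for a map the renormalization interval is one iterate (\(\tau=1\)). The second exponent is then obtained from the determinant relation \(h_{1}+h_{2}=\ln B\) noted earlier, with no additional computation.

```python
import numpy as np

# Sketch of the Benettin renormalization procedure for h_1 on the Henon map.
# A = 1.4, B = 0.3 and the initial condition are assumed illustration values.
A, B = 1.4, 0.3

def henon(x, y):
    return A - x * x + B * y, x

def largest_exponent(n_iter=100_000, n_transient=1_000):
    x, y = 0.1, 0.1
    for _ in range(n_transient):                   # relax onto the attractor
        x, y = henon(x, y)
    v = np.array([1.0, 0.0])                       # arbitrary tangent vector y_0
    log_sum = 0.0
    for _ in range(n_iter):
        J = np.array([[-2.0 * x, B], [1.0, 0.0]])  # Jacobian of the map at (x, y)
        v = J @ v                                  # evolve the tangent vector
        alpha = np.linalg.norm(v)
        log_sum += np.log(alpha)                   # accumulate ln(alpha_j)
        v /= alpha                                 # renormalize to magnitude 1
        x, y = henon(x, y)
    return log_sum / n_iter                        # Eq. (4.40) with tau = 1

h1_est = largest_exponent()
h2_est = np.log(B) - h1_est    # using h1 + h2 = ln B (constant Jacobian determinant)
print(h1_est, h2_est)
```

With these assumed parameters the estimate should settle near \(h_{1}\approx 0.42\) (natural logarithm), giving \(h_{2}\approx-1.62\).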
Difficulty (2) implies that computer roundoff errors will eventually obliterate the difference between the directions \(\mathbf{y}_{n}^{(1)}/\|\mathbf{y}_{n}^{(1)}\|\) and \(\mathbf{y}_{n}^{(2)}/\|\mathbf{y}_{n}^{(2)}\|\) (cf. Figure 4.17). To circumvent these problems, the previous technique for calculating \(h_{1}\) can be naturally extended by generalization of the normalization procedure. At each time \(\tau_{j}\) we replace the evolving pair of vectors by two _orthonormal_ vectors in the two dimensional linear space spanned by the evolving vectors. We then obtain \[h_{1}+h_{2}\cong\frac{1}{l\tau}\sum_{j=1}^{l}\ln\alpha_{j}^{(2)},\] where \(\alpha_{j}^{(2)}\) is the parallelogram area (before normalization) at time \(\tau_{j}\). For \(h_{k}\) (\(k=3,\,4,\,\ldots\)) the procedure is basically the same. We evolve \(k\) vectors, keeping track of the \(k\) dimensional parallelepiped volume which they define, and normalizing the set of \(k\) evolved vectors at each time \(\tau_{j}\). In this case at every time \(\tau_{j}\) we replace the evolved vectors by the corresponding orthonormal set defined through the Gram Schmidt orthonormalization procedure, which preserves the linear subspace spanned by the evolved vectors before normalization. Thus, we obtain \[h_{1}+h_{2}+\cdots+h_{k}\cong\frac{1}{l\tau}\sum_{j=1}^{l}\ln\alpha_{j}^{(k)},\] where \(\alpha_{j}^{(k)}\) is the \(k\) dimensional parallelepiped volume (before normalization) at time \(\tau_{j}\).

### 4.5 Entropies

Both the metric entropy and the topological entropy have played a fundamental role in the mathematical theory of chaos. So far, however, these entropies have been less useful than the Lyapunov exponents for examining situations occurring in practice. This is because their values are generally much harder to determine. (The numerical determination of the topological entropy, while comparatively difficult, has been carried out for systems solved on computers in several cases (Newhouse, 1986; Chen _et al._, 1990a; Biham and Wenzel, 1989; Kovacs and Tel, 1990; Chen _et al._, 1991).) In the following discussion we only consider discrete time dynamical systems (i.e., maps). We begin by discussing the metric entropy.
The metric entropy (Kolmogorov, 1958; Sinai, 1959, 1976) was introduced by Kolmogorov and is also sometimes called the K-S entropy after Kolmogorov and Sinai. The metric entropy can be thought of as a number measuring the time rate of creation of information as a chaotic orbit evolves. This statement is to be understood in the following sense. Due to the sensitivity to initial conditions in chaotic systems, nearby orbits diverge. If we can only distinguish orbit locations in phase space to within some given accuracy, then the initial conditions for two orbits may appear to be the same. As their orbits evolve forward in time, however, they will eventually move far enough apart that they may be distinguished as different. Alternatively, as an orbit is iterated, by observing its location with the given accuracy that we have, initially insignificant digits in the specification of the initial condition will eventually make themselves felt. Thus, assuming that we can calculate exactly and that we know the dynamical equations giving an orbit, if we view that orbit with limited precision, we can, in principle, use our observations to obtain more and more information about the initial unresolved digits specifying the initial condition. It is in this sense that we say that a chaotic orbit creates information, and we shall illustrate this with a concrete example subsequently. The definition of the metric entropy is based on Shannon's formulation of the degree of uncertainty in being able to predict the outcome of a probabilistic event. Say an experiment has \(r\) possible outcomes, and let \(p_{1},\,p_{2},\,\ldots,\,p_{r}\) be the probabilities of each outcome. (Think of the experiment as spinning a roulette wheel with numbers \(1,\,2,\,\ldots,\,r\) assigned to \(r\) segments (possibly of unequal length) composing the periphery of the wheel and with the \(p_{i}\)s proportional to the lengths of their respective segments on the wheel.)
The Shannon entropy gives a number which characterizes the amount of uncertainty that we have concerning which outcome will result. The number is \[H_{\rm S}=\sum_{i=1}^{r}p_{i}\ln(1/p_{i}) \tag{4.42}\] (where we define \(p\ln(1/p)\equiv 0\) if \(p=0\)). For example, if \(p_{1}=1\) and \(p_{2}=p_{3}=\cdots=p_{r}=0\), then there is no uncertainty, since we know that event 1 always occurs. Thus, we can predict the outcome with complete confidence. In this case, Eq. (4.42) gives \(H_{\rm S}=0\). The most uncertain case is when all \(r\) events are equally probable, \(p_{i}=1/r\) for \(i=1,\,2,\,\ldots,\,r\). In this case, (4.42) gives \(H_{\rm S}=\ln r\). In general, by virtue of \(p_{1}+\cdots+p_{r}=1\), the function of the \(p_{i}\)s given by \(H_{\rm S}\) defined in (4.42) will lie between 0 and \(\ln r\), and we say the outcome is more uncertain (harder to predict) if \(H_{\rm S}\) is larger (i.e., closer to \(\ln r\)). Say we have an invariant probability measure \(\mu\) for some dynamical system. The metric entropy for that measure is denoted \(h(\mu)\) and is defined as follows. Let \(W\) be a bounded region containing the probability measure which is invariant under a map \({\bf M}\). Let \(W\) be partitioned into \(r\) disjoint components, \(W=W_{1}\cup W_{2}\cup\cdots\cup W_{r}\). We can then form an entropy function for the partition \(\{W_{i}\}\), \[H(\{W_{i}\})=\sum_{i=1}^{r}\mu(W_{i})\ln[\mu(W_{i})]^{-1}. \tag{4.43a}\] Now we construct a succession of partitions \(\{W_{i}^{(n)}\}\) of smaller and smaller size by the following procedure. We take our original partition and form the sets \({\bf M}^{-1}(W_{k})\). Then, for each pair of integers \(j\) and \(k\) (\(j,\,k=1,\,2,\,\ldots,\,r\)), we form the \(r^{2}\) intersections \[W_{j}\cap{\bf M}^{-1}(W_{k}).\] Collecting all the _nonempty_ intersections thus formed, we obtain the partition \(\{W_{i}^{(2)}\}\).
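Eq. (4.42) is simple enough to transcribe directly. The following sketch (not part of the text, with made-up probability vectors) reproduces the limiting cases just discussed: \(H_{\rm S}=0\) for a certain outcome and \(H_{\rm S}=\ln r\) for equally probable outcomes.

```python
import math

# Direct transcription of the Shannon entropy, Eq. (4.42).
def shannon_entropy(p):
    # convention: p * ln(1/p) = 0 when p = 0
    return sum(pi * math.log(1.0 / pi) for pi in p if pi > 0.0)

h_certain = shannon_entropy([1.0, 0.0, 0.0, 0.0])   # outcome 1 is sure: H_S = 0
h_uniform = shannon_entropy([0.25] * 4)             # maximal uncertainty: H_S = ln 4
h_mixed = shannon_entropy([0.5, 0.25, 0.25])        # intermediate case
print(h_certain, h_uniform, math.log(4), h_mixed)
```

The intermediate case lies strictly between 0 and \(\ln r\), as the general statement after (4.42) requires.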
The next stage of partition, \(\{W_{i}^{(3)}\}\), is obtained by forming the \(r^{3}\) intersections (\(j,\,k,\,l=1,\,2,\,\ldots,\,r\)) \[W_{j}\cap{\bf M}^{-1}(W_{k})\cap{\bf M}^{-2}(W_{l}),\] and so on, so that for \(\{W_{i}^{(n)}\}\) we form the intersections \[W_{i_{1}}\cap{\bf M}^{-1}(W_{i_{2}})\cap{\bf M}^{-2}(W_{i_{3}})\cap\cdots\cap{\bf M}^{-(n-1)}(W_{i_{n}}),\] with \(i_{1},\,i_{2},\,\ldots,\,i_{n}=1,\,2,\,\ldots,\,r\). Next we write \[h(\mu,\{W_{i}\})=\lim_{n\to\infty}\frac{1}{n}H(\{W_{i}^{(n)}\}). \tag{4.43b}\] The quantity \(h(\mu,\{W_{i}\})\) depends on the original partition \(\{W_{i}\}\). To obtain the metric entropy we maximize \(h(\mu,\{W_{i}\})\) over all possible initial partitions \(\{W_{i}\}\), \[h(\mu)=\sup_{\{W_{i}\}}h(\mu,\{W_{i}\}). \tag{4.43c}\] We now give a specific example of the construction of the successive stages of a partition. We use for this example the natural measure for the chaotic attractor of the generalized baker's map (see Figure 3.4). Figure 4.18(_a_) shows the unit square on which the generalized baker's map operates. For our initial partition we choose the two horizontal strips \(0\leq y\leq\alpha\) and \(\alpha\leq y\leq 1\), shown as \(W_{1}\) and \(W_{2}\) in the figure, where the vertical width of \(W_{1}\) is \(\alpha\) and the vertical width of \(W_{2}\) is \((1-\alpha)=\beta\). The set \(\mathbf{M}^{-1}(W_{1})\), which is the set mapping to \(W_{1}\) on one iterate of the generalized baker's map, is composed of the two strips (one of width \(\alpha^{2}\), the other of width \(\alpha\beta\)) shown crosshatched in Figure 4.18(_b_). Similarly, Figure 4.18(_c_) shows \(\mathbf{M}^{-1}(W_{2})\). Forming the intersections \(W_{j}\cap\mathbf{M}^{-1}(W_{k})\) (\(j,\,k=1,\,2\)), we obtain the new partition \(W_{1}^{(2)}\), \(W_{2}^{(2)}\), \(W_{3}^{(2)}\), \(W_{4}^{(2)}\), shown in Figure 4.18(_d_).
Continuing in this manner, one finds that \(\{W_{i}^{(n)}\}\) consists of \(2^{n}\) horizontal strips, \(Z(n,\,m)\) of which have widths \(\alpha^{m}\beta^{n-m}\), \(m=0,\,1,\,2,\,\ldots,\,n\). For the initial partition we have \(H(\{W_{i}\})=\alpha\ln\alpha^{-1}+\beta\ln\beta^{-1}\). Doing the same for the partition \(\{W_{i}^{(2)}\}\) shown in Figure 4.18(_d_), we have \(H(\{W_{i}^{(2)}\})=\alpha^{2}\ln\alpha^{-2}+2\alpha\beta\ln(\alpha\beta)^{-1}+\beta^{2}\ln\beta^{-2}=2H(\{W_{i}\})\). In fact for \(\{W_{i}^{(n)}\}\) we obtain \[H(\{W_{i}^{(n)}\})=n[\alpha\ln(1/\alpha)+\beta\ln(1/\beta)]=nH(\{W_{i}\})\] for all \(n\). (While in general \(H(\{W_{i}^{(n)}\})\) grows linearly with \(n\) for large \(n\), the exact result \(H(\{W_{i}^{(n)}\})=nH(\{W_{i}\})\) for all \(n\) only applies in very special cases.) From (4.43b) this yields \(h(\mu,\,\{W_{i}\})=H(\{W_{i}\})\). It may be shown that the partition we have chosen is, in fact, optimal in the sense that it yields the maximum value prescribed by Eq. (4.43c). Thus we have for the metric entropy \[h(\mu)=\alpha\ln(1/\alpha)+\beta\ln(1/\beta). \tag{4.44}\] Say that we can observe whether the orbit of the generalized baker's map lies in \(W_{1}\) or \(W_{2}\) on any given iterate. Then the information associated with such a specification of the initial condition is \(H(\{W_{i}\})=\alpha\ln(1/\alpha)+\beta\ln(1/\beta)\). Now say we observe the orbit and its first iterate and find which of the components, \(W_{1}\) or \(W_{2}\), they lie in. These observations of the initial condition and its first iterate determine a more precise knowledge of the location of the initial condition; namely, they determine which of the four narrower strips \(W_{i}^{(2)}\) the initial condition lies in. (For example, if the initial condition is in \(W_{1}\) and the first iterate is in \(W_{2}\), then knowledge of the map as specified in Figure 3.4 determines that the initial condition is in \(W_{2}^{(2)}\).)
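The identity \(H(\{W_{i}^{(n)}\})=nH(\{W_{i}\})\) can be checked numerically by summing over the \(2^{n}\) strips. The sketch below (not from the text, with an assumed value of \(\alpha\)) does the bookkeeping using the binomial count \(Z(n,\,m)\) of strips of measure \(\alpha^{m}\beta^{n-m}\).

```python
import math

# Numerical check that H({W_i^(n)}) = n * H({W_i}) for the natural measure of
# the generalized baker's map.  alpha = 0.3 is an assumed illustration value.
alpha = 0.3
beta = 1.0 - alpha

def H_partition(n):
    # nth-stage partition: comb(n, m) strips of natural measure alpha^m * beta^(n-m)
    total = 0.0
    for m in range(n + 1):
        mu = alpha**m * beta**(n - m)
        total += math.comb(n, m) * mu * math.log(1.0 / mu)
    return total

H1 = H_partition(1)              # alpha ln(1/alpha) + beta ln(1/beta)
for n in (2, 5, 10):
    print(n, H_partition(n) / n) # equals H1 for every n, so h(mu) = H1, Eq. (4.44)
```

The ratio \(H(\{W_{i}^{(n)}\})/n\) is independent of \(n\), which is exactly the statement that the limit (4.43b) is attained already at \(n=1\) for this partition.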
The information concerning the initial condition obtainable in such specifications of the initial condition and its first iterate is \(H(\{W_{i}^{(2)}\})=2[\alpha\ln(1/\alpha)+\beta\ln(1/\beta)]\), and the information _gained_ by observing the first iterate is \(H(\{W_{i}^{(2)}\})-H(\{W_{i}\})=\alpha\ln(1/\alpha)+\beta\ln(1/\beta)\). In fact, for the generalized baker's map, we obtain for all \(n\) the result \(H(\{W_{i}^{(n+1)}\})-H(\{W_{i}^{(n)}\})=\alpha\ln(1/\alpha)+\beta\ln(1/\beta)\), which is the metric entropy. Thus, the metric entropy gives the gain of information concerning the location of the initial condition per iterate, assuming that we can only observe with limited accuracy (i.e., we can only observe whether the orbit lies in \(W_{1}\) or \(W_{2}\)). This interpretation holds in general, independent of the map considered, as we now show. Consider an initial condition and its first \(n-1\) iterates, and assume that we observe which component of the initial partition \(\{W_{i}\}\) is visited on each iterate. Let \(W_{i_{m}}\) denote the component visited on iterate \(m\), \(\mathbf{x}_{m}=\mathbf{M}^{m}(\mathbf{x}_{0})\in W_{i_{m}}\). Having made such a set of observations, we can specify the region in which the initial condition must lie if it visits the \(W_{i_{m}}\) (\(m=0,\,1,\,\ldots,\,n-1\)) in the observed order: it must lie in the component of \(\{W_{i}^{(n)}\}\) given by \[W_{i_{0}}\cap\mathbf{M}^{-1}(W_{i_{1}})\cap\cdots\cap\mathbf{M}^{-(n-1)}(W_{i_{n-1}}).\] Thus, from (4.42), the information about the initial condition associated with such an observation of the first \(n-1\) iterates is \(H(\{W_{i}^{(n)}\})\). Using the following general result for limits of the form (4.43b), \[\lim_{n\to\infty}\frac{1}{n}H(\{W_{i}^{(n)}\})=\lim_{n\to\infty}[H(\{W_{i}^{(n)}\})-H(\{W_{i}^{(n-1)}\})],\] we see that we can interpret \(h(\mu)\) as the information gained per iterate for large iterate number.
Comparing (4.44) with the expression for the positive Lyapunov exponent of the generalized baker's map, Eq. (4.35a), we see that they are the same, \(h(\mu)=h_{1}\). In general, it has been proven that the metric entropy is at most the sum of the positive Lyapunov exponents (e.g., Ruelle, 1989), \[h(\mu)\leq\sum_{h_{i}>0}h_{i}. \tag{4.45a}\] For the generalized baker's map we have found that (4.45a) holds with the equality applying, \(h(\mu)=h_{1}\). Pesin (1976) has shown that \[h(\mu)=\sum_{h_{i}>0}h_{i} \tag{4.45b}\] applies for typical Hamiltonian systems (where the measure of interest is just the volume fraction of the relevant chaotic ergodic region; see Chapter 7). Subsequently, it was shown (e.g., Ruelle, 1989) that (4.45b) applies for _Axiom A_ attractors of dissipative dynamical systems. (An attractor is said to satisfy _Axiom A_ if it is hyperbolic and if the periodic orbits are dense in the attractor. The generalized baker's map satisfies these conditions.) We note, however, that it is unknown whether or not (4.45b) holds even for nonhyperbolic cases as simple as the attractor of the Henon map (Figure 1.12). Young (1982) has obtained an interesting rigorous result which relates the metric entropy to the information dimension of an ergodic invariant measure \(\mu\) of a smooth two dimensional invertible map with Lyapunov exponents \(h_{1}>0>h_{2}\). Namely, Young proves that \[D_{1}=h(\mu)\left(\frac{1}{h_{1}}+\frac{1}{|h_{2}|}\right). \tag{4.46}\] (Here, \(h_{1}\) and \(h_{2}\) are the Lyapunov exponents obtained for almost every \({\bf x}\) with respect to the measure \(\mu\).) Specializing to the case of the natural measure of a chaotic attractor, and comparing (4.46) with the Kaplan Yorke conjecture (4.38), we see that for the case of the natural measure of a two dimensional smooth invertible map with \(h_{1}>0>h_{2}\), the Kaplan Yorke conjecture reduces to the conjecture that \(h(\mu)=h_{1}\).
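For the generalized baker's map, Young's formula (4.46) and the Kaplan Yorke value can be compared directly. The sketch below is not from the text: the values of \(\alpha\) and of the horizontal contraction factors (called `lam_a` and `lam_b` here) are assumptions for illustration, and \(h_{2}\) is taken to be the measure weighted average of the logarithmic contraction rates, with \(h(\mu)=h_{1}\) as found above.

```python
import math

# Sketch comparing Young's formula (4.46) with the Kaplan-Yorke dimension for
# the generalized baker's map.  alpha, lam_a, lam_b are assumed values, and
# h2 is the measure-weighted average of the contraction rates (an assumption
# consistent with the map's construction, not a value quoted in the text).
alpha, lam_a, lam_b = 0.4, 0.25, 0.3
beta = 1.0 - alpha

h1 = alpha * math.log(1.0 / alpha) + beta * math.log(1.0 / beta)  # Eq. (4.35a)
h2 = alpha * math.log(lam_a) + beta * math.log(lam_b)             # negative exponent
h_mu = h1                                                         # metric entropy, Eq. (4.44)

D1_young = h_mu * (1.0 / h1 + 1.0 / abs(h2))   # Young's formula, Eq. (4.46)
D1_ky = 1.0 + h1 / abs(h2)                     # Kaplan-Yorke form
print(D1_young, D1_ky)                         # identical whenever h(mu) = h1
```

The two expressions agree identically, illustrating the reduction of the Kaplan Yorke conjecture to \(h(\mu)=h_{1}\) in two dimensions.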
Numerical experiments (e.g., on the Henon attractor) which tend to confirm the Kaplan Yorke conjecture may thus be taken as support for the equality of \(h(\mu)\) and \(h_{1}\). We now discuss the topological entropy (originally introduced by Adler, Konheim and McAndrew (1965)). The definition of the topological entropy for a map \(\mathbf{M}\) is based on the same construction of a succession of finer and finer partitions as was used in the definition of \(h(\mu)\). In particular, we start with some partition \(\{W_{i}\}\) and construct the succession of partitions \(\{W_{i}^{(n)}\}\) as before. Let \(N^{(n)}(\{W_{i}\})\) be the number of (nonempty) components of the partition \(\{W_{i}^{(n)}\}\) derived from \(\{W_{i}\}\), and let \[h_{\mathrm{T}}(\mathbf{M},\,\{W_{i}\})=\lim_{n\to\infty}\frac{1}{n}\ln N^{(n)}(\{W_{i}\}). \tag{4.47a}\] Now, maximizing over all possible beginning partitions, we obtain the topological entropy of the map \(\mathbf{M}\), \[h_{\mathrm{T}}(\mathbf{M})=\sup_{\{W_{i}\}}h_{\mathrm{T}}(\mathbf{M},\,\{W_{i}\}). \tag{4.47b}\] In particular, for the generalized baker's map example (Figure 4.18) we have \(N^{(n)}(\{W_{i}\})=2^{n}\) and \(h_{\mathrm{T}}=\ln 2\) (\(h_{\mathrm{T}}=\ln 2\) also holds for the invariant set of the horseshoe map). We note that the value \(h_{\mathrm{T}}=\ln 2\) is greater than or equal to \(h(\mu)=\alpha\ln(1/\alpha)+\beta\ln(1/\beta)\) (maximizing the expression \(\alpha\ln\alpha^{-1}+(1-\alpha)\ln(1-\alpha)^{-1}\) over \(\alpha\) yields \(\ln 2\) at \(\alpha=\tfrac{1}{2}\)). In fact, it is generally true that \[h_{\mathrm{T}}\geq h(\mu).\] Also, it can be shown that the topological entropies of \(\mathbf{M}\) and \(\mathbf{M}^{-1}\) are the same, \[h_{\mathrm{T}}(\mathbf{M})=h_{\mathrm{T}}(\mathbf{M}^{-1}). \tag{4.48}\] One of the key aspects of the topological entropy and the metric entropy is their invariance under certain classes of transformations of the map.
Indeed, it was these invariance properties which originally motivated the introduction of these quantities. In particular, \(h_{\mathrm{T}}\) is the same for the map \(\mathbf{M}\) and for any map derived from \(\mathbf{M}\) by a continuous, invertible (but not necessarily differentiable) change of the phase space variables. In such a case, we say that the derived map is _topologically conjugate_ to \(\mathbf{M}\). As an example, the function in Eq. (4.3) gives a topological conjugacy between the horseshoe map and the shift map (operating on the space of bi infinite symbol sequences with two symbols). Hence, the two maps have the same topological entropy (namely, \(\ln 2\)). Thus, if, as was done for the horseshoe, we can reduce a system to a shift on an appropriate bi infinite symbol space, then the topological entropy of the original map is fully determined by its symbolic dynamics. Hence, we see that the topological entropy of a map is useful from the point of view that it restricts the symbolic dynamics representation the map might have. The metric entropy, on the other hand, is invariant under isomorphisms; that is, one to one changes of variables (not necessarily continuous). The introduction of the metric entropy solved an important open mathematical question of the time. Namely, it was unknown whether two dynamical systems like the full shift on two symbols with equal probability measures (\(\frac{1}{2}\) and \(\frac{1}{2}\)) on each of the two symbols and the full shift on three symbols with equal probabilities on each of the three symbols were isomorphic. With Kolmogorov's introduction of the metric entropy the answer became immediate: since their metric entropies are \(\ln 2\) and \(\ln 3\), the equal probability full two shift and the equal probability full three shift cannot be isomorphic. 
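Returning to the comparison of the two entropies for the generalized baker's map, the inequality \(h_{\mathrm{T}}\geq h(\mu)\) is easy to check numerically. The sketch below (not part of the text) scans \(\alpha\) over a grid and confirms that \(\alpha\ln\alpha^{-1}+(1-\alpha)\ln(1-\alpha)^{-1}\) peaks at \(\ln 2\) for \(\alpha=\tfrac{1}{2}\).

```python
import math

# Check that the metric entropy of the generalized baker's map never exceeds
# the topological entropy h_T = ln 2, attaining it only at alpha = 1/2.
def h_metric(a):
    return a * math.log(1.0 / a) + (1.0 - a) * math.log(1.0 / (1.0 - a))

h_T = math.log(2.0)                          # N^(n) = 2^n nonempty components
grid = [i / 1000.0 for i in range(1, 1000)]  # alpha values in (0, 1)
best = max(h_metric(a) for a in grid)
print(best, h_T)                             # maximum over alpha is ln 2
```

For any \(\alpha\neq\tfrac{1}{2}\), \(h(\mu)<h_{\mathrm{T}}\): the metric entropy weights the two branches by the natural measure, while the topological entropy merely counts them.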
### 4.6 Chaotic flows and magnetic dynamos: the origin of magnetic fields in the Universe In this section we consider a particularly interesting physical problem as an example illustrating the utility in physics of some of the concepts we have developed. One of the most basic observed facts of nature is the presence of magnetic fields wherever there is flowing electrically conducting matter. Magnetic fields are observed to be present in planets with liquid cores, in the Sun and stars, and in the Galaxy. It is natural to ask why this is so. This question motivates the kinematic dynamo problem: _Will a small seed magnetic field in an initially unmagnetized, flowing, electrically conducting fluid amplify exponentially in time?_ If the answer is yes, then it is unnatural for magnetic fields not to be present. (Note that the kinematic dynamo problem is essentially a problem of linear stability. Thus the structure of magnetic fields as they are currently observed is not directly addressed, since current fields presumably have evolved to a nonlinear saturated state.) The answer to the stability question posed by the kinematic dynamo problem depends on the flow field of the fluid, on the electrical conductivity of the fluid, and on boundary conditions. For a given flow field and boundary conditions one can, in principle, ask for the conductivity dependence of the long time exponential growth rate \(\Gamma\) of a magnetic field perturbation. Vainshtein and Zeldovich (1972) suggest a classification of kinematic dynamos based on the electrical conductivity dependence of \(\Gamma\). In particular, if \(\Gamma\) approaches a positive constant as the conductivity approaches infinity, then they call the dynamo a _fast_ dynamo. Otherwise they call it a _slow_ dynamo. This important distinction is illustrated schematically in Figure 4.19.
The horizontal axis in Figure 4.19 is the magnetic Reynolds number, \(R_{m}\), which can be regarded as the dimensionless electrical conductivity; \(R_{m}=\mu_{0}\upsilon_{0}L_{0}\sigma\), where \(\mu_{0}\) is the (mks) magnetic permeability of vacuum (assuming that the fluid is nonmagnetic), \(\upsilon_{0}\) is a typical magnitude of the flow velocity, \(L_{0}\) is a typical length scale for spatial variation of the flow, and \(\sigma\) is the electrical conductivity of the fluid. In the Sun, for example, \(R_{m}>10^{8}\). Thus only fast kinematic dynamos are of interest in such cases. Here we shall be concerned only with fast dynamos. We adopt the simplest magnetohydrodynamic description. The basic equations are then Ampere's law for the magnetic field \(\mathbf{B}\) (\(\nabla\times\mathbf{B}=\mu_{0}\mathbf{J}\)), Faraday's law for the electric field \(\mathbf{E}\) (\(\nabla\times\mathbf{E}=-\partial\mathbf{B}/\partial t\)), Ohm's law for the current density \(\mathbf{J}\) (\(\mathbf{J}=\sigma\mathbf{E}^{\prime}=\sigma(\mathbf{E}+\mathbf{v}\times\mathbf{B})\), where \(\mathbf{E}^{\prime}\) is the electric field in the frame moving with the fluid velocity \(\mathbf{v}(\mathbf{x},\,t)\)), and \(\nabla\cdot\mathbf{B}=0\). Combining these by substituting the relation for \(\mathbf{J}\) into Ampere's law, then using the resulting equation to solve for \(\mathbf{E}\) in terms of \(\mathbf{B}\), and inserting this into Faraday's law, we obtain a single equation governing the evolution of the magnetic field, \(\partial\mathbf{B}/\partial t=\nabla\times(\mathbf{v}\times\mathbf{B})+(\mu_{0}\sigma)^{-1}\nabla^{2}\mathbf{B}\).
Assuming (for convenience) incompressibility of the flow (\(\nabla\cdot\mathbf{v}=0\)), this gives \[\partial\mathbf{B}/\partial t+(\mathbf{v}\cdot\nabla)\mathbf{B}=(\mathbf{B}\cdot\nabla)\mathbf{v}+R_{m}^{-1}\nabla^{2}\mathbf{B}, \tag{4.50}\] where \(t\) has been normalized to \(L_{0}/\upsilon_{0}\), spatial scales have been normalized to \(L_{0}\), and \(\mathbf{v}\) has been normalized to \(\upsilon_{0}\). Note that, for the kinematic dynamo problem, Eq. (4.50) is a linear equation in \(\mathbf{B}\) (there is no linear response of the velocity, since the Lorentz force, \(\mathbf{J}\times\mathbf{B}=\mu_{0}^{-1}(\nabla\times\mathbf{B})\times\mathbf{B}\), is quadratic in \(\mathbf{B}\)). Thus we may regard \(\mathbf{v}\) as an 'equilibrium' field determined by factors (e.g., convection, stirring, rotation) not appearing in Eq. (4.50). In the rest of this section we indicate through a simple model the relevance of chaos to this problem. In particular, we discuss the tendency of the magnetic field to concentrate on a fractal set, and we mention that the topological entropy provides an upper bound on the exponential growth rate of the magnetic field. Now consider our flowing electrically conducting fluid. The equation describing the position \(\mathbf{x}(t)\) of a fluid element is \[{\rm d}{\bf x}(t)/{\rm d}t={\bf v}[{\bf x}(t),\,t]. \tag{4.51}\] We say the flow is _Lagrangian chaotic_ if the differential separation \(\delta{\bf x}(t)\) between orbits from two differentially displaced typical initial conditions \({\bf x}(0)\) and \({\bf x}(0)+\delta{\bf x}(0)\) grows exponentially with time; i.e., if Eq. (4.51) has a positive Lyapunov exponent. The evolution of \(\delta{\bf x}\) follows from taking a differential variation of Eq. (4.51), \[{\rm d}\,\delta{\bf x}/{\rm d}t=(\delta{\bf x}\cdot\nabla){\bf v}[{\bf x}(t),\,t].
\tag{4.52}\] Using Faraday's law and Stokes' theorem, it can be shown that, if one considers an arbitrary surface \(\Sigma(t)\) bounded by a closed curve \(C(t)\), where the surface \(\Sigma(t)\) is convected by the fluid, then \({\rm d}/{\rm d}t\big(\int_{\Sigma}{\bf B}\cdot{\rm d}{\bf S}\big)=-\oint_{C}{\bf E}^{\prime}\cdot{\rm d}\boldsymbol{\ell}\), where \(\int_{\Sigma}\) denotes a surface integral over \(\Sigma\) and \(\oint_{C}\) denotes a line integral over the closed curve \(C\). In the case of a perfectly conducting fluid (i.e., the electrical conductivity is infinite, \(\sigma=\infty\) or \(R_{m}=\infty\)), Ohm's law reduces to \({\bf E}^{\prime}={\bf E}+{\bf v}\times{\bf B}=0\). Thus for \(\sigma=\infty\) (referred to as the 'ideal limit') we have that the magnetic flux through any surface convected with the flow is conserved, \(\int_{\Sigma}{\bf B}\cdot{\rm d}{\bf S}={\rm const}\). It is said that the magnetic flux is 'frozen in' to the fluid in the ideal limit. Now consider Eq. (4.50) in the ideal limit, which corresponds to omitting the term \(R_{m}^{-1}\nabla^{2}{\bf B}\), \[{\rm d}\tilde{\bf B}/{\rm d}t\equiv\partial\tilde{\bf B}/\partial t+({\bf v}\cdot\nabla)\tilde{\bf B}=(\tilde{\bf B}\cdot\nabla){\bf v}, \tag{4.53}\] where we use the symbol \(\tilde{\bf B}\) for magnetic fields in the ideal limit. Comparing Eq. (4.52) for \(\delta{\bf x}\) and Eq. (4.53) for \(\tilde{\bf B}\), we see that the equations are the same. This is a consequence of the frozen in nature of the magnetic field at infinite conductivity, and it means that the magnetic field grows in proportion to the stretching of magnetic field lines by the flow. This is illustrated in Figure 4.20, which shows a tube of fluid of length \(\ell\) and cross sectional area \(A\) containing a magnetic field \(B\) being stretched to a longer (length \(\ell^{\prime}\)), thinner (cross sectional area \(A^{\prime}\)) tube containing a magnetic field \(B^{\prime}\). By flux conservation \(BA=B^{\prime}A^{\prime}\).
By incompressibility of the flow, \(\ell A=\ell^{\prime}A^{\prime}\). Thus \(B^{\prime}=(\ell^{\prime}/\ell)B\), and the magnetic field is amplified in proportion to how much it is stretched (Figure 4.20: magnetic field amplification via field line stretching by the flow). The connection between fast dynamos and chaos is now clear: chaos implies exponential growth of \(\delta{\bf x}\) in Eq. (4.52) and hence exponential field line stretching, and for a dynamo we need exponential growth of \({\bf B}\). There is a catch, however. In particular, the ideal equation (4.53) can never be fully justified, even for very large \(R_{m}\). What typically happens for chaotic flows is that, as \(R_{m}\) becomes large, \({\bf B}\) develops more fine scale structure, so that \(R_{m}^{-1}\nabla^{2}{\bf B}\) in Eq. (4.50) remains of the same order as the other terms in (4.50). This implies that \({\bf B}\) varies on small spatial scales of order \[\epsilon_{*}\sim R_{m}^{-1/2}. \tag{4.54}\] (Recall that we use the normalizations introduced in Eq. (4.50), so that \({\bf v}\sim O(1)\) and the typical scale for spatial variation of \({\bf v}\) is also \(O(1)\).) In spite of this, the ideal treatment is still a powerful (and correct) indication that Lagrangian chaos is the key to fast dynamo action. This point was first explicitly made in the paper of Arnold _et al._ (1981), who considered a chaotic flow in an abstract space of constant negative geodesic curvature (not the usual Euclidean space of classical physics), and the point was subsequently made more physically relevant (Bayly and Childress, 1988; Finn and Ott, 1988) by considerations for flows in ordinary Euclidean space. By now this subject is well developed. (For more material and additional references on fast dynamos see the review article by Ott (1998) and the book by Childress and Gilbert (1995).)
An important property of kinematic dynamos is the result known as Cowling's antidynamo theorem, which states that dynamo action is only possible if the magnetic field has three dimensional structure. While this result applies independent of \(R_{m}\), it is instructive for us to illustrate it in the large \(R_{m}\) limit by using the ideal equation (Eq. (4.53)). In particular, even though Lagrangian chaos is possible in (time dependent) two dimensional flows, we now show that fast kinematic dynamos are not possible in two dimensions. Consider a fluid confined to a rectangular region with rigid, perfectly conducting walls as shown in Figure 4.21. The line shown in Figure 4.21(_a_) represents the initial configuration of a single field line. The dashed horizontal line segment \(S\) is crossed by the field line in the upward direction. After some time, during which the field line is stretched by the Lagrangian chaotic fluid flow (causing the field line length to increase exponentially with time), the configuration is as shown in Figure 4.21(_b_). Although the number of crossings of \(S\) by the field line has increased, due to cancellation the _net_ upward flux through \(S\) in Figure 4.21(_b_) is the same as in Figure 4.21(_a_). The two dimensional topological constraint that the field line cannot cross itself prevents net exponential flux growth through \(S\), even though the flux line is exponentially stretched. As shown below, the situation is fundamentally different in three dimensions. In what follows we develop a series of very simplified map based models. These models, although apparently very far from realizable flows, are extremely useful for understanding and motivating considerations applicable to real flows. For example, numerical results by Reyl _et al._ (1996), using a smooth flow and solving Eq.
(4.50) at a very high magnetic Reynolds number (e.g., \(R_{m}=10^{5}\)), illustrate the applicability to real flows of the general qualitative results obtained from the simple cartoon type models. To begin, we consider the paradigmatic model fast dynamo introduced by Vainshtein and Zeldovich (1972) and illustrated in Figure 4.22. At \(t=0\) there is a toroidal flux tube with flux \(\Phi_{0}\) circulating through the tube (Figure 4.22(_a_)). The tube is then uniformly and incompressibly stretched to twice its original length. The surrounding medium is also incompressible and flows in such a way as to accommodate the deformation of the flux tube. Considering the conductivity of the fluid to be infinite, the magnetic flux is frozen in, and the circulating flux in the stretched toroidal tube in Figure 4.22(_b_) is still \(\Phi_{0}\). The tube is then twisted into a figure of eight (Figure 4.22(_c_)) and folded back into its original volume (Figure 4.22(_d_)). The total flux through a surface transverse to the torus is now \(2\Phi_{0}\). Thus, the flux growth rate for the perfectly conducting case is \(\tilde{\Gamma}=(\ln 2)/T\), where \(T\) is the cycle time. That is, we imagine that the process in Figure 4.22 is repeated cyclically. With this in mind, we note that Figure 4.22 corresponds to a geometric specification of a periodic flow \(\mathbf{v}(\mathbf{x},\,t)=\mathbf{v}(\mathbf{x},\,t+T)\): since Figure 4.22 corresponds to an incompressible deformation of the torus, there is an incompressible flow which corresponds to Figure 4.22. _Uniform_ stretching of the initial flux tube is not expected as a generic property of fluid flows. There is nothing to constrain the stretching experienced following one magnetic field line segment to be precisely the same as that following another segment located at some other point in space.
To model the effect of such nonuniform stretching, we now imagine that we follow the process in Figure 4.22, but that we do the stretching in a nonuniform manner. We can schematically imagine (Finn and Ott, 1988) that we divide the initial torus into two unequal parts, one taking up a fraction \(\alpha<\frac{1}{2}\) of the circumference and the other a fraction \(\beta\equiv 1-\alpha>\frac{1}{2}\). This is illustrated in Figure 4.23(_a_). We then incompressibly stretch the lower piece by a factor \(1/\alpha\) and the upper piece by a factor \(1/\beta\). The lower and upper pieces are now each of the same length as the circumference of the original torus (Figure 4.23(_a_)). Note that the magnetic flux through any cross section of the stretched torus (Figure 4.23(_b_)) is still equal to the initial flux \(\Phi_{0}\). The stretched torus is now twisted and folded as before. The result is again a torus with twice the circulating flux as it had at the beginning of the cycle. Thus the nonuni form stretching does not alter the flux growth rate for this example, and we again have \(\tilde{\Gamma}=(\ln 2)/T\). The nonuniform stretching does, however, have an important effect on the distribution of the magnetic field. Inparticular, looking at a cross section of the torus after one cycle, there will be two regions, as shown in Figure 4.23(_c_), one of area \(\alpha A\) (where \(A\) is the original cross sectional area; cf. Figure 4.23(_a_)) and magnetic field \(B_{0}/\alpha(\Phi_{0}\equiv B_{0}A)\), and the other area \(\beta A\) and magnetic field \(B_{0}/\beta\). The areas each contain a frozen in flux \(\Phi_{0}\). Thus for uniform stretching (\(\alpha=\beta=\frac{1}{2}\)) the magnetic field in the torus is homogeneous, while in the case of nonuniform stretching it is inhomogeneous. As the process is iterated for more and more cycles, the inhomogeneity becomes more and more pronounced, and, as we shall see, the magnetic field tends to concentrate on a fractal. 
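The bookkeeping behind this concentration effect is easy to verify numerically. The following minimal sketch is our own illustration (the value \(\alpha=0.3\) and all function names are arbitrary choices, not from the text): each cycle splits every cross-sectional region in two, one piece with its area multiplied by \(\alpha\) and its field divided by \(\alpha\), the other using \(\beta=1-\alpha\).

```python
from itertools import product

def cross_section(n, alpha, A=1.0, B0=1.0):
    """Cross-sectional regions of the flux tube after n nonuniform
    stretch-twist-fold cycles: each cycle splits every region in two,
    one part with area scaled by alpha and field scaled by 1/alpha,
    the other using beta = 1 - alpha."""
    beta = 1.0 - alpha
    regions = []
    for seq in product((alpha, beta), repeat=n):
        shrink = 1.0
        for s in seq:
            shrink *= s
        regions.append((A * shrink, B0 / shrink))   # (area, field strength)
    return regions

regions = cross_section(5, alpha=0.3)
total_flux = sum(area * field for area, field in regions)
peak_ratio = max(f for _, f in regions) / min(f for _, f in regions)
# Each of the 2**5 regions carries the frozen-in flux B0*A = Phi0, so the
# total flux is 2**5 * Phi0 while the field grows increasingly inhomogeneous.
print(len(regions), total_flux, peak_ratio)
```

Every region always carries exactly the frozen-in flux \(\Phi_{0}=B_{0}A\), so the total flux doubles each cycle, while the ratio of strongest to weakest field grows like \((\beta/\alpha)^{n}\).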
A convenient two dimensional model (Finn and Ott, 1988) that captures the essential features of the stretch twist fold dynamo discussed above is illustrated in Figure 4.24. Here, we consider a perfectly conducting square sheet with a frozen in magnetic field (Figure 4.24(_a_)). We can think of the \(y\) direction in the figure as the distance the long way around the torus in Figure 4.23(_a_). We then incompressibly deform the square by stretching the bottom section (\(0<y<\alpha\)) by a factor \(1/\alpha\) and the top section (\(\alpha<y<1\)) by a factor \(1/\beta\) (Figure 4.24(_b_)). We then separate the two resulting parts and imagine that we cut the magnetic field lines that connect them (Figure 4.24(_c_)). The two pieces are then reassembled into the original square as shown in Figure 4.24(_d_). The operation of cutting the magnetic field lines is nonphysical, but allows us to simulate in two dimensions the inherently three dimensional stretch twist fold operation of Figure 4.22 (the step in Figure 4.22 that required the third dimension is the twist, Figure 4.22(_c_)). Note that, although Cowling's antidynamo theorem rules out the existence of dynamo action in two dimensions, our model, Figure 4.24, will be a two dimensional dynamo by virtue of the nonphysical field line cutting in Figure 4.24(_c_).

Figure 4.23: Nonuniform stretch twist fold. (Using two discrete expansion rates, \(\alpha^{-1}\) and \(\beta^{-1}\), rather than a continuum of expansion rates, results in a discontinuity in the cross section in (_b_). Thus we do not mean the process in this figure to be taken too literally. We use it primarily as a heuristic motivation for the two dimensional map model, Figure 4.24.)
Since the flux is frozen in, the flux through a horizontal line (\(y=\text{const.}\)) across the \(\alpha\) width strip and the flux through a horizontal line across the \(\beta\) width strip in Figure 4.24(_d_) are each the same as the flux through a horizontal line across the entire square in Figure 4.24(_a_). Thus, as in the stretch twist fold (Figures 4.22 and 4.23), the flux across the whole square in Figure 4.24(_d_) is twice that in Figure 4.24(_a_). Hence we again obtain \(\tilde{\Gamma}=(\ln 2)/T\). Also, the ratio of the area containing high field to the area containing low field is \(\alpha/\beta\), as is the case for the cross section in Figure 4.23(_c_). Analytically, the map specified by Figure 4.24 is a special case of the generalized baker's map (\(\lambda_{a}=\alpha\), \(\lambda_{b}=\beta\) in Eq. (3.7)). As noted in the previous section, the topological entropy for the generalized baker's map is \(h_{\rm T}=\ln 2\). Thus, for this example, in the \(R_{m}\to\infty\) limit, \(h_{\rm T}\) gives the long time exponential flux growth rate \(\Gamma\). Other solvable examples (Ott, 1998) show that it is possible to have \(\Gamma<h_{\rm T}\), and it has been conjectured (Finn and Ott, 1988) and subsequently rigorously proven (Klapper and Young, 1995) that \(h_{\rm T}\) provides an upper bound to the magnetic field exponential growth rate \(\Gamma\). Now consider the spatial distribution of the evolving magnetic field. In particular, imagine starting with a uniform upward magnetic field \(B_{0}\) and iterating our baker's map (Figure 4.24) \(n\) times. This results in a magnetic field \(B_{n}(x)\) which is again upward. We introduce a normalized field, \[b_{n}(x)=B_{n}(x)/(2^{n}B_{0}),\] where the normalization is such that the total normalized flux, \(\int_{0}^{1}b_{n}(x)\,{\rm d}x\), is one.
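A minimal numerical sketch of this construction (our own illustrative code; \(\alpha=0.3\) and the iteration count are arbitrary choices) represents \(b_{n}(x)\) as a piecewise-constant function and applies the field-doubling baker's map directly:

```python
def iterate_field(segments, alpha):
    """One application of the field-doubling baker's map to the
    normalized field b_n(x), stored as a piecewise-constant list of
    (width, value) pairs ordered from x = 0 to x = 1."""
    beta = 1.0 - alpha
    # Lower strip: compressed into [0, alpha], field amplified by 1/alpha;
    # upper strip: compressed into [alpha, 1], field amplified by 1/beta.
    # The extra factor of 2 is the 2**n normalization of b_n.
    lower = [(alpha * w, b / (2.0 * alpha)) for w, b in segments]
    upper = [(beta * w, b / (2.0 * beta)) for w, b in segments]
    return lower + upper

alpha = 0.3
segments = [(1.0, 1.0)]               # b_0(x) = 1 on [0, 1]
for _ in range(10):
    segments = iterate_field(segments, alpha)

total = sum(w * b for w, b in segments)
peak = max(b for _, b in segments)
print(total, peak)   # total stays ~1; the peak grows like (2*alpha)**(-n)
```

The printed total normalized flux remains one after every application, while the peak of \(b_{n}\) grows without bound for \(\alpha\neq\frac{1}{2}\), which is the increasing inhomogeneity described in the text.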
Based on this normalized field we introduce a flux measure \(\mu_{B}\) defined such that the measure of an interval \(x_{1}\le x\le x_{2}\) (\(x_{1}\ge 0\), \(x_{2}\le 1\)) is \[\mu_{B}(x_{1},\,x_{2})=\lim_{n\to\infty}\int_{x_{1}}^{x_{2}}b_{n}(x)\,{\rm d}x. \tag{4.55}\] We show subsequently that this measure is a fractal measure by determining its \(D_{q}\) dimension spectrum. The result will be that for nonuniform stretching \(D_{q}<1\) for \(q>0\) (i.e., the measure is fractal), while for uniform stretching \(D_{q}=1\). Since we expect nonuniform stretching in generic situations, the conclusion is that, in general, we expect magnetic flux to be concentrated on a fractal set. This conclusion, although here established only for our cartoon type model, is expected to apply in general (e.g., Ott, 1998, and references therein). It should be noted that this result is for the ideal case, \(R_{m}=\infty\). What is the effect of large but finite \(R_{m}\)? According to Eq. (4.54), \(\epsilon_{*}\), the smallest scale length for variation of the magnetic field, is of order \(R_{m}^{-1/2}\). Thus we can expect that the magnetic field appears to be fractal if we examine it with a measuring instrument whose spatial resolution is not capable of resolving scales as small as \(\epsilon_{*}\). Put another way, to calculate \(D_{q}\) numerically from data using the definition, Eq. (3.14), one typically plots \((1-q)^{-1}\ln I(q,\,\epsilon)\) versus \(\ln(1/\epsilon)\) and attempts to fit a straight line to the plot for small \(\epsilon\). The slope of this line is then taken as an estimate of \(D_{q}\). For our dynamo situation with small but nonzero \(\epsilon_{*}\) (i.e., large \(R_{m}\)), the straight line scaling with the ideal (\(R_{m}\to\infty\)) slope \(D_{q}\) would hold for \(\epsilon>\epsilon_{*}\).
However, since the field becomes smooth on scales \(\epsilon<\epsilon_{*}\), the scaling is expected to cross over to a slope of one (nonfractal behavior) for \(\epsilon<\epsilon_{*}\). This type of 'truncated' fractal behavior is common to many other examples of physical fractals, where some form of small scale smoothing is very commonly operative. We now obtain \(D_{q}\) for the example of Figure 4.24 in the ideal limit (\(R_{m}\to\infty\)). Say we divide \(0\le x\le 1\) into \(M\) equal intervals of length \(\epsilon=1/M\). We note the following important similarity property: if we imagine that we have a flux \(\mu_{i}\) in each of the \(M\) intervals of length \(\epsilon\) and apply the map to that flux and divide the result by 2, then we obtain \(M\) intervals of width \(\alpha\epsilon\) in \(0\le x\le\alpha\) and \(M\) intervals of width \(\beta\epsilon\) in \(\alpha\le x\le 1\), with the \(\alpha\) and \(\beta\) segments each having within it a replica of the original flux distribution on the whole interval (cf. Section 3.4 for similar reasoning). Thus to calculate \(D_{q}\) we write \(I(q,\,\epsilon)\) as \(I(q,\,\epsilon)=I_{a}(q,\,\epsilon)+I_{b}(q,\,\epsilon)\), where \(I_{a}(q,\,\epsilon)\) is the sum of the flux measures \(\mu_{i}^{q}\) over all the \(\epsilon\) intervals lying in \(0\le x\le\alpha\), and \(I_{b}(q,\,\epsilon)\) is the sum over the \(\epsilon\) intervals lying in \(\alpha\le x\le 1\). From the similarity property we have \[I_{a}(q,\,\alpha\epsilon)=\sum_{i=1}^{M=1/\epsilon}\left(\tfrac{1}{2}\mu_{i}\right)^{q}=(1/2^{q})I(q,\,\epsilon),\] and similarly for \(I_{b}\). Thus \[I(q,\,\epsilon)=2^{-q}I(q,\,\epsilon/\alpha)+2^{-q}I(q,\,\epsilon/\beta). \tag{4.56}\] As in Eq. (3.22), we take \(I(q,\,\epsilon)\sim\epsilon^{(q-1)D_{q}}\). Equation (4.56) then yields a transcendental equation for \(D_{q}\), \[2^{q}=\alpha^{-(q-1)D_{q}}+\beta^{-(q-1)D_{q}}.
\tag{4.57}\] Letting \(q\to 1\), the information dimension is \[D_{1}=(\ln 2)/[\ln(1/\sqrt{\alpha\beta})].\] For \(\alpha=\beta=\frac{1}{2}\) (uniform stretching) the solution of Eq. (4.57) for \(D_{q}\) is \(D_{q}=1\) for all \(q\). For \(q=0\) we obtain \(D_{0}=1\) for any \(\alpha\), \(\beta=(1-\alpha)\). For \(\alpha\neq\beta\) (nonuniform stretching), the solution of Eq. (4.57) decreases monotonically with increasing \(q\), and hence is less than one for \(q>0\). This indicates the tendency for the magnetic field to concentrate on a fractal set.

## Appendix: Gram–Schmidt orthogonalization

In this appendix we briefly state the Gram–Schmidt orthogonalization procedure used in calculating Lyapunov exponents. Say we are given \(k\) linearly independent vectors \({\bf v}_{1}\), \({\bf v}_{2}\), \(\ldots\), \({\bf v}_{k}\) which lie in a vector space of dimension \(N\ge k\). We wish to find a set of \(k\) orthonormal basis vectors \({\bf e}_{1}\), \({\bf e}_{2}\), \(\ldots\), \({\bf e}_{k}\) for the subspace spanned by the vectors \({\bf v}_{1}\), \(\ldots\), \({\bf v}_{k}\). That is, we wish to determine the \({\bf e}_{i}\) such that \({\bf e}_{i}^{\dagger}{\bf e}_{j}=\delta_{ij}\), where \(\delta_{ij}\) is the Kronecker delta and each \({\bf e}_{i}\) is a linear combination of the \({\bf v}\)s for \(i=1\), \(\ldots\), \(k\). A solution to this problem is \[\mathbf{e}_{i}=\left[\mathbf{v}_{i}-\sum_{j=1}^{i-1}(\mathbf{v}_{i}^{\dagger}\mathbf{e}_{j})\mathbf{e}_{j}\right]\Big/\beta_{i},\] \[\beta_{i}=\left\|\mathbf{v}_{i}-\sum_{j=1}^{i-1}(\mathbf{v}_{i}^{\dagger}\mathbf{e}_{j})\mathbf{e}_{j}\right\|,\] with \(\mathbf{e}_{1}=\mathbf{v}_{1}/\|\mathbf{v}_{1}\|\) and \(\|\mathbf{w}\|=(\mathbf{w}^{\dagger}\mathbf{w})^{1/2}\). Thus, iterating the equation for \(\mathbf{e}_{i}\), we can determine the \(k\) orthonormal basis vectors by starting at \(i=1\) and proceeding to successively larger \(i\).
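For real vectors the procedure above translates directly into a few lines of code. The following sketch is our own illustration (plain Python lists rather than any particular linear algebra library); it returns both the orthonormal basis and the normalization factors \(\beta_{i}\):

```python
def gram_schmidt(vectors):
    """Orthonormalize linearly independent real vectors, returning the
    orthonormal basis e_i and the normalization factors beta_i."""
    basis, betas = [], []
    for v in vectors:
        # Subtract from v its projections onto the e_j found so far.
        w = list(v)
        for e in basis:
            proj = sum(vi * ei for vi, ei in zip(v, e))
            w = [wi - proj * ei for wi, ei in zip(w, e)]
        beta = sum(wi * wi for wi in w) ** 0.5
        betas.append(beta)
        basis.append([wi / beta for wi in w])
    return basis, betas

e, betas = gram_schmidt([(1.0, 1.0, 0.0), (0.0, 1.0, 1.0)])
# Orthonormality check: e_i . e_j should equal the Kronecker delta.
err = max(abs(sum(a * b for a, b in zip(e[i], e[j])) - (i == j))
          for i in range(2) for j in range(2))
print(err < 1e-12)   # True
# The product beta_1*beta_2 is the area of the parallelogram spanned
# by the two input vectors (here sqrt(3)).
print(betas[0] * betas[1])
```

In Lyapunov exponent calculations one applies such a step after each evolution of the tangent vectors, and the time averages of the \(\ln\beta_{i}\) estimate the exponents.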
The volume of the \(k\) dimensional parallelepiped defined by the vectors \(\mathbf{v}_{1}\), \(\ldots\), \(\mathbf{v}_{k}\) is \[\alpha^{(k)}=\beta_{1}\beta_{2}\cdots\beta_{k}.\]

## Problems

1. For the map shown in Figure 2.29 draw a transition diagram for the regions \(A=(0,\,\frac{1}{3})\), \(B=(\frac{1}{3},\,\frac{2}{3})\), \(C=(\frac{2}{3},\,1)\). How many period four orbits are there, and in what order do they visit \(A\), \(B\) and \(C\)?

2. For the maps illustrated in Figures 4.25(\(a\)) and (\(b\)) describe the symbolic dynamics for the invariant set in the original rectangle \(S\). How many distinct periodic orbits of period four are there for the map in Figure 4.25(\(a\))? For the map in Figure 4.25(\(b\)), specify the restrictions on the allowed symbol sequences by drawing a figure like those in Figures 4.5(\(c\)) and (\(d\)).

3. Show that the horseshoe map has an uncountable set of nonperiodic orbits in the invariant set \(\Lambda\).

4. Consider linear maps \(\mathbf{x}_{n+1}=\mathbf{M}(\mathbf{x}_{n})\) where \(\mathbf{M}(\mathbf{x})=\mathbf{A}\cdot\mathbf{x}\) and \(\mathbf{A}\) is a matrix. For the following matrices \(\mathbf{A}\) describe the dynamics and say whether or not the dynamics is hyperbolic: (\(a\)) ..., (\(b\)) ....

6. For the Hénon map there are two fixed points. Find them. Which one is on the attractor (refer to Figure 1.12)? For the one on the attractor compute the eigenvalues and eigenvectors of the linearized matrix. What striking fact does one see by comparing the direction of the unstable eigenvector with the direction of the striations of the attractor (Figures 1.12(\(b\)) and (\(c\)))?

7. Consider the following map: \[x_{n+1}=(x_{n}+y_{n}),\;y_{n+1}=ay_{n}+kp(x_{n}+I),\] where \(a\) and \(k\) are positive constants, and the function \(p(x)\) is periodic with period 1. In particular, \(p(x)\) is the 'sawtooth' function illustrated in Figure 4.26. * Is the map invertible? Why?
* Show that for \(1>a>0\) all orbits eventually enter the strip region \(|y|\le k/[2(1-a)]\), and never leave it. * For \(k=\frac{1}{4}\) and \(a=\frac{3}{4}\) what are the Lyapunov exponents of the map for typical orbits? Is the dynamics chaotic? Why? What is the Kaplan–Yorke prediction for the information dimension of the attractor? * Again for \(k=\frac{1}{4}\) and \(a=\frac{3}{4}\) describe the stable and unstable manifolds of the fixed point \(x=y=0\).

8. Consider the following two-dimensional map of points on a toroidal surface specified by angles \(\theta\), \(\varphi\) (which are defined modulo \(2\pi\)), \[\theta_{n+1}=(2\theta_{n}+\varphi_{n})\,\text{mod}\,2\pi,\] \[\varphi_{n+1}=(3\theta_{n}+2\varphi_{n})\,\text{mod}\,2\pi.\] * Is this map area preserving or area contracting? * What are the Lyapunov exponents? * What are the stable and unstable tangent spaces \(E^{\text{s}}\) and \(E^{\text{u}}\) at the fixed point \(\theta=\pi\), \(\varphi=\pi\)?

9. Write a computer program to calculate the largest Lyapunov number \(h_{1}\) of the Hénon map. Using the orbit originating from the initial condition (\(x_{0}\), \(y_{0}\)) = (0, 0), plot a graph of \(h_{1}\) versus \(B\) for \(A=1.4\) and \(0\le B\le 0.3\). Using the fact that \(h_{1}+h_{2}=\ln B\), also plot the Lyapunov dimension in this range.

10. Consider the forced damped pendulum equation, Eq. (1.6), with parameters \(\nu=0.22\), \(T=2.7\) and \(f=1/2\pi\) (these are the parameters used for Figure 1.13). Numerically, it is found that \(h_{1}\approx 0.135\). What are \(h_{2}\), \(h_{3}\) and the Lyapunov dimension?

11. * Draw a transition diagram showing allowed transitions amongst the three regions. How many distinct periodic orbits of period five are there, and what is the order in which each such orbit visits regions 1, 2 and 3 shown in part (\(a\))? * For a typical orbit on the attractor of this map what are the Lyapunov exponents?
* For a typical orbit on the attractor of this map, what is the time average of \(y\): \(\langle y\rangle=\lim_{m\to\infty}\frac{1}{m}\sum_{n=1}^{m}y_{n}\)?

Figure 4.28: Partition of (\(x\), \(y\)) space for Problem 11.

12. Consider the three-dimensional map given by \[x_{n+1}=\begin{cases}\lambda_{x}x_{n}&\text{if }\alpha\ge z_{n}>0,\\ \lambda_{x}x_{n}&\text{if }\alpha+\beta\ge z_{n}>\alpha,\\ \lambda_{x}x_{n}+(1-\lambda_{x})&\text{if }1\ge z_{n}>\alpha+\beta,\end{cases}\] \[y_{n+1}=\begin{cases}\lambda_{y}y_{n}&\text{if }\alpha\ge z_{n}>0,\\ \lambda_{y}y_{n}+(1-\lambda_{y})&\text{if }\alpha+\beta\ge z_{n}>\alpha,\\ \lambda_{y}y_{n}&\text{if }1\ge z_{n}>\alpha+\beta,\end{cases}\] \[z_{n+1}=\begin{cases}z_{n}/\alpha&\text{if }\alpha\ge z_{n}>0,\\ (z_{n}-\alpha)/\beta&\text{if }\alpha+\beta\ge z_{n}>\alpha,\\ [z_{n}-(\alpha+\beta)]/\gamma&\text{if }1\ge z_{n}>\alpha+\beta,\end{cases}\] where \(\alpha+\beta+\gamma\equiv 1\), \(0<2\lambda_{x}\le 1\) and \(0<2\lambda_{y}\le 1\). The action of the map on the basic unit cube, \(0\le x\le 1\), \(0\le y\le 1\), \(0\le z\le 1\), is illustrated in Figures 4.29(_a_) and 4.29(_b_). The slabs A, B and C in Figure 4.29(_a_) are vertically stretched by \(1/\alpha\), \(1/\beta\) and \(1/\gamma\) respectively. They are then compressed in \(x\) (by \(\lambda_{x}\le\frac{1}{2}\)) and compressed in \(y\) (by \(\lambda_{y}\le\frac{1}{2}\)), after which they are positioned as shown in Figure 4.29(_b_). 1. What are the Lyapunov exponents of the natural measure? 2. Applying the Kaplan–Yorke formula, find the information dimension if \(\alpha=\frac{1}{2}\), \(\beta=\frac{1}{4}\), \(\gamma=\frac{1}{4}\), \(\lambda_{y}=\frac{1}{8}\). What is it if \(\alpha\), \(\beta\) and \(\gamma\) are the same values, but \(\lambda_{x}=\frac{1}{2}\) and \(\lambda_{y}=\frac{1}{4}\)? 3. There is a period two orbit that alternately visits regions A and C of Figure 4.29.
Find it (i.e., specify its \(x\), \(y\), \(z\)-coordinates). Describe its stable and unstable tangent spaces. What are the Lyapunov exponents following this orbit?

2. In particular, if \(J\) is chosen small enough, one can always ensure that the action of the map \({\bf M}^{q}\) on the intersections \(S\cap{\bf M}^{q}(S)\) is hyperbolic (defined in Eqs. (4.27)). 3. In particular, if there is a chaotic attractor at \(r_{1}\) and \(r_{2}>r_{1}\), the map at \(r_{2}\) has more periodic orbits than the map at \(r_{1}\). This follows because windows imply the occurrence of forward tangent bifurcations and period doublings creating new periodic orbits. 4. For the case of a chaotic attractor, the attractor must contain the unstable manifold of every point on the attractor. For example, for the Hénon attractor there is an unstable saddle fixed point located in the middle of the rectangle in Figure 1.12(\(c\)), and it has been suggested, and somewhat supported by numerical calculations, that the attractor coincides with the closure of the unstable manifold of that fixed point. 5. For the case of a hyperbolic invariant set satisfying Eqs. (4.27) there are no Lyapunov exponents in the range \(-\ln\rho^{-1}<h<\ln\rho^{-1}\).

## Chapter 5 Nonattracting chaotic sets

We have already encountered situations where chaotic motion was nonattracting. For example, the map Eq. (3.3) had an invariant Cantor set in [0, 1], but all initial conditions except for a set of Lebesgue measure zero eventually leave the interval [0, 1] and then approach \(x=\pm\infty\). Similarly, the horseshoe map has an invariant set in the square \(S\) (cf. Figure 4.1), but again all initial conditions except for a set of Lebesgue measure zero eventually leave the square.1 The invariant sets for these two cases are examples of nonattracting chaotic sets.
While it is clear that chaotic attractors have practically important observable consequences, it may not at this point be clear that nonattracting chaotic sets also have practically important observable consequences. Perhaps the three most prominent consequences of nonattracting chaotic sets are the phenomena of _chaotic transients_, _fractal basin boundaries_, and _chaotic scattering_.

Footnote 1: Although the dynamics on such an invariant set is chaotic, the set is not a chaotic attractor.

The term chaotic transient refers to the fact that an orbit can spend a long time in the vicinity of a nonattracting chaotic set before it leaves, possibly moving off to some nonchaotic attractor which governs its motion ever after. During the initial phase, when the orbit is in the vicinity of the nonattracting chaotic set, its motion can appear to be very irregular and is, for most purposes, indistinguishable from motion on a chaotic attractor. Say we sprinkle a large number of initial conditions with a uniform distribution in some phase space region \(W\) containing the nonattracting chaotic set. Then the length of the chaotic transient that a given one of these orbits experiences depends on its initial condition. The number \(N(\tau)\) of orbits still in the chaotic transient phase of their orbit after a time \(\tau\) typically decays exponentially with \(\tau\), \(N(\tau)\sim\exp(-\tau/\langle\tau\rangle)\), for large \(\tau\). Thus, the fraction of orbits \(P(\tau)\mathrm{d}\tau\) with chaotic transient lengths between \(\tau\) and \(\tau+\mathrm{d}\tau\) is \[P(\tau)=-\mathrm{d}N(\tau)/\mathrm{d}\tau\sim\exp(-\tau/\langle\tau\rangle), \tag{5.1}\] where we call \(\langle\tau\rangle\) the average lifetime of a chaotic transient. We can also interpret \(P(\tau)\) as the probability distribution of \(\tau\), given that we choose an initial condition randomly in the region \(W\) containing the nonattracting chaotic set.
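As a concrete (and purely illustrative) check of this exponential decay law, the sketch below uses the logistic map \(x_{n+1}=rx_{n}(1-x_{n})\) with \(r>4\) as a stand-in for a system with a nonattracting chaotic set (it is not the map of Eq. (3.3), and the values of \(r\) and the sample sizes are arbitrary). Orbits leaving [0, 1] run off toward \(-\infty\), and the step at which an orbit leaves is its transient length:

```python
import random

def transient_lengths(r, n_points, n_max, seed=1):
    """Sprinkle initial conditions uniformly in [0, 1] and record how
    many iterates each orbit spends in [0, 1] before escaping."""
    rng = random.Random(seed)
    lengths = []
    for _ in range(n_points):
        x = rng.random()
        for t in range(n_max):
            if not 0.0 <= x <= 1.0:
                lengths.append(t)
                break
            x = r * x * (1.0 - x)
    return lengths

lengths = transient_lengths(r=4.1, n_points=20000, n_max=5000)
tau_avg = sum(lengths) / len(lengths)   # estimates <tau> in Eq. (5.1)
frac_late = sum(1 for t in lengths if t > 2 * tau_avg) / len(lengths)
# For exponential decay, the fraction of transients lasting beyond
# 2<tau> should be roughly exp(-2) of the total.
print(tau_avg, frac_late)
```

A histogram of the recorded lengths falls off exponentially, and its mean estimates the average lifetime \(\langle\tau\rangle\); the fraction surviving past \(2\langle\tau\rangle\) comes out near \(\mathrm{e}^{-2}\), as the decay law (5.1) predicts.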
We have already seen examples of the exponential decay law (5.1) for the case of the map Eq. (3.3) and the horseshoe map (cf. Figure 4.1). In particular, referring to Eq. (3.5), we see that, for this example, \[\langle\tau\rangle=[\ln(1-\Delta)^{-1}]^{-1}.\] Hence, the average transient lifetime can be long if \(\Delta\) is small. In such a case observations of an orbit for some appreciable time duration of the order of \(\langle\tau\rangle\) or less may not be sufficient to distinguish a chaotic transient from a chaotic attractor. We shall be discussing chaotic transients in greater detail in Chapter 8. In this chapter we will concentrate on fractal basin boundaries and chaotic scattering. We will also present general results relating the Lyapunov exponents and the average decay time \(\langle\tau\rangle\) to the fractal dimensions of nonattracting chaotic sets. (A useful review dealing with some of this material has been written by Tel (1991).)

### 5.1 Fractal basin boundaries

Dynamical systems can have multiple attractors, and which of these is approached depends on the initial condition of the particular orbit. The closure of the set of initial conditions which approach a given attractor is the basin of attraction for that attractor. From this definition it is clear that the orbit resulting from an initial condition inside a given basin must remain inside that basin. Thus, basins of attraction are invariant sets. As an example, consider the case of a particle moving in one dimension under the action of friction and the two well potential \(V(x)\) illustrated in Figure 5.1(_a_). Almost every initial condition comes to rest at one of the two stable equilibrium points \(x=x_{0}\) or \(x=-x_{0}\). Figure 5.1(_b_) schematically shows the basins of attraction for these two attractors in the position velocity phase space of the system.
Initial conditions starting in the cross hatched region are attracted to the attractor at \(x=+x_{0}\), \(\mathrm{d}x/\mathrm{d}t=0\), while initial conditions starting in the uncross hatched region are attracted to the attractor at \(x=-x_{0}\), \(\mathrm{d}x/\mathrm{d}t=0\). The boundary separating these two regions (the 'basin boundary') is, in this case, a simple curve. This curve goes through the unstable fixed point \(x=0\). Initial conditions on the basin boundary generate orbits that eventually approach the unstable fixed point \(x=0\), \(\mathrm{d}x/\mathrm{d}t=0\). Thus, _the basin boundary is the stable manifold of an unstable invariant set_. In this case the unstable invariant set is particularly simple (it is the point \(x=0\), \(\mathrm{d}x/\mathrm{d}t=0\)). We shall see, however, that the above statement also holds when the unstable invariant set is chaotic. For the example of Figure 5.1 the basin boundary was a simple curve. We now give several pictorial examples showing that basin boundaries can be much more complicated than is the case for Figure 5.1. Figure 5.2(_a_) shows the basins of attraction for the map (Grebogi _et al._, 1983a; McDonald _et al._, 1985), \[\theta_{n+1}=\theta_{n}+a\sin 2\theta_{n}-b\sin 4\theta_{n}-x_{n}\sin\theta_{n}, \tag{5.2a}\] \[x_{n+1}=-J\cos\theta_{n}, \tag{5.2b}\] where \(J=0.3\), \(a=1.32\) and \(b=0.90\). This map has two fixed points, (\(\theta\), \(x\)) = (0, \(-J\)) and (\(\pi\), \(J\)), which are attracting. Figure 5.2 was constructed using a \(256\times 256\) grid of initial conditions. For each initial condition the map was iterated a large number of times. It was found that all the initial conditions generate orbits which go to one of the two fixed point attractors. Thus, we conclude that these are the only attractors for this map. If an initial condition yields an orbit which goes to (0, \(-J\)), then a black dot is plotted at the location of the initial condition.
If the orbit goes to the other attractor, then no dot is plotted. (The size of the plotted points on the grid is such that, if all points were plotted, the entire region would be black.) Thus, the black and blank regions are essentially pictures of the two basins of attraction to within the accuracy of the grid used. The graininess in this figure is due to the finite resolution used. At any rate it is apparent that very fine scale structure in the basins of attraction is present. Furthermore, this fine scale structure is evidently present on all scales, as revealed by examining magnifications of successively smaller and smaller regions of the phase space which contain fine scale structure. Figure 5.2(_b_) shows such a magnification. We see that on a small scale the basins evidently consist of many narrow black and blank parallel strips of varying widths. In fact, as we shall see, the basin boundary on this scale may be regarded as a Cantor set of parallel lines (separating the black and blank regions), and the fractal dimension of this basin boundary has been numerically computed to be approximately 1.8 (Grebogi _et al._, 1983a).

Figure 5.1: (_a_) Potential \(V(x)\) for a point particle moving in one dimension. (_b_) The basins of attraction for the attractors at \(x=x_{0}\) (cross hatched) and at \(x=-x_{0}\) (uncross hatched).

Figure 5.3 shows the basin structure for the forced damped pendulum equation \[{\rm d}^{2}\theta/{\rm d}t^{2}+0.1\,{\rm d}\theta/{\rm d}t+\sin\theta=F\cos t\] for \(F=2.1\) (Grebogi, Ott and Yorke, 1987c). There are two periodic attractors that have the same period as the forcing (namely \(2\pi\)). The orbit for one of these two attractors has average clockwise motion (negative average value of \(\dot{\theta}\)), while the orbit for the other attractor has average counterclockwise motion. In Figure 5.3 the black region represents initial (\(t=0\)) values of \(\theta\) and \({\rm d}\theta/{\rm d}t\) whose orbits approach the attractor with average counterclockwise motion.
Again, we see that there is small scale structure on which the black and blank regions appear to be finely interwoven. This is again a manifestation of the fractal nature of the basin boundaries. Numerical experiments on the forced damped pendulum equation show that fractal basin boundaries are extremely common for this system. As a further illustrative example, consider the 'kicked double rotor' mechanical system illustrated in Figure 5.4. A fixed pivot is attached to a bar with moment of inertia \(I_{1}\). The free end of this bar is attached by a pivot to the middle of a second bar of moment of inertia \(I_{2}\). An impulsive upward vertical force, \(G\sum_{n}\delta(t-nT)\), is periodically applied to one end of the second bar at time instants \(t=0\), \(T\), \(2T\), \(3T\), \(\ldots\). There is friction at the two pivots with coefficients \(\nu_{1}\) and \(\nu_{2}\). Examining the positions (\(\theta\), \(\phi\)) and the angular velocities (\({\rm d}\theta/{\rm d}t\), \({\rm d}\phi/{\rm d}t\)) just after an impulsive kick, we can analytically derive a four dimensional map giving the positions and angular velocities just after the (\(n+1\))th kick in terms of their values just after the \(n\)th kick (Grebogi _et al._, 1987a). Figure 5.5 shows the basin structure for this map for a particular set of the parameters \(G\), \(I_{1}\), \(I_{2}\), \(\nu_{1}\) and \(\nu_{2}\). For this choice of parameters there are two attractors; one is the stable fixed point \(\theta=\phi=0\), \({\rm d}\theta/{\rm d}t={\rm d}\phi/{\rm d}t=0\) (both arms are oriented straight up), while the other attractor is chaotic. The plot in Figure 5.5(\(a\)) and the magnification in Figure 5.5(\(b\)) show initial conditions on a two dimensional surface in the four dimensional phase space (namely, the surface \({\rm d}\theta/{\rm d}t={\rm d}\phi/{\rm d}t=0\)), with the black region corresponding to the basin of the fixed point attractor and the blank region corresponding to the basin of the chaotic attractor.
Thus, we can regard Figure 5.5(\(a\)) as a 'slice' by a two dimensional plane cutting across the four dimensional phase space. Numerically, it is found that the boundary between the black and blank regions in Figure 5.5 has dimension 1.9, corresponding to a dimension of the basin boundary in the full four dimensional phase space\({}^{2}\) of 3.9.

Figure 5.3: Basins of attraction for the forced damped pendulum equation (picture courtesy of H. E. Nusse).

Figure 5.4: The double rotor (there is no gravity).

Figure 5.5: Basins of attraction for the kicked double rotor; (\(b\)) shows a magnification of a small subregion of (\(a\)) (Grebogi _et al._, 1987a).

Fractal basin boundaries also occur for one dimensional maps. Consider the map shown in Figure 5.6(_a_), where the map function consists of straight line segments on the intervals \([0,\,\frac{1}{5}]\), \([\frac{1}{5},\,\frac{2}{5}]\), \(\ldots\), \([\frac{4}{5},\,1]\). This map has two attracting fixed points, labeled \(A_{+}\) and \(A_{-}\). The region \(x\ge 1\) is part of the basin of attraction for \(A_{+}\), and the region \(x\le 0\) is part of the basin of attraction for \(A_{-}\). We now focus on the structure of the basins in \([0,\,1]\). Since the interval \([\frac{1}{5},\,\frac{2}{5}]\) maps to \(x\ge 1\), it is in the basin of \(A_{+}\). This is indicated in Figure 5.6(_b_). We now ask, which intervals map to \([\frac{1}{5},\,\frac{2}{5}]\) and which to \([\frac{3}{5},\,\frac{4}{5}]\)? These are the six intervals of length \(\frac{1}{25}\) shown in Figure 5.6(_b_). We see that, at this stage of the construction, the intervals assigned to the two basins alternate between the basin of \(A_{+}\) and the basin of \(A_{-}\) as we move from \(x=0\) to \(x=1\). In fact, this is true at every stage of the construction.
Thus, we build up a very complicated, finely interwoven basin structure, and the boundary between the two basins is the nonattracting invariant Cantor set of points which never leave the interval [0, 1]. (The dimension of this Cantor set is \((\ln 3)/(\ln 5)\).)

Figure 5.6: (_a_) One dimensional map with a fractal basin boundary. (_b_) The basin in \([0,\,1]\).

As a final example of a fractal basin boundary, consider the logistic map, \(x_{n+1}=M(x_{n})\equiv rx_{n}(1-x_{n})\), in the range of \(r\) values for which there is an attracting period three orbit. Although there is only one attractor in this case (the period three orbit), we can create a situation where there are three attractors by considering the map \(M^{3}(x)\) rather than \(M(x)\) (see Figure 2.13). In this case there are three fixed point attractors, which are just the three components of the attracting period three orbit of \(M(x)\), and the boundary separating their basins is fractal (McDonald _et al._, 1985; Park _et al._, 1989; Napiorkowski, 1986). Figure 5.7 shows the basin of the middle fixed point attractor of \(M^{3}(x)\) (blank regions) as a function of \(r\). For further discussion and examples of fractal basin boundaries see McDonald _et al._ (1985) and the book by Gumowski and Mira (1980).

### Final state sensitivity

The small scale alternation between different basins that we have seen in the above examples can present a problem when one attempts to predict the future state of a dynamical system. In particular, in the presence of fractal basin boundaries, a small uncertainty in initial conditions can cause anomalously large degradation in one's ability to determine which attractor is approached. In order to make this quantitative, first consider the case of a simple nonfractal basin boundary \(\Sigma\) for two fixed point attractors \(A_{+}\) and \(A_{-}\), as shown schematically in Figure 5.8.
Say our initial conditions are uncertain by an amount \(\varepsilon\) in the sense that, when we say that the initial condition is \(\mathbf{x}=\mathbf{x}_{0}\), what we really know is only that the initial condition lies somewhere in \(|\mathbf{x}-\mathbf{x}_{0}|\leq\varepsilon\). For the situation in Figure 5.8, under uncertainty \(\varepsilon\), we know for sure that initial condition 1 goes to attractor \(A_{+}\). On the other hand, the point labeled 2 in the figure lies in the basin of attractor \(A_{-}\), but, because of the \(\varepsilon\) uncertainty, the actual orbit may go to either attractor \(A_{+}\) or attractor \(A_{-}\). We call initial condition 1 \(\varepsilon\) certain and initial condition 2 \(\varepsilon\) uncertain. Clearly, initial conditions that are \(\varepsilon\) uncertain are those which lie within a distance \(\varepsilon\) of the basin boundary \(\Sigma\). If we were to pick an initial condition at random in the rectangle shown in Figure 5.8, the probability of obtaining an \(\varepsilon\) uncertain initial condition is the fraction of the area (or, in higher dimensionality, volume) of the phase space which lies within \(\varepsilon\) of the boundary \(\Sigma\). Denote this fraction \(f(\varepsilon)\). For a simple nonfractal boundary, as in Figure 5.8, \(f(\varepsilon)\) scales linearly with \(\varepsilon\), \(f(\varepsilon)\sim\varepsilon\). Thus an improvement of the initial condition accuracy, say by a factor of 10 (i.e., reduction of \(\varepsilon\) by 10), reduces \(f(\varepsilon)\), and hence our probability of potential error, by a factor of 10.

Figure 5.7: Basin structure of the third iterate of the logistic map in the period three window (picture courtesy of R. Breban).
However, as we show in the appendix, when the boundary is fractal, \(f(\varepsilon)\) has a different scaling with \(\varepsilon\); \[f(\varepsilon)\sim\varepsilon^{\alpha}, \tag{5.3a}\] \[\alpha=N-D_{0}, \tag{5.3b}\] where \(N\) is the phase space dimensionality and \(D_{0}\) is the box counting dimension of the basin boundary. For a nonfractal boundary \(D_{0}=N-1\) and \(\alpha=1\). For fractal basin boundaries, such as those in Figures 5.2–5.7, \(D_{0}>N-1\) and hence \(\alpha<1\). For example, for the situation in Figure 5.5 we have \(N=4\), \(D_{0}\approx 3.9\), and hence \(\alpha=0.1\). Thus \(f(\varepsilon)\sim\varepsilon^{0.1}\). In this case there is relatively little one can do to reduce \(f(\varepsilon)\) by improving the accuracy of initial conditions. In the case of Figure 5.5 (\(\alpha\approx 0.1\)), to reduce \(f(\varepsilon)\) by a factor of 10 requires a reduction of \(\varepsilon\) by a factor of \(10^{10}\)! If \(\alpha<1\) (i.e., the boundary is fractal), then we say there is _final state sensitivity_, and, as the example above makes clear, the situation with respect to potential improvement in prediction by increasing initial condition accuracy is less favorable the smaller \(\alpha\) is. (Note in Eq. (5.3b) that the dimension \(D_{0}\) satisfies \(D_{0}\geq N-1\), since the basin boundary must divide the phase space; hence \(\alpha\) cannot exceed 1.) We call \(\alpha\) the 'uncertainty exponent'. The dimension of a fractal basin boundary can be numerically calculated on the basis of the above discussion (McDonald _et al._, 1983). For example, for the case of the basin boundary shown in Figure 5.2 we proceed as follows. Consider an initial condition \((\theta_{0},\,x_{0})\), and perturb its \(x\) coordinate by an amount \(\varepsilon\), producing two new initial conditions, \((\theta_{0},\,x_{0}-\varepsilon)\) and \((\theta_{0},\,x_{0}+\varepsilon)\).
Now iterate the map and determine which attractor (\(A_{+}\) or \(A_{-}\)) each of the three initial conditions goes to. If they do not all go to the same attractor, then we count the original initial condition as uncertain. Now, we randomly choose a large number of initial conditions in the rectangle of Figure 5.2. We then determine the fraction \(\tilde{f}(\varepsilon)\) of these that are uncertain, and we repeat this for several values of \(\varepsilon\). From the definitions of \(f(\varepsilon)\) and \(\tilde{f}(\varepsilon)\), we expect that \(f\) is approximately proportional to \(\tilde{f}\) (for further discussion see Grebogi _et al._ (1988a)), so that \(\alpha\) can be extracted from the scaling of \(\tilde{f}\) with \(\varepsilon\). Figure 5.9 shows results from a set of numerical experiments plotted on log log axes. The straight line fit indicates that \(\tilde{f}\) scales as a power of \(\varepsilon\), and the slope of the line gives the power \(\alpha\). The result is \(\alpha\approx 0.2\), from which Eq. (5.3) yields \(D_{0}\approx 1.8\). Even when error in initial conditions is essentially absent, errors in the specification of parameter values specifying the system may be present (e.g., the parameter \(F\) in the pendulum equation used in Figure 5.3, \(\mathrm{d}^{2}\theta/\mathrm{d}t^{2}+0.1\,\mathrm{d}\theta/\mathrm{d}t+\sin\theta=F\cos t\)). A small error in a system parameter might alter the location of the basin boundary so that a fixed initial condition shifts from one basin to another. In a finite region of parameter space, the fraction of randomly chosen parameter values for which a perturbation by a parameter error \(\delta\) produces such a change is an uncertain fraction which we denote \(f_{p}(\delta)\). If the basin boundary dimension is approximately constant in the region of parameter space examined, then the scaling of \(f_{p}(\delta)\) is conjectured to be the same as that for \(f(\varepsilon)\); \(f_{p}(\delta)\sim\delta^{\alpha}\) with \(\alpha=N-D_{0}\).
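The three point test just described can be sketched in a few lines. Since the map (5.2) itself is not reproduced in this section, the sketch substitutes a toy one dimensional map, \(x_{n+1}=(3x_{n}-x_{n}^{3})/2\), which has attractors at \(x=\pm 1\) separated by the single (nonfractal) boundary point \(x=0\), so the measured exponent should come out as \(\alpha=1\); a deterministic grid replaces the random sampling purely for reproducibility:

```python
from math import copysign

def attractor(x, n_iter=100):
    """Iterate x -> (3x - x^3)/2; for |x| < sqrt(3), orbits settle on +1 or -1."""
    for _ in range(n_iter):
        x = (3.0 * x - x ** 3) / 2.0
    return copysign(1.0, x)

def uncertain_fraction(eps, n_grid=3000):
    """Fraction of initial conditions whose eps-perturbations disagree."""
    count = 0
    for i in range(n_grid):
        x = -1.5 + 3.0 * (i + 0.5) / n_grid   # offset grid avoids x = 0 exactly
        fates = {attractor(x - eps), attractor(x), attractor(x + eps)}
        if len(fates) > 1:
            count += 1
    return count / n_grid

for eps in (0.1, 0.05, 0.025):
    # fraction halves each time eps halves, i.e. alpha = 1 (nonfractal boundary)
    print(eps, uncertain_fraction(eps))
```

For the fractal boundaries of this section one would instead iterate the map (5.2) (or integrate the pendulum equation) inside `attractor`, and fit the printed pairs on log log axes to extract \(\alpha\), as in Figure 5.9.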
Moon (1985) has experimentally examined the parameter dependence of the system in Figure 1.1 to see which attractor a fixed initial condition goes to, and he has concluded, on this basis, that the basin boundary is fractal. In addition to final state and parameter sensitivity, another practical consequence of fractal basin boundaries and nonattracting chaotic sets has been investigated in the Josephson junction experiments of Iansiti _et al._ (1985). These authors find that, when periodic attractors are near a fractal basin boundary, noise can cause frequent kicks of the orbit into the region of finely interwoven basin structure. This leads to an orbit which resembles a chaotic orbit on a strange attractor even when the noise is relatively small.

### 5.3 Structure of fractal basin boundaries

We now give a description of how the dynamics of the map Eq. (5.2) leads to the fractal basin boundary structure in Figure 5.2. Figure 5.10(_a_) schematically shows a region of the phase space in \(0\leq\theta\leq\pi\) (and narrower in \(x\) than the region shown in Figure 5.2). In addition to the two fixed point attractors \(A_{+}\) and \(A_{-}\), there are also three other fixed points which are not attracting. These three, labeled \(S_{+}\), \(S_{-}\) and \(S_{0}\), are saddles; that is, they have a one dimensional stable manifold and a one dimensional unstable manifold. We are particularly interested in the saddles \(S_{+}\) and \(S_{-}\), segments of whose stable manifolds _ab_ and _cd_ are indicated in the figure. The entire region to the left of _ab_ (right of _cd_) can be shown to be part of the basin of attraction of the fixed point attractor \(A_{+}\) (\(A_{-}\)). The question now becomes, what is the basin structure in the region \(Q=\)_abcd_ which lies between the two stable manifold segments _ab_ and _cd_? (\(Q\) is shown cross hatched in Figure 5.10(\(a\)).) For the purpose of addressing this question, we show an expanded schematic view of the region \(Q\) in Figure 5.10(\(b\)).
The action of the map on \(Q\) is to take it into the \(S\) shaped cross hatched region shown in Figure 5.10(\(b\)), where the map takes \(a\) to \(a^{\prime}\), \(b\) to \(b^{\prime}\), \(c\) to \(c^{\prime}\) and \(d\) to \(d^{\prime}\). The stable manifold segments _ab_ and _cd_ divide the S shaped region \(\mathbf{M}(Q)\) into five subregions, labeled I\({}^{\prime}\), II\({}^{\prime}\), III\({}^{\prime}\), IV\({}^{\prime}\) and V\({}^{\prime}\). The region II\({}^{\prime}\) lies to the right of the stable manifold of \(S_{-}\) and so is in the basin of attraction of \(A_{-}\). Similarly, region IV\({}^{\prime}\) is in the basin of \(A_{+}\). We now ask, what are the preimages of these regions? In particular, the preimage of II\({}^{\prime}\) (which we denote II) will be in the basin of \(A_{-}\), and the preimage of the region IV\({}^{\prime}\) (denoted IV) will be in the basin of \(A_{+}\). These preimages are shown in Figure 5.11. Since the region \(\mathbf{M}(Q)\cap\mathrm{II}\) is in the basin of \(A_{-}\) its preimage, \(\mathbf{M}^{-1}[\mathbf{M}(Q)\cap\mathrm{II}]\) is also in the basin of \(A_{-}\). This preimage is also shown in Figure 5.11 as the three narrow cross hatched vertical strips. Similarly, \(\mathbf{M}^{-1}[\mathbf{M}(Q)\cap\mathrm{IV}]\) is the three narrow shaded vertical strips and is part of the basin of \(A_{+}\). Proceeding iteratively in this way we build up successively finer and finer scale resolution of the basin structure. Note that the shaded and cross hatched vertical strips alternate as we move horizontally across \(Q\), and that this is true at all stages of the construction.3 Note the similarity of the action of the map on the region \(Q\) in Figure 5.10 with the horseshoe map (imagine turning Figure 4.1(\(d\)) on its side). 
The main difference is that \(\mathbf{M}(Q)\cap Q\) consists of _three_ strips for the case in Figure 5.10, while the action of the horseshoe map on the square produces a region (the horseshoe) which intersects the original square in _two_ strips. A symbolic dynamics of the chaotic invariant set for Figure 5.10 can be worked out in analogy to the horseshoe analysis (cf. Problem 2 of Chapter 4), and is a full shift on three symbols (in contrast with the two symbols of the horseshoe map). As in the horseshoe, we may think of the chaotic invariant set as the intersection of a Cantor set of lines running vertically with a Cantor set of lines running horizontally. Furthermore, the Cantor set of vertically oriented lines constitutes the basin boundary in the region \(Q\), and is also the stable manifold of the chaotic invariant set. (The horizontal lines are the unstable manifold.) Thus, we see that, in both this example and the example of the two well potential (Figure 5.1), the basin boundary is the stable manifold of a nonattracting invariant set (i.e., the point \(x=\mathrm{d}x/\mathrm{d}t=0\) for Figure 5.1 and a nonattracting chaotic invariant set for the case of Figure 5.10). We emphasize that the type of basin structure we have found here, locally consisting of a Cantor set of smooth curves, is very common, but it is not the only type of structure that fractal basin boundaries for typical dynamical systems can have. In particular, McDonald _et al._ (1985) and Grebogi _et al._ (1983b, 1985a) give an example where the basin boundary can be analytically calculated and is a continuous, but nowhere differentiable, curve.

Figure 5.11: The basin of \(A_{-}\) is shown cross hatched, and the basin of \(A_{+}\) is shown shaded.
The example they consider is the following map, \[x_{n+1} = \lambda_{x}x_{n}\,({\rm mod}\,1), \tag{5.4a}\] \[y_{n+1} = \lambda_{y}y_{n}+\cos(2\pi x_{n}), \tag{5.4b}\] with \(\lambda_{y}\) and \(\lambda_{x}\) greater than 1 and \(\lambda_{x}\) an integer greater than \(\lambda_{y}\). This map has no attractors with finite \(y\) (cf. Problem 5 of Chapter 4). Almost every initial condition generates an orbit which approaches either \(y=+\infty\) or \(y=-\infty\). We regard \(y=+\infty\) and \(y=-\infty\) as the two attractors for this system. The basins of attraction are shown in Figure 5.12 (the \(y=-\infty\) attractor is shown black). The analysis shows that the basin boundary is the continuous curve given by \[y=-\sum_{j=1}^{\infty}\lambda_{y}^{-j}\cos(2\pi\lambda_{x}^{j-1}x). \tag{5.5}\] The sum converges since \(\lambda_{y}>1\). The derivative \({\rm d}y/{\rm d}x\) does not exist, however; differentiating inside the sum produces the sum \[\frac{2\pi}{\lambda_{x}}\sum_{j=1}^{\infty}\left(\frac{\lambda_{x}}{\lambda_{y }}\right)^{j}\sin(2\pi\lambda_{x}^{j-1}x),\] which does not converge since we have assumed \((\lambda_{x}/\lambda_{y})>1\). Equation (5.5) is called a Weierstrass curve and has fractal dimension \[D_{0}=2-\left[(\ln\lambda_{y})/(\ln\lambda_{x})\right]\] (which is \(D_{0}=1.63\ldots\) for the parameters of Figure 5.12).

Figure 5.12: Basins for Eqs. (5.4) with \(\lambda_{x}=3\) and \(\lambda_{y}=1.5\) (McDonald _et al._, 1985).

Another type of basin structure that is common is the case where the same basin boundary can have different dimensions in different regions. Furthermore, in a certain sense, these regions of different dimension can be intertwined on arbitrarily fine scale (Grebogi _et al._, 1987a). An example illustrating this phenomenon is the basin boundary of the kicked double rotor shown in the cross section in Figure 5.5.
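The curve (5.5) is invariant under the map (5.4): substituting it into (5.4b) reproduces the curve at \(x_{n+1}\), so a point displaced off the curve by \(\delta\) has its displacement multiplied by \(\lambda_{y}\) each iterate and runs off to \(+\infty\) or \(-\infty\) according to the sign of \(\delta\). A quick numerical check of this, with the parameter values of Figure 5.12 (the truncation depth of the sum and the escape threshold are our choices):

```python
from math import cos, pi, log

LX, LY = 3, 1.5   # lambda_x (integer > lambda_y), lambda_y > 1

def boundary(x, terms=40):
    """Partial sum of the Weierstrass boundary curve, Eq. (5.5)."""
    return -sum(LY ** (-j) * cos(2 * pi * LX ** (j - 1) * x)
                for j in range(1, terms + 1))

def final_sign(x, y, n_iter=60):
    """Iterate Eqs. (5.4); return +1 if y -> +infinity, -1 if y -> -infinity."""
    for _ in range(n_iter):
        x, y = (LX * x) % 1.0, LY * y + cos(2 * pi * x)  # old x used in cos
        if abs(y) > 1e6:
            break
    return 1 if y > 0 else -1

for x0 in (0.1, 0.37, 0.73):
    yb = boundary(x0)
    # prints 1 and -1: just above the curve -> +infinity, just below -> -infinity
    print(x0, final_sign(x0, yb + 0.1), final_sign(x0, yb - 0.1))

print(2 - log(LY) / log(LX))  # boundary dimension D0 = 1.6309...
```

The displacement \(0.1\) dwarfs both the truncation error of the partial sum and the floating point error accumulated before escape, so the verdict is robust.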
In Figure 5.5(_a_) we see that the boundary between the black and blank areas in the region \(0\leq(\theta,\,\phi)\leq 1\) appears to be a simple smooth curve (\(D_{0}=1\)) sharply dividing the two basins. On the other hand, the very mixed appearance in the central region surrounding the point \(\theta=\phi=\pi\) suggests that the boundary is fractal there. Indeed, application of the numerical final state sensitivity technique to the region \(0\leq(\theta,\,\phi)\leq 2\pi\) yields a dimension of the boundary of approximately 1.9 (in the \(\mathrm{d}\theta/\mathrm{d}t=\mathrm{d}\phi/\mathrm{d}t=0\) cross section). Note, however, that, when we consider two sets, \(S_{a}\) and \(S_{b}\), of different dimensions, \(d_{a}\) and \(d_{b}\), the dimension of the union of the two sets is the larger of the dimensions of the two sets, \[\mathrm{dim}(S_{a}\cup S_{b})=\mathrm{max}(d_{a},\,d_{b}).\] Thus, there is no contradiction with our observation that the dimension in \(0\leq(\theta,\,\phi)\leq 1\) is 1. (Indeed, applying the final state sensitivity technique to the region \(0\leq(\theta,\,\phi)\leq 1\) yields \(D_{0}=1\).) Now, consider the magnification shown in Figure 5.5(_b_). The dimension of the boundary in this small region is again \(D_{0}\approx 1.9\). Note, however, that there are areas within this small region where the basin boundary is apparently one dimensional (e.g., \(1.010\leq\theta\leq 1.012\), \(2.160\leq\phi\leq 2.162\)). Moreover, this situation is general for the double rotor: given any square subregion within \(0\leq(\theta,\,\phi)\leq 2\pi\) which contains part of the basin boundary, the boundary in that square is either nonfractal (\(D_{0}=1\)) or fractal, and if it is fractal its dimension is always the same (\(D_{0}\approx 1.9\)). Furthermore, no matter how small the square is, if the boundary in the square is fractal, then there is some smaller square within it for which the contained boundary is not fractal (\(D_{0}=1\)).
Thus, regions of the basin boundary with different dimension are interwoven on arbitrarily fine scale. For further discussion of this phenomenon and how it comes about as a result of the dynamics see Grebogi _et al._ (1987a, 1988a). So far we have been discussing fractal boundaries that separate the basins of different attractors. We wish to point out, however, that fractal boundaries can also occur even in conservative (nondissipative) systems for which attractors do not exist. As a simple example of this type, consider the motion of a point particle without friction moving along straight line orbit segments in a region bounded by hard walls (shown in Figure 5.13(_a_)) at which the particle experiences specular reflection on each encounter (Bleher _et al._, 1988). We examine initial conditions on the dashed horizontal line segment shown in Figure 5.13(_a_). The initial position \(x_{0}\) is measured from the center of the line and the initial velocity vector angle \(\theta_{0}\) is measured clockwise from the vertical. Figure 5.13(_b_) shows the regions of this initial condition space for which the particle exits through hole \(A\) (black) and for which it exits through hole \(B\) (blank). The dimension of the boundary separating these two regions is numerically found to be approximately 1.8. Blow ups, however, reveal that there is the same sort of fine scaled interweaving of fractal (\(D_{0}\approx 1.8\)) and nonfractal (\(D_{0}=1\)) boundary regions as for the kicked double rotor example. Another interesting aspect of fractal basin boundaries arises in situations where there are three or more basins of attraction. In this case it is commonly found that the basin boundaries can have an interesting topological property. Namely, every point on the boundary of the basin for one of the attractors may simultaneously be on the boundary of all the other attractor basins.
This means that if we pick any boundary point \(p\), then a small ball of radius \(\epsilon\) centered at \(p\) contains within it pieces of all the basins, and this is the case no matter how small \(\epsilon\) may be. Such a basin boundary is called a Wada boundary after the Japanese mathematician who considered a geometric construction of three regions in a plane such that any boundary point is a boundary point for all three regions. In order to see that this represents a nonstandard topology, look at a map in an atlas, e.g., the map of the 48 contiguous states of the United States of America. The state boundaries are typically curve segments. A boundary point on such a curve segment is typically a boundary point for just two states, one on either side of the curve segment. There are, however, a number of 'corner points' where three curve segments meet, and these corner points are simultaneously boundary points for three states. (I count 56 such points on a map of the United States.) There is also one boundary point common to four states (Arizona, New Mexico, Colorado and Utah). The important fact is that there is only a finite number (57) of points that are boundary points for more than two states, but an uncountable infinity of points that are boundary points for only two states. How is it possible, then, for there to be three regions such that all boundary points are simultaneously boundary points for all three? The answer is that fractal geometry makes this possible and, in fact, natural, as the following example shows. Consider a Cantor set type construction in which we first divide the interval [0, 1] into 7 equal pieces of length \(\frac{1}{7}\) each. Now imagine that we color the second segment red, the fourth segment white and the sixth segment blue. There remain four uncolored segments. We divide each of these 4 uncolored segments into 7 equal pieces of length \((\frac{1}{7})^{2}=\frac{1}{49}\).
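Carried on recursively (color the second, fourth and sixth sevenths of every uncolored segment, forever), this construction gives every boundary point the Wada property. A short sketch, using exact rational arithmetic and the point \(x=0\), which is never colored and hence lies on the boundary, as the test point:

```python
from fractions import Fraction

def colored_intervals(max_depth):
    """Color the 2nd, 4th, 6th sevenths red/white/blue, recursing on the
    four uncolored sevenths, down to the given depth."""
    colors = {1: 'red', 3: 'white', 5: 'blue'}   # 0-based: 2nd, 4th, 6th pieces
    out = []
    stack = [(Fraction(0), Fraction(1), 0)]      # (start, length, depth)
    while stack:
        a, w, d = stack.pop()
        if d == max_depth:
            continue
        w7 = w / 7
        for k in range(7):
            if k in colors:
                out.append((a + k * w7, w7, colors[k]))
            else:
                stack.append((a + k * w7, w7, d + 1))
    return out

intervals = colored_intervals(4)

# all three colors appear inside every eps-interval about the boundary point 0
for eps in (Fraction(1, 7), Fraction(1, 49), Fraction(1, 343)):
    seen = {c for (a, w, c) in intervals if a + w <= eps}
    print(eps, sorted(seen))   # ['blue', 'red', 'white'] each time
```

The uncolored remainder after \(n\) stages has total length \((4/7)^{n}\to 0\), which is the sense in which the construction "eventually colors the full length" of [0, 1] while leaving a measure zero Wada boundary.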
For each of these 4, we again color the second segment red, the fourth white and the sixth blue. We are now left with 16 uncolored segments of length \(\frac{1}{49}\). Repeat the operation on each of the \(\frac{1}{49}\) length segments, and so on _ad infinitum_. This eventually colors the full length of the original [0, 1] segment. Any boundary point is simultaneously a boundary point for the red, white and blue regions; an \(\epsilon\) interval centered at any boundary point necessarily contains within it an infinite number of ever smaller red, white and blue intervals accumulating on the boundary point. Figure 5.14 shows a numerical example of a Wada basin boundary that arises for the forced damped pendulum equation, \(\mathrm{d}^{2}\theta/\mathrm{d}t^{2}+0.2\,\mathrm{d}\theta/\mathrm{d}t+\sin\theta=1.66\sin t\). This system has three attractors whose basins are shown as white, black and grey. The boundary between the white, black and grey is apparently a Wada boundary: successive magnifications about any boundary point continue to show regions of all three colors no matter how great the magnification. See Section 5.5 for a discussion of the experimental Wada basin photograph on the cover of this book.

### 5.4 Chaotic scattering

In this section we consider the classical scattering problem for a conservative dynamical system.\({}^{4}\) The simplest example of this problem deals with the motion without friction of a point particle in a potential \(V(\mathbf{x})\) for which \(V(\mathbf{x})\) is zero, or else very small, outside of some finite region of space which we call the scattering region. Thus, the particle moves along a straight line (or an approximately straight line) sufficiently far outside the scattering region. We envision that a particle moves toward the scattering region from outside it, interacts with the scatterer, and then leaves the scattering region.
The question to be addressed is: how does the motion far from the scatterer after scattering depend on the motion far from the scatterer before scattering? As an example, consider Figure 5.15, which shows a scattering problem in two dimensions. The incident particle has a velocity parallel to the \(x\) axis at a vertical displacement \(y=b\). After interacting with the scatterer, the particle moves off to infinity with its velocity vector making an angle \(\phi\) to the \(x\) axis. We refer to the quantities \(b\) and \(\phi\) as the impact parameter and the scattering angle, and we wish to investigate the character of the functional dependence of \(\phi\) on \(b\).

Figure 5.14: Wada basins of attraction for the forced damped pendulum equation (Nusse, Ott and Yorke, 1995).

As an example consider the potential (Bleher _et al._, 1990) \[V(x,\;y)=x^{2}y^{2}\exp[-(x^{2}+y^{2})] \tag{5.6}\] shown in Figure 5.16. This potential consists of four potential 'hills' with equal maxima at (\(x\), \(y\)) coordinate locations (1, 1), (1, \(-\)1), (\(-\)1, 1), and (\(-\)1, \(-\)1). The maximum value of the potential is \(E_{\text{m}}=1/e^{2}\). For large distances \(r=(x^{2}+y^{2})^{1/2}\) from the origin, \(V(x,\;y)\) approaches zero rapidly with increasing \(r\). Figure 5.17(\(a\)) shows a plot of the _scattering function_, \(\phi\) versus \(b\), for the case where the incident particle energy \(E\) is larger than \(E_{\text{m}}\). We observe for this case (\(E/E_{\text{m}}=1.626\)) that the scattering function is a smooth curve. Furthermore, it is also found to be a smooth curve for all \(E>E_{\text{m}}\). Figure 5.17(\(b\)) shows the scattering function for a case where \(E<E_{\text{m}}\). We observe that the numerically computed dependence of \(\phi\) on \(b\) is poorly resolved in the region \(-0.6\lesssim b\lesssim-0.2\).
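The stated properties of the potential (5.6) — hill maxima at \((\pm 1,\,\pm 1)\), barrier height \(E_{\rm m}=1/e^{2}\), and rapid decay at large \(r\) — are easy to verify directly:

```python
from math import exp

def V(x, y):
    """Four hill potential, Eq. (5.6)."""
    return x * x * y * y * exp(-(x * x + y * y))

def grad_V(x, y):
    """Analytic gradient of Eq. (5.6):
    dV/dx = 2 x y^2 (1 - x^2) e^{-(x^2+y^2)}, and symmetrically in y."""
    e = exp(-(x * x + y * y))
    return (2 * x * y * y * (1 - x * x) * e,
            2 * x * x * y * (1 - y * y) * e)

print(V(1, 1), exp(-2))   # hill height equals E_m = 1/e^2
print(grad_V(1, -1))      # (0.0, 0.0): (1, -1) is a critical point
print(V(5, 5) < 1e-12)    # True: V decays rapidly at large r
```

The gradient vanishes at all four points \((\pm 1,\,\pm 1)\) by the \(x\to-x\), \(y\to-y\) symmetries of (5.6), confirming the four equal maxima.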
To understand why this might be so, we note that Figure 5.17 is constructed by choosing a large number (\(10^{4}\)) of \(b\) values evenly spaced along the interval of the \(b\) axis shown in the plot. We then integrate the equation of motion for a particle of mass \(m\), \(m\,\mathrm{d}^{2}\mathbf{x}/\mathrm{d}t^{2}=-\nabla V(\mathbf{x})\), for incident particles far from the potential for each \(b\) value, and obtain the corresponding scattering angles \(\phi\). We then plot these angles to obtain the figure. Thus, the speckling of individually discernible points seen in Figure 5.17(\(b\)) in the region \(-0.6\lesssim b\lesssim-0.2\) might be taken to imply that the curve \(\phi\) versus \(b\) varies too rapidly to be resolved on the scale determined by the spacing of \(b\) values used to construct the figure. In this view one might still hope that sufficient resolution would reveal a smooth curve as in Figure 5.17(\(a\)). That this is not the case can be seen in Figures 5.18(_a_)–(_c_), which show successive magnifications of unresolved regions. Evidently magnification of a portion of an unresolved region of Figure 5.17(_b_) by a factor of order \(10^{3}\) (Figure 5.18(_c_)) does not reveal a smooth curve. (This persists on still further magnification.) We call a value \(b=b_{\rm s}\) a singularity of the scattering function if, for an interval \([b_{\rm s}-(\Delta b/2),\ b_{\rm s}+(\Delta b/2)]\), a plot of the scattering function made as in Figures 5.17 and 5.18 always shows unresolved regions for any interval length \(\Delta b\), and, in particular, for _arbitrarily small_ \(\Delta b\). Another, more precise, way of defining \(b_{\rm s}\) as a singularity of the scattering function is to say that, in any small interval \([b_{\rm s}-(\Delta b/2),\ b_{\rm s}+(\Delta b/2)]\), there is a pair of \(b\) values which yields scattering angles whose difference exceeds some value \(K>0\) which is _independent_ of \(\Delta b\).
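The construction of Figures 5.17 and 5.18 can be sketched directly from the equation of motion. The fragment below is a minimal version: a fourth order Runge–Kutta integrator with our own choices of step size, launch point \(x=-6\) and exit radius \(r=7\), and with the particle launched in the \(+x\) direction (equivalent by symmetry to launching from large positive \(x\) in the \(-x\) direction, as described later in the chapter); sweeping \(b\) and plotting the returned \(\phi\) reproduces a scattering function:

```python
from math import exp, atan2, sqrt

def accel(x, y):
    """Acceleration -grad V (mass m = 1) for the four hill potential, Eq. (5.6)."""
    e = exp(-(x * x + y * y))
    return (-2.0 * x * y * y * (1.0 - x * x) * e,
            -2.0 * x * x * y * (1.0 - y * y) * e)

def deriv(s):
    x, y, vx, vy = s
    ax, ay = accel(x, y)
    return (vx, vy, ax, ay)

def rk4_step(s, dt):
    """One classical RK4 step for the state s = (x, y, vx, vy)."""
    k1 = deriv(s)
    k2 = deriv(tuple(a + 0.5 * dt * k for a, k in zip(s, k1)))
    k3 = deriv(tuple(a + 0.5 * dt * k for a, k in zip(s, k2)))
    k4 = deriv(tuple(a + dt * k for a, k in zip(s, k3)))
    return tuple(a + dt * (p + 2 * q + 2 * r + w) / 6.0
                 for a, (p, q, r, w) in zip(s, zip(k1, k2, k3, k4)))

def scatter(b, energy, dt=0.01, t_max=400.0):
    """Launch a particle at (-6, b) moving in +x; return (phi, time delay)."""
    s = (-6.0, b, sqrt(2.0 * energy), 0.0)    # far away, E = v^2/2
    t = 0.0
    while t < t_max:
        s = rk4_step(s, dt)
        t += dt
        if s[0] ** 2 + s[1] ** 2 > 49.0:      # left the scattering region
            return atan2(s[3], s[2]), t
    return None, t_max                         # trapped: very long time delay

E_m = exp(-2.0)                                # barrier height of Eq. (5.6)
phi, delay = scatter(0.3, 1.626 * E_m)         # E/E_m = 1.626: regular regime
print(phi, delay)
```

At this above barrier energy the scattering function is smooth, so nearby \(b\) values give nearby \(\phi\); rerunning with \(E/E_{\rm m}=0.260\) and \(b\) in the unresolved interval produces the wild variation of Figure 5.17(\(b\)).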
(That is, arbitrarily small differences in \(b\) yield \(\phi\) values which differ by order 1.) The interesting result concerning the scattering function shown in Figure 5.17(\(b\)) is that the set of singular \(b\) values is a fractal. Bleher _et al._ (1990) calculate a fractal dimension of approximately 0.67 for the singular set. We call the phenomenon seen in Figure 5.17(\(b\)), characterized by this fractal set of singularities, _chaotic scattering_, as distinguished from the case of _regular scattering_ (Figure 5.17(_a_)). (The transition from regular to chaotic scattering as the energy is lowered from the value in Figure 5.17(_a_) to the value in Figure 5.17(_b_) will be discussed in Chapter 8.)

Figure 5.18: Successive magnifications of the scattering function (\(a\)) for a small \(b\) interval in Figure 5.17(\(b\)), (\(b\)) for a small \(b\) interval in Figure 5.18(\(a\)), and (\(c\)) for a small \(b\) interval in Figure 5.18(\(b\)) (Bleher _et al._, 1990).

The chaotic scattering phenomenology we have described above is a general feature of a large class of problems. Chaotic scattering has appeared in numerous applications including celestial mechanics (Petit and Henon, 1986), the scattering of vortices in fluids (Eckhardt and Aref, 1988), scattering of microwaves (Doron _et al._, 1990), the conversion of magnetic field energy to heat in solar plasmas (Lau and Finn, 1991), chemical reactions (Noid _et al._, 1986), collisions between nuclei (Rapisarda and Baldo, 1991), and conductance fluctuations in very tiny two dimensional conductor junctions (Jalabert _et al._, 1990). The latter three examples are cases where it becomes important to consider the quantum mechanical treatment of a problem whose classical counterpart exhibits chaotic scattering. For further material on the quantum aspects of chaotic scattering see Blumel (1991), Cvitanovic and Eckhardt (1989), Gaspard and Rice (1989a,b,c), and Blumel and Smilansky (1988).
### 5.5 The dynamics of chaotic scattering

How does the dynamics of the scattering problem lead to the phenomena we have observed in Figures 5.17(_b_) and 5.18? In order to gain some insight into this question we plot in Figure 5.19 the 'time delay' (the amount of time that a particle spends in the scattering region bouncing between the hills) as a function of the impact parameter \(b\) for the potential (5.6) with the same particle energy as for Figures 5.17(_b_) and 5.18. We see that the regions of poor resolution of the scattering function (cf. Figure 5.17(_b_)) coincide with \(b\) values for which the time delay is long. Indeed, careful examination of magnifications suggests that the singularities of the scattering function coincide with the values of \(b\) where the time delay is infinite. Very near a value of \(b\) for which the time delay is infinite the time delay will be very long, indicating that the incident particle experiences many bounces between potential hills before leaving the scattering region. Say we choose a \(b\) value yielding a long time delay for which the particle experiences, say, 1000 bounces before exiting the scattering region. Now change \(b\) very slightly so as to increase the delay time by a small percentage, yielding, say, 1001 bounces before the particle exits the scattering region. The presence of this one extra bounce means that the scattering angle for the two cases can be completely different. Hence, we expect arbitrarily rapid variations of \(\phi\) with \(b\) near a \(b\) value yielding an infinite time delay, and we may thus conclude that these values coincide with the singularities of the scattering function. The effect is illustrated in Figure 5.20, which shows two orbit trajectories whose \(b\) values differ by \(10^{-8}\). The orbit in Figure 5.20(_a_) (\(b=-0.39013269\)) experiences about 14 bounces (depending on how you define a bounce).
The orbit in Figure 5.20(_b_) (\(b=-0.39013268\)) is very close to that in Figure 5.20(_a_) for the first 13 or so bounces but subsequently experiences about 4 more bounces than the orbit in Figure 5.20(_a_). The two orbits have completely different scattering angles, one yielding scattering upward (Figure 5.20(_a_)) and the other yielding scattering downward. The interpretation of these results is as follows. The equations of motion are four dimensional, \[m{\rm d}{\bf v}/{\rm d}t = -\nabla V({\bf x}), \tag{5.7a}\] \[{\rm d}{\bf x}/{\rm d}t = {\bf v}, \tag{5.7b}\] where \({\bf x}=(x,\ y)\) and \({\bf v}=(v_{x},\ v_{y})\), but, because the particle energy, \[E={\textstyle\frac{1}{2}}m{\bf v}^{2}+V({\bf x}), \tag{5.8}\] is conserved, we can regard the phase space as being three dimensional. (For example, we can regard the phase space as consisting of the three variables \(x\), \(y\), \(\theta\), where \(\theta\) is the angle the vector \({\bf v}\) makes with the positive \(x\) axis. These three variables uniquely determine the system state, \(x\), \(y\), \(v_{x}\), \(v_{y}\), because (5.8) gives \(|{\bf v}|\) in terms of \(x\) and \(y\), \(|{\bf v}|=[2(E-V({\bf x}))/m]^{1/2}\).) The presence of infinite time delays on a fractal set of \(b\) values is due to the existence of a nonattracting chaotic invariant set that is in a bounded region of phase space. Orbits on this invariant set bounce forever between the hills, never leaving the scattering region both for \(t\to+\infty\) and for \(t\to-\infty\). This chaotic set is essentially the intersection of its stable and unstable manifolds, each of which locally consists of a Cantor set of approximately parallel two dimensional surfaces in the three dimensional phase space.

Figure 5.19: Time delay versus impact parameter for the \(b\) intervals (\(a\)) corresponding to that in Figure 5.17(_b_), and (\(b\)) corresponding to that in Figure 5.18(_a_) (\(E/E_{\rm m}=0.260\)) (Bleher _et al._, 1990).
Thus, the stable and unstable manifolds are each fractal sets of dimension between 2 and 3. We have, in numerically obtaining our scattering function plots, taken initial conditions at some large \(x\) value \(x=x_{0}\) and have chosen the initial angle \(\theta_{0}\) between \({\bf v}\) and the positive \(x\) axis to be \(\theta_{0}=\pi\) (i.e., \(v_{y0}=0\) and \(v_{x0}<0\); see Figure 5.15). This defines a line in the space \((x,\ y,\ \theta)\) which we regard as the phase space. This line of initial conditions generically intersects the stable manifold of the nonattracting chaotic invariant set in a Cantor set of dimension between zero and one\({}^{2}\) (cf. Eq. (3.46)). It is this intersection set that is the set of singular \(b\) values of the scattering function. Since these \(b\) values correspond to initial conditions on the stable manifold of the chaotic invariant set, the orbits they generate approach the invariant set as \(t\to+\infty\); hence they never leave the scattering region. Figure 5.21(_a_) shows a numerical plot of the \(y=0\) cross section of the stable manifold of the chaotic invariant set. This plot is created by taking a grid of initial conditions in \((x_{0},\ \theta_{0})\) and integrating them forward in time.

Figure 5.21: Intersection with the \(y=0\) surface of section of (_a_) the stable manifold, (_b_) the unstable manifold, and (_c_) the nonattracting chaotic invariant set (Bleher _et al._, 1990).

Then only those initial conditions yielding long delay times are plotted. We observe that the stable manifold intersection appears as smooth (and swirling) along one dimension with (poorly resolved) fine scale (presumably fractal) structure transverse to that direction.
Figure 5.21(_b_) shows a similar plot of the intersection of the unstable manifold with the \(y=0\) plane obtained by integrating initial conditions on the grid backwards in time and again plotting those initial conditions whose orbits remain in the scattering region for a long time. We see that the unstable manifold picture is a mirror image (through the line \(x_{0}=0\)) of the stable manifold picture. This is a result of the time reversal symmetry[5] of Eqs. (5.7) (they are invariant to the transformation \({\bf v}\rightarrow-{\bf v},\ t\rightarrow-t\)) and the symmetry of the potential (5.6). In particular this means that the stable and unstable manifolds have the same fractal dimension. Figure 5.21(_c_) shows the intersection of the chaotic invariant set with the plane \(y=0\). This picture is consistent with the invariant set being the intersection of its computed stable and unstable manifolds (i.e., the set shown in Figure 5.21(_c_) is the intersection of the sets shown in Figures 5.21(_a_) and (_b_)). Apparently, these intersections occur with angles bounded well away from zero. Hence, there appear to be no tangencies between the stable and unstable manifolds, thus supporting the idea that, in this case, the dynamics on the invariant set is hyperbolic. (See Bleher _et al_. (1990) for a description of how Figure 5.21(_c_) is numerically computed. This computation makes use of a numerical technique for obtaining unstable chaotic sets which is discussed and analyzed by Nusse and Yorke (1989); see also Hsu _et al_. (1988).) The existence of a Cantor set of singular \(b\) values for the scattering function implies that it will often be very difficult to obtain accurate values of the scattering angle if there are small random experimental errors in the specification of \(b\). 
This situation is similar to that which exists when there are fractal basin boundaries.[6] Indeed we can employ a modification of the uncertainty exponent technique of Section 5.2 to obtain the fractal dimension of the singular set. We observe that, for our example (Eq. (5.6) with \(E/E_{\rm m}=0.260\)), small perturbations about a singular \(b\) value can lead to either upward scattering (\(0\le\phi\le\pi\) as in Figure 5.20(_a_)) or downward scattering (\(-\pi\le\phi\le 0\) as in Figure 5.20(_b_)). Thus, we randomly choose many values of \(b\) in an interval containing the Cantor set. We then perturb each value by an amount \(\varepsilon\) and determine whether the scattering is upward or downward for each of the three impact parameter values, \(b-\varepsilon\), \(b\) and \(b+\varepsilon\). If all three scatter upward or all three scatter downward, we say that the \(b\) value is \(\varepsilon\) certain, and, if not, we say it is \(\varepsilon\) uncertain. We do this for several \(\varepsilon\) values and plot on a log log scale the fraction of uncertain \(b\) values \(\widetilde{f}(\varepsilon)\). The result is shown in Figure 5.22, which shows a good straight line fit to the data, indicating a power law dependence \(\widetilde{f}(\varepsilon)\sim\varepsilon^{\alpha}\). The exponent \(\alpha\) is related to the dimension of the set of singular \(b\) values by \[D_{0}=1-\alpha \tag{5.9}\] (i.e., Eq. (5.3b) with \(N=1\), corresponding to the fact that the initial conditions of the scattering function lie along a line in the three dimensional phase space). The straight line fit in Figure 5.22 yields a slope of \(\alpha=0.33\), corresponding to a fractal dimension of \(D_{0}=0.67\) for the scattering function of Figures 5.17(\(b\)) and 5.18. The dimension of the stable and unstable manifolds in the full three dimensional phase space is \(2+D_{0}\), and the dimension of their intersection (which is the dimension \(D_{\rm cs}\) of the chaotic set) is (cf. Eq.
(3.46)) \[D_{\rm cs}=2D_{0}+1. \tag{5.10}\] The dimension of the intersection of the chaotic set with the \(y=0\) plane\({}^{2}\) (i.e., the dimension of the set plotted in Figure 5.21(\(c\))) is \(2\,D_{0}\). In all of our discussion of chaotic scattering we have been concerned with a particular illustrative example of scattering in two degrees of freedom (i.e., the two spatial dimensions \(x\) and \(y\)). The phenomena we see are typical for two degree of freedom scattering. Other works on chaotic scattering have also tended to be for examples with two degrees of freedom. The possibility of new chaotic scattering phenomena in systems with more than two degrees of freedom remains largely unexplored. An exception is the paper of Chen _et al_. (1990b) who consider the question of whether the presence of a chaotic invariant set in the phase space implies a fractal set of singularities in a scattering function plot (i.e., a plot giving an after scattering variable as a function of a single before scattering variable). They find that, when the number of degrees of freedom is greater than 2, the scattering function typically does not exhibit fractal behavior, even when the invariant set is fractal and chaotic, unless the fractal dimension of the invariant set is large enough, \[D_{\rm cs}>2\,M-3, \tag{5.11}\] where \(M\) is the number of degrees of freedom. Since \(D_{\rm cs}\) for \(M=2\) is greater than 1 in the fractal case, we see that Eq. (5.11) is always satisfied in chaotic two degree of freedom potential scattering problems. In the case of three degree of freedom systems, however, we require \(D_{\rm cs}>3\). Chen _et al_. illustrate this numerically for the simple three dimensional scattering system consisting of four hard reflecting spheres of equal radii with centers located on the vertices of a regular tetrahedron, as illustrated in Figure 5.23. 
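Returning to the uncertainty exponent technique of Eq. (5.9), it is easy to try the procedure on a toy example where \(D_{0}\) is known exactly. The sketch below is an illustration of the method only, not the system of Eq. (5.6): the 'scattering' is the one dimensional map \(x\mapsto 3x\) for \(x<\frac{1}{2}\), \(x\mapsto 3x-2\) for \(x\ge\frac{1}{2}\), whose orbits exit either above 1 ('upward') or below 0 ('downward'); the set of \(b\) values that never exit is the middle third Cantor set, so the predicted exponent is \(\alpha=1-\ln 2/\ln 3\approx 0.37\).

```python
import math, random

def outcome(b, max_iter=200):
    # iterate the toy map; +1 = exits above 1, -1 = exits below 0
    x = b
    for _ in range(max_iter):
        x = 3.0*x if x < 0.5 else 3.0*x - 2.0
        if x > 1.0:
            return +1
        if x < 0.0:
            return -1
    return 0  # numerically stuck near the invariant Cantor set (very rare)

random.seed(1)
pts = []
for k in range(2, 7):                       # eps = 1e-2 ... 1e-6
    eps = 10.0**(-k)
    n, unc = 4000, 0
    for _ in range(n):
        b = random.uniform(0.0, 1.0)
        if len({outcome(b - eps), outcome(b), outcome(b + eps)}) > 1:
            unc += 1                        # b is eps-uncertain
    if unc:
        pts.append((math.log(eps), math.log(unc/n)))

# least-squares slope of ln f(eps) versus ln eps gives the exponent alpha
m = len(pts)
sx = sum(u for u, _ in pts); sy = sum(w for _, w in pts)
sxx = sum(u*u for u, _ in pts); sxy = sum(u*w for u, w in pts)
alpha = (m*sxy - sx*sy)/(m*sxx - sx*sx)
D0 = 1.0 - alpha                            # Eq. (5.9)
```

The fitted `alpha` comes out close to \(1-\ln 2/\ln 3\), and `D0` close to \(\ln 2/\ln 3\approx 0.63\); the same three-sample up/down test, applied to integrated scattering orbits, is what produces Figure 5.22.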
They show numerically that as the sphere radius increases, \(D_{\rm cs}\) increases from below 3 to above 3, and this is accompanied by the appearance of fractal behavior in typical scattering functions. At the end of Section 5.3 we discussed the fact that, when there are three or more basins, it is possible that the boundary separating them may be a Wada boundary: any point on the boundary of one of the basins is also a boundary point for each of the other basins. This property can apply, not only to basins of attraction for different attractors, but also to chaotic scattering (see Problem 4 at the end of this chapter). For this to be so we must first have some way of identifying basins in chaotic scattering problems. To do this we first recall that, after an orbit enters the scattering region of a chaotic scatterer, it bounces around chaotically and then exits the scattering region. It is not uncommon in such problems that it is possible to identify several distinct ways that the orbit might exit the scattering region. As an example, consider the scatterer in Figure 5.23 in the case where the diameter of the spheres is the same as the tetrahedron edge length. That is, the spheres are touching. Considering the part of the interior of the tetrahedron that is exterior to the spheres as the scattering region, we see that there are four distinct ways that the scattering region can be exited. In particular, exit may occur by the particle passing through any one of the four tetrahedron sides. A picture showing the type of basin structure this results in is shown on the cover of this book. That picture is created from a physical experiment in which the four balls have mirrored surfaces, and light rays play the role of particle orbits (Sweet _et al._, 1999).

Figure 5.23: Scatterer consisting of four equal radii hard spheres with centers located on the vertices of a regular tetrahedron.
Since both light rays and point particles travel in straight lines and experience specular reflection from the sphere surfaces, these mechanical and optical problems are equivalent. To create the photograph, white, red and blue paper is placed to block three of the exits from the scatterer. These are illuminated from behind in a darkened room. A photograph is then taken with the camera looking in from the fourth, uncovered, exit. The red regions are then due to light originating from the red paper, bouncing from the spheres, perhaps many times, and then entering the camera aperture, and similarly for the white and blue regions. The black regions correspond to rays that enter the scattering region from the room (had the lights in the room been on, a reflected image of the camera would have been visible in the black regions). To every ray path there corresponds another ray traversing the same path but in the opposite direction. Thus, we can interpret the picture on the cover as a map specifying how rays originating at the camera aperture eventually exit the scatterer. Furthermore, it is claimed that the fractal boundary between these four colors is a Wada boundary.

### 5.6 The dimensions of nonattracting chaotic sets and their stable and unstable manifolds

We have seen in Chapter 4 that there is an apparent relationship between the Lyapunov exponents of a chaotic attractor and its information dimension (Eqs. (4.36)-(4.38)). In this section we will show that the same is true for nonattracting chaotic invariant sets of the type that arise in chaotic transients, fractal basin boundaries, and chaotic scattering (Kantz and Grassberger, 1985; Bohr and Rand, 1987; Hsu _et al._, 1988). In particular, we treat the case of a smooth two dimensional map **M**(**x**) which has a nonattracting invariant chaotic set \(\Lambda\). In Figure 5.24 we schematically picture the invariant set as being the intersection of stable and unstable manifolds.
Let \(B\), also shown in the figure, be a compact set containing the invariant set such that under the action of **M** almost all points (with respect to Lebesgue measure) eventually leave \(B\) and never return. The only initial conditions that generate forward orbits which remain forever in \(B\) are those which lie on the invariant set and its stable manifold. Thus, part of \(B\) must map out of \(B\), and the Lebesgue measure (area) which remains in \(B\) is decaying. Say we randomly sprinkle \(N(0)\) points uniformly in \(B\). After iterating these points for \(t\) iterates, only \(N(t)<N(0)\) have not yet left. The average decay time (see Eq. (5.1)) is \[\frac{1}{\langle\tau\rangle}=\lim_{t\to\infty}\frac{1}{t}\lim_{N(0)\to\infty}\ln[N(0)/N(t)]. \tag{5.12}\] Lyapunov exponent approximations can be calculated for any finite time \(t\) by taking the \(N(t)\) orbits still in the box \(B\) and using their initial conditions to calculate exponents over the time interval \(0\le n\le t\). Averaging these over the \(N(t)\) initial conditions, letting \(N(0)\) approach infinity, and then letting \(t\) approach infinity, we obtain the Lyapunov exponents with respect to the natural measure on the nonattracting chaotic invariant set (this measure is defined subsequently). Following the notation of Eqs. (4.32), \[\bar{h}_{t}=\frac{1}{t}\frac{1}{N(t)}\sum_{i=1}^{N(t)}\ln\lvert\mathbf{DM}^{t}(\mathbf{x}_{0i})\mathbf{\cdot u}_{0i}\rvert,\] \[h=\lim_{t\to\infty}\lim_{N(0)\to\infty}\bar{h}_{t},\] and \(h\) has two possible values \(h_{1}>0>h_{2}\) which depend on the \(\mathbf{u}_{0i}\). In particular, assuming hyperbolicity, \(h=h_{2}\) if the \(\mathbf{u}_{0i}\) are tangent to the stable manifold at \(\mathbf{x}_{0i}\), and \(h=h_{1}\) when the directions of the \(\mathbf{u}_{0i}\) are chosen randomly.

Figure 5.24: Schematic of the nonattracting chaotic invariant set.
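The sprinkling procedure behind Eq. (5.12) can be tried directly. The sketch below is a rough illustration, not the computation of Hsu _et al._: it uses the Hénon map, written here as \(x_{n+1}=A-x_{n}^{2}+By_{n}\), \(y_{n+1}=x_{n}\) (the form assumed for Eqs. (1.14)), with the nonattracting case \(A=1.6\), \(B=0.3\) discussed later in this section; the box \(B=[-3,3]\times[-3,3]\) is an assumption taken large enough to contain the invariant set.

```python
import math, random

def henon(x, y, A=1.6, B=0.3):
    # Henon map, written here as x' = A - x^2 + B*y, y' = x
    return A - x*x + B*y, x

random.seed(2)
N0 = 30000
L = 3.0                         # box B = [-L, L] x [-L, L], assumed to contain the set
alive = [(random.uniform(-L, L), random.uniform(-L, L)) for _ in range(N0)]

T = 8
counts = [N0]                   # counts[t] = N(t), points that have never left B
for t in range(T):
    alive = [henon(x, y) for x, y in alive]
    alive = [(x, y) for x, y in alive if abs(x) <= L and abs(y) <= L]
    counts.append(len(alive))

# finite-time estimate of the average decay time, Eq. (5.12)
tau = T / math.log(counts[0] / counts[T])
```

The surviving points in `alive` at time \(T\) are exactly the points used in the finite-time Lyapunov averages above: iterating their initial conditions with the Jacobian of the map along the orbit gives \(\bar{h}_{t}\).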
We now define natural measures on the stable manifold, on the unstable manifold, and on the invariant set \(\Lambda\) itself. We denote these measures by \(\mu_{\mathrm{s}}\), \(\mu_{\mathrm{u}}\) and \(\mu\), respectively. We then heuristically relate the information dimensions of these measures to the quantities \(h_{1}\), \(h_{2}\) and \(\langle\tau\rangle\). We define the stable manifold natural measure of a set \(A\) to be \[\mu_{\mathrm{s}}(A)=\lim_{t\to\infty}\lim_{N(0)\to\infty}N_{\mathrm{s}}(A,\,t)/N(t), \tag{5.13}\] where \(N_{\mathrm{s}}(A,\,t)\) is the number of the remaining \(N(t)\) orbit points whose initial conditions lie in the set \(A\). For large but finite time \(t\) the \(N(t)\) orbit points still in \(B\) are arranged in narrow strips of width of order \(\exp(h_{2}t)\) along the unstable manifold, running horizontally the full length across the box \(B\) (see Figure 5.24). Iterating these orbits backward \(t\) iterates to see the initial conditions that they came from, we find that these initial conditions are arranged in narrow strips of width of order \(\exp(-h_{1}t)\) along the stable manifold, running vertically the full height across \(B\). (To see this, it is useful to think of the example of the horseshoe map and to associate our box \(B\) with the set \(S\) in Figure 4.1. In particular, refer to Figures 4.2\((d)\) and \((e)\).) The projection of this set of initial conditions along the stable manifold onto a horizontal line is a fattened Cantor like set of intervals of size \(\varepsilon\sim\exp(-h_{1}t)\). Let \(1+d_{\mathrm{s}}\) be the information dimension of the measure \(\mu_{\mathrm{s}}\). There will be of the order of \(\varepsilon^{-d_{\mathrm{s}}}\) intervals. Hence, the total length occupied by the fattened Cantor set is of the order of \(\varepsilon^{1-d_{\mathrm{s}}}\).
This length is proportional to the fraction of the \(N(0)\) initial conditions that have not yet left \(B\), \(\varepsilon^{1-d_{\mathrm{s}}}\sim N(t)/N(0)\sim\exp(-t/\langle\tau\rangle)\). Thus, \[\exp[-th_{1}(1-d_{\rm s})]\sim\exp(-t/\langle\tau\rangle).\] Hence, taking logarithms we obtain the following formula for the dimension of the stable manifold, \[d_{\rm s}=1-\frac{1}{\langle\tau\rangle h_{1}}. \tag{5.14}\] For the unstable manifold we iterate \(t\) times and consider the image of the points remaining in \(B\). This image, as we have said, consists of horizontal strips along the unstable manifold. These strips have vertical widths of the order of \(\varepsilon\sim\exp(h_{2}t)\). We define the natural measure \(\mu_{\rm u}\) on the unstable manifold as \[\mu_{\rm u}(A)=\lim_{t\to\infty}\lim_{N(0)\to\infty}N_{\rm u}(A,\ t)/N(t), \tag{5.15}\] where \(N_{\rm u}(A,\ t)\) is the number of orbit points in \(A\cap B\) at time \(t\). The density of points in the horizontal strips is larger than the density of the original sprinkling of the \(N(0)\) points at \(t=0\) by the factor \(\exp[-(h_{1}+h_{2})t]\) if the map is area contracting (\(h_{1}+h_{2}<0\) for contraction; see Figure 4.16). Thus, letting \((1+d_{\rm u})\) denote the information dimension of the measure \(\mu_{\rm u}\), we have that the fraction of points remaining in \(B\) is roughly \[\frac{\varepsilon^{1-d_{\rm u}}}{\exp[(h_{1}+h_{2})t]}\sim\exp(-t/\langle\tau\rangle),\] which yields \[d_{\rm u}=\frac{h_{1}-1/\langle\tau\rangle}{|h_{2}|}. \tag{5.16}\] To define the natural measure \(\mu\) on the chaotic invariant set itself, we first pick a number \(\xi\) in the range \(0<\xi<1\) (e.g., we might choose \(\xi=\frac{1}{2}\)). We then have \[\mu(A)=\lim_{t\to\infty}\lim_{N(0)\to\infty}N_{\xi}(A,\ t)/N(t), \tag{5.17}\] where \(N_{\xi}(A,\ t)\) is the number of orbit points that are in \(A\cap B\) at time \(\xi t\) and have not yet left \(B\) at time \(t\).
For \(t\) and \(N(0)\) large, trajectories that remain in \(B\) stay near the invariant set for most of the time between zero and \(t\), except at the beginning when they are approaching the invariant set along the stable manifold, and at the end when they are exiting along the unstable manifold. Thus, the measure \(\mu\) defined in Eq. (5.17) is expected to be independent of \(\xi\), as long as \(0<\xi<1\) (note that Eq. (5.17) gives \(\mu_{\rm s}\) if \(\xi=0\) and \(\mu_{\rm u}\) if \(\xi=1\)). Since the invariant set is the intersection of its stable and unstable manifolds, we conclude from Eqs. (5.14) and (5.16) that the information dimension of \(\mu\) is \[d=d_{\rm s}+d_{\rm u}=\biggl{(}h_{1}-\frac{1}{\langle\tau\rangle}\biggr{)}\biggl{(}\frac{1}{h_{1}}-\frac{1}{h_{2}}\biggr{)}. \tag{5.18}\] Note that for an attractor \(\langle\tau\rangle=\infty\) and Eq. (5.18) reduces to the Kaplan Yorke result Eq. (4.38). Note also that in an area preserving map (as would result for a surface of section for a conservative system) \(h_{1}+h_{2}=0\). Thus, Eqs. (5.14) and (5.16) give \(d_{\rm u}=d_{\rm s}=d/2=1-1/(\langle\tau\rangle h_{1})\). This is the case, for example, for chaotic scattering (Figure 5.21). Comparing Eq. (5.18) with Young's formula Eq. (4.46), we conclude that the metric entropy of the measure \(\mu\) is \[h(\mu)=h_{1}-\frac{1}{\langle\tau\rangle}. \tag{5.19}\] The derivations of the dimension formulae Eqs. (5.14), (5.16) and (5.18) given above are heuristic. Thus, it is worthwhile to test them numerically. This has been done for all three formulae using the Hénon map Eqs. (1.14) in the paper of Hsu _et al_. (1988), and for (5.18) by Kantz and Grassberger (1985). The case \(A=1.4\) and \(B=0.3\) studied by Hénon (Figure 1.12) has a strange attractor. Increasing \(A\) slightly, it is observed that there is no bounded attractor, and almost all initial conditions with respect to Lebesgue measure generate orbits which go to infinity.
In this case there is a nonattracting chaotic invariant set. Numerical experiments checking the formulae for \(d_{\rm s}\), \(d_{\rm u}\) and \(d\) using \(A=1.6\), \(B=0.3\) and \(A=3.0\), \(B=0.3\) were performed and yielded data which tended to support the formulae. Figure 5.25 from Hsu _et al_. (1988) shows the nonattracting chaotic invariant set, its stable manifold, and its unstable manifold for \(A=1.6\) and \(B=0.3\).

Figure 5.25: (\(a\)) The invariant set for the Hénon map with \(A=1.6\) and \(B=0.3\). (\(b\)) A magnification of the invariant set in the small rectangle shown in (\(a\)). (\(c\)) The stable manifold. (\(d\)) The unstable manifold. (Hsu _et al_., 1988.)

Finally, we note an interesting application of these ideas in the context of an experiment in fluid dynamics (Sommerer _et al._, 1996). In this experiment a long vertically oriented cylinder is translated at uniform velocity through water in a large tank. A salinity gradient is used to generate a vertical density profile that suppresses fluid motion in the vertical direction. Thus, the fluid velocity is essentially two dimensional (horizontal). At the velocity at which the cylinder is dragged through the water it is found that, in a frame co moving with the cylinder, the resulting flow is periodic in time. In this periodic flow a vortex is formed on one side of the cylinder and is subsequently convected downstream from the cylinder; following this a vortex forms on the other side of the cylinder and then moves downstream in the same way; see Figure 5.26. The cycle then repeats, establishing a so called von Kármán vortex street in the wake of the cylinder. Thus, if we sample the fluid at the time period of the flow, we have a two dimensional map for the evolution of the position of a fluid particle. This map is area preserving due to the incompressibility of the water. In the experiment a patch of dye is initially situated upstream from the cylinder, and the cylinder is dragged through it.
In a fixed viewing region behind the cylinder (and moving with it), as time proceeds, the dye pattern is observed to concentrate on more and more, thinner and thinner filaments. In addition, the total amount of dye in the viewing region decreases exponentially with time as dye is swept downstream and out of the region. The interpretation is that there is a nonattracting chaotic set behind the cylinder, and the region with dye is contracting onto the unstable manifold of this set. Sommerer _et al_. verify this interpretation by measuring the value of the Lyapunov exponent, the exponential decay time of the dye in the viewing region behind the cylinder, and the fractal dimension of the dye pattern filaments at long time. Inserting \(\langle\tau\rangle\) and \(h_{1}\) in Eq. (5.16) (\(|h_{2}|=h_{1}\) by incompressibility), it is found that the resulting value for \(d_{\rm u}\) is consistent with the measured value of the dimension of the dye pattern.

Figure 5.26: Schematic diagram of the experimental setup in the experiment of Sommerer _et al._ (1996).

### 5.7 Riddled basins of attraction

In Sections 5.1 to 5.3 we discussed fractal basin boundaries. In this section we discuss a situation where the basin structure is of a particularly bizarre type, namely a 'riddled' basin (Alexander _et al_., 1992). In order to define a riddled basin we first need to be careful about how we define the word attractor. For the purposes of this section we say a set is an attractor if it is the limit set of orbits from a set of initial conditions (its basin) whose volume in phase space is nonzero (i.e., the set of these initial conditions has positive Lebesgue measure; see the appendix to Chapter 2 for a definition of Lebesgue measure in the case of a one dimensional phase space).
This definition differs from another commonly used definition which states that a set is an attractor if it has a neighborhood such that orbits from all initial conditions in this neighborhood limit on the set. As will become evident, an attractor whose basin is riddled is an attractor by the first definition but not by the second definition. By a riddled basin we mean the following. Say the system has two attractors \(A\) and \(C\) with basins \(\hat{A}\) and \(\hat{C}\), respectively. Let \(B_{\epsilon}(p)\) be an \(\epsilon\) radius ball centered at the phase space point \(p\). The basin \(\hat{A}\) is riddled if, for every point \(p\) in \(\hat{A}\) and every \(\epsilon>0\), the intersection \(B_{\epsilon}(p)\cap\hat{C}\) has positive volume. This results in what might be called an extreme obstruction to determinism. In particular, assume that someone does an experiment in which an initial condition is prepared at point \(p\), and the subsequent evolution is toward attractor \(A\). Furthermore, assume that there is uncertainty \(\epsilon\) in the preparation of the initial condition in the sense that, when we say the initial condition is at \(p\), it truly can be anywhere in \(B_{\epsilon}(p)\). If the basin \(\hat{A}\) is riddled, then, when we attempt to repeat the experiment by preparing the same initial condition as before (to within \(\epsilon\)), we can never be sure that the subsequent orbit goes to \(A\) rather than to \(C\). Furthermore, this is the case for _every_ point in \(\hat{A}\) and no matter how small \(\epsilon\) is. This is in contrast to the situation discussed in Sections 5.1 and 5.3, where such uncertainty only results if the initial condition \(p\) is within \(\epsilon\) of the boundary, and the fraction of the basin's volume within \(\epsilon\) of the boundary goes to zero as \(\epsilon\to 0\). Riddling means that the set \(\hat{A}\) and its boundary set are the same. The riddling situation is illustrated schematically in Figure 5.27.
(Note also that in any neighborhood of \(A\) there must be points that go to \(C\), and hence \(A\) fails the definition requiring an attractor to have a neighborhood of attracted points.) Riddled basins only occur in special types of systems. In particular, to have an attractor with a riddled basin, the system should possess an invariant smooth hypersurface (i.e., an invariant 'manifold') on which the riddled basin attractor is located (see Figure 5.27). By invariant we mean that the orbit from any initial condition in the hypersurface remains in the hypersurface. For a system of first order autonomous ordinary differential equations, the hypersurface must be at least three dimensional in order to contain a chaotic attractor. Since the phase space must be of higher dimension than the hypersurface, in order for a riddled basin to be possible, the minimum dimensionality of a system of first order autonomous ordinary differential equations is four. Correspondingly, the minimum dimensionality for riddling in the case of an invertible map is three, and for a noninvertible map the minimum dimensionality for riddling is two. While invariant hypersurfaces are not expected in generic systems, there are still practically interesting situations where such systems arise. One example is where a system has a symmetry. For example, Sommerer and Ott (1993b) consider the motion of a point particle of unit mass in two dimensions \({\bf r}=(x,\,y)\), where the particle is subject to a potential \(V(x,\,y)\) (yielding a force \(-\nabla V\)), an applied oscillatory \(x\) directed force, \({\bf f}=f_{0}\sin(\omega t){\bf x}_{0}\), and a viscous friction force \(-v\,{\rm d}{\bf r}/{\rm d}t\), \[{\rm d}^{2}{\bf r}/{\rm d}t^{2}=-\nabla V+f_{0}\sin\omega t\ {\bf x}_{0}-v\,{\rm d}{\bf r}/{\rm d}t. \tag{5.20}\]

Figure 5.27: Schematic diagram illustrating an attractor \(A\) with a riddled basin of attraction. The basin of \(C\) need not be riddled.
That is, there are points \(p\) in \(\hat{C}\) such that \(B_{\epsilon}(p)\cap\hat{A}\) is empty. The potential was taken to have even symmetry in \(y\), \(V(x,\,y)=V(x,\,-y)\). Thus the \(y\) component of the potential force, \(-\partial V/\partial y\), is zero if \(y=0\). Consequently, any initial condition with \(y=\mathrm{d}y/\mathrm{d}t=0\) generates an orbit with \(y=\mathrm{d}y/\mathrm{d}t=0\) for all time. Thus, the hypersurface \(y=\mathrm{d}y/\mathrm{d}t=0\) is invariant, and, for the particular example investigated by Sommerer and Ott, a riddled basin is found. Another situation where invariant hypersurfaces occur is that of identical coupled chaotic oscillators. Say oscillator \(a\) is described by \(\mathrm{d}\mathbf{x}_{a}/\mathrm{d}t=\mathbf{F}(\mathbf{x}_{a})\) and oscillator \(b\) is described by \(\mathrm{d}\mathbf{x}_{b}/\mathrm{d}t=\mathbf{F}(\mathbf{x}_{b})\). Note that the \(a\) equation and the \(b\) equation both employ the same function \(\mathbf{F}\) on the right hand side. Although the orbits \(\mathbf{x}_{a}(t)\) and \(\mathbf{x}_{b}(t)\) may be on the same chaotic attractor in their respective phase spaces, at any given time \(\mathbf{x}_{a}(t)\neq\mathbf{x}_{b}(t)\) in general. Furthermore, if initially \(\mathbf{x}_{a}(0)\) and \(\mathbf{x}_{b}(0)\) are close, due to chaos their subsequent orbits will move far apart. Can we couple some output of the \(a\) oscillator to the \(b\) oscillator in such a way as to promote \(\mathbf{x}_{a}(t)=\mathbf{x}_{b}(t)\)? That is, can we _synchronize_ the chaotic motions of the two systems? Say the dimension of \(\mathbf{x}_{a}\) (and also \(\mathbf{x}_{b}\)) is \(N\). Then the coupled system is of dimension \(2N\), and the synchronized state \(\mathbf{x}_{a}(t)=\mathbf{x}_{b}(t)\) represents an \(N\) dimensional hypersurface embedded in the \(2N\) dimensional state space of the coupled system.
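A minimal illustration of such an invariant synchronization hypersurface (a sketch, not an example from the text) couples two identical logistic maps symmetrically: \(x_{n+1}=(1-c)f(x_{n})+cf(y_{n})\), \(y_{n+1}=(1-c)f(y_{n})+cf(x_{n})\) with \(f(u)=4u(1-u)\). The diagonal \(x=y\) is invariant, and a transverse perturbation is multiplied by \((1-2c)f'(x_{n})\) at each step; since \(|f'|\le 4\) on \([0,1]\), choosing \(c=0.4\) makes \(|(1-2c)f'|\le 0.8<1\), so the two chaotic units synchronize.

```python
def f(u):
    # an individual chaotic unit: the logistic map at r = 4
    return 4.0*u*(1.0 - u)

def step(x, y, c):
    # symmetric coupling; the hypersurface x = y is invariant
    fx, fy = f(x), f(y)
    return (1.0 - c)*fx + c*fy, (1.0 - c)*fy + c*fx

x, y, c = 0.3, 0.6, 0.4
for n in range(200):
    x, y = step(x, y, c)
# |x - y| contracts by at least a factor 0.8 per iterate
```

For this strong coupling the contraction toward \(x=y\) is uniform, so synchronization occurs from any initial condition in \([0,1]^{2}\); for weaker coupling the synchronized state can lose transverse stability in the nonuniform way that makes riddled basins possible.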
The problem of synchronized chaotic systems has generated much interest recently due to its potential relevance in a variety of applications, notably in schemes for communication using chaotic signals. See Chapter 10 for a more extensive discussion of synchronization of chaotic systems. We now illustrate, by use of a simple example, the dynamical processes leading to a riddled basin of attraction. We consider a two dimensional noninvertible map which we specify to have the following form in the region \(0\le y\le 1\) (Ott _et al._, 1994), \[x_{n+1}=M(x_{n})=\left\{\begin{array}{ll}x_{n}/\alpha&\text{for }0\le x_{n}<\alpha,\\ (x_{n}-\alpha)/\beta&\text{for }\alpha\le x_{n}\le 1,\end{array}\right. \tag{5.21a}\] \[y_{n+1}=\left\{\begin{array}{ll}2y_{n}&\text{for }0\le x_{n}<\alpha,\\ y_{n}/2&\text{for }\alpha\le x_{n}\le 1,\end{array}\right. \tag{5.21b}\] where \(\alpha+\beta=1\), \(0<\alpha<\frac{1}{2}<\beta<1\). For a random choice of \(x_{0}\) in [0, 1], Eq. (5.21a) generates an orbit with uniform natural measure in \(0\le x\le 1\). Such an orbit spends a fraction \(\alpha\) of its time in \(0\le x_{n}<\alpha\) and a fraction \(\beta\) of its time in \(\alpha\le x_{n}\le 1\). If \(y_{0}=0\) initially, then, by (5.21b), \(y\) remains zero for all time. Hence, \(y=0\) is invariant, and it contains a chaotic set. This set is an attractor if its Lyapunov exponent in \(y\), denoted \(h_{\perp}\), is negative. Since the natural measure is uniform in \(x\), \[h_{\perp}=\alpha\ln 2+\beta\ln\left(\tfrac{1}{2}\right)=-(\beta-\alpha)\ln 2<0,\] where \(h_{\perp}<0\) follows from our assumption that \(\alpha<\beta\). Equations (5.21) specify the map in \(0\le y\le 1\). For \(y>1\) we assume the map has some other form such that there is another attractor in \(y>1\) which is approached by all orbits in \(y>1\). We now examine the basin structure in the square \(0\le x\le 1\), \(0\le y\le 1\). Refer to Figure 5.28.
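The transverse exponent \(h_{\perp}\) computed above can be checked with a short simulation. In the sketch below, \(\alpha=0.3\), \(\beta=0.7\) are sample values (an assumption) satisfying \(\alpha+\beta=1\), \(\alpha<\frac{1}{2}<\beta\).

```python
import math, random

alpha, beta = 0.3, 0.7              # sample parameter values (assumed)

def x_step(x):
    # Eq. (5.21a)
    return x/alpha if x < alpha else (x - alpha)/beta

random.seed(3)
x = random.random()
n, s = 200000, 0.0
for _ in range(n):
    # Eq. (5.21b) multiplies y by 2 when x < alpha and by 1/2 otherwise,
    # so ln 2 or ln(1/2) is the one-step transverse stretching rate
    s += math.log(2.0) if x < alpha else math.log(0.5)
    x = x_step(x)
    if x >= 1.0:                    # guard: floating point can land exactly on
        x = random.random()         # the unstable fixed point at x = 1
h_perp = s/n
predicted = -(beta - alpha)*math.log(2.0)   # = -(0.4) ln 2, about -0.277
```

The time average `h_perp` agrees with \(-(\beta-\alpha)\ln 2\) because the numerical orbit of (5.21a) spends a fraction \(\alpha\) of its time in \(x<\alpha\), as stated above.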
The action of the map takes the cross hatched region labeled \(1\) to \(y>1\), implying that region \(1\) is in the \(y>1\) basin. Similarly, since on one application of the map, region \(2\rightarrow\) region \(1\), region \(3\to 2\), etc., we see that regions \(1\), \(2\), \(3\), \(4\), \(\ldots\), \(m\), \(\ldots\) are all part of the \(y>1\) basin. This sequence of regions, given by \(0<x<\alpha^{m}\) and \(2^{-(m-1)}>y>2^{-m}\) (\(m=1\), \(2\), \(3\), \(\ldots\)), extends downward in \(y\), limiting on the point \((x,\ y)=(0,\ 0)\) as \(m\to\infty\). As it does so, the width and height of each region decrease as \(\alpha^{m}\) and \((\frac{1}{2})^{m}\), respectively. But this is not all. The vertical line \(x=\alpha\) maps to \(x=0\), and to its right there is a sequence of regions mapping to the regions \(2\), \(3\), \(4\), \(\ldots\), namely region \(2^{\prime}\rightarrow\) region \(2\), region \(3^{\prime}\rightarrow\) region \(3\), etc. These regions are thus also part of the \(y>1\) basin. Furthermore, the sequence of regions \(2^{\prime}\), \(3^{\prime}\), \(4^{\prime}\), \(\ldots\) extends downward, limiting on the point \((x,\ y)=(\alpha,\ 0)\). Similarly, the vertical lines \(x=\alpha^{2}\) and \(x=\alpha+\alpha\beta\) map by (5.21a) to \(x=\alpha\). To the right of and adjacent to the lines \(x=\alpha^{2}\), \(\alpha+\alpha\beta\) there are sequences of regions mapping to regions \(2^{\prime}\), \(3^{\prime}\), \(4^{\prime}\), \(\ldots\), that extend downward limiting on \((x,\,y)=(\alpha^{2},\,0)\) and \((\alpha+\alpha\beta,\,0)\).

Figure 5.28: The basin structure for the two dimensional map (5.21).
Similarly, the two vertical lines \(x=\alpha^{3}\), \(\alpha+\alpha^{2}\beta\) both map to \(x=\alpha^{2}\), and the two vertical lines \(x=\alpha^{2}+\alpha^{2}\beta\), \(\alpha+\alpha\beta+\alpha\beta^{2}\) map to \(x=\alpha+\alpha\beta\); and \(x=\alpha^{3}\), \(\alpha^{2}+\alpha^{2}\beta\), \(\alpha+\alpha^{2}\beta\), \(\alpha+\alpha\beta+\alpha\beta^{2}\) similarly bound regions of the \(y>1\) basin extending down to \(y=0\) at \(x=\alpha^{3}\), \(\alpha^{2}+\alpha^{2}\beta\), \(\alpha+\alpha^{2}\beta\), \(\alpha+\alpha\beta+\alpha\beta^{2}\). Note that these four points are the second preiterates of \(x=0\) for the \(x\) map, (5.21a). Successively repeating this construction, we have similar regions of the \(y>1\) basin bounded on the left by the vertical lines \(x=x_{p,q}\) for \(p=1,\,\ldots,\,2^{q}\), where the \(x_{p,q}\) are the \(2^{q}\) values of \(x\) that iterate to \(x=0\) on \(q\) applications of the map (5.21a). Since this set of vertical lines is dense in the square \(0\le x\le 1\), \(0\le y\le 1\), we see that any initial condition going to the \(y=0\) attractor has points going to the \(y>1\) attractor arbitrarily nearby. Thus, the basin of the \(y=0\) attractor appears to be riddled. The only possible worry in the above argument is that, by successively removing all those regions that eventually map to \(y>1\), we may have removed all the area of the square \(0\le x\le 1\), \(0\le y\le 1\). In fact, this is not so, and there is still a positive Lebesgue measure set of points that go to the \(y=0\) attractor. In particular, consider a horizontal line \(y=Y\). We ask, what is the fraction of the length of this line corresponding to initial conditions going to the \(y>1\) attractor? We show below that this quantity scales as \(Y^{\eta}\), \(\eta>0\) (Ott _et al._, 1994).
Thus, the measure of the \(y=0\) attractor basin along a horizontal line \(y=Y\) approaches 1 as \(Y\to 0\), thus confirming that the basin of the \(y=0\) attractor has nonzero Lebesgue measure. To obtain this result let \((\frac{1}{2})^{m}<Y<(\frac{1}{2})^{m-1}\), and denote by \(S_{m}\) the set of \(x\) values on \(y=Y\) corresponding to orbits going to \(y>1\), and by \(P_{m}\) the length of \(S_{m}\). Further, let \(S_{m}=S_{m}^{\alpha}\cup S_{m}^{\beta}\), where \(S_{m}^{\alpha}\) and \(S_{m}^{\beta}\) are in \(x<\alpha\) and \(x\ge\alpha\), respectively. Noting that \(M(S_{m}^{\alpha})=S_{m-1}\) (\(y\) is multiplied by 2 for \(x\) in \(S_{m}^{\alpha}\)) and \(M(S_{m}^{\beta})=S_{m+1}\) (\(y\) is multiplied by \(\frac{1}{2}\) for \(x\) in \(S_{m}^{\beta}\)), we have \[P_{m}=\alpha P_{m-1}+\beta P_{m+1}.\] The solution to this equation is \(P_{m}=K(\alpha/\beta)^{m}\). Finally, since \(Y\sim 2^{-m}\), we obtain \[P_{m}\sim Y^{\eta}, \tag{5.22}\] where \(\eta=[\ln(\beta/\alpha)]/\ln 2\). As a further demonstration of a riddled basin, we consider the previously mentioned example of particle motion in two dimensions, Eq. (5.20). Sampling at times \(t_{n}=2\pi n/\omega\), we have a four dimensional map in the variables \(x_{n}\), \(y_{n}\), \(\upsilon_{xn}\), \(\upsilon_{yn}\), where \(\upsilon_{x}=\mathrm{d}x/\mathrm{d}t\), \(\upsilon_{y}=\mathrm{d}y/\mathrm{d}t\). For the example studied, \(v=\frac{1}{20}\), \(f_{0}=2.3\), \(\omega=3.5\) and \(V(x,\,y)=(1-x^{2})^{2}+(x+1.9)y^{2}\), and the system has two possible motions, one corresponding to \(|y|\to\infty\) (which we regard as an attractor) and the other an attractor corresponding to \(y\to 0\) with chaotic motion in \(x\). Scaling as in Eq. (5.22) also applies for this example. Figure 5.29 shows a log log plot of \(P_{*}(y)\), the fraction of \(x\) initial conditions on a line, \(y=\mbox{const.}\), \(\upsilon_{x}=\upsilon_{y}=0\), generating orbits going to \(|y|=\infty\).
The plot is well fitted by a straight line whose slope \(\eta\) agrees well with a theoretical value (Ott _et al._, 1994). ## Appendix: Derivation of Eqs. (5.3) Here we derive Eqs. (5.3). Let \(B(\varepsilon,\Sigma)\) be the set of points within \(\varepsilon\) of a closed bounded set \(\Sigma\) whose box counting dimension is \(D_{0}\) (cf. Grebogi _et al._, 1988a). We cover the region of the \(N\) dimensional space in which the set lies by a grid of spacing \(\varepsilon\). Each point \(x\) in \(\Sigma\) lies in a cube of the grid. Any point \(y\) within \(\varepsilon\) of \(x\) must therefore lie in one of the \(3^{N}\) cubes which are the original cube containing \(x\) or a cube touching the original cube. Thus, the volume of \(B(\varepsilon,\Sigma)\) satisfies \[\mbox{Vol}[B(\varepsilon,\Sigma)]\le 3^{N}\varepsilon^{N}\bar{N}(\varepsilon),\] where \(\bar{N}(\varepsilon)\) is the number of cubes needed to cover \(\Sigma\). Now, cover \(\Sigma\) with a grid of cubes of edge length \(\varepsilon/N^{1/2}\). Any two points within such a cube are separated by a distance of at most \(\varepsilon\). Thus, every cube of the grid used in covering \(\Sigma\) lies within \(B(\varepsilon,\,\Sigma)\), \[\text{Vol}[\,B(\varepsilon,\,\Sigma)]\ge(\varepsilon/N^{1/2})^{N}\,\bar{N}(\varepsilon/N^{1/2}).\] Hence, taking logarithms of these two bounds and dividing by \(\ln\varepsilon\) (which is negative for small \(\varepsilon\), reversing the inequalities), \[N\frac{\ln N^{-1/2}}{\ln\varepsilon}+\frac{\ln\bar{N}(\varepsilon/N^{1/2})}{\ln(\varepsilon/N^{1/2})+\ln N^{1/2}}+N\ge\frac{\ln\{\text{Vol}[\,B(\varepsilon,\,\Sigma)]\}}{\ln\varepsilon}\ge\frac{\ln 3^{N}}{\ln\varepsilon}+\frac{\ln\bar{N}(\varepsilon)}{\ln\varepsilon}+N.\] Letting \(\varepsilon\) approach zero and noting the definition of the box counting dimension \(D_{0}\), Eq.
(3.1), we have \[\lim_{\varepsilon\to 0}\frac{\ln\text{Vol}[\,B(\varepsilon,\,\Sigma)]}{\ln\varepsilon}=N-D_{0}.\] Since \(f(\varepsilon)\) is proportional to \(\text{Vol}[\,B(\varepsilon,\,\Sigma)]\), we have \[\lim_{\varepsilon\to 0}\frac{\ln f(\varepsilon)}{\ln\varepsilon}=N-D_{0}=\alpha,\] which we have abbreviated in Eq. (5.3a) as \(f(\varepsilon)\sim\varepsilon^{\alpha}\). ## Problems 1. Describe the basin boundary structure for the following systems. (_a_) The map given by Eq. (3.3) and Figure 3.3, regarding \(x=+\infty\) and \(x=-\infty\) as two attractors. (_b_) The map shown in Figure 5.30, where \(O\), \(U\) and \(W\) are unstable fixed points; \(B\) is an attracting fixed point; points in (\(O\), \(U\)) tend to \(B\); points in (\(-\infty\), \(0\)) tend to \(-\infty\); and we assume that \(-\infty\) and \(B\) are the only attractors. In particular, show for both (_a_) and (_b_) that regions of the basin boundary which are fractal are interwoven on arbitrarily fine scale with nonfractal regions of the boundary. 2. Consider the map shown in Figure 5.31, where the map function consists of straight lines in the regions \(x<a\), \(b<x<c\), \(d<x\), and \(a=\frac{1}{3}\), \(b=\frac{4}{3}\), \(c=\frac{5}{3}\), \(d=\frac{3}{3}\). (_a_) All initial conditions, except for a set of \(x\) values of Lebesgue measure zero, eventually approach either \(x=+\infty\) or \(x=-\infty\). Find the dimension of the boundary separating initial conditions that yield these two eventual behaviors. (_b_) If an initial condition is chosen at random with uniform probability density in \(0<x<1\), what is the probability that it will stay in \(0<x<1\) for three iterates or more? 3. Consider a sequence of points \(\lambda^{n}\) (\(n=0,\,1,\,2,\,\ldots\)) in \([0,\,1]\), where \(0<\lambda<1\).
Figure 5.30: The map for Problem 1(_b_).
Figure 5.31: Map for Problem 2.
Define two sets \(A\) and \(B\), where \(A=\bigcup_{n=0}^{\infty}[\lambda^{2n+1},\,\lambda^{2n}]\) and \(B=\bigcup_{n=1}^{\infty}[\lambda^{2n},\,\lambda^{2n-1}]\). Say we pick a point at random in \([0,\,1]\) and we specify that point to have an uncertainty \(\varepsilon\). Show that the probability \(f(\varepsilon)\) of a possible error in determining whether the randomly chosen point lies in \(A\) or in \(B\) is roughly given by \(f(\varepsilon)\sim K\varepsilon\log(1/\varepsilon)\). (_Note_: We define the symbol \(\sim\) so that this scaling is included in the statement \(f(\varepsilon)\sim\varepsilon\) by virtue of \(\lim_{\varepsilon\to 0}[(\log f(\varepsilon))/(\log\varepsilon)]=1\) for \(f(\varepsilon)=K\varepsilon\log(1/\varepsilon)\).) 4. Consider a particle which experiences two-dimensional free motion without friction between perfectly elastic bounces off three hard cylinders (Figure 5.32). At each bounce the angle of incidence is equal to the angle of reflection. Consider initial conditions on the \(y\)-axis in the range \(-K<y<K\) (\(K>r_{0}\)) directed parallel to the \(x\)-axis to the right. Argue that the set of initial conditions that bounce forever between the cylinders is a Cantor set. To do this consider the set of initial conditions that experience at least one bounce, the set that experiences at least two bounces, three bounces, etc. Argue that the set experiencing at least \(n\) bounces consists of \(2^{n-1}\) small intervals, and that each of these contains two of the smaller intervals of initial conditions which experience at least \(n+1\) bounces. Argue that this provides an example of a Wada basin boundary if we identify three basins in the following way. Consider the triangle formed by joining the centers of the circles.
If an orbit point is located in the triangle, and subsequently leaves the triangle by crossing one of its sides, then the orbit never returns. Thus, we can define three basins for initial conditions in the triangle by regarding two points to be in the same basin if orbits from these points leave the triangle through the same side. 5. Consider the situation given in Problem 12 of Chapter 4, but suppose that the dynamics is changed so that slab A in Figure 4.29(_a_) is mapped out of the basic unit cube (\(0\le x\le 1\), \(0\le y\le 1\), \(0\le z\le 1\)), but slabs B and C in Figure 4.29(_a_) map to B' and C' in Figure 4.29(_b_) (as before). (_a_) If a very large number of initial conditions are sprinkled uniformly in the basic cube, then, as these are iterated, the number of orbits \(N(n)\) that have never left the unit cube decays exponentially with the number \(n\) of iterates as \(N(n)\sim\exp(-n/\tau)\). What is \(\tau\)? (_b_) What are the three Lyapunov exponents for the natural measure of the invariant set that never leaves the unit cube? ## Notes 1. The horseshoe map specifies the dynamics of points in the square \(S\). The fate of orbits that leave the square depends on the dynamics that orbits experience outside the square. For example, they may be attracted to some periodic attractor outside the square, or the dynamics may be such that orbits which leave the square are eventually fed back into the square. In the latter case the invariant set of the horseshoe map may be embedded in some larger chaotic invariant set that forms a chaotic attractor. In the former case initial conditions near the invariant set of the horseshoe will experience a chaotic transient before settling down to motion on the attracting periodic orbit. 2. In Eq.
(3.46) we have stated that the dimension \(d_{0}\) of the intersection of two smooth surfaces (these are nonfractal sets) of dimensions \(d_{1}\) and \(d_{2}\) is generically \(d_{0}=d_{1}+d_{2}-N\) where \(N\) is the dimension of the Cartesian space in which these sets lie. Mattila (1975) proves that this equation holds fairly generally for the case where set 1 is a plane of dimension \(d_{1}\), set 2 is a fractal of Hausdorff dimension \(d_{2}\), and \(d_{0}\) is the Hausdorff dimension of the intersection. The concept of Hausdorff dimension is discussed in the appendix to Chapter 3, but we note here that it is very closely related to the box-counting dimension and, based on examples (Pelikan, 1985), it is believed that the box-counting dimension and the Hausdorff dimension of fractal basin boundaries are typically the same. 3. The construction for the one-dimensional map of Figure 5.6(_a_) is analogous to the construction here. In particular, compare Figures 5.6(_b_) and 5.11. 4. For reviews dealing with chaotic scattering see Eckhardt (1988a), Smilansky (1992), and Ott and Tel (1993). 5. Time reversal symmetry can be absent for other problems of Hamiltonian mechanics (e.g., for charged particle motion in a static magnetic field). As in the case of fractal basin boundaries, the difficulty in making a determination (in this case, a determination of the scattering angle) increases with the fractal dimension. The worst case is attained when the scattering is nonhyperbolic. In that situation orbits entering the scattering region can stick for a long time near a hierarchy of bounding KAM surfaces (Meiss and Ott, 1985) and this leads to a very complicated behavior. (KAM surfaces are discussed in Chapter 7.) Lau _et al._ (1991) show that in this case the dimension of the set of values on which the scattering function is singular is one, in spite of the fact that this set has zero Lebesgue measure. 
## Chapter 6 Quasiperiodicity ### 6.1 Frequency spectrum and attractors In Chapter 1 we introduced three kinds of dynamical motions for continuous time systems: steady states (as in Figure 1.10(_a_)), periodic motion (as in Figure 1.10(_b_)), and chaotic motion (as in Figure 1.2). In addition to these three, there is another type of dynamical motion that is common; namely, _quasiperiodic_ motion. Quasiperiodic motion is especially important in Hamiltonian systems where it plays a central role (see Chapter 7). Furthermore, in dissipative systems quasiperiodic _attracting_ motions frequently occur. Let us contrast quasiperiodic motion with periodic motion. Say we have a system of differential equations with a limit cycle attractor (Figure 1.10(_b_)). For orbits on the attractor, a dynamical variable, call it \(f(t)\), will vary periodically with time. This means that there is some smallest time \(T>0\) (the period) such that \(f(t)=f(t+T)\). Correspondingly, the Fourier transform of \(f(t)\), \[\hat{f}(\omega)=\int_{-\infty}^{\infty}f(t)\exp(\mathrm{i}\omega t)\,\mathrm{d}t, \tag{6.1}\] consists of delta function spikes of varying strength located at integer multiples of the fundamental frequency \(\Omega=2\pi/T\), \[\hat{f}(\omega)=2\pi\sum_{n}a_{n}\delta(\omega-n\Omega). \tag{6.2}\] Basically, quasiperiodic motion can be thought of as a mixture of periodic motions of several different fundamental frequencies. We speak of \(N\) frequency quasiperiodicity when the number of fundamental frequencies that are 'mixed' is \(N\). In the case of \(N\) frequency quasiperiodic motion a dynamical variable \(f(t)\) can be represented in terms of a function of \(N\) independent variables, \(G(t_{1},\,t_{2},\,\dots,\,t_{N})\), such that \(G\) is periodic in each of its \(N\) independent variables.
That is, \[G(t_{1},\,t_{2},\,\dots,\,t_{i}+T_{i},\,\dots,\,t_{N})=G(t_{1},\,t_{2},\,\dots,\,t_{i},\,\dots,\,t_{N}), \tag{6.3}\] where, for each of the \(N\) variables, there is a period \(T_{i}\). Furthermore, the \(N\) frequencies \(\Omega_{i}=2\pi/T_{i}\) are _incommensurate_. This means that no one of the frequencies \(\Omega_{i}\) can be expressed as a linear combination of the others using coefficients that are rational numbers. In particular, a relation of the form \[m_{1}\Omega_{1}+m_{2}\Omega_{2}+\dots+m_{N}\Omega_{N}=0 \tag{6.4}\] does not hold for _any_ set of integers, \(m_{1}\), \(m_{2}\),..., \(m_{N}\) (negative integers are allowed), except for the trivial solution \(m_{1}=m_{2}=\dots=m_{N}=0\). In terms of the function \(G\), an \(N\) frequency quasiperiodic dynamical variable \(f(t)\) can be represented as \[f(t)=G(t,\,t,\,t,\,\dots,\,t). \tag{6.5}\] That is, \(f\) is obtained from \(G\) by setting all its \(N\) variables equal to \(t\); \(t_{1}=t_{2}=\dots=t_{N}=t\). Due to the periodicity of \(G\), it can be represented as an \(N\) tuple Fourier series of the form \[G=\sum_{n_{1},n_{2},\ldots,n_{N}}a_{n_{1}n_{2}\cdots n_{N}}\exp[\mathrm{i}(n_{1}\Omega_{1}t_{1}+n_{2}\Omega_{2}t_{2}+\dots+n_{N}\Omega_{N}t_{N})].\] Thus, setting \(t=t_{1}=t_{2}=\dots=t_{N}\) and taking the Fourier transform, we obtain \[\hat{f}(\omega)=2\pi\sum_{n_{1},n_{2},\ldots,n_{N}}a_{n_{1}n_{2}\cdots n_{N}}\,\delta(\omega-(n_{1}\Omega_{1}+n_{2}\Omega_{2}+\dots+n_{N}\Omega_{N})). \tag{6.6}\] Hence the Fourier transform \(\hat{f}(\omega)\) of a dynamical variable consists of delta functions at all integer linear combinations of the \(N\) fundamental frequencies \(\Omega_{1}\),..., \(\Omega_{N}\).
Figure 6.1 shows the magnitude squared of the Fourier transform (i.e., the frequency power spectrum) of a dynamical variable for three experimental situations: (_a_) a case with a limit cycle attractor, (_b_) a case with a two frequency quasiperiodic attractor, and (_c_) a case with a chaotic attractor. These results (Swinney and Gollub, 1978) were obtained for an experiment on Couette Taylor flow (see Figure 3.10(_a_)). The three spectra shown correspond to three values of the rotation rate of the inner cylinder in Figure 3.10(_a_), with (_a_) corresponding to the smallest rate and (_c_) corresponding to the largest rate. Note that for the quasiperiodic case the frequencies \(n_{1}\Omega_{1}+n_{2}\Omega_{2}\) are dense on the \(\omega\) axis, but, since their amplitudes decrease with increasing \(n_{1}\) and \(n_{2}\), peaks at frequencies corresponding to very large values of \(n_{1}\) and \(n_{2}\) are eventually below the overall noise level of the experiment. In the chaotic case, Figure 6.1(_c_), we see that peaks at the two basic frequencies \(\Omega_{1}\) and \(\Omega_{2}\) are present, but that the spectrum has also developed a broad continuous component. (Note that the broad continuous component in Figure 6.1(_c_) is far above the noise level of \(\sim 10^{-4}\) evident in Figure 6.1(_a_).) The situation in Figure 6.1(_c_) is in contrast to that in Figure 6.1(_b_), where the only apparent frequency components are discrete (namely \(n_{1}\Omega_{1}+n_{2}\Omega_{2}\)). The presence of a continuous component in a frequency power spectrum is a hallmark of chaotic dynamics.
Figure 6.1: Results for frequency power spectra for a Couette Taylor experiment with increasing rotation rate of the inner cylinder (Gollub and Swinney, 1975).
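The appearance of spectral lines at integer combinations \(n_{1}\Omega_{1}+n_{2}\Omega_{2}\) can be seen in a minimal numerical sketch. Here the frequencies \(\Omega_{1}=1\), \(\Omega_{2}=\sqrt{2}\) and the purely quadratic nonlinearity are illustrative assumptions: squaring a sum of two incommensurate sinusoids generates components at \(2\Omega_{1}\), \(2\Omega_{2}\) and \(\Omega_{1}\pm\Omega_{2}\), which a direct Fourier projection picks out.

```python
import math

O1, O2 = 1.0, math.sqrt(2.0)  # incommensurate frequencies (illustrative choice)

def cos_amplitude(omega, signal, T=1000.0, dt=0.01):
    """Estimate the cosine amplitude of the component of signal(t) at omega
    via the projection (2/T) * integral_0^T signal(t) cos(omega t) dt.
    Leakage from other, well-separated components is O(1/T)."""
    n = int(T / dt)
    s = 0.0
    for i in range(n):
        t = i * dt
        s += signal(t) * math.cos(omega * t)
    return 2.0 * s * dt / T

# A quadratic nonlinearity acting on the two-frequency signal V(t):
V = lambda t: math.sin(O1 * t) + math.sin(O2 * t)
I = lambda t: V(t) ** 2  # components at 0, 2*O1, 2*O2, O2 - O1, O2 + O1

# 2 sin(O1 t) sin(O2 t) = cos((O2 - O1) t) - cos((O2 + O1) t),
# so the component at O1 + O2 has cosine amplitude -1.
a_sum = cos_amplitude(O1 + O2, I)
assert abs(a_sum - (-1.0)) < 0.05

# A frequency that is not one of these combinations carries almost no power.
a_off = cos_amplitude(1.7, I)
assert abs(a_off) < 0.05
```

With a higher-order nonlinearity the same projection would pick out lines at larger \(|n_{1}|,|n_{2}|\), with decreasing amplitudes, as described in the text.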
A simple way to envision the creation of a quasiperiodic signal with a mixture of frequencies is illustrated in Figure 6.2, which shows two sinusoidal voltage oscillators in series with a nonlinear resistive element whose resistance \(R\) is a function of the voltage \(V\) across it, \(R=R(V)\). Since the voltage sources are in series, we have \(V=\upsilon_{1}\sin(\Omega_{1}t+\theta_{0}^{(1)})+\upsilon_{2}\sin(\Omega_{2}t+\theta_{0}^{(2)})\). The current through the resistor, \(I(t)=V/R(V)\), is a nonlinear function of \(V\) and hence will typically have all frequency components \(n_{1}\Omega_{1}+n_{2}\Omega_{2}\). Assuming that \(\Omega_{1}\) and \(\Omega_{2}\) are incommensurate, the current \(I(t)\) is two frequency quasiperiodic. The situation shown in Figure 6.2 is, in a sense, _too_ simple to give very interesting behavior. If, for example, the value of the current \(I\) were to affect the dynamics of the voltage source oscillators, then a much richer range of behaviors would be possible, including _frequency locking_ and chaos. Frequency locking refers to a situation where the interaction of two nonlinear oscillators causes them to self synchronize in a coherent way so that their basic frequencies become commensurate (as we shall see, this implies that the motion is periodic) and remain locked in their commensurate relationship over a range of parameters. This will be discussed further shortly. Let us now specialize to the case of attracting two frequency quasiperiodicity (\(N=2\)) and ask, what is the geometrical shape of the attractor in phase space in such a case? To answer this, assume that we have a two frequency quasiperiodic solution of the dynamical system Eq. (1.3). In this case every component \(x^{(i)}\) of the vector \(\mathbf{x}\) giving the system state can be expressed as
Figure 6.2: Two sinusoidal voltage sources driving a nonlinear resistor.
\[x^{(i)}(t)=G^{(i)}(t_{1},\,t_{2})|_{t_{1}=t_{2}=t}.\] Since \(G^{(i)}\) is periodic in \(t_{1}\) and \(t_{2}\), we only need specify the value of \(t_{1}\) and \(t_{2}\) modulo \(T_{1}\) and \(T_{2}\) respectively. That is, we can regard the \(G^{(i)}\) as being functions of two _angle_ variables \[\bar{\theta}_{j}=\Omega_{j}t_{j}\mbox{ modulo }2\pi;\quad j=1,\,2. \tag{6.7}\] Thus the system state is specified by specifying two angles, \[{\bf x}={\bf G}(\bar{\theta}_{1}/\Omega_{1},\,\bar{\theta}_{2}/\Omega_{2}), \tag{6.8}\] where \({\bf G}\) is periodic with period \(2\pi\) in \(\bar{\theta}_{1}\) and \(\bar{\theta}_{2}\). Specification of one angle can be regarded geometrically as specifying a point on a circle. Specification of two angles can be regarded geometrically as specifying a point on a two dimensional toroidal surface (cf. Figure 6.3). In the full phase space, the attractor is given by Eq. (6.8), which must hence be topologically equivalent to a two dimensional torus (i.e., it is a distorted version of Figure 6.3). A two frequency quasiperiodic orbit on a toroidal surface in a three dimensional \({\bf x}\) phase space is shown schematically in Figure 6.4. The orbit continually winds around the torus in the short direction (making an average of \(\Omega_{1}/2\pi\) rotations per unit time) and simultaneously continually winds around the torus in the long direction (making an average of \(\Omega_{2}/2\pi\) rotations per unit time). Provided that \(\Omega_{1}\) and \(\Omega_{2}\) are incommensurate, the orbit on the torus never closes on itself, and, as time goes to infinity, the orbit will eventually come arbitrarily close to every point on the toroidal surface.
Figure 6.3: A point on a torus specifying the two angles \(\bar{\theta}_{1}\) and \(\bar{\theta}_{2}\).
If we consider the orbit originating
from the initial condition \(\mathbf{x}_{0}\) near (but not on) a _toroidal attractor_, as shown in Figure 6.4, then, as time progresses, the orbit circulates around the torus in the long and short directions and asymptotes to a two frequency quasiperiodic orbit on the torus. We define the _rotation number_ in the short direction as the average number of rotations executed by the orbit in the short direction for each rotation it makes in the long direction, \[R=\Omega_{1}/\Omega_{2}. \tag{6.9}\] When \(R\) is irrational the orbit fills the torus, never closing on itself. When \(R\) is rational, \(R=\tilde{p}/\tilde{q}\) with \(\tilde{p}\) and \(\tilde{q}\) integers that have no common factor, the orbit closes on itself after \(\tilde{p}\) rotations the short way and \(\tilde{q}\) rotations the long way. Such an orbit is periodic and has period \(\tilde{p}T_{1}=\tilde{q}T_{2}\). The case \(R=3\) is illustrated in Figure 6.5, where we see that the orbit closes on itself after three rotations the short way around and one rotation the long way around. In Figures 6.2–6.4 we have restricted our considerations to two frequency quasiperiodicity. We emphasize, however, that the situation is essentially the same for \(N\) frequency quasiperiodicity. In that case the orbit fills up an \(N\) dimensional torus in the phase space. By an \(N\) dimensional torus we mean an \(N\) dimensional surface on which it is possible to specify uniquely any point by a smooth one to one relationship with the values of \(N\) angle variables. We denote the \(N\) dimensional torus by the symbol \(T^{N}\). In some situations it is possible to rule out the possibility of quasiperiodicity. As an example, consider the system of equations studied by Lorenz, Eqs. (2.30).
Figure 6.4: Two frequency quasiperiodic orbit on a torus lying in a three dimensional phase space \(\mathbf{x}=(x^{(1)},\,x^{(2)},\,x^{(3)})\).
Figure 6.5: An orbit with a rotation number of \(R=3\).
It was shown in the paper by Lorenz (1963) that all orbits eventually enter a spherical region, \(X^{2}+Y^{2}+Z^{2}<\mbox{const.}\), from which they never leave. Thus, \(X\), \(Y\) and \(Z\) are bounded, and we may regard the phase space as Cartesian with axes \(X\), \(Y\) and \(Z\). A two frequency quasiperiodic orbit fills up a two dimensional toroidal surface in this space. Thus the toroidal surface is invariant under the flow. That is, evolving every point on the surface forward in time by any fixed amount maps the surface to itself. Furthermore, the volume inside the torus must also be invariant by the continuity of the flow. However, we have seen in Section 2.4.1 that, following the points on a closed surface forward in time, the Lorenz equations contract the enclosed phase space volumes exponentially in time. Thus two frequency quasiperiodic motion is impossible for this system of equations. ### 6.2 The circle map The system illustrated in Figure 6.2 is particularly simple. Since \(\Omega_{1}t\) and \(\Omega_{2}t\) appear only as the argument in sinusoids, we regard them as angles \(\theta^{(1)}(t)=\Omega_{1}t+\theta_{0}^{(1)}\) and \(\theta^{(2)}(t)=\Omega_{2}t+\theta_{0}^{(2)}\). In these terms the dynamical system reduces to \[{\rm d}\theta^{(1)}/{\rm d}t=\Omega_{1}\ \mbox{and}\ {\rm d}\theta^{(2)}/{\rm d}t=\Omega_{2}.\] Now taking a surface of section at \((\theta^{(2)}\mbox{ modulo }2\pi)=\mbox{const.}\), we obtain a one dimensional map for \(\theta_{n}=\theta^{(1)}(t_{n})\) modulo \(2\pi\) (where \(t_{n}\) denotes the time at the \(n\)th piercing of the surface of section), \[\theta_{n+1}=(\theta_{n}+w)\ \mbox{modulo}\ 2\pi, \tag{6.10}\] where \(w=2\pi\Omega_{1}/\Omega_{2}\). Geometrically, the map Eq. (6.10) can be thought of as a rigid rotation of the circle by the angle \(w\).
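The behavior of the rigid rotation (6.10) is easy to check numerically: a rational \(w/2\pi\) yields a periodic orbit, while an irrational \(w/2\pi\) spreads orbit points over the whole circle. A minimal sketch (the particular values of \(w\) are illustrative):

```python
import math

TWO_PI = 2.0 * math.pi

def rotate(theta, w, n):
    """n iterations of the rigid rotation theta -> (theta + w) modulo 2*pi."""
    return (theta + n * w) % TWO_PI

# Rational w/(2*pi) = 3/8: every orbit is periodic with period 8.
w = TWO_PI * 3.0 / 8.0
theta0 = 0.5
theta8 = rotate(theta0, w, 8)
assert abs(theta8 - theta0) < 1e-9

# Irrational w/(2*pi) (here w = 1): orbit points spread over the whole circle;
# after 1000 iterates the largest gap between neighboring points is small.
w = 1.0
pts = sorted((theta0 + n * w) % TWO_PI for n in range(1000))
gaps = [b - a for a, b in zip(pts, pts[1:])] + [pts[0] + TWO_PI - pts[-1]]
assert max(gaps) < 0.1
```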
For incommensurate frequencies, \(\Omega_{1}/\Omega_{2}\) is irrational, and for any initial condition, the orbit obtained from the map (6.10) densely fills the circle, creating a uniform invariant density of orbit points in the limit as time goes to infinity. On the other hand, if \(\Omega_{1}/\Omega_{2}=\tilde{p}/\tilde{q}\) is rational, then the orbit is periodic with period \(\tilde{q}\) (\(\theta_{n+\tilde{q}}=(\theta_{n}+\tilde{q}w)\) modulo \(2\pi=\theta_{n}\)). Thus there is only a zero Lebesgue measure set of \(w\) (namely, the rationals) for which periodic motion (as opposed to two frequency quasiperiodic motion) applies. Let us now ask, what would we expect to happen if the two voltage oscillators in Figure 6.2 were allowed to couple nonlinearly? Would the quasiperiodicity be destroyed and immediately be replaced by periodic orbits? Since the rationals are dense, and coupling is known to induce frequency locking, this question deserves some serious consideration. To answer this Arnold (1965) considered a model that addresses the main points. In particular, the effect of such coupling of the oscillator dynamics is to add nonlinearity to Eq. (6.10). Thus Arnold introduced the map, \[\theta_{n+1}=(\theta_{n}+w+k\sin\theta_{n})\mbox{ modulo }2\pi, \tag{6.11}\] where the term \(k\sin\theta\) models the effect of the nonlinear oscillator coupling. This map is called the _sine circle map_. In what follows we take \(w\) to lie in the range \([0,\,2\pi]\). Although deceptively simple in appearance, the circle map (like the logistic map) reveals a wealth of intricate behavior. It is of interest to understand the behavior of this map as a function of both \(w\) and the nonlinearity parameter \(k\). A key role is played by the rotation number, which for this case is given by \[R=\frac{1}{2\pi}\lim_{m\to\infty}\frac{1}{m}\sum_{n=0}^{m-1}\Delta\theta_{n}, \tag{6.12}\] where \(\Delta\theta_{n}=w+k\sin\theta_{n}\).
For \(k=0\), we have \(R=w/2\pi\) and the periodic orbits (rational values of \(R\)) only occur for a set of \(w\) of Lebesgue measure zero (i.e., rational values of \(w/2\pi\)). What are the characters of the sets of \(w\) values yielding rational and irrational \(R\) if \(k>0\)? Arnold (1965) considered this problem for small \(k\). Specifically, we ask whether the Lebesgue measure of \(w\) yielding irrational \(R\) (i.e., quasiperiodic motion) immediately becomes zero when \(k\) is made nonzero. Arnold proved the fundamental result that quasiperiodicity survives in the following sense. For small \(k\) the Lebesgue measure of \(w/2\pi\) yielding quasiperiodicity is close to 1 and approaches 1 as \(k\to 0\). The set of \(w\) values yielding quasiperiodicity, however, is nontrivial because arbitrarily close to a \(w\) value yielding quasiperiodicity (irrational \(R\)) there are _intervals_ of \(w\) yielding attracting periodic motion (rational \(R\)). (The existence of intervals where \(R\) is rational is what we mean by the term frequency locking.) Thus, the periodic motions are dense in \(w\). (This corresponds to the fact that rational numbers are dense.) The set of \(w\) values yielding quasiperiodicity is a Cantor set of positive Lebesgue measure (in the terminology of Section 3.10, it is a 'fat fractal'). Arnold's result was an important advance and is closely related to the celebrated KAM theory (for Kolmogorov, Arnold and Moser) for Hamiltonian systems (see Chapter 7). Specifically, in dealing with the circle map, as well as the problem which KAM theory addresses, one has to confront the difficulty of the 'problem of small denominators.' To indicate briefly the nature of this problem, first note that Arnold was examining the case of small \(k\). The natural approach is to do a perturbation expansion around the case \(k=0\) (i.e., the pure rotation, Eq. (6.10)).
One problem is that at every stage of the expansion this results in infinite series terms of the form \[\sum_{m}\frac{A_{m}}{1-\exp(2\pi{\rm i}mR)}\exp({\rm i}m\theta).\] For \(R\) any irrational, the number [(\(mR\)) modulo 1] can be made as small as we wish by a proper choice of the integer \(m\) (possibly very large). Hence the denominator, \(1-\exp(2\pi{\rm i}mR)\), can become small, and thus there is the concern that the series might not converge. To estimate this effect say \(\exp(2\pi{\rm i}mR)\) is close to 1 so that the denominator is small. This occurs when \(mR\) is close to an integer; call it \(n\). In this case \[1-\exp(2\pi{\rm i}mR)\simeq-2\pi{\rm i}(mR-n).\] Thus, the magnitude of a term in the sum is approximately \[\frac{1}{2\pi m}\left|\frac{A_{m}}{R-n/m}\right|.\] (Clearly, if \(R\) is rational, then \(R=n/m\) for some \(n\) and \(m\), and this expansion fails. But we are here interested in the case of quasiperiodic motion for which \(R\) is irrational.) The convergence of the sum will depend on the number \(R\). In particular, \(R\) values satisfying the inequality \[\left|R-\frac{n}{m}\right|>\frac{K}{m^{(2+\varepsilon)}}\] for some positive numbers \(K\) and \(\varepsilon\) and all values of the integers \(m\) and \(n\) (\(m\neq 0\)) are said to be 'badly approximated by rationals.' It is a basic fact of number theory that the set of numbers (\(R\) in our case) that are not badly approximated by rationals has Lebesgue measure zero. The coefficients \(A_{m}\) are obtained from Fourier expansion of an analytic function, and hence the \(A_{m}\) decay exponentially with \(m\), i.e., for some positive numbers \(\sigma\) and \(c\), we have \(|A_{m}|<c\exp(-\sigma m)\).
Thus \[\frac{1}{2\pi m}\left|\frac{A_{m}}{R-n/m}\right|<O(m^{(1+\varepsilon)}\exp(-\sigma m)).\] The exponential decay \(\exp(-\sigma m)\) is much stronger than the power law increase \(m^{(1+\varepsilon)}\), and convergence of the sum is therefore obtained for all \(R\) values that are badly approximated by rationals. This, however, is only the beginning of the story since, at each stage of the perturbation expansion, sums of this type appear. While these sums converge, it still remains to show convergence of the perturbation expansion itself. Arnold was able to prove convergence of his perturbation expansion. Thereby he showed that the Lebesgue measure of \(w\) in \([0,\,2\pi]\) for which there is quasiperiodicity (i.e., irrational \(R\)) is not zero for small \(k\) and that this measure approaches \(2\pi\) in the limit \(k\to 0\). Thus, for small \(k\), quasiperiodicity survives and occupies most of the Lebesgue measure. Let us now address the issue of frequency locking. As an example, consider the rotation number \(R=0\). This corresponds to a fixed point of the map. Hence we look for solutions of \[\theta=(\theta+w+k\sin\theta). \tag{6.13}\] The solution of this equation is demonstrated graphically in Figure 6.6(_a_) for several values of \(w\). Note that there are no solutions of the fixed point equation, Eq. (6.13), for the value of \(w\) labeled \(w<-w_{0}\) in the figure. As \(w\) is increased from \(w<-w_{0}\), the graph of \((\theta+w+k\sin\theta)\) becomes tangent to the dashed \(45^{\circ}\) line at \(w=-w_{0}\). Thus two fixed point orbits, one stable and one unstable, are born by a tangent bifurcation as \(w\) increases through \(w=-w_{0}\). As \(w\) is increased further, the two solutions continue to exist, until, as \(w\) increases through \(w_{0}\), they are destroyed in a backward tangent bifurcation. Figure 6.6(_b_) shows the corresponding bifurcation diagram. From Eq.
(6.13) we have \(w_{0}=k\). Thus we see that, for \(k>0\), the stable fixed point (\(R=0\)) exists in an _interval_ of \(w\) values, \(k>w>-k\), whereas at \(k=0\) we only have \(R=0\) at the single value \(w=0\). This is what we mean by frequency locking. Similarly, one can show that, for small \(k\), an attracting period two orbit (corresponding to a rotation number \(R=\frac{1}{2}\)) exists in a range \(w_{1/2}^{-}<w<w_{1/2}^{+}\), where \[w_{1/2}^{\pm}=\pi\pm k^{2}/4+O(k^{3}). \tag{6.14}\] In general, for any rational rotation number \(R=\tilde{p}/\tilde{q}\) there is a frequency locking range of \(w\) in which the corresponding attracting period \(\tilde{q}\) orbit exists, and this range (\(w_{\tilde{p}/\tilde{q}}^{-}\), \(w_{\tilde{p}/\tilde{q}}^{+}\)) has a width \(\Delta w_{\tilde{p}/\tilde{q}}=w_{\tilde{p}/\tilde{q}}^{+}-w_{\tilde{p}/\tilde{q}}^{-}\) which scales as \[\Delta w_{\tilde{p}/\tilde{q}}=O(k^{\tilde{q}}). \tag{6.15}\] Furthermore, as \(w\) increases through the value \(w_{\tilde{p}/\tilde{q}}^{-}\), the attracting period \(\tilde{q}\) orbit with rotation number \(\tilde{p}/\tilde{q}\) is born by a forward tangent bifurcation, and as \(w\) increases through the value \(w_{\tilde{p}/\tilde{q}}^{+}\) the attracting period \(\tilde{q}\) orbit dies by a backward tangent bifurcation. Note that, since the map function is monotonically increasing for \(0\leq k\leq 1\), its derivative and that of its \(n\) times composition are positive, \(\mathrm{d}M^{n}(\theta)/\mathrm{d}\theta>0\). Hence there can be no period doubling bifurcations of a period \(n\) orbit for any \(n\) in \(0\leq k\leq 1\), since the stability coefficient (slope of \(M^{n}\)) must be \(-1\) at a period doubling bifurcation point. Consider the total length in \(w\) (Lebesgue measure) of all frequency locked intervals in \([0,\,2\pi]\), \[\sum_{r}\Delta w_{r}.\] Arnold's results show that this number is small for small \(k\) and decreases to zero in the limit \(k\to 0\).
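The \(R=0\) locking interval \(|w|<w_{0}=k\) can be observed directly in a numerical estimate of the rotation number. A minimal sketch (parameter values are illustrative; \(R\) is estimated from the lift of the map, i.e., Eq. (6.11) without the modulo):

```python
import math

def rotation_number(w, k, n_iter=20000, theta0=0.5):
    """Estimate R for the sine circle map from its lift
    theta -> theta + w + k*sin(theta): R = (theta_n - theta_0)/(2*pi*n)."""
    theta = theta0
    for _ in range(n_iter):
        theta = theta + w + k * math.sin(theta)
    return (theta - theta0) / (2.0 * math.pi * n_iter)

# k = 0: pure rotation, so R = w/(2*pi).
assert abs(rotation_number(1.0, 0.0) - 1.0 / (2.0 * math.pi)) < 1e-9

# k = 0.8: the R = 0 tongue occupies the whole interval |w| < w0 = k, so both
# of these w values give a stable fixed point and a rotation number of zero.
for w in (0.3, -0.5):
    assert abs(rotation_number(w, 0.8)) < 1e-3
```

At \(k=0\) the same estimate gives \(R=0\) only at the single value \(w=0\), in line with the text.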
Thus the set of \(w\) values yielding quasiperiodic motion has most of the Lebesgue measure of \(w\) for small \(k\). This set is a Cantor set of positive Lebesgue measure. (We have previously encountered such a set in Section 2.2 when we considered the set of \(r\) values for which the logistic map yields attracting chaotic motion.) The situation can be illustrated as in Figure 6.7, which shows regions of the \(wk\) plane (called Arnold tongues) in which the rotation numbers \(R=n/m\) for \(m\) up to eight exist. We see that there are narrow frequency locked tongues of rational \(R\) which extend down to \(k=0\). For higher periods (i.e., larger \(\tilde{q}\) in \(R=\tilde{p}/\tilde{q}\)) the frequency locked intervals become extremely small for small \(k\). (This qualitative type of frequency locking behavior occurring in tongues in parameter space has been found in numerical solutions of ordinary differential equations, as well as in physical experiments.) For \(k<1\) and a \(w\) value yielding an irrational value of \(R\), the orbit points on the resulting quasiperiodic orbit generate a smooth invariant density \(\rho(\theta)\). In this case, by a smooth change of variables \(\phi=f(\theta)\), the circle map can be transformed to the pure rotation \[\phi_{n+1}=\phi_{n}+2\pi R(w,\ k).\] Since the pure rotation generates a uniform density \(\tilde{\rho}(\phi)=1/2\pi\), and the circle map is invertible for \(k<1\), we see by \(\tilde{\rho}(\phi)\mathrm{d}\phi=\rho(\theta)\mathrm{d}\theta\) that the change of variables is \[\phi=f(\theta)\equiv 2\pi\int_{0}^{\theta}\rho(\theta^{\prime})\,\mathrm{d}\theta^{\prime}. \tag{6.16}\] As \(k\) approaches 1 from below, the widths \(\Delta w_{\tilde{p}/\tilde{q}}\) increase, and the sum \(\sum_{r}\Delta w_{r}\) approaches \(2\pi\). That is, at \(k=1\), the entire Lebesgue measure in \(w\) is occupied by frequency locked periodic orbits, and the quasiperiodic orbits occupy zero Lebesgue measure in \(w\).
Figure 6.8 shows a numerical plot of \(R\) versus \(w\) at \(k=1\). We see that \(R\) increases monotonically with \(w\). The set of \(w\) values on which \(R\) increases is the Cantor set of zero Lebesgue measure on which \(R\) is irrational (i.e., the motion is quasiperiodic). The function \(R\) versus \(w\) at \(k=1\) is called a _complete devil's staircase_. At lower \(k\) we again obtain a monotonic function which increases only on the Cantor set of \(w\) values where \(R\) is irrational, but now the Cantor set has positive Lebesgue measure (it is a fat fractal (Section 3.10)). We consequently say that \(R\) versus \(w\) is an _incomplete devil's staircase_ for \(1>k>0\). The box counting dimension of the set on which \(R\) increases for \(k=1\) (the complete devil's staircase case) has been calculated by Jensen, Bak and Bohr (1983). They obtain a dimension value of \(D_{0}\approx 0.87\). Furthermore, they claim that this value is universal in that it applies to a broad class of systems, not just the circle map. This contention is supported by the renormalization group theory of Cvitanovic _et al._ (1985).

Figure 6.7: Arnold tongues for the circle map (Jensen _et al._, 1984).

For \(k>1\), the circle map is noninvertible (\(\mathrm{d}\theta_{n+1}/\mathrm{d}\theta_{n}\) changes sign as \(\theta_{n}\) varies when \(k>1\)). As a consequence of this, typical initial conditions can yield chaotic orbits but do not yield quasiperiodic orbits for \(k>1\). To see why quasiperiodic orbits do not result from typical initial conditions, we note that we have previously seen that a smooth change of variables, Eq. (6.16), transforms the circle map to the pure rotation if there is a quasiperiodic orbit with a smooth invariant density \(\rho(\theta)\). Since it is not possible to transform a noninvertible map to an invertible one (i.e., the pure rotation), we conclude that there can be no quasiperiodic orbits generating smooth invariant densities\({}^{1}\) for \(k>1\).
As an example of circle map type dynamics appearing in an experiment, we mention the paper of Brandstater and Swinney (1987) on Couette Taylor flow (see Section 3.7). Under particular conditions the authors observe two frequency quasiperiodic motion on a two dimensional toroidal surface. Figure 6.9(_a_) shows a delay coordinate plot of the orbit, \(V(t)\) versus \(V(t-\tau)\), where \(V(t)\) is the radial velocity component measured at a particular point in the flow. Taking a surface of section along the dashed line in Figure 6.9(_a_) one obtains a closed curve, indicating that the orbit in Figure 6.9(_a_) lies on a two dimensional torus. Figure 6.9(_b_) shows such a surface of section plot (for slightly different conditions from those in Figure 6.9(_a_)). Brandstater and Swinney then parameterize the location of orbit points in the surface of section by an angle \(\theta\) measured from a point inside the closed curve. In Figure 6.9(_c_) they plot the value of \(\theta\) at the (\(n+1\))th piercing of the surface of section versus its value at the \(n\)th piercing.

Figure 6.8: Complete devil's staircase at \(k=1\) (Jensen _et al._, 1984).

The experimental map is indeed of a similar form to the circle map of Arnold:\({}^{2}\) it is invertible and is close to a pure rotation with an added nonlinear piece, \(\theta_{n+1}=\theta_{n}+w+P(\theta_{n})\) modulo \(2\pi\), where \(P(\theta)\) is the periodic nonlinear piece, \(P(\theta)=P(\theta+2\pi)\). (In the absence of \(P(\theta)\) the map would be two parallel straight lines at \(45^{\circ}\) (pure rotation), which Figure 6.9(\(c\)) would resemble if the wiggles due to \(P(\theta)\) were absent.) As an example of how circle map type phenomena can appear in a differential equation, consider the equation \[{\rm d}\theta/{\rm d}t+h(\theta)=V+W\cos(\Omega t), \tag{6.17}\] where the function \(h(\theta)\) is \(2\pi\) periodic, \(h(\theta)=h(\theta+2\pi)\).
This equation may be viewed as arising from the one dimensional motion of a particle in a spatially periodic potential, where the particle experiences a strong frictional drag force proportional to its velocity, and is subject to an external force given by the sum of a component constant in time and a component sinusoidal in time. Identifying \(\theta\) with \(2\pi x/\lambda\), where \(x\) is the particle position and \(\lambda\) is the spatial period of the potential, Eq. (6.17) applies in the highly damped, slowly forced case such that the particle inertia, proportional to \({\rm d}^{2}x/{\rm d}t^{2}\), is negligible.

Figure 6.9: (\(a\)) Projection of the orbit onto the delay coordinate plane, \(V(t)\) versus \(V(t-\tau)\). (\(b\)) The surface of section given by the dashed line in (\(a\)). (\(c\)) Experimental circle map obtained from (\(b\)) (Brandstater and Swinney, 1987).

(In the special case \(h(\theta)=\sin\theta\), Eq. (6.17) may be viewed as a highly damped, slowly forced
pendulum equation (compare with Eq. (1.6a)). We note, however, that the case \(h(\theta)=\sin\theta\) is, in a sense, singular,\({}^{3}\) and hence we wish to consider general functions \(h(\theta)\).) Now consider the solution \(\theta(t)\) of (6.17). Letting \(\theta_{n}=\theta(t_{n})\,{\rm mod}\,2\pi\), where \(t_{n}=2n\pi/\Omega\), we can in principle integrate (6.17) from time \(t_{n}\) to time \(t_{n+1}\) to obtain an invertible one dimensional map, \[\theta_{n+1}=M(\theta_{n};\;\Omega,\,V,\,W). \tag{6.18}\] For typical \(h(\theta)\), this circle map, although more complicated than Arnold's sine circle map (6.11), is expected to exhibit phenomena similar to those occurring in (6.11) when the nonlinearity parameter \(k\) satisfies \(k<1\). (The reason we restrict \(k<1\) is that (6.11) is noninvertible for \(k>1\), while (6.18) must always be invertible, since it arises from integration of an ordinary differential equation.) In particular, we expect frequency locking and Arnold tongues as we vary the parameters. For example, if we fix \(W\), we can view \(V\) as being roughly analogous to the rotation parameter \(w\) in (6.11), and we can view \(\Omega\) as analogous to the nonlinearity parameter \(k\). Thus we expect the picture in the \(V\Omega\) plane to be a distorted version of that in the \(wk\) plane (see Fig. 6.7). Equation (6.17) can be considered as a two dimensional dynamical system in the two variables \(\theta^{(1)}=\theta\) and \(\theta^{(2)}=\Omega t\), which are both angle variables. Hence (6.17) describes a flow on a two dimensional toroidal surface. On this surface we can either have a quasiperiodic orbit, or an attracting periodic orbit, the latter corresponding to a frequency locked situation. The attraction of orbits on the torus to a periodic orbit is illustrated in Figure 6.10. As mentioned already, this behavior is displayed by higher dimensional systems.
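The construction of the map (6.18) is straightforward to carry out numerically. The sketch below uses illustrative parameter values and an arbitrary smooth choice of \(h(\theta)\) (neither taken from the text): it integrates Eq. (6.17) over one forcing period with a fourth order Runge Kutta step, which defines one application of the stroboscopic map \(\theta_{n}\mapsto\theta_{n+1}\).

```python
import math

OMEGA, V, W = 1.0, 0.8, 0.5           # illustrative parameter values, chosen arbitrarily

def h(theta):
    # an arbitrary smooth 2*pi-periodic choice (avoiding the singular pure sine case)
    return math.sin(theta) + 0.25 * math.sin(2 * theta)

def rhs(t, theta):
    # Eq. (6.17): d(theta)/dt = V + W*cos(Omega*t) - h(theta)
    return V + W * math.cos(OMEGA * t) - h(theta)

def strobe_map(theta_n, steps=400):
    """One application of the map (6.18): RK4 integration over one forcing period."""
    theta, t = theta_n, 0.0
    dt = (2 * math.pi / OMEGA) / steps
    for _ in range(steps):
        k1 = rhs(t, theta)
        k2 = rhs(t + dt / 2, theta + dt * k1 / 2)
        k3 = rhs(t + dt / 2, theta + dt * k2 / 2)
        k4 = rhs(t + dt, theta + dt * k3)
        theta += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
    return theta

# sample the lift of the map on a grid of initial angles
lift = [strobe_map(2 * math.pi * i / 40) for i in range(41)]
```

Because Eq. (6.17) is a first order equation, trajectories cannot cross, so the lift is order preserving; checking that `lift` is strictly increasing confirms numerically the invertibility argument of the text.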
What happens in these higher dimensional systems is that there is an invariant two dimensional torus embedded in the phase space flow. On the torus, the flow can be either quasiperiodic or else it can have an attracting periodic orbit (Figure 6.10).

Figure 6.10: Attraction of initial conditions on a two dimensional torus to a periodic orbit.

When the flow is quasiperiodic, a surface of section yields a picture of the attractor cross section which is either a closed curve, or several closed curves, resulting from the intersection of the surface of section with the attracting invariant torus. When the attractor is periodic, the surface of section intersection with the attractor reveals a finite number of discrete points (note, however, that there can still be an invariant torus on which the attractor lies). We can think of the flow in the higher dimensional phase space as being attracted to a lower dimensional (two dimensional) flow on the torus, on which, in turn, there can be quasiperiodic motion or a periodic attractor. A fairly common way in which one sees chaos appear as a system parameter is varied is that first two frequency quasiperiodicity is seen, then frequency locking to a periodic attractor, and then a chaotic attractor. Since chaos is not possible for a two dimensional flow, in order for the chaos to appear, the orbit can no longer be on a two dimensional torus. Typically, as the parameter is increased toward the value yielding chaos, the invariant two dimensional torus is destroyed. When this happens, it does so while in the parameter range in which the periodic attractor exists. In terms of the circle map, we can think of the destruction of the torus as analogous to the map becoming noninvertible as \(k\) increases through 1 (quasiperiodic orbits do not occur for typical initial conditions for \(k>1\)).
If we were to fix \(w\) and increase \(k\), we might expect to see quasiperiodicity and then frequency locking as \(k\) is increased toward one, since the frequency locked regions have Lebesgue measure \(2\pi\) in \(w\) at \(k=1\). The periodic solutions at \(k=1\) typically remain stable as \(k\) is increased past 1 into the region where chaos becomes possible. These periodic solutions can then become chaotic, for example, by going through a period doubling cascade. In our discussion above of the onset of chaos for the circle map, we imagined a typically chosen variation along a path in parameter space; specifically, we imagined choosing a typical \(w\) and then increasing \(k\). Another possibility is to choose carefully a path in parameter space such that we maintain the rotation number constant and irrational. Thus, as we increase \(k\), we adjust \(w\) to keep \(R(k,\,w)\) the same. Such a path threads between the frequency locked Arnold tongues all the way up to \(k=1\). The same can be done in an experiment on a higher dimensional system, in which case \(k=1\) corresponds to the point at which the torus is destroyed. Studies of this type of variation have revealed that there is a universal phenomenology in the behavior of systems approaching torus destruction along such a path in parameter space. The behavior depends on the rotation number \(R\) chosen but is essentially system independent. Extensive work demonstrating this has been done for the case of the path on which the rotation number is held constant at the value given by the golden mean, \(R=(\sqrt{5}-1)/2\equiv R_{\rm g}\) (Shenker, 1982; Feigenbaum _et al._, 1982; Ostlund _et al._, 1983; Umberger _et al._, 1986). This number is of particular significance because of its number theoretic properties.
Specifically, in some sense (see Section 7.3.2), \(R_{\rm g}\) is the most irrational of all irrational numbers in that it is the most difficult to approximate by rational numbers of limited denominator size. These results for \(R=R_{\rm g}\) are obtained using the renormalization group technique, the same technique used to analyze the universal properties of the period doubling cascade (cf. Chapter 8). Perhaps the most striking of these results is that for the low frequency power spectrum of a process which is quasiperiodic with rotation number equal to the golden mean and parameters corresponding to the critical point at which the torus is about to be destroyed (\(k=1\) for the circle map). As illustrated in Figure 6.11, if one plots the frequency power spectrum \(P(\omega)\) divided by \(\omega^{2}\) versus the frequency \(\omega\) on a log log plot, then the result is predicted to be universal and periodic in \(\log\omega\) for small \(\omega\). Furthermore, the periodicity length in \(\log\omega\) is just the logarithm of the golden mean.

### 6.3 \(N\) frequency quasiperiodicity with \(N>2\)

For \(N\) frequency quasiperiodicity we imagine a flow analogous to that pictured in Figure 6.4, but on an \(N\) dimensional toroidal surface \(T^{N}\). Figures 6.12(_a_) and (_b_) illustrate the 'unwrapping' of quasiperiodic flows on \(T^{2}\) and \(T^{3}\). Since the \(N\) frequency quasiperiodic flow can be put in the form of a dynamical system on \(T^{N}\) given by \[{\rm d}\theta_{i}/{\rm d}t=\Omega_{i},\quad i=1,\,2,\,\ldots,\,N,\] via a suitable change of variables (cf. Eq. (6.8)), we see that an \(N\) frequency quasiperiodic attractor has \(N\) Lyapunov exponents that are zero. (For an \(N\) frequency quasiperiodic attractor, the other exponents are negative, corresponding to attraction to the torus.)
It is useful to imagine the creation of quasiperiodic motion on an \(N\) torus for a continuous time dynamical system as arising via the successive addition of new active 'modes' of oscillation, each with its own frequency. Thus, say we start with a situation where the attracting motion is a steady state. We then increase some system parameter \(p\). As \(p\) increases past \(p_{1}\), \(p_{2}\), ..., \(p_{N}\), new active modes of oscillation, each with its own fundamental frequency, \(\Omega_{1}\), \(\Omega_{2}\), ..., \(\Omega_{N}\), are introduced as \(p\) passes each \(p_{i}\), and this leads the attractor to make transitions as follows: (steady state) \(\rightarrow\) (periodic) \(\rightarrow\) (2 torus) \(\rightarrow\cdots\rightarrow\) (\(N\) torus). The mechanism whereby new active modes of oscillation are added is the _Hopf bifurcation_, which we now briefly describe with reference to the first transition, (steady state) \(\rightarrow\) (periodic). Consider the case where the linearized equations about a fixed point \({\bf x}={\bf x}_{*}\) have a solution of the form of Eq. (4.8) such that there is a pair of complex conjugate eigenvalues, \(s=\sigma(p)\pm{\rm i}\omega(p)\), whose real part \(\sigma(p)\) increases with \(p\), being negative for \(p<p_{1}\), zero for \(p=p_{1}\), and positive for \(p>p_{1}\). Furthermore, we assume that all other eigenvalues \(s\) have negative real parts when \(p\) increases through \(p_{1}\). The essential dynamics for \(p\) close to \(p_{1}\) and \({\bf x}\) close to \({\bf x}_{*}\) can be studied by restricting attention to a two dimensional subspace of the phase space. For \(p<p_{1}\) an appropriate reduced set of two dimensional linearized equations yields spiraling in toward \({\bf x}_{*}\), while for \(p>p_{1}\) an initial condition spirals out from \({\bf x}_{*}\) (see Figure 4.6).
As it spirals out, increasing its distance from \({\bf x}_{*}\), the linear approximation to the dynamics becomes less well satisfied, and nonlinear terms in the equations of motion must be considered. Including the nonlinearity to lowest order by making a Taylor series expansion of the dynamical equations about the point \({\bf x}={\bf x}_{*}\), the essential two dimensional dynamics can be cast in a _normal form_ (for example, Guckenheimer and Holmes, 1983), \[\frac{{\rm d}r}{{\rm d}t}=[(p-p_{1})\sigma^{\prime}-ar^{2}]r, \tag{6.19a}\] \[\frac{{\rm d}\theta}{{\rm d}t}=\Omega_{1}+(p-p_{1})\omega^{\prime}+br^{2}, \tag{6.19b}\] where \(\sigma^{\prime}=({\rm d}\sigma(p)/{\rm d}p)>0\) and \(\omega^{\prime}={\rm d}\omega(p)/{\rm d}p\), both evaluated at \(p=p_{1}\), and \(\Omega_{1}=\omega(p_{1})\). Here \(r\) and \(\theta\) are scaled polar coordinates centered at \({\bf x}_{*}\). Thus, for \(p<p_{1}\) the trajectory spirals in, approaching \({\bf x}_{*}\), while for \(p>p_{1}\) it spirals out, approaching the limit cycle circle \(r=\bar{r}(p)\equiv[(\sigma^{\prime}/a)(p-p_{1})]^{1/2}\) on which it circulates with angular frequency \(\Omega_{1}+(p-p_{1})\omega^{\prime}+b\bar{r}^{2}(p)\) (we assume here that\({}^{4}\) \(a>0\), in which case the bifurcation is said to be 'supercritical'). (See Figure 6.13.) Thus the Hopf bifurcation creates a periodic orbit whose frequency at the bifurcation is \(\Omega_{1}\). New frequencies are added by successive additional Hopf bifurcations at the parameter values \(p_{2}\), \(p_{3}\), ..., \(p_{N}\), thus leading to motion on a torus \(T^{N}\).

Figure 6.13: Illustration of a supercritical Hopf bifurcation for \(a>0\).

In an early paper, Ruelle and Takens (1971) considered four frequency quasiperiodic flows on the torus \(T^{4}\). They showed that it was possible to
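The radial equation (6.19a) decouples from (6.19b) and can be integrated directly. The sketch below (with arbitrarily chosen illustrative values for \(\sigma^{\prime}\), \(a\) and \(p_{1}\)) shows the approach to the limit cycle radius \(\bar{r}(p)=[(\sigma^{\prime}/a)(p-p_{1})]^{1/2}\) for \(p>p_{1}\), and the decay to \(r=0\) for \(p<p_{1}\).

```python
import math

SIGMA_P, A, P1 = 1.0, 1.0, 0.0        # illustrative values of sigma', a, p_1

def asymptotic_radius(p, r0=0.01, dt=0.001, n=200_000):
    """Euler-integrate Eq. (6.19a): dr/dt = [(p - p1)*sigma' - a*r**2] * r."""
    r = r0
    for _ in range(n):
        r += dt * ((p - P1) * SIGMA_P - A * r * r) * r
    return r

p = 0.5                                       # supercritical side, p > p1
r_num = asymptotic_radius(p)
r_cycle = math.sqrt((SIGMA_P / A) * (p - P1))  # limit cycle radius from the text
```

For \(p>p_{1}\) the integration converges to `r_cycle`; for \(p<p_{1}\) (e.g., `asymptotic_radius(-0.5)`) the radius decays to zero, which is the spiraling-in case of the text.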
make _arbitrarily small_ (but very carefully chosen), smooth perturbations to the flow such that the quasiperiodic flow was converted to a chaotic flow on a strange attractor lying in the torus \(T^{4}\). Furthermore, these chaotic attractors, once created, cannot be destroyed by arbitrarily small perturbations of the flow. Subsequently, Newhouse, Ruelle and Takens (1978) showed that the same could be said for a three frequency quasiperiodic flow\({}^{5}\) on the torus \(T^{3}\). It had been tentatively conjectured that these results meant that three and four frequency quasiperiodicities are unlikely to occur because they are 'easily' destroyed and supplanted by chaos; and furthermore that, if there was two frequency quasiperiodicity and a third frequency was destabilized as some stress parameter of the system is increased, then the flow would immediately become chaotic. In particular, the relevance of this to the onset of turbulence in fluids was discussed. While these speculations turned out not to be true in detail, these papers played an important role in providing early motivation for the study of chaos, especially in fluids. Furthermore, they pointed out that broad frequency spectra need not be the result of the successive addition of a great many discrete new frequency components as a system parameter is increased, as had been previously proposed, but could instead appear more abruptly as the result of the onset of a chaotic attractor. Numerical experiments were performed by Grebogi _et al._ (1985c) to see if three frequency quasiperiodicity would occur and, if so, to obtain an idea of how often. They, like Newhouse _et al._, assumed a flow on a three torus \(T^{3}\).
Then, taking a surface of section at times corresponding to one of the flow periods, a _map_ on a two torus results, which Grebogi _et al._ took to be of the form \[\left.\begin{array}{l}\theta_{n+1}=[\theta_{n}+w_{1}+kP_{1}(\theta_{n},\,\phi_{n})]\ \mbox{modulo}\ 2\pi,\\ \phi_{n+1}=[\phi_{n}+w_{2}+kP_{2}(\theta_{n},\,\phi_{n})]\ \mbox{modulo}\ 2\pi,\end{array}\right\} \tag{6.20}\] where \(\theta\) and \(\phi\) are angles and \(P_{1,2}\) are \(2\pi\) periodic in both \(\theta\) and \(\phi\). Equations (6.20) may be thought of as the extension to three frequency quasiperiodicity of the circle map model of two frequency quasiperiodicity. For \(k=0\), three frequency quasiperiodic flows correspond to \(w_{1}\) and \(w_{2}\) being incommensurate with each other and with \(2\pi\). That is, the only solution of \[m_{1}w_{1}+m_{2}w_{2}+2\pi m_{3}=0\] for integer \(m_{1,2,3}\) is the trivial solution \(m_{1}=m_{2}=m_{3}=0\). Grebogi _et al._ then arbitrarily chose particular sinusoidal forms for \(P_{1}\) and \(P_{2}\) and tested to see what fraction of the measure of (\(w_{1}\), \(w_{2}\)) was occupied by different types of attractors for various sizes of the nonlinearity parameter \(k\). This was done by choosing many pairs (\(w_{1}\), \(w_{2}\)) randomly in \([0,\,2\pi]\times[0,\,2\pi]\) and calculating the two Lyapunov exponents \(h_{1}\) and \(h_{2}\) on the attractor. (By convention \(h_{1}\geq h_{2}\).) If \(h_{1}=h_{2}=0\), the flow is three frequency quasiperiodic; if \(h_{1}=0\) and \(h_{2}<0\), the flow is two frequency quasiperiodic; if \(h_{1}\) and \(h_{2}\) are both less than zero, the flow is periodic; and if \(h_{1}>0\) the flow is chaotic. The results are shown in Table 6.1. The value \(k_{\rm c}\) is the value of \(k\) past which the map (6.20) becomes noninvertible. Three frequency quasiperiodicity is not possible for \(k>k_{\rm c}\) (analogous to \(k>1\) for the circle map).
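This classification by Lyapunov exponents can be reproduced with a short computation. The sketch below iterates Eqs. (6.20) together with a pair of tangent vectors, extracting \(h_{1}\geq h_{2}\) by Gram Schmidt reorthogonalization. The forms chosen here for \(P_{1,2}\) are arbitrary illustrative sinusoids, not the particular ones used by Grebogi _et al._

```python
import math

def P1(t, p): return math.sin(t + p)            # arbitrary 2*pi-periodic choices
def P2(t, p): return math.cos(t) + math.sin(p)

def lyapunov_pair(w1, w2, k, n=50_000):
    """Return (h1, h2), h1 >= h2, for the two-torus map Eqs. (6.20)."""
    t, p = 0.3, 0.7
    v1, v2 = (1.0, 0.0), (0.0, 1.0)
    s1 = s2 = 0.0
    for _ in range(n):
        # Jacobian of the map at the current point (t, p)
        j11, j12 = 1 + k * math.cos(t + p), k * math.cos(t + p)
        j21, j22 = -k * math.sin(t), 1 + k * math.cos(p)
        a1 = (j11 * v1[0] + j12 * v1[1], j21 * v1[0] + j22 * v1[1])
        a2 = (j11 * v2[0] + j12 * v2[1], j21 * v2[0] + j22 * v2[1])
        n1 = math.hypot(*a1)
        v1 = (a1[0] / n1, a1[1] / n1)
        dot = a2[0] * v1[0] + a2[1] * v1[1]     # Gram-Schmidt step
        a2 = (a2[0] - dot * v1[0], a2[1] - dot * v1[1])
        n2 = math.hypot(*a2)
        v2 = (a2[0] / n2, a2[1] / n2)
        s1, s2 = s1 + math.log(n1), s2 + math.log(n2)
        t, p = ((t + w1 + k * P1(t, p)) % (2 * math.pi),
                (p + w2 + k * P2(t, p)) % (2 * math.pi))
    return s1 / n, s2 / n

h1, h2 = lyapunov_pair(w1=1.0, w2=math.sqrt(2), k=0.1)
```

For this small value of \(k\) (well below \(k_{\rm c}\)) both exponents come out near zero, the signature of three frequency quasiperiodicity in the table's classification.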
As is evident, three frequency quasiperiodicity is very common at moderate values of the nonlinearity parameter \(k\). (Grebogi _et al._ also obtained similar results for flows on \(T^{4}\).) The situation is roughly analogous to that which occurs in the circle map: Two frequency quasiperiodic motion can be converted to periodic motion by an arbitrarily small (but carefully chosen) change in \(w\) (the locked regions are dense in \(w\) and the quasiperiodicity exists on a Cantor set). Once \(w\) has been changed so that it lies in the interior of a phase locked interval, perturbations of \(w\) that are too small will be insufficient to move it out of the phase locked interval. Nevertheless, the measure of \(w\) corresponding to quasiperiodicity is positive, and quasiperiodicity is, therefore, expected to occur. The key point is that, in deciding whether a phenomenon can occur in practice, one should ask whether the measure in parameter space over which the phenomenon occurs is zero or positive, not whether carefully chosen arbitrarily small perturbations of the system can destroy the phenomenon. If the measure in parameter space yielding a particular phenomenon is positive, then a random choice of parameters has a nonzero probability of yielding that phenomenon, and we can expect that sometimes it will occur. In the works of Ruelle and Takens and Newhouse _et al._, the flow was on a torus. As mentioned in Section 6.2, however, it is possible for invariant tori to be destroyed as a system parameter is varied (in the context of Eqs. (6.20), this corresponds to \(k>k_{\rm c}\)).
\begin{table} \begin{tabular}{l c c c c} \hline\hline & Lyapunov exponents & \(k/k_{\rm c}=\frac{3}{8}\) & \(k/k_{\rm c}=\frac{3}{4}\) & \(k/k_{\rm c}=\frac{9}{8}\) \\ \hline Three frequency quasiperiodic & \(h_{1}=h_{2}=0\) & 82\% & 44\% & 0\% \\ Two frequency quasiperiodic & \(h_{1}=0\), \(h_{2}<0\) & 16\% & 38\% & 33\% \\ Periodic & \(h_{1,2}<0\) & 2\% & 11\% & 31\% \\ Chaotic & \(h_{1}>0\) & 0\% & 7\% & 36\% \\ \hline\hline \end{tabular} \end{table} Table 6.1: Fraction of attractors of various types.

Thus another question that naturally arises is the following. Say that one sees a transition in which, at some value of a parameter, there is three frequency quasiperiodicity, while at another there is chaos. Is the chaotic motion on an invariant three dimensional toroidal surface embedded in the phase space, or has the surface \(T^{3}\) been destroyed as the parameter is varied from the quasiperiodic value to the chaotic value? Both possibilities can occur, depending on the specific system considered. An example has been considered numerically by Battelino _et al._ (1989), who formulated a numerical technique for testing whether a chaotic attractor lies on a three torus. They found, for their example (which involved coupled van der Pol oscillators), that destruction of the three torus apparently preceded the occurrence of a chaotic attractor.

### 6.4 Strange nonchaotic attractors of quasiperiodically forced systems

In Chapter 1 we have defined a strange attractor as one which has fractal phase space structure, while we have defined a chaotic attractor as one on which typical orbits exhibit sensitive dependence on initial conditions. The logistic map at \(r=4\) has a chaotic attractor with Lyapunov exponent \(h=\ln 2>0\), but this attractor is not strange; it is simply the interval \([0,\,1]\). Strange attractors that are not chaotic are also possible.
For example, the logistic map at \(r=r_{\infty}\) (the accumulation point for period doublings) has an attractor which has a Lyapunov exponent \(h=0\) but is a Cantor set with fractal dimension \(d\approx 0.54\) (Grassberger, 1981). Hence, it is a strange nonchaotic attractor. Furthermore, there is a countably infinite set of \(r\) values corresponding to the accumulation points of the period doublings experienced by each period \(p\) orbit born in a tangent bifurcation (at the beginning of a window). At each such \(r\) the attractor is fractal (with \(d\approx 0.54\)) and \(h=0\). Note, however, that the set of parameter values \(r\) which for the logistic map yield strange nonchaotic attractors has zero Lebesgue measure in \(r\) because these \(r\) values are countable. Hence we say these attractors are not typical. A natural question that arises is whether there are any systems for which strange nonchaotic attractors are typical in the sense that they occupy a positive Lebesgue measure of the parameter space. This question has been considered in a series of papers\({}^{6}\) where the authors demonstrated that strange nonchaotic attractors are indeed typical in systems that are driven by a two frequency quasiperiodic forcing function. For example, Romeiras and Ott (1987) consider strange nonchaotic attractors for the quasiperiodically forced damped pendulum equation \[\frac{{\rm d}^{2}\theta}{{\rm d}t^{2}}+\nu\frac{{\rm d}\theta}{{\rm d}t}+\Omega^{2}\sin\theta=T_{1}\sin(\Omega_{1}t)+T_{2}\sin(\Omega_{2}t)+K, \tag{6.21}\] where \(\Omega_{1}/\Omega_{2}\) is irrational and \(\nu\), \(\Omega^{2}\), \(K\) and \(T_{1,2}\) are parameters. They find that, as \(T_{1,2}\) are increased, there is a transition to a situation where strange nonchaotic attractors are observed on a Cantor set of positive Lebesgue measure in the parameter space. Further increase of \(T_{1,2}\) then produces a transition to chaos.
In the theoretical studies\({}^{6}\) it was shown that strange nonchaotic attractors have a distinctive signature in their frequency spectra, and this has been observed in experiments on a quasiperiodically forced magnetoelastic ribbon by Ditto _et al._ (1990b). In Eq. (6.21) say we use a stroboscopic surface of section, \(\Omega_{1}t_{n}=\psi_{0}+2n\pi\). Let \(\phi_{n}=\Omega_{2}t_{n}\) and \(\omega=\Omega_{2}/\Omega_{1}\), where we assume \(\omega\) to be irrational. Then the evolution of (6.21) gives a map of the form \[\phi_{n+1}=(\phi_{n}+2\pi\omega)\ {\rm modulo}\ 2\pi \tag{6.22a}\] and \[{\bf x}_{n+1}={\bf M}({\bf x}_{n},\,\phi_{n}), \tag{6.22b}\] where for (6.21) \({\bf x}_{n}={\bf x}(t_{n})\) is the two dimensional vector \({\bf x}(t)=(\theta(t),\,{\rm d}\theta(t)/{\rm d}t)\). Interpreting \({\bf x}_{n}\) and \({\bf M}\) more generally, Eqs. (6.22) are the general form that results from any two frequency quasiperiodically forced system. In order to demonstrate analytically the possibility of strange nonchaotic attractors in quasiperiodically forced systems, we consider a special example of the general form of Eqs. (6.22) (Grebogi _et al._, 1984). In particular, we take \({\bf x}\) to be a scalar, and for \({\bf M}\) we choose \[M(x,\,\phi)=2\lambda(\tanh x)\cos\phi, \tag{6.23}\] whose derivative with respect to \(x\) is \[\partial M(x,\,\phi)/\partial x=2\lambda({\rm sech}^{2}x)\cos\phi. \tag{6.24}\] In this case there are two Lyapunov exponents. One of them, corresponding to Eq. (6.22a), is zero. The other Lyapunov exponent is \[h=\lim_{m\to\infty}\frac{1}{m}\sum_{n=1}^{m}\ln\left|\partial M/\partial x\right|_{x_{n},\,\phi_{n}}. \tag{6.25}\] For the case of the map (6.23) the \(\phi\) axis (i.e., \(x=0\)) is invariant by virtue of \(\tanh(0)=0\). Whether the \(\phi\) axis is an attractor or not is determined by its stability. If \(h>0\) for the \(x=0\) orbit, then this orbit is unstable. To see this, we note that two orbits with \(x=0\) maintain a constant separation.
Thus, if nearby points diverge from each other exponentially, they can only do so by diverging from the \(\phi\) axis, which is invariant. To calculate \(h\) for the \(x=0\) orbit, we make use of the ergodicity of \(\phi\) for irrational \(\omega\) to convert a trajectory average to a phase space average. From Eq. (6.24) we obtain\({}^{7}\) for \(x=0\) \[h=\ln\lambda. \tag{6.26}\] Thus if \(\lambda>1\), \(x=0\) is not an attractor. However, from Eqs. (6.23) and (6.22b), \(|x_{n}|<2\lambda\). Hence the orbit is confined to a finite region of space, and there must be an attractor in \(|x|<2\lambda\). Due to the ergodicity in \(\phi\), the measure on the attractor generated by an orbit is uniform in \(\phi\). On the other hand, consider points on the attractor at \(\phi=\pi/2\) and \(\phi=3\pi/2\). Since the \(\cos\phi\) term is zero for these values of \(\phi\), the attractor must contain the points (\(\phi=\pi/2+2\pi\omega\), \(x=0\)) and (\(\phi=3\pi/2+2\pi\omega\), \(x=0\)) and must not contain any points in (\(\phi=\pi/2+2\pi\omega\), \(x\neq 0\)) and (\(\phi=3\pi/2+2\pi\omega\), \(x\neq 0\)). Iterating these points forward, we find that, for all positive integers \(k\), the attractor contains the points (\(\phi=\pi/2+2\pi k\omega\), \(x=0\)) and (\(\phi=3\pi/2+2\pi k\omega\), \(x=0\)) but does not contain any points in (\(\phi=\pi/2+2\pi k\omega\), \(x\neq 0\)) and (\(\phi=3\pi/2+2\pi k\omega\), \(x\neq 0\)). Thus for \(\lambda>1\), the set \(x=0\), \(\phi\in[0,\,2\pi]\) is not the attractor, but there is a countable set of points in the attractor that are dense in \(x=0\), \(\phi\in[0,\,2\pi]\). To clarify, Figure 6.14 shows a picture of the attractor for \(\lambda=1.2\) obtained by iterating the map and then plotting the points after the initial transient has died away. From Figure 6.14 we see that the attractor has points off \(x=0\) (as expected), and, from our previous considerations, it follows that, according to our definition, the attractor is strange.
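This numerical picture, and the negativity of \(h\) on the attractor, are easy to reproduce. The sketch below iterates Eq. (6.22a) together with the map (6.23) for \(\lambda=1.2\), taking the golden mean as an (arbitrary) irrational choice of \(\omega\), and accumulates the Lyapunov exponent of Eq. (6.25) using the derivative \(\partial M/\partial x=2\lambda\cos\phi/\cosh^{2}x\).

```python
import math

LAM = 1.2
OMEGA = (math.sqrt(5) - 1) / 2           # an irrational rotation number (golden mean)

x, phi = 0.5, 0.1
for _ in range(1000):                    # discard a transient
    x, phi = (2 * LAM * math.tanh(x) * math.cos(phi),
              (phi + 2 * math.pi * OMEGA) % (2 * math.pi))

m, h_sum, x_max = 100_000, 0.0, 0.0
for _ in range(m):
    # |dM/dx| = 2*lam*|cos(phi)| / cosh(x)**2, evaluated along the orbit
    h_sum += math.log(2 * LAM * abs(math.cos(phi)) / math.cosh(x) ** 2)
    x, phi = (2 * LAM * math.tanh(x) * math.cos(phi),
              (phi + 2 * math.pi * OMEGA) % (2 * math.pi))
    x_max = max(x_max, abs(x))

h = h_sum / m
```

The computed `h` is negative even though the \(x=0\) orbit has exponent \(\ln\lambda>0\), while `x_max` is of order one, confirming that a typical orbit settles onto a set with points off \(x=0\) yet shows no exponential divergence.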
Calculation of the Lyapunov exponent for the case shown in Figure 6.14 gives \(h\approx-1.282\). Thus the attractor is not chaotic, and we have an example of a strange nonchaotic attractor. In order to prove that \(h\) must be negative (implying a nonchaotic attractor), note that \(x^{-1}\tanh x\geq{\rm d}(\tanh x)/{\rm d}x\), with the equality applying only as \(x\to 0\), \(\infty\). Thus, from Eq. (6.23), \(|\partial M/\partial x|\leq|M/x|\) or, for \(x_{n}\) and \(x_{n+1}\) finite and nonzero, \[\left|\partial M/\partial x\right|_{x_{n},\,\phi_{n}}<\left|x_{n+1}/x_{n}\right|.\] Using this in Eq. (6.25) it immediately follows that \(h\) is negative, since \[h<\lim_{m\to\infty}\left(\frac{1}{m}\sum_{n=1}^{m}\ln\left|\frac{x_{n+1}}{x_{n}}\right|\right)=\lim_{m\to\infty}\left(\frac{1}{m}\ln\left|\frac{x_{m+1}}{x_{1}}\right|\right)=0, \tag{6.27}\] where \(x_{k}\) is assumed to be nonzero. Since the orbit for the strange attractor has \(x=0\) only on a set of zero measure (namely, \(\phi=\pi\pm\pi/2+2\pi k\omega\)), the assumption \(x_{k}\neq 0\) is valid.

Figure 6.14: Plot of \(x\) versus \(\phi\) for the strange nonchaotic attractor of the two dimensional map given by Eqs. (6.22) and (6.23), where \(\lambda=1.2\) and the number of iterations is \(4\times 10^{5}\) (this figure courtesy of D. N. Armstead).

### 6.5 Phase locking of a population of globally coupled oscillators

In Section 6.2 we considered the coupling of two nonlinear oscillators that, when uncoupled, oscillate at two different frequencies. We found that, if the coupling strength exceeds a threshold value, then the two oscillators may 'lock' so that they oscillate at commensurate frequencies. The simplest situation of this type is where they oscillate at the same frequency. With reference to Figure 6.7, this corresponds to the circle map tongues emanating from the (\(w/2\pi\)) axis at \(w/2\pi=0\) and \(w/2\pi=1\) (e.g., \(k>w>-k\) for the \(w/2\pi=0\) tongue).
The problem we address in this section is that of collective synchronization, in which a very large number of oscillators are coupled. The main question is whether, and to what extent, this population of oscillators is subject to locking, whereby some portion of the population oscillates in step at the same frequency. Examples where this problem is of interest include networks of pacemaker cells in the heart and brain, synchronously flashing fireflies and chirping crickets, and synchronization of laser arrays and arrays of Josephson junction devices. Using a perturbative averaging technique, Kuramoto (1984) derived the following equation for a system of \(N\) weakly coupled periodic oscillators, \[{\rm d}\theta_{i}/{\rm d}t=\omega_{i}+\sum_{j=1}^{N}K_{ij}(\theta_{j}-\theta_{i}), \tag{6.28}\] where \(i=1\), \(2\), ..., \(N\), the quantity \(\theta_{i}(t)\) is the phase of oscillator \(i\), \(\omega_{i}\) is the natural (uncoupled) frequency of oscillator \(i\), and \(K_{ij}(\theta)=K_{ij}(\theta+2\pi)\) is a nonlinear coupling between oscillator \(j\) and oscillator \(i\). The simplest choice for \(K_{ij}(\theta)\) is the case of all to all identical sinusoidal coupling, \[K_{ij}(\theta)=(K/N)\sin\theta,\] for which Eq. (6.28) becomes \[{\rm d}\theta_{i}/{\rm d}t=\omega_{i}+(K/N)\sum_{j=1}^{N}\sin(\theta_{j}-\theta_{i}), \tag{6.29}\] which we refer to as the Kuramoto model. See Strogatz (2000) for a review.
Noting that \[N^{-1}\sum_{j=1}^{N}\sin(\theta_{j}-\theta_{i})={\rm Im}\left[{\rm e}^{-{\rm i}\theta_{i}}\left(N^{-1}\sum_{j=1}^{N}{\rm e}^{{\rm i}\theta_{j}}\right)\right],\] Kuramoto introduces the complex 'order parameter,' \[r{\rm e}^{{\rm i}\psi}=\frac{1}{N}\sum_{j=1}^{N}{\rm e}^{{\rm i}\theta_{j}}, \tag{6.30}\] where \(r(t)\) and \(\psi(t)\) are real global quantities that reflect the amplitude (\(r\)) and phase (\(\psi\)) of the collective oscillatory behavior of the coupled oscillator system. In terms of \(r\) and \(\psi\), Eq. (6.29) becomes \[{\rm d}\theta_{i}/{\rm d}t=\omega_{i}+Kr\sin(\psi-\theta_{i}). \tag{6.31}\] The basic behavior of the Kuramoto model is schematically illustrated in Figure 6.15(\(a\)) which shows the time dependence of the amplitude of the coherent global oscillation of the system \(r(t)\). For \(N\gg 1\) there is a critical coupling strength \(K_{*}\) below which (\(K<K_{*}\)) the oscillators behave incoherently. That is, they oscillate at different frequencies and there is no correlation between the oscillator phases. Thus, the sum on the right-hand side of Eq. (6.30) is of the order of \(N^{-1/2}\), as expected for \(\theta_{j}\) distributed randomly in (0, 2\(\pi\)). When \(K\) exceeds the critical value \(K_{*}\) the oscillator phases tend to clump around the mean phase \(\psi(t)\). Letting \(N\rightarrow\infty\), the \(O(N^{-1/2})\) jitter above \(r=0\) for \(K<K_{*}\) approaches zero, as does the jitter around the long time asymptotic amplitude \(r_{\infty}\) for \(K>K_{*}\).

Figure 6.15: (\(a\)) \(r(t)\) versus \(t\); if \(K<K_{*}\) and the initial phases are distributed in a coherent way so that \(r(0)>0\), then the initial coherence decays with time so that \(r\) becomes of order \(N^{-1/2}\). (\(b\)) \(r_{\infty}\equiv r(t\rightarrow\infty)\) versus the coupling constant \(K\).
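The passage from Eq. (6.29) to Eq. (6.31) is an exact identity, valid for any phase configuration, which the following sketch verifies for a randomly chosen configuration (the values of \(N\), \(K\) and the seed are arbitrary):

```python
import cmath
import math
import random

rng = random.Random(0)
N, K = 100, 1.7                      # arbitrary illustrative values
theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(N)]

# Order parameter, Eq. (6.30).
z = sum(cmath.exp(1j * t) for t in theta) / N
r, psi = abs(z), cmath.phase(z)

# The pairwise sum of Eq. (6.29) equals the mean-field form of Eq. (6.31)
# for every oscillator i, identically in the phases:
for i in range(N):
    pairwise = (K / N) * sum(math.sin(theta[j] - theta[i]) for j in range(N))
    mean_field = K * r * math.sin(psi - theta[i])
    assert abs(pairwise - mean_field) < 1e-12
print("Eqs. (6.29) and (6.31) agree for all oscillators")
```

This is why the coupling is called mean field: each oscillator feels the others only through the two global quantities \(r\) and \(\psi\).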
(This picture applies for oscillator distributions of the type shown in Figure 6.16.) As indicated by Figure 6.15(_b_), the degree to which the oscillator population is synchronized increases continuously from \(r=0\) as \(K\) increases past \(K_{*}\). In particular, as shown by Kuramoto, for \(K>K_{*}\) a fraction of the oscillator population is synchronized with their phases locked together, while the remaining oscillators remain unlocked. Furthermore, the fraction of locked oscillators increases continuously from zero at \(K=K_{*}\) as \(K\) increases past \(K_{*}\). In order to analyze the behavior illustrated in Figure 6.15 it is useful to take the \(N\to\infty\) limit. That is, we consider a continuum of oscillators which we characterize by a distribution function \(F(\theta,\,\omega,\,t)\) such that \(F(\theta,\,\omega,\,t)\,\mathrm{d}\theta\,\mathrm{d}\omega\) is the fraction of the oscillators whose phase angles lie between \(\theta\) and \(\theta+\mathrm{d}\theta\) and whose natural frequencies lie between \(\omega\) and \(\omega+\mathrm{d}\omega\). Thus, \(\int\int F\,\mathrm{d}\theta\,\mathrm{d}\omega\equiv 1\). Since the number of oscillators is conserved we can immediately write the equation \[\frac{\partial F}{\partial t}+\frac{\partial}{\partial\theta}\left(\frac{\mathrm{d}\theta}{\mathrm{d}t}\,F\right)+\frac{\partial}{\partial\omega}\left(\frac{\mathrm{d}\omega}{\mathrm{d}t}\,F\right)=0.\] This equation is similar to that for particle conservation for a compressible fluid of density \(\rho(\mathbf{x},\,t)\), \(\partial\rho/\partial t+\nabla\cdot(\rho\mathbf{v})=0\), where \(\mathbf{v}\) is the fluid velocity. Here \(F\) is the density of oscillators in (\(\theta\), \(\omega\)) space, \(\mathrm{d}\theta/\mathrm{d}t\) is the velocity in \(\theta\) and \(\mathrm{d}\omega/\mathrm{d}t\) is the velocity in \(\omega\).
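Before passing to the continuum limit, the \(O(N^{-1/2})\) incoherent level of \(r\) quoted above can be checked directly: for phases drawn independently and uniformly from \((0,\,2\pi)\), quadrupling \(N\) roughly halves the mean order parameter (the sample sizes below are arbitrary).

```python
import cmath
import math
import random

def mean_r(N, trials=200, seed=7):
    """Average of r = |N^{-1} sum_j e^{i theta_j}| over random phase draws."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        z = sum(cmath.exp(1j * rng.uniform(0.0, 2.0 * math.pi))
                for _ in range(N))
        total += abs(z) / N
    return total / trials

r100, r400 = mean_r(100), mean_r(400)
print(r100 / r400)   # roughly 2, the N^{-1/2} scaling
```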
For our problem the natural frequency of an oscillator does not change with time, \(\mathrm{d}\omega/\mathrm{d}t=0\), while \(\mathrm{d}\theta/\mathrm{d}t\) is given by Eq. (6.31). Thus, the equations describing the \(N\to\infty\) continuum limit are \[\frac{\partial F}{\partial t}+\frac{\partial}{\partial\theta}\{[\omega+Kr\sin(\psi-\theta)]F\}=0, \tag{6.33a}\] \[r\mathrm{e}^{\mathrm{i}\psi}=\int_{-\infty}^{+\infty}\int_{0}^{2\pi}\mathrm{e}^{\mathrm{i}\theta}F(\theta,\,\omega,\,t)\,\mathrm{d}\theta\,\mathrm{d}\omega. \tag{6.33b}\] Note that Eq. (6.28) is invariant to the transformation \(\theta_{i}\rightarrow\theta_{i}+\overline{\omega}t\), \(\omega_{i}\rightarrow\omega_{i}+\overline{\omega}\), for any constant \(\overline{\omega}\). Thus, without loss of generality, we are free to shift the oscillator frequencies by a constant value. For example, Figure 6.16(_a_) shows a distribution of oscillator frequencies peaked about \(\omega=\omega_{0}\). By the above argument, an equivalent problem is that where this distribution is shifted so that its peak is at zero as in Figure 6.16(_b_). In what follows we consider the distribution \(G(\omega)\) shown in Figure 6.16(_b_), and we assume that \(G(\omega)\) is even in \(\omega\) and decreases monotonically away from \(\omega=0\). An important question that we can ask is what is the critical coupling coefficient \(K_{*}\) for a given oscillator distribution \(G(\omega)\)? To address this problem we first note the character of the time dependence of \(r(t)\) in Figure 6.15(_a_) for \(K>K_{*}\). In particular, from numerical solutions of (6.29) with large \(N\), it is found that the initial increase of \(r(t)\), if \(r(0)\) is small, is exponential. That is, it has the character of an instability. This can be understood as follows. For \(K>K_{*}\), a small phase coherence of the oscillators produces a small \(r\). This small coherence pulls the oscillator phases toward \(\psi(t)\), thus increasing the coherence and hence \(r\), which more strongly pulls the oscillator phases toward \(\psi(t)\), etc. To analyze the onset of such an instability, we first note that Eqs.
(6.33) possess an incoherent solution, \[\bar{F}=1/(2\pi)\ \text{for}\ 0\leq\theta\leq 2\pi. \tag{6.34}\] This solution is called incoherent because the phase of any oscillator is equally likely to be anywhere in \(0\) to \(2\pi\). Equation (6.34) is a solution of (6.33a) because, from (6.33b), it yields \(r=0\). The question now is whether this incoherent solution is stable. We will find that it is stable for \(K\) below a critical value, and is unstable for \(K\) above a critical value, and we identify \(K_{*}\) with this critical coupling strength for coherence. To do this stability analysis we consider a distribution which is slightly perturbed from the incoherent distribution (6.34) and examine how the perturbation evolves. Let \[F(\theta,\,\omega,\,t)=\bar{F}+f(\theta,\,\omega){\rm e}^{st},\] where we assume \(\bar{F}\gg f\,{\rm e}^{st}\) so that we can linearize the system (6.33). Equation (6.33a) then yields \[sf+\omega\frac{\partial f}{\partial\theta}=\frac{Kr}{2\pi}\cos(\psi-\theta).\] The solution of this equation is \[f=\frac{Kr}{4\pi}\left[\frac{{\rm e}^{{\rm i}(\psi-\theta)}}{s-{\rm i}\omega}+\frac{{\rm e}^{-{\rm i}(\psi-\theta)}}{s+{\rm i}\omega}\right],\] which, when substituted into Eq. (6.33b), yields \[D(s)\equiv 1-\frac{K}{2}\int_{-\infty}^{+\infty}\frac{G(\omega)}{s-{\rm i}\omega}\,{\rm d}\omega=0. \tag{6.35}\] We refer to \(D(s)=0\) as the dispersion relation. The solution of \(D(s)=0\) determines \(s\), and, if \({\rm Re}(s)>0\), the perturbations grow exponentially with time, while, if \({\rm Re}(s)<0\), the perturbations are exponentially damped. Thus, \({\rm Re}(s)>0\) corresponds to instability of the incoherent state (\(\bar{F}=1/2\pi\)), while \({\rm Re}(s)<0\) corresponds to stability. There is a subtlety of this analysis that has been glossed over, and we must now come to grips with it.
Namely, we are dealing with an initial value problem: at \(t=0\) we introduce a small perturbation to \(\bar{F}\), and we then examine how the coherent oscillation evolves for \(t>0\). A proper way of treating such a problem is via a Laplace transform. Regarding \(s\) in Eq. (6.35) as a Laplace transform variable, the dispersion function \(D(s)\) is also produced from the Laplace transform analysis of the initial value problem. However, it is crucial to note that such an analysis only yields the Laplace transform for \({\rm Re}(s)>0\), and that, for \({\rm Re}(s)\leq 0\), the Laplace transform is defined by analytic continuation from the region \({\rm Re}(s)>0\). Thus, considering the integrand in Eq. (6.35), the location of the pole \(\omega=-{\rm i}s\) in the complex \(\omega\) plane leads to different definitions of the integral \(\int{\rm d}\omega\,G(\omega)/(s-{\rm i}\omega)\) for \({\rm Re}(s)>0\), for \({\rm Re}(s)=0\) and for \({\rm Re}(s)<0\). For \({\rm Re}(s)>0\) the result for \(D(s)\) with the integral along the real \(\omega\) axis, Eq. (6.35), is valid. For \({\rm Re}(s)\leq 0\) the dispersion function \(D(s)\) is defined as the analytic continuation of \(D(s)\) from \({\rm Re}(s)>0\). This analytic continuation means that we cannot let the pole \(\omega=-{\rm i}s\) cross the integration contour, since that would create a (nonanalytic) discontinuous jump in \(D(s)\). Thus, we replace the integral along the real \(\omega\) axis from \(\omega=-\infty\) to \(\omega=+\infty\) by the contour integrals shown in Figure 6.17. For the case in Figure 6.16(_b_), where \(G(\omega)\) is an even function of \(\omega\), monotonically decreasing away from \(\omega=0\), it can be shown that there is only one root of \(D(s)=0\) and that that root is real. As \(K\) increases from zero, the root migrates along the negative \(\operatorname{Im}(\omega)\) axis, until at \(K=K_{*}\) it reaches the origin, Figure 6.17(_b_).
Thus, evaluating \(D(0)\) using the contour in Figure 6.17(_b_) and setting \(D(0)=0\) we obtain an equation for the critical coupling \(K_{*}\), \[0=1-\frac{{\rm i}K_{*}}{2}\int_{C_{0}}\frac{G(\omega)}{\omega}\,{\rm d}\omega,\] where the contour \(C_{0}\) is specified in Figure 6.17(_b_). Since \(G\) is assumed even, \(G(\omega)/\omega\) is odd and the integral over the part of \(C_{0}\) along the real axis is zero. Letting the radius of the semicircular indentation around \(\omega=0\) approach zero, we see that the integration over the semicircle arc is \((\frac{1}{2})\times(-2\pi{\rm i})\times\) (the residue of the integrand at \(\omega=0\)). Thus, the integral is \(\int_{C_{0}}G(\omega)/\omega\,{\rm d}\omega=-\pi{\rm i}G(0)\). Using this we obtain the critical coupling constant, \[K_{*}=2(\pi G(0))^{-1}. \tag{6.36}\] Another instructive exercise is to apply Eq. (6.35) to a distribution for which the integration can be carried out in closed form. For this purpose we examine the case of a Lorentzian distribution of oscillator frequencies, \[G(\omega)=\frac{\Delta}{\pi}\frac{1}{\omega^{2}+\Delta^{2}}, \tag{6.37}\] which, when inserted in Eq. (6.35) with the contours shown in Figure 6.17, yields \[s=\frac{1}{2}K-\Delta \tag{6.38}\] for \(\mbox{Re}(s)>0\), \(\mbox{Re}(s)=0\) and \(\mbox{Re}(s)<0\). Thus, for \(K<2\Delta=K_{*}\) there is exponential damping (stability of the incoherent state) and for \(K>2\Delta=K_{*}\) there is exponential growth (instability). Having found \(K_{*}\), we now attempt to determine the phase coherent nonlinear state that the system evolves to due to instability when \(K>K_{*}\). In particular, we wish to determine \(r_{\infty}\) as a function of \(K\) (Figure 6.15(_b_)). We again restrict our consideration to even, monotonically decreasing oscillator distributions as shown in Figure 6.16(_b_). For this case the nonlinear phase coherent state that is approached at long time is time independent.
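Equation (6.38) is easy to check numerically in the regime \({\rm Re}(s)>0\), where the integral in Eq. (6.35) can be taken along the real \(\omega\) axis. The sketch below evaluates \(D(s)\) on a truncated grid (the cutoff and resolution are ad hoc choices) and confirms that \(s=K/2-\Delta\) is a root for the Lorentzian distribution.

```python
import math

# Check of the dispersion relation, Eq. (6.35), for the Lorentzian G(w) of
# Eq. (6.37): the root should be s = K/2 - Delta, Eq. (6.38). Here K > K_* so
# that s > 0 and the real-axis contour applies.
Delta, K = 1.0, 3.0

def G(w):
    return (Delta / math.pi) / (w * w + Delta * Delta)

def D(s, L=200.0, n=40000):
    """D(s) = 1 - (K/2) * integral G(w)/(s - i w) dw (trapezoidal rule)."""
    h = 2.0 * L / n
    total = 0.0 + 0.0j
    for k in range(n + 1):
        w = -L + k * h
        weight = 0.5 if k in (0, n) else 1.0
        total += weight * G(w) / (s - 1j * w)
    return 1.0 - (K / 2.0) * h * total

s_root = K / 2.0 - Delta                   # predicted root, here s = 0.5
print(abs(D(s_root)))                      # close to zero
```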
(For the \(G(\omega)\) in Figure 6.16(_a_) it is time periodic with frequency \(\omega_{0}\).) We denote the time asymptotic nonlinear state by \(F_{\infty}\), \[F_{\infty}(\theta,\,\omega)=F(\theta,\,\omega,\,t\to\infty).\] Thus, setting \(\partial F_{\infty}/\partial t=0\) in (6.33a), we obtain \[[\omega+Kr_{\infty}\sin(\psi-\theta)]F_{\infty}=C(\omega), \tag{6.39}\] where \(C(\omega)\) is an, as yet undetermined, function of \(\omega\), and, since \(F_{\infty}\) is a distribution function, we require \(F_{\infty}(\theta,\,\omega)\geq 0\) for all \(\theta\) and \(\omega\). If \(\omega>Kr_{\infty}\) (\(\omega<-Kr_{\infty}\)), then the term \(\omega+Kr_{\infty}\sin(\psi-\theta)\) is positive (negative) for all \(\theta\), and we have \[F_{\infty}=C(\omega)[\omega+Kr_{\infty}\sin(\psi-\theta)]^{-1} \tag{6.40a}\] for \(|\omega|>Kr_{\infty}\). Since \(F_{\infty}\geq 0\), \(C(\omega)>0\) for \(\omega>0\) and \(C(\omega)<0\) for \(\omega<0\). For \(|\omega|<Kr_{\infty}\), the term \(\omega+Kr_{\infty}\sin(\psi-\theta)\) changes sign as \(\phi=(\psi-\theta)\) varies from 0 to \(2\pi\) (see Figure 6.18). Since \(F_{\infty}\) cannot change sign (it must be non negative), we conclude that \(C(\omega)\) in Eq. (6.39) must be zero if \(|\omega|<Kr_{\infty}\). This does not, however, mean that \(F_{\infty}\) is zero for \(|\omega|<Kr_{\infty}\). In particular, Eq. (6.39) with \(C(\omega)=0\) has the solution \[F_{\infty}=c_{\rm s}\,\delta(\theta+\phi_{\rm s}-\psi)+c_{\rm u}\,\delta(\theta+\phi_{\rm u}-\psi) \tag{6.40b}\] for \(|\omega|<Kr_{\infty}\), where \(c_{\rm s}\) and \(c_{\rm u}\) are constants. That is, \(F_{\infty}\) is a delta function at the roots \(\phi_{\rm s}(\omega)\) and \(\phi_{\rm u}(\omega)\) of \(\omega+Kr_{\infty}\sin\phi=0\).
From the equation \({\rm d}\theta/{\rm d}t=\omega+Kr\sin(\psi-\theta)\), we see that, for fixed \(r\) with \(|\omega|/K<r\), small perturbations of \(\theta\) about the fixed points \(\psi-\theta=\phi_{\rm u,s}\) are governed by \({\rm d}\delta\theta/{\rm d}t=-(Kr\cos\phi_{\rm u,s})\delta\theta\), where, as seen from Figure 6.18, \(\cos\phi_{\rm u}<0\) and \(\cos\phi_{\rm s}>0\). Thus, \(\phi_{\rm s}\) is stable while \(\phi_{\rm u}\) is unstable. Since we seek a stable nonlinear state, we set \(c_{\rm u}=0\). Now making use of the normalization \(\int F_{\infty}\,{\rm d}\theta=1\), we determine that \(c_{\rm s}=1\) while \(C(\omega)\) is given by \[C(\omega)=\left[\int_{0}^{2\pi}\frac{{\rm d}\phi}{\omega+Kr_{\infty}\sin\phi}\right]^{-1}=\frac{{\rm sgn}(\omega)}{2\pi}\sqrt{\omega^{2}-K^{2}r_{\infty}^{2}}, \tag{6.41}\] where \({\rm sgn}(\omega)=+1\) for \(\omega>0\) and \({\rm sgn}(\omega)=-1\) for \(\omega<0\). We can now substitute \(F_{\infty}\) from Eqs. (6.40) and (6.41) into (6.33b) to obtain \(r_{\infty}\). The integrals from \(\omega=-\infty\) to \(\omega=-Kr_{\infty}\) and from \(\omega=Kr_{\infty}\) to \(\omega=+\infty\) cancel each other, leaving only the contributions from the delta function, \[r_{\infty}=\int_{0}^{2\pi}\int_{-Kr_{\infty}}^{+Kr_{\infty}}G(\omega)\,\delta(\phi-\phi_{\rm s}){\rm e}^{{\rm i}\phi}\,{\rm d}\omega\,{\rm d}\phi,\] where \(\phi_{\rm s}(\omega)\) is shown in Figure 6.18. Performing the integration over \(\omega\), we obtain \[r_{\infty}=\frac{1}{2}Kr_{\infty}\int_{0}^{2\pi}\cos^{2}\phi\;G(Kr_{\infty}\sin\phi)\,{\rm d}\phi.\] One root of this equation is \(r_{\infty}=0\). A second branch of solutions corresponding to a partially coherent state is \[1=\frac{K}{2}\int_{0}^{2\pi}\cos^{2}\phi\;G(Kr_{\infty}\sin\phi)\,{\rm d}\phi. \tag{6.42}\] Roots \(r_{\infty}>0\) of Eq. (6.42) exist only for \(K>K_{*}\).
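For the Lorentzian distribution, Eq. (6.37), the self-consistency condition Eq. (6.42) has the closed-form solution \(r_{\infty}=[1-K_{*}/K]^{1/2}\) with \(K_{*}=2\Delta\), which the sketch below checks numerically (the grid size and the choice \(K=2K_{*}\) are arbitrary).

```python
import math

# Numerical check of Eq. (6.42) for the Lorentzian G(w) of Eq. (6.37).
Delta = 1.0
K_star = 2.0 * Delta

def G(w):
    return (Delta / math.pi) / (w * w + Delta * Delta)

def rhs(K, r, n=20000):
    """(K/2) * integral_0^{2pi} cos^2(phi) G(K r sin(phi)) dphi, Eq. (6.42)."""
    h = 2.0 * math.pi / n
    total = sum(math.cos(k * h) ** 2 * G(K * r * math.sin(k * h))
                for k in range(n))
    return 0.5 * K * h * total

K = 4.0
r_inf = math.sqrt(1.0 - K_star / K)        # closed-form solution for Eq. (6.37)
print(rhs(K, r_inf))                       # close to 1, as Eq. (6.42) requires
```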
To verify this we note that putting \(r_{\infty}=0\) in (6.42) gives \(1=(K_{*}/2)\pi G(0)\), which is Eq. (6.36). Furthermore, expanding Eq. (6.42) for small \(r_{\infty}\) yields \[1=\frac{K}{2}\int_{0}^{2\pi}\cos^{2}\phi\left[G(0)+\frac{1}{2}G^{\prime\prime}(0)K^{2}r_{\infty}^{2}\sin^{2}\phi\right]{\rm d}\phi,\] from which we obtain \[r_{\infty}=\alpha\sqrt{K-K_{*}},\ \ {\rm for}\ 1\gg(K-K_{*})/K_{*}\geq 0, \tag{6.43}\] where \(\alpha=4[\pi K_{*}^{4}(-G^{\prime\prime}(0))]^{-1/2}\). For the case of a Lorentzian distribution of oscillator frequencies, Eq. (6.37), the integral (6.42) can be done exactly, yielding \(r_{\infty}=[1-(K_{*}/K)]^{1/2}\). According to Eq. (6.43), as \(K\) increases through \(K_{*}\), \(r_{\infty}\) bifurcates from \(r_{\infty}=0\) to a continuously increasing positive value, as in Figure 6.15(_b_). Note from the above analysis that the integrals over \(-\infty<\omega<-Kr_{\infty}\) and \(Kr_{\infty}<\omega<\infty\) made no contribution to \(r_{\infty}\). Hence only those oscillators whose frequencies lie in the range \(-Kr_{\infty}<\omega<Kr_{\infty}\) contribute to the coherent frequency locking. The fraction of oscillators that are coherently oscillating is thus \(\int_{-Kr_{\infty}}^{+Kr_{\infty}}G(\omega)\,{\rm d}\omega\).

## Problems

1. Assuming that in Figure 6.2 the nonlinear resistor has a resistance \[R(V)=(V/V_{0})R_{0}\exp(-V/V_{0})\] (where \(R_{0}\) and \(V_{0}\) are constants), find the Fourier transform of the current. What is the coefficient of the delta function \(\delta[\omega-(m_{1}\Omega_{1}+m_{2}\Omega_{2})]\)? _Hint_: The modified Bessel function of order \(n\) can be expressed as \[I_{n}(x)=\frac{1}{2\pi}\int_{0}^{2\pi}\exp({\rm i}n\theta)\exp(x\sin\theta)\,{\rm d}\theta.\] 2.
Consider the damped pendulum equation with a forcing on the right-hand side which consists of a sinusoidal part, \(T\sin(\Omega t)\), and a constant part, \(K\), \[{\rm d}^{2}\theta/{\rm d}t^{2}+\nu\,{\rm d}\theta/{\rm d}t+\sin\theta=K+T\sin(\Omega t).\] Define phase space variables \(x^{(1)}={\rm d}\theta/{\rm d}t\), \(x^{(2)}=\theta\) modulo \(2\pi\), and \(x^{(3)}=\Omega t\) modulo \(2\pi\). 1. Show that volumes in phase space shrink exponentially with time. 2. If a surface of section \(x^{(3)}=(\)const\()\) is taken, show that areas shrink with each iterate of the surface of section map. 3. Say that there is a solution of the equation denoted \(\theta=\tilde{\theta}(t)\). Show that, if \(K=0\), then \(\theta=-\tilde{\theta}(t+\pi/\Omega)\) is also a solution. 4. Using the results above show that quasiperiodic solutions filling a toroidal surface are not possible for \(K=0\). (Remark: Quasiperiodic solutions for \(K\neq 0\) do exist and have been investigated in a number of papers; see for example D'Humieres _et al._ (1982).) 3. Derive Eq. (6.14). 4. Write a program to calculate the rotation number \(R\) of the circle map Eq. (6.11). Plot \(R\) versus \(w\) for \(k=0.4\), for \(k=0.8\), and for \(k=1.0\). 5. Show that the van der Pol equation, Eq. (1.13), undergoes a Hopf bifurcation of the steady state (\(x\), \({\rm d}x/{\rm d}t\)) = (0, 0) as the parameter \(\eta\) is increased. At what value of \(\eta\) does the bifurcation occur? Notes 1. There are quasiperiodic orbits for \(k>1\), but these only occur on a zero Lebesgue measure Cantor set of \(\theta\)-values (Kadanoff, 1983). Consequently these orbits do not generate a smooth density. Also, since a randomly chosen initial condition clearly has zero probability of falling on such an orbit, and, since these orbits are nonattracting, we conclude that these orbits are not realized for typical initial conditions. 2.
We note, however, that although quasiperiodicity is present in Brandstater and Swinney's experiment, frequency locking does not seem to occur. The reason for this appears to be connected with the circular symmetry of the Couette Taylor configuration. 3. The equation with \(h(\theta)=\sin\theta\) has been studied because of its relevance as a model for Josephson junctions. It is found that, when \(h(\theta)\) is a pure sinusoid, the widths of the frequency lockings degenerate to zero, and there are no Arnold tongues except at rotation numbers one and zero (see, for example, Azbel and Bak (1984) and references therein). 4. The case \(a<0\) is referred to as 'subcritical'. In this case, when \(p\) increases through \(p_{1}\), an orbit initially near \({\bf x}_{*}\) will typically move far from \({\bf x}_{*}\) (this is unlike the case where \(a>0\)). 5. In the paper of Newhouse _et al._ (1978) the small perturbations to the flow were required to have small first- and second-order derivatives but could have large higher-order derivatives. In contrast, in the case of the torus \(T^{4}\) treated by Ruelle and Takens (1971), the perturbations could be such that derivatives of all orders were small. 6. See Grebogi _et al._ (1984), Bondeson _et al._ (1985) and Romeiras _et al._ (1989) and references therein. Bondeson _et al._ (1985) show that strange nonchaotic behavior in their system can be understood on the basis of Anderson localization (see Chapter 11) in a quasiperiodic potential. 7. \(\partial M/\partial x=0\) at \(\phi=\pi/2\), \(3\pi/2\). Thus, if the orbit \(\phi_{n}\) ever lands on one of these points, then Eq. (6.24) yields \(h=-\infty\). In this case Eq. (6.24) does not imply Eq. (6.25). The set of initial conditions \(\phi_{0}\) that yield such orbits is countable (\(\phi_{0}=(\pi\pm\pi/2-2\pi k\omega)\) modulo 1 for \(k\) a positive integer).
Since this set is countable it is of zero Lebesgue measure, and for all other \(\phi_{0}\) we have that \(h\) for the \(x=0\) orbit is given by Eq. (6.26).

## Chapter 7 Chaos in Hamiltonian systems

Hamiltonian systems are a class of dynamical systems that occur in a wide variety of circumstances.[1] The special properties of Hamilton's equations endow these systems with attributes that differ qualitatively and fundamentally from other systems. (For example, Hamilton's equations do not possess attractors.) Examples of Hamiltonian dynamics include not only the well known case of mechanical systems in the absence of friction, but also a variety of other problems such as the paths followed by magnetic field lines in a plasma, the mixing of fluids, and the ray equations describing the trajectories of propagating waves. In all of these situations chaos can be an important issue. Furthermore, chaos in Hamiltonian systems is at the heart of such fundamental questions as the foundations of statistical mechanics and the stability of the solar system. In addition, Hamiltonian mechanics and its structure are reflected in quantum mechanics. Thus, in Chapter 11 we shall treat the connection between chaos in Hamiltonian systems and related quantum phenomena. The present chapter will be devoted to a discussion of Hamiltonian dynamics and the role that chaos plays in these systems. We begin by presenting a summary of some basic concepts in Hamiltonian mechanics.[2, 3]

### 7.1 Hamiltonian systems

The dynamics of a Hamiltonian system is completely specified by a single function, the Hamiltonian, \(H(\mathbf{p},\,\mathbf{q},\,t)\). The state of the system is specified by its 'momentum' \(\mathbf{p}\) and 'position' \(\mathbf{q}\). Here the vectors \(\mathbf{p}\) and \(\mathbf{q}\) have the same dimensionality which we denote \(N\). We call \(N\) the number of _degrees of freedom_ of the system.
For example, Hamilton's equations for the motion of \(K\) point masses interacting in three dimensional space via gravitational attraction has \(N=3K\) degrees of freedom, corresponding to the three spatial coordinates needed to specify the location of each mass. Hamilton's equations determine the trajectory (**p**(_t_), **q**(_t_)) that the system follows in the \(2N\) dimensional phase space, and are given by \[{\rm d}{\bf p}/{\rm d}t=-\partial H({\bf p},\,{\bf q},\,t)/\partial{\bf q}, \tag{7.1a}\] \[{\rm d}{\bf q}/{\rm d}t=\partial H({\bf p},\,{\bf q},\,t)/\partial{\bf p}. \tag{7.1b}\] In the special case that the Hamiltonian has no explicit time dependence, \(H=H({\bf p},\,{\bf q})\), we can use Hamilton's equations to show that, as \({\bf p}\) and \({\bf q}\) vary with time, the value of \(H({\bf p}(t),\,{\bf q}(t))\) remains a constant: \[\frac{{\rm d}H}{{\rm d}t}=\frac{{\rm d}{\bf q}}{{\rm d}t}\cdot\frac{\partial H}{\partial{\bf q}}+\frac{{\rm d}{\bf p}}{{\rm d}t}\cdot\frac{\partial H}{\partial{\bf p}}=\frac{\partial H}{\partial{\bf p}}\cdot\frac{\partial H}{\partial{\bf q}}-\frac{\partial H}{\partial{\bf q}}\cdot\frac{\partial H}{\partial{\bf p}}=0.\] Thus, identifying the value of the Hamiltonian with the energy \(E\) of the system, we see that the energy is conserved for time independent systems, \(E=H({\bf p},\,{\bf q})=(\)constant). #### Symplectic structure We can write Eqs.
(7.1) in the form \[{\rm d}\tilde{{\bf x}}/{\rm d}t={\bf F}(\tilde{{\bf x}},\,t), \tag{7.2}\] by taking \(\tilde{{\bf x}}\) to be the \(2N\) dimensional vector \[\tilde{{\bf x}}=\begin{pmatrix}{\bf p}\\ {\bf q}\end{pmatrix},\] and by taking \({\bf F}(\tilde{{\bf x}},\,t)\) to be \[{\bf F}(\tilde{{\bf x}},\,t)={\bf S}_{N}\cdot\partial H/\partial\tilde{{\bf x}}, \tag{7.3}\] with \[{\bf S}_{N}=\begin{bmatrix}{\bf O}_{N}&-{\bf I}_{N}\\ {\bf I}_{N}&{\bf O}_{N}\end{bmatrix}, \tag{7.4}\] where \({\bf I}_{N}\) is the \(N\) dimensional identity matrix, \({\bf O}_{N}\) is the \(N\times N\) matrix of zeros, and \[\frac{\partial H}{\partial\tilde{{\bf x}}}=\begin{bmatrix}\partial H/\partial{\bf p}\\ \partial H/\partial{\bf q}\end{bmatrix}. \tag{7.5}\] From this we see how restricted the class of Hamiltonian systems is. In particular, a general system of the form (7.2) requires the specification of all the components of the _vector_ function \({\bf F}(\tilde{{\bf x}},\,t)\), while by (7.3), if the system is Hamiltonian, it is specified by a single scalar function of \({\bf p}\), \({\bf q}\) and \(t\) (the Hamiltonian). One of the basic properties of Hamilton's equations is that they preserve \(2N\) dimensional volumes in the phase space. This follows by taking the divergence of \({\bf F}(\tilde{{\bf x}},\,t)\) in Eq. (7.2), which gives \[\frac{\partial}{\partial\tilde{{\bf x}}}\cdot{\bf F}=\frac{\partial}{\partial{\bf p}}\cdot\left(-\frac{\partial H}{\partial{\bf q}}\right)+\frac{\partial}{\partial{\bf q}}\cdot\left(\frac{\partial H}{\partial{\bf p}}\right)=0. \tag{7.6}\] Thus, if we consider an initial closed surface \(S_{0}\) in the \(2N\) dimensional phase space and evolve each point on the surface forward with time, we obtain at each instant of time \(t\) a new closed surface \(S_{t}\) which contains within it precisely the same \(2N\) dimensional volume as does \(S_{0}\).
This follows from \[\frac{{\rm d}}{{\rm d}t}\int_{S_{t}}{\rm d}^{2N}\tilde{x}=\oint_{S_{t}}\frac{{\rm d}\tilde{{\bf x}}}{{\rm d}t}\cdot{\rm d}{\bf S}=\oint_{S_{t}}{\bf F}\cdot{\rm d}{\bf S}=\int_{S_{t}}\frac{\partial}{\partial\tilde{{\bf x}}}\cdot{\bf F}\,{\rm d}^{2N}\tilde{x}=0,\] where \(\int_{S_{t}}\cdots\) denotes integration over the volume enclosed by \(S_{t}\), \(\oint_{S_{t}}\cdots\) denotes a surface integral over the closed surface \(S_{t}\), and the third equality is from the divergence theorem (cf. Eq. (1.12)). As a consequence of this result, Hamiltonian systems do not have attractors in the usual sense. This incompressibility of phase space volumes for Hamiltonian systems is called Liouville's theorem. Perhaps the most basic structural property of Hamilton's equations is that they are _symplectic_. That is, if we consider three orbits that are infinitesimally displaced from each other, \(({\bf p}(t),\,{\bf q}(t))\), \(({\bf p}(t)+\delta{\bf p}(t)\), \({\bf q}(t)+\delta{\bf q}(t))\) and \(({\bf p}(t)+\delta{\bf p}^{\prime}(t)\), \({\bf q}(t)+\delta{\bf q}^{\prime}(t))\), where \(\delta{\bf p}\), \(\delta{\bf q}\), \(\delta{\bf p}^{\prime}\) and \(\delta{\bf q}^{\prime}\) are infinitesimal \(N\) vectors, then the quantity, \[\delta{\bf p}\cdot\delta{\bf q}^{\prime}-\delta{\bf q}\cdot\delta{\bf p}^{\prime},\] which we call the differential symplectic area, is independent of time, \[\frac{{\rm d}}{{\rm d}t}(\delta{\bf p}\cdot\delta{\bf q}^{\prime}-\delta{\bf q}\cdot\delta{\bf p}^{\prime})=0. \tag{7.7}\] The differential symplectic area can also be written as \[\delta{\bf p}\cdot\delta{\bf q}^{\prime}-\delta{\bf q}\cdot\delta{\bf p}^{\prime}=\delta\tilde{{\bf x}}^{\dagger}\cdot{\bf S}_{N}\cdot\delta\tilde{{\bf x}}^{\prime}, \tag{7.8}\] where \(\dagger\) denotes transpose. To derive (7.7) we differentiate (7.8) with respect to time and use Eqs.
(7.2)-(7.5): \[\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}\left(\delta\tilde{\mathbf{x}}^{\dagger}\cdot\mathbf{S}_{N}\cdot\delta\tilde{\mathbf{x}}^{\prime}\right)&=\frac{\mathrm{d}\delta\tilde{\mathbf{x}}^{\dagger}}{\mathrm{d}t}\cdot\mathbf{S}_{N}\cdot\delta\tilde{\mathbf{x}}^{\prime}+\delta\tilde{\mathbf{x}}^{\dagger}\cdot\mathbf{S}_{N}\cdot\frac{\mathrm{d}\delta\tilde{\mathbf{x}}^{\prime}}{\mathrm{d}t}\\ &=\left(\frac{\partial\mathbf{F}}{\partial\tilde{\mathbf{x}}}\cdot\delta\tilde{\mathbf{x}}\right)^{\dagger}\cdot\mathbf{S}_{N}\cdot\delta\tilde{\mathbf{x}}^{\prime}+\delta\tilde{\mathbf{x}}^{\dagger}\cdot\mathbf{S}_{N}\cdot\frac{\partial\mathbf{F}}{\partial\tilde{\mathbf{x}}}\cdot\delta\tilde{\mathbf{x}}^{\prime}\\ &=\delta\tilde{\mathbf{x}}^{\dagger}\cdot\left[\left(\frac{\partial\mathbf{F}}{\partial\tilde{\mathbf{x}}}\right)^{\dagger}\cdot\mathbf{S}_{N}+\mathbf{S}_{N}\cdot\frac{\partial\mathbf{F}}{\partial\tilde{\mathbf{x}}}\right]\cdot\delta\tilde{\mathbf{x}}^{\prime}\\ &=\delta\tilde{\mathbf{x}}^{\dagger}\cdot\left[\left(\mathbf{S}_{N}\cdot\frac{\partial^{2}H}{\partial\tilde{\mathbf{x}}\partial\tilde{\mathbf{x}}}\right)^{\dagger}\cdot\mathbf{S}_{N}+\mathbf{S}_{N}\cdot\mathbf{S}_{N}\cdot\frac{\partial^{2}H}{\partial\tilde{\mathbf{x}}\partial\tilde{\mathbf{x}}}\right]\cdot\delta\tilde{\mathbf{x}}^{\prime}\\ &=\delta\tilde{\mathbf{x}}^{\dagger}\cdot\left[\frac{\partial^{2}H}{\partial\tilde{\mathbf{x}}\partial\tilde{\mathbf{x}}}\cdot\mathbf{S}_{N}^{\dagger}\cdot\mathbf{S}_{N}+\mathbf{S}_{N}\cdot\mathbf{S}_{N}\cdot\frac{\partial^{2}H}{\partial\tilde{\mathbf{x}}\partial\tilde{\mathbf{x}}}\right]\cdot\delta\tilde{\mathbf{x}}^{\prime}\\ &=0,\end{split}\] where we have used \(\mathbf{S}_{N}\cdot\mathbf{S}_{N}=-\mathbf{I}_{2N}\) (where \(\mathbf{I}_{2N}\) is the \(2N\) dimensional identity matrix), \(\mathbf{S}_{N}^{\dagger}=-\mathbf{S}_{N}\) and noted that \(\partial^{2}H/\partial\tilde{\mathbf{x}}\partial\tilde{\mathbf{x}}\) is a symmetric matrix. (In terms of the notation of Chapter 4, \(\partial\mathbf{F}/\partial\tilde{\mathbf{x}}=\mathbf{DF}\).) For the case of one degree of freedom systems (\(N=1\)), Eq. (7.7) says that infinitesimal areas are preserved following the flow. (Figure 7.1 shows two infinitesimal vectors defining an infinitesimal parallelogram. The parallelogram area is \(\delta p^{\prime}\delta q-\delta q^{\prime}\delta p\).)
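Equation (7.7) can also be illustrated numerically. The sketch below integrates an orbit of the pendulum Hamiltonian \(H=p^{2}/2-\cos q\) (an illustrative choice, with arbitrary initial data and step size) together with two tangent vectors evolved by the linearized flow, and confirms that their symplectic area stays constant to integration accuracy:

```python
import math

def deriv(y):
    q, p, dq1, dp1, dq2, dp2 = y
    c = math.cos(q)
    # Hamilton's equations for H = p^2/2 - cos(q), plus their linearization.
    return [p, -math.sin(q), dp1, -c * dq1, dp2, -c * dq2]

def rk4_step(y, h):
    """One fourth-order Runge-Kutta step."""
    k1 = deriv(y)
    k2 = deriv([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
    k3 = deriv([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
    k4 = deriv([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6.0 * (a + 2 * b + 2 * cc + d)
            for yi, a, b, cc, d in zip(y, k1, k2, k3, k4)]

def area(y):
    """Differential symplectic area dp*dq' - dq*dp' of the two tangent vectors."""
    _, _, dq1, dp1, dq2, dp2 = y
    return dp1 * dq2 - dq1 * dp2

y = [0.7, 0.3, 1.0, 0.0, 0.0, 1.0]        # (q, p) plus two tangent vectors
a0 = area(y)
for _ in range(10000):                     # integrate to t = 10
    y = rk4_step(y, 0.001)
print(abs(area(y) - a0))                   # tiny: the symplectic area is conserved
```

Note that the Runge-Kutta integrator is not itself symplectic; the area is conserved here only to the (very small) discretization error, which is enough to exhibit the exact invariance of the continuous flow.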
Since infinitesimal areas are preserved by a Hamiltonian flow with \(N=1\), so are noninfinitesimal areas. Thus for \(N=1\) Liouville's theorem and the symplectic condition are the same condition. For \(N>1\) the symplectic condition is not implied by Liouville's theorem. It can be shown,[2] however, that the symplectic condition implies volume conservation; so the symplectic condition is the more fundamental requirement for Hamiltonian mechanics. We interpret (7.7) as saying that the algebraic sum of the parallelogram areas formed by projecting the vectors \(\delta\mathbf{p}\), \(\delta\mathbf{q}\), \(\delta\mathbf{p}^{\prime}\), \(\delta\mathbf{q}^{\prime}\) onto the \(N\) coordinate planes (\(p_{i}\), \(q_{i}\)) is conserved, \[\delta\mathbf{p}\cdot\delta\mathbf{q}^{\prime}-\delta\mathbf{q}\cdot\delta\mathbf{p}^{\prime}=\sum_{i=1}^{N}(\delta p_{i}\,\delta q_{i}^{\prime}-\delta q_{i}\,\delta p_{i}^{\prime}).\] The quantity \(\delta\mathbf{p}\cdot\delta\mathbf{q}^{\prime}-\delta\mathbf{q}\cdot\delta\mathbf{p}^{\prime}\) is the differential form of _Poincare's integral invariant_,[2] \[\oint_{\gamma}\mathbf{p}\cdot\mathrm{d}\mathbf{q}=\sum_{i=1}^{N}\oint_{\gamma}p_{i}\,\mathrm{d}q_{i}, \tag{7.9a}\] where the integral is taken around a closed path \(\gamma\) in (**p**, **q**) space. We also refer to the quantity \(\oint_{\gamma}\mathbf{p}\cdot\mathrm{d}\mathbf{q}\) as the _symplectic area_. Poincare's integral invariant is independent of time if the closed path \(\gamma\) is taken following the flow in phase space.[2] That is, \(\gamma(t)\) is the path obtained from \(\gamma(0)\) by evolving all the points on \(\gamma(0)\) forward in time by the amount \(t\) via Hamilton's equations. A useful generalization of the above statement of the invariance of \(\oint\mathbf{p}\cdot\mathrm{d}\mathbf{q}\) following the flow is the _Poincare Cartan integral theorem_.[2] Consider the \((2N+1)\) dimensional extended phase space (**p**, **q**, _t_).
Let \(\Gamma_{1}\) be a closed curve in this space and consider the tube of trajectories through points on \(\Gamma_{1}\) as shown in Figure 7.2 for \(N=1\). The Poincare Cartan integral theorem states that the 'action integral' around the path \(\Gamma_{1}\), \(\oint_{\Gamma_{1}}(\mathbf{p}\cdot\mathrm{d}\mathbf{q}-H\,\mathrm{d}t)\), is the same value for any other path \(\Gamma_{2}\) encircling the same tube of trajectories, \[\oint_{\Gamma_{1}}(\mathbf{p}\cdot\mathrm{d}\mathbf{q}-H\,\mathrm{d}t)=\oint_{\Gamma_{2}}(\mathbf{p}\cdot\mathrm{d}\mathbf{q}-H\,\mathrm{d}t).\]

Figure 7.2: Trajectory tube through \(\Gamma_{1}\).

A canonical change of variables from (\(\mathbf{p}\), \(\mathbf{q}\)) to new variables (\(\overline{\mathbf{p}}\), \(\overline{\mathbf{q}}\)) can be specified by a generating function \(S(\overline{\mathbf{p}},\,\mathbf{q},\,t)\) through \[\mathbf{p}=\partial S/\partial\mathbf{q},\quad\overline{\mathbf{q}}=\partial S/\partial\overline{\mathbf{p}}. \tag{7.10}\] Thus the change of variables is given implicitly: to obtain \(\overline{\mathbf{p}}\) in terms of \(\mathbf{p}\) and \(\mathbf{q}\) solve \(\mathbf{p}=\partial S/\partial\mathbf{q}\) for \(\overline{\mathbf{p}}\); to obtain \(\overline{\mathbf{q}}\) in terms of \(\mathbf{p}\) and \(\mathbf{q}\) substitute the solution for \(\overline{\mathbf{p}}\) into \(\overline{\mathbf{q}}=\partial S/\partial\overline{\mathbf{p}}\). Note that the change of variables specified by Eq. (7.10) is guaranteed to be symplectic. That is, \[\delta\mathbf{p}\cdot\delta\mathbf{q}^{\prime}-\delta\mathbf{q}\cdot\delta\mathbf{p}^{\prime}=\delta\overline{\mathbf{p}}\cdot\delta\overline{\mathbf{q}}^{\prime}-\delta\overline{\mathbf{q}}\cdot\delta\overline{\mathbf{p}}^{\prime}.\] This can be checked by differentiating Eq. (7.10), \[\delta\overline{\mathbf{q}}=\frac{\partial^{2}S}{\partial\overline{\mathbf{p}}\partial\overline{\mathbf{p}}}\cdot\delta\overline{\mathbf{p}}+\frac{\partial^{2}S}{\partial\overline{\mathbf{p}}\partial\mathbf{q}}\cdot\delta\mathbf{q},\] \[\delta\mathbf{p}=\frac{\partial^{2}S}{\partial\mathbf{q}\partial\overline{\mathbf{p}}}\cdot\delta\overline{\mathbf{p}}+\frac{\partial^{2}S}{\partial\mathbf{q}\partial\mathbf{q}}\cdot\delta\mathbf{q},\] and substituting into the symplectic condition given above.
In terms of the generating function the new Hamiltonian is given by\({}^{2}\) \[\overline{H}(\overline{\mathbf{p}},\,\overline{\mathbf{q}},\,t)=H(\mathbf{p},\,\mathbf{q},\,t)+\partial S/\partial t. \tag{7.11}\]

#### Hamiltonian maps

Say we consider a Hamiltonian system and define the 'time \(T\) map' \(\mathcal{M}_{T}\) for the system as \[\mathcal{M}_{T}(\mathbf{x}(t),\,t)=\mathbf{x}(t+T), \tag{7.12}\] where \(\mathbf{x}=(\mathbf{p},\,\mathbf{q})\) denotes the position in phase space. (The explicit dependence on \(t\) in the second argument of \(\mathcal{M}_{T}\) is absent if the Hamiltonian is time independent.) Taking a differential variation of Eq. (7.12) with respect to \(\mathbf{x}\), we have \[\frac{\partial\mathcal{M}_{T}}{\partial\mathbf{x}}\cdot\delta\mathbf{x}(t)=\delta\mathbf{x}(t+T).\] The symplectic condition for a Hamiltonian flow, \[\delta\mathbf{x}^{\dagger}(t+T)\cdot\mathbf{S}_{N}\cdot\delta\mathbf{x}^{\prime}(t+T)=\delta\mathbf{x}^{\dagger}(t)\cdot\mathbf{S}_{N}\cdot\delta\mathbf{x}^{\prime}(t),\] then yields \[\delta\mathbf{x}^{\dagger}(t)\cdot\left[\left(\frac{\partial\mathcal{M}_{T}}{\partial\mathbf{x}}\right)^{\dagger}\cdot\mathbf{S}_{N}\cdot\frac{\partial\mathcal{M}_{T}}{\partial\mathbf{x}}\right]\cdot\delta\mathbf{x}^{\prime}(t)=\delta\mathbf{x}^{\dagger}(t)\cdot\mathbf{S}_{N}\cdot\delta\mathbf{x}^{\prime}(t).\] Since the variations \(\delta\mathbf{x}(t)\) and \(\delta\mathbf{x}^{\prime}(t)\) are arbitrary, the matrix \(\mathbf{A}=\partial\mathcal{M}_{T}/\partial\mathbf{x}\) must satisfy \[\mathbf{S}_{N}=\mathbf{A}^{\dagger}\cdot\mathbf{S}_{N}\cdot\mathbf{A}. \tag{7.13}\] We call a matrix satisfying (7.13) _symplectic_. The product of symplectic matrices is also symplectic. To see this, suppose that \(\mathbf{A}\) and \(\mathbf{B}\) are symplectic. Then \[(\mathbf{AB})^{\dagger}\cdot\mathbf{S}_{N}\cdot(\mathbf{AB})=\mathbf{B}^{\dagger}\cdot(\mathbf{A}^{\dagger}\cdot\mathbf{S}_{N}\cdot\mathbf{A})\cdot\mathbf{B}=\mathbf{B}^{\dagger}\cdot\mathbf{S}_{N}\cdot\mathbf{B}=\mathbf{S}_{N}.\] So \(\mathbf{AB}\) is symplectic. One consequence of the conservation of phase space volumes for Hamiltonian systems is the _Poincare recurrence theorem_. Say we consider a time independent Hamiltonian \(H=H(\mathbf{p},\,\mathbf{q})\) for the case where all orbits are bounded. (This occurs if the energy surface is bounded; i.e., there are no solutions of \(E=H(\mathbf{p},\,\mathbf{q})\) with \(|\mathbf{p}|\rightarrow\infty\) or \(|\mathbf{q}|\rightarrow\infty\).)
Now pick _any_ initial point in phase space, and surround it with a ball \(R_{0}\) of small radius \(\varepsilon>0\). Poincare's recurrence theorem states that, if there are points which leave the initial ball, there are always some of these which will return to it if we wait long enough, and this is true no matter how small we choose \(\varepsilon\) to be. In order to see that this is so, consider the time \(T\) map, Eq. (7.12), which evolves points forward in time by an amount \(T\). Say that under the time \(T\) map the initial ball \(R_{0}\) is mapped to a region \(R_{1}\) outside the initial ball (\(R_{1}\cap R_{0}\) is empty). Continue mapping so as to obtain regions \(R_{2}\), \(R_{3}\), \(\ldots\) By Liouville's theorem all these regions have the same volume, equal to the volume of the initial ball \(R_{0}\). Since the orbits are bounded, they are confined to a finite volume region of phase space. Thus, as we repeatedly apply the time \(T\) map, we must eventually find that we produce a region \(R_{r}\) which overlaps a previously produced region \(R_{s}\), \(r>s\). (If this were not so, we would eventually come to the impossible situation where the sum of the volumes of the nonoverlapping regions \(R_{j}\) exceeds the volume of the bounded region to which they are confined.) Now apply the inverse of the time \(T\) map to \(R_{r}\) and \(R_{s}\). This inverse must produce intersecting regions (namely \(R_{r-1}\) and \(R_{s-1}\)). Applying the inverse map \(s\) times, we conclude that \(R_{r-s}\) (recall that \(r-s>0\)) intersects the original ball \(R_{0}\). Thus, as originally claimed, there are points in \(R_{0}\) which return to \(R_{0}\) after some time \((r-s)T\). As in the case of general non Hamiltonian systems, the surface of section technique also provides an extremely useful tool for the analysis of Hamiltonian systems. There are two cases that are of interest.
(_a_) The Hamiltonian depends periodically on time: \(H=H(\mathbf{p},\,\mathbf{q},\,t)=H(\mathbf{p},\,\mathbf{q},\,t+\tau)\), where \(\tau\) is the period. (_b_) The Hamiltonian has no explicit dependence on time: \(H=H(\mathbf{p},\,\mathbf{q})\). First, we consider the case of a time periodic Hamiltonian. In that case, we can consider the phase space as having dimension \(2N+1\) by replacing the argument \(t\) in \(H\) by a dependent variable \(\xi\), taking the phase space variables to be (\(\mathbf{p}\), \(\mathbf{q}\), \(\xi\)), and supplementing Hamilton's equations by the addition of the equation \(\mathrm{d}\xi/\mathrm{d}t=1\). Since the Hamiltonian is periodic in \(\xi\) with period \(\tau\), we can consider \(\xi\) as an angle variable and replace its value in the Hamiltonian by \[\overline{\xi}=\xi\ \mathrm{modulo}\ \tau.\] We then use for our surface of section the surface \(\overline{\xi}=t_{0}\), where \(t_{0}\) is a constant between zero and \(\tau\). (This is the same construction we used for the periodically driven damped pendulum equation in Chapter 1.) Since the Hamiltonian is time periodic, the time \(T\) map defined by (7.12) satisfies \[\mathcal{M}_{\tau}(\mathbf{x},\,t_{0})=\mathcal{M}_{\tau}(\mathbf{x},\,t_{0}+n\tau),\] where \(n\) is an integer and we have taken \(T=\tau\). Hence the surface of section map, which we denote \(\mathbf{M}(\mathbf{x})\), is \[\mathbf{M}(\mathbf{x})=\mathcal{M}_{\tau}(\mathbf{x},\,t_{0}),\] and \(\mathbf{M}\) is endowed with the same symplectic properties as \(\mathcal{M}_{\tau}\) (i.e., the matrix \(\partial\mathbf{M}/\partial\mathbf{x}\) satisfies (7.13)). Example: Consider the 'kicked rotor' illustrated in Figure 7.3. There is a bar of moment of inertia \(\tilde{I}\) and length \(l\), which is fastened at one end to a frictionless pivot. The other end is subjected to a vertical periodic impulsive force of impulse strength \(K/l\) applied at times \(t=0\), \(\tau\), \(2\tau\), \(\ldots\) (There is no gravity.)
Figure 7.3: The kicked rotor. There is no gravity and no friction at the pivot.

Using canonically conjugate variables \(p_{\theta}\) (representing the angular momentum) and \(\theta\) (the angular position of the rotor), we have that the Hamiltonian for this system and the corresponding equations of motion obtained from (7.1) are given by \[H(p_{\theta},\,\theta,\,t)=p_{\theta}^{2}/(2\tilde{I})+K\cos\theta\sum_{n}\delta(t-n\tau),\] \[\frac{\mathrm{d}p_{\theta}}{\mathrm{d}t}=K\sin\theta\sum_{n}\delta(t-n\tau), \tag{7.14a}\] \[\frac{\mathrm{d}\theta}{\mathrm{d}t}=p_{\theta}/\tilde{I}, \tag{7.14b}\] where \(\delta(\ldots)\) denotes the Dirac delta function. From Eqs. (7.14) we see that \(p_{\theta}\) is constant between the kicks but changes discontinuously at each kick. The position variable \(\theta\) varies linearly with \(t\) between kicks (because \(p_{\theta}\) is constant) and is continuous at each kick. For our surface of section we examine the values of \(p_{\theta}\) and \(\theta\) just after each kick. Let \(p_{n}\) and \(\theta_{n}\) denote the values of \(p_{\theta}\) and \(\theta\) at times \(t=n\tau+0^{+}\), where \(0^{+}\) denotes a positive infinitesimal. By integrating (7.14a) through the delta function at \(t=(n+1)\tau\), we then obtain \[p_{n+1}-p_{n}=K\sin\theta_{n+1},\] and from (7.14b), \(\theta_{n+1}-\theta_{n}=p_{n}\tau/\tilde{I}\). Without loss of generality we can take \(\tau/\tilde{I}=1\) to obtain the map \[\theta_{n+1}=(\theta_{n}+p_{n})\ \text{modulo}\ 2\pi, \tag{7.15a}\] \[p_{n+1}=p_{n}+K\sin\theta_{n+1}, \tag{7.15b}\] where we have added a modulo \(2\pi\) to Eq. (7.15a) since \(\theta\) is an angle, and we wish to restrict its value to be between zero and \(2\pi\). The map given by Eqs. (7.15) is often called the 'standard map' and has proven to be a very convenient model for the study of the typical chaotic behavior of Hamiltonian systems that yield a two dimensional map. It is a simple matter to check that Eqs. (7.15) preserve area in (\(p\), \(\theta\)) space.
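Eqs. (7.15) are also very easy to explore numerically. The sketch below (illustrative code, not from the text) iterates the standard map and, at every point visited, checks that the determinant of the Jacobian of (7.15) equals 1, i.e., that the map is area preserving:

```python
import numpy as np

def standard_map(theta, p, K):
    """One iterate of the standard map, Eqs. (7.15)."""
    theta_next = (theta + p) % (2.0 * np.pi)   # (7.15a)
    p_next = p + K * np.sin(theta_next)        # (7.15b)
    return theta_next, p_next

def jacobian(theta, p, K):
    """Jacobian of (7.15) with respect to (theta_n, p_n)."""
    theta_next = (theta + p) % (2.0 * np.pi)
    c = K * np.cos(theta_next)
    return np.array([[1.0, 1.0],
                     [c, 1.0 + c]])

K = 1.0
theta, p = 1.0, 0.5   # arbitrary initial condition
for _ in range(100):
    # area preservation: det = (1 + c) - c = 1 exactly, at every point
    assert abs(np.linalg.det(jacobian(theta, p, K)) - 1.0) < 1e-12
    theta, p = standard_map(theta, p, K)
print(theta, p)
```

Plotting many such orbits for a range of initial conditions is the usual way the mixed chaotic/regular phase space of the standard map is exhibited.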
To do this we calculate the determinant of the Jacobian of the map and verify that it is 1: \[\det\begin{pmatrix}\partial\theta_{n+1}/\partial\theta_{n}&\partial\theta_{n+1}/\partial p_{n}\\ \partial p_{n+1}/\partial\theta_{n}&\partial p_{n+1}/\partial p_{n}\end{pmatrix}=\det\begin{pmatrix}1&1\\ K\cos\theta_{n+1}&1+K\cos\theta_{n+1}\end{pmatrix}=1.\] Since \(N=1\) this also implies that the map is symplectic, as required. We now consider the second class of surfaces of section that we have mentioned, namely the case where the Hamiltonian has no explicit time dependence. In this case, since the energy is conserved, the motion of the system is restricted to the (\(2N-1\)) dimensional surface given by \(E=H(\mathbf{p},\,\mathbf{q})\). Taking a surface of section we would then obtain a (\(2N-2\)) dimensional map. Say we choose for our surface of section the plane \(q_{1}=K_{0}\) (where \(K_{0}\) is a constant), and say we give the values of the \(2N-2\) quantities, \(p_{2}\), \(p_{3}\), ..., \(p_{N}\), \(q_{2}\), \(q_{3}\), ..., \(q_{N}\), on this plane. Let \(\hat{\mathbf{x}}\) denote the vector specifying these coordinate values on the surface of section, \(\hat{\mathbf{x}}\equiv(p_{2},\,p_{3},\,\ldots,\,p_{N},\,q_{2},\,q_{3},\,\ldots,\,q_{N})\). Is there a map, \(\hat{\mathbf{x}}_{n+1}=\mathbf{M}(\hat{\mathbf{x}}_{n})\), evolving points forward on the surface of section; i.e., does a knowledge of \(\hat{\mathbf{x}}_{n}\) uniquely determine the location of the next point on the surface of section? Given \(\hat{\mathbf{x}}\) on the surface of section, the only unknown is \(p_{1}\) (\(q_{1}\) is known since we are on the surface of section \(q_{1}=K_{0}\)). If we can determine \(p_{1}\) then the full phase space position is known, and this uniquely determines the system's future evolution, and hence \(\hat{\mathbf{x}}\) at the next piercing of the surface of section. To find \(p_{1}\) we attempt to solve the equation \(E=H(\mathbf{p},\,\mathbf{q})\) for the single unknown \(p_{1}\).
The problem is that this equation will in general have multiple solutions for \(p_{1}\). For example, for the commonly encountered case where the Hamiltonian is in the form of a kinetic energy \(p^{2}/2m\), plus a potential energy, \[H(\mathbf{p},\,\mathbf{q})=p^{2}/2m+V(\mathbf{q}),\] for given \(\hat{\mathbf{x}}\) there are two roots for \(p_{1}\), \[p_{1}=\pm\{2m[E-V(\mathbf{q})]-(p_{2}^{2}+p_{3}^{2}+\cdots+p_{N}^{2})\}^{1/2}. \tag{7.16}\] To make our determination of \(p_{1}\) unique we adopt the following procedure. We specify \(\hat{\mathbf{x}}_{n}\) to be the coordinates (\(p_{2}\), ..., \(p_{N}\), \(q_{2}\), ..., \(q_{N}\)) at the \(n\)th time at which \(q_{1}(t)=K_{0}\) _and_ \(p_{1}>0\). That is, we only count surface of section piercings which cross \(q_{1}=K_{0}\) from \(q_{1}<K_{0}\) to \(q_{1}>K_{0}\) and not vice versa (for the Hamiltonian under consideration \(\mathrm{d}q_{1}/\mathrm{d}t=p_{1}\)). Hence, we _define_ the surface of section so that we always take the positive root in (7.16) for \(p_{1}\) (we could equally well have chosen \(p_{1}<0\), rather than \(p_{1}>0\), in our definition). With this definition, specification of \(\hat{\mathbf{x}}_{n}\) uniquely determines a point in phase space. This point can be advanced by Hamilton's equations to the next time that \(q_{1}(t)=K_{0}\) with \(p_{1}>0\), thus determining \(\hat{\mathbf{x}}_{n+1}\). In this way we determine a map, \[\hat{\mathbf{x}}_{n+1}=\mathbf{M}(\hat{\mathbf{x}}_{n}).\] This (\(2N-2\)) dimensional map is symplectic in the remaining canonically conjugate variables \(\hat{\mathbf{p}}=(p_{2},\,\ldots,\,p_{N})\) and \(\hat{\mathbf{q}}=(q_{2},\,\ldots,\,q_{N})\). (This also implies that the map conserves (\(2N-2\)) dimensional volumes.) To show that the map is symplectic, we need to demonstrate that the symplectic area, \[\oint_{\Gamma}\hat{\mathbf{p}}\cdot\mathrm{d}\hat{\mathbf{q}},\] is invariant when the closed path \(\Gamma\) around which the integral is taken is acted on by the map \(\mathbf{M}\).
This follows immediately from the Poincare Cartan theorem in the form of Eq. (7.9c). Writing \(\mathbf{p}\cdot\mathrm{d}\mathbf{q}=p_{1}\,\mathrm{d}q_{1}+\hat{\mathbf{p}}\cdot\mathrm{d}\hat{\mathbf{q}}\) and noting that \(q_{1}=K_{0}\) on the surface of section, we have \(\mathrm{d}q_{1}=0\), and the desired result follows. (Note that use of the Poincare Cartan theorem (rather than the integral invariant (7.9a)) is necessary here because two different initial conditions starting in the surface of section take different amounts of time to return to it.) Thus, we see that, in the cases of both a time periodic Hamiltonian and a time independent Hamiltonian, the resulting maps are symplectic. For this reason symplectic maps have played an important role, especially with respect to numerical experiments, in elucidating possible types of chaotic behavior in Hamiltonian systems. One consequence of the symplectic nature of these maps is that the Lyapunov exponents occur in pairs, \(\pm h_{1}\), \(\pm h_{2}\), \(\pm h_{3}\), \(\ldots\) Thus for each positive exponent there is a negative exponent of equal magnitude, and the number of zero exponents is even. To see why this is so, we recall that the Lyapunov exponents are obtained from the product of the matrices \(\mathbf{DM}(\hat{\mathbf{x}}_{n})\mathbf{DM}(\hat{\mathbf{x}}_{n-1})\cdots\mathbf{DM}(\hat{\mathbf{x}}_{0})\); see Section 4.4. In the Hamiltonian case the matrices \(\mathbf{DM}(\hat{\mathbf{x}}_{j})\) are symplectic. Since the product of symplectic matrices is also symplectic, the overall matrix, \(\mathbf{DM}(\hat{\mathbf{x}}_{n})\cdots\mathbf{DM}(\hat{\mathbf{x}}_{0})\), is symplectic. Now let us examine what the symplectic condition implies for the eigenvalues of a matrix. The eigenvalues \(\lambda\) of a symplectic matrix \(\mathbf{A}\) are the roots of its characteristic polynomial \[D(\lambda)=\det[\mathbf{A}-\lambda\mathbf{1}],\] where \(\mathbf{1}\) denotes the identity matrix. Multiplying Eq.
(7.13) on the left by \(\mathbf{S}_{N}^{-1}(\mathbf{A}^{\dagger})^{-1}\) we have \[\mathbf{A}=\mathbf{S}_{N}^{-1}(\mathbf{A}^{\dagger})^{-1}\mathbf{S}_{N}.\] The characteristic polynomial then becomes \[D(\lambda)=\det[\mathbf{S}_{N}^{-1}(\mathbf{A}^{\dagger})^{-1}\mathbf{S}_{N}-\lambda\mathbf{1}]=\det[(\mathbf{A}^{\dagger})^{-1}-\lambda\mathbf{1}]=\det[\mathbf{A}^{-1}-\lambda\mathbf{1}].\] Thus, the eigenvalues of \(\mathbf{A}\) and \(\mathbf{A}^{-1}\) are the same. Since the eigenvalues of \(\mathbf{A}^{-1}\) are the inverses of the eigenvalues of \(\mathbf{A}\), we see that the eigenvalues must occur in pairs, \((\lambda,\,\lambda^{-1})\). Because the Lyapunov exponents are obtained from the logarithms of the magnitudes of the eigenvalues (\(h=\ln|\lambda|\)), we conclude that they occur in pairs \(\pm h\). As an example, we consider the stability of a periodic orbit of a symplectic two dimensional map. If the period of the orbit is \(r\), then the problem reduces to considering the stability of the fixed points of \(\mathbf{M}^{r}\), which is also area preserving. Hence, it suffices to examine the stability of the fixed points of symplectic two dimensional maps. Let \(\mathbf{J}=\mathbf{DM}^{r}\) denote the Jacobian matrix of the map at such a fixed point. Since the map is symplectic, we have \(\det\mathbf{J}=1\). The eigenvalues of \(\mathbf{J}\) are given by \[\det\begin{pmatrix}J_{11}-\lambda&J_{12}\\ J_{21}&J_{22}-\lambda\end{pmatrix}=\lambda^{2}-\hat{T}\lambda+1=0,\] where \(\hat{T}\equiv J_{11}+J_{22}\) is the trace of \(\mathbf{J}\), and the last term in the quadratic is one by virtue of \(\det\mathbf{J}=1\). The solutions of the quadratic are \[\lambda=[\hat{T}\pm(\hat{T}^{2}-4)^{1/2}]/2.\] Since \(\{[\hat{T}+(\hat{T}^{2}-4)^{1/2}]/2\}\{[\hat{T}-(\hat{T}^{2}-4)^{1/2}]/2\}=1\), the roots are reciprocals of each other, as required for a symplectic map.
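The reciprocal pairing of eigenvalues is easy to observe numerically. The sketch below is an illustrative construction (the ordering \((p_{1},\,q_{1},\,p_{2},\,q_{2})\) of the phase space variables and the particular matrices are assumptions, not from the text): it builds a \(4\times 4\) symplectic matrix as a product of simpler symplectic factors and verifies both the condition (7.13) and the \((\lambda,\,\lambda^{-1})\) pairing:

```python
import numpy as np

# Symplectic form for N = 2, ordering (p1, q1, p2, q2): block diagonal copies
# of the 2x2 form, matching the sum over coordinate planes in the text.
J2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
Z = np.zeros((2, 2))
S = np.block([[J2, Z], [Z, J2]])

# Any 2x2 matrix with det = 1 is symplectic within its own (p_i, q_i) plane.
a = 0.7
M1 = np.array([[np.cos(a), np.sin(a)], [-np.sin(a), np.cos(a)]])  # elliptic block
M2 = np.array([[2.0, 1.0], [1.0, 1.0]])                            # hyperbolic block
B = np.block([[M1, Z], [Z, M2]])

# A linearized 'kick' coupling the two degrees of freedom (also symplectic).
eps = 0.3
C = np.eye(4)
C[0, 3] = eps   # dp1 picks up eps * dq2
C[2, 1] = eps   # dp2 picks up eps * dq1

A = C @ B                                # product of symplectic matrices
assert np.allclose(A.T @ S @ A, S)       # (7.13) still holds

# The eigenvalues occur in pairs (lam, 1/lam).
lam = np.linalg.eigvals(A)
for l in lam:
    assert np.any(np.isclose(lam, 1.0 / l))
print(np.sort(np.abs(lam)))
```

The corresponding Lyapunov exponents \(h=\ln|\lambda|\) then come in \(\pm h\) pairs, as stated above.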
There are three cases: (_a_) \(\hat{T}>2\); the roots are real and positive (\(\lambda,\,1/\lambda>0\)); (_b_) \(2>\hat{T}>-2\); the roots are complex and of magnitude one (\(\lambda,\,1/\lambda=\exp(\pm\mathrm{i}\theta)\)); (_c_) \(\hat{T}<-2\); the roots are real and negative (\(\lambda,\,1/\lambda<0\)). In case (_a_) we say that the periodic orbit is _hyperbolic_; in case (_b_) we say the periodic orbit is _elliptic_; and in case (_c_) we say that the periodic orbit is _hyperbolic with reflection_. Note that, in the linear approximation, cases (_a_) and (_c_) lead typical nearby orbits to diverge exponentially from the periodic orbit (linear instability); while in case (_b_), in the linear approximation, a nearby orbit remains nearby forever (linear stability). In the latter case, the nearby linearized orbit remains on an ellipse encircling the periodic orbit and circles around it at a rate \(\theta/2\pi\) per iterate of \(\mathbf{M}\). (Because the product of the two roots is one, in no case can the periodic orbit be an attractor, since that would require that the magnitude of _both_ roots be less than one.)

#### Integrable systems

In the case where the Hamiltonian has no explicit time dependence, \(H=H(\mathbf{p},\,\mathbf{q})\), we have seen that Hamilton's equations imply that \(\mathrm{d}H/\mathrm{d}t=0\), and the energy \(E=H(\mathbf{p},\,\mathbf{q})\) is a conserved quantity. Thus, orbits with a given energy \(E\) are restricted to lie on the \((2N-1)\) dimensional energy surface \(E=H(\mathbf{p},\,\mathbf{q})\). A function \(f(\mathbf{p},\,\mathbf{q})\) is said to be a _constant of the motion_ for a system with Hamiltonian \(H\) if, as \(\mathbf{p}(t)\) and \(\mathbf{q}(t)\) evolve with time in accordance with Hamilton's equations, the value of the function \(f\) does not change, \(f(\mathbf{p},\,\mathbf{q})=\text{constant}\). For example, for time independent Hamiltonians, \(H\) is a constant of the motion.
More generally, differentiating \(f(\mathbf{p}(t),\,\mathbf{q}(t))\) with respect to time, and assuming that there is no explicit time dependence of the Hamiltonian, we have \[\frac{\mathrm{d}f}{\mathrm{d}t}=\frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t}\cdot\frac{\partial f}{\partial\mathbf{p}}+\frac{\mathrm{d}\mathbf{q}}{\mathrm{d}t}\cdot\frac{\partial f}{\partial\mathbf{q}}=\frac{\partial H}{\partial\mathbf{p}}\cdot\frac{\partial f}{\partial\mathbf{q}}-\frac{\partial H}{\partial\mathbf{q}}\cdot\frac{\partial f}{\partial\mathbf{p}}.\] We call the expression appearing on the right hand side of the second equality the _Poisson bracket_ of \(f\) and \(H\), and we abbreviate it as \([f,\,H]\), where \[[g_{1},\,g_{2}]\equiv\frac{\partial g_{1}}{\partial\mathbf{q}}\cdot\frac{\partial g_{2}}{\partial\mathbf{p}}-\frac{\partial g_{1}}{\partial\mathbf{p}}\cdot\frac{\partial g_{2}}{\partial\mathbf{q}}. \tag{7.17}\] Note that \([g_{1},\,g_{2}]=-[g_{2},\,g_{1}]\). Thus the condition that \(f\) be a constant of the motion for a time independent Hamiltonian is that its Poisson bracket with \(H\) be zero, \[[f,\,H]=0. \tag{7.18}\] (The Hamiltonian is a constant of the motion since \([H,\,H]=0\).) A time independent Hamiltonian system is said to be _integrable_ if it has \(N\) _independent_ global constants of the motion \(f_{i}(\mathbf{p},\,\mathbf{q})\), \(i=1,\,2,\,\ldots,\,N\) (one of these is the Hamiltonian itself; we choose this to be the \(i=1\) constant, \(f_{1}(\mathbf{p},\,\mathbf{q})\equiv H(\mathbf{p},\,\mathbf{q})\)), and, furthermore, if \[[f_{i},\,f_{j}]=0, \tag{7.19}\] for all \(i\) and \(j\). We already know that the Poisson bracket of \(f_{i}\) with \(f_{1}\) is zero for all \(i=1,\,2,\,\ldots,\,N\), since the \(f_{i}\) are constants of the motion (see Eq. (7.18)). If the condition (7.19) holds for all \(i\) and \(j\), then we say that the \(N\) constants of the motion \(f_{i}\) are _in involution_.
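The bracket (7.17) is straightforward to implement symbolically. The sketch below (assuming the sympy library; the planar central force Hamiltonian and its angular momentum are an illustrative example, not a system discussed in the text) verifies that \([H,\,H]=0\) and that the angular momentum is in involution with \(H\):

```python
import sympy as sp

x, y, px, py = sp.symbols('x y p_x p_y')
V = sp.Function('V')  # arbitrary central potential, a function of x^2 + y^2

def poisson(g1, g2, qs, ps):
    """Poisson bracket [g1, g2] of Eq. (7.17)."""
    return sum(sp.diff(g1, q) * sp.diff(g2, p) - sp.diff(g1, p) * sp.diff(g2, q)
               for q, p in zip(qs, ps))

H = (px**2 + py**2) / 2 + V(x**2 + y**2)   # central-force Hamiltonian (N = 2)
L = x * py - y * px                         # angular momentum

assert sp.simplify(poisson(H, H, (x, y), (px, py))) == 0  # [H, H] = 0, Eq. (7.18)
assert sp.simplify(poisson(L, H, (x, y), (px, py))) == 0  # L is a constant of the motion
print("H and L are in involution")
```

Since \(H\) and \(L\) are two independent constants in involution, this \(N=2\) system is integrable in the sense defined above.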
The constants of the motion \(f_{i}\) are 'independent' if no one of them can be expressed as a function of the (\(N-1\)) other constants. The requirement that an integrable system has \(N\) independent constants of the motion implies that the trajectory of the system in the phase space is restricted to lie on the \(N\) dimensional surface \[f_{i}(\mathbf{p},\,\mathbf{q})=k_{i}, \tag{7.20}\] \(i=1,\,2,\,\ldots,\,N\), where the \(k_{i}\) are \(N\) constants. The requirement that the \(N\) independent constants \(f_{i}\) be in involution (Eq. (7.19)) restricts the topology of the surface, Eq. (7.20), to be of a certain type: it must be an \(N\) dimensional torus (as defined in Section 6.3). This is demonstrated in standard texts\({}^{2}\) and will not be shown here. For the case \(N=2\), an orbit on the torus is as shown in Figure 7.4(\(a\)).

Figure 7.4: (\(a\)) Orbit on a 2 torus. (\(b\)) Two irreducible paths on a 2 torus.

Given an integrable system it is possible to introduce a canonical change of variables \((\mathbf{p},\,\mathbf{q})\rightarrow(\overline{\mathbf{p}},\,\overline{\mathbf{q}})\) such that the new Hamiltonian \(\overline{H}\) depends only on \(\overline{\mathbf{p}}\) and not on \(\overline{\mathbf{q}}\). One possibility is to choose the constants of the motion themselves as the \(N\) components of \(\overline{\mathbf{p}}\), \(\bar{p}_{i}=f_{i}(\mathbf{p},\,\mathbf{q})\). Since the \(f_{i}\) are constants, \(\mathrm{d}\overline{\mathbf{p}}/\mathrm{d}t=-\partial\overline{H}/\partial\overline{\mathbf{q}}=0\) and hence \(\overline{H}=\overline{H}(\overline{\mathbf{p}})\). In fact, we can construct many equivalent sets of constants of the motion by noting that any \(N\) independent functions of the \(N\) constants \(f_{i}\) could be used for the components of \(\overline{\mathbf{p}}\) with the same result (namely, \(\mathrm{d}\overline{\mathbf{p}}/\mathrm{d}t=-\partial\overline{H}/\partial\overline{\mathbf{q}}=0\)). Of all these choices, one particular choice is especially convenient.
This choice is the set of _action angle variables_, which we denote \[(\overline{\mathbf{p}},\,\overline{\mathbf{q}})=(\mathbf{I},\,\boldsymbol{\theta}),\] where the action \(\mathbf{I}\) (defined by the loop integrals of Eq. (7.21)) and the conjugate angle \(\boldsymbol{\theta}\) satisfy \[\mathrm{d}\mathbf{I}/\mathrm{d}t=0,\] \[\mathrm{d}\boldsymbol{\theta}/\mathrm{d}t=\partial\overline{H}(\mathbf{I})/\partial\mathbf{I}\equiv\boldsymbol{\omega}(\mathbf{I}).\] The solution of these equations is \(\mathbf{I}(t)=\mathbf{I}(0)\) and \[\boldsymbol{\theta}(t)=\boldsymbol{\theta}(0)+\boldsymbol{\omega}(\mathbf{I})t. \tag{7.24}\] Thus we can interpret \(\boldsymbol{\omega}(\mathbf{I})=\partial\overline{H}(\mathbf{I})/\partial\mathbf{I}\) as an angular velocity vector specifying trajectories on the \(N\) torus. As in our discussion in Chapter 6, trajectories on a torus are \(N\) frequency quasiperiodic if there is no vector of integers \(\mathbf{m}=(m_{1},\,m_{2},\,\ldots,\,m_{N})\) such that \[\mathbf{m}\cdot\boldsymbol{\omega}=0, \tag{7.25}\] except when \(\mathbf{m}\) is the trivial vector all of whose components are zero. Assuming a typical smooth variation of \(\overline{H}\) with \(\mathbf{I}\), the condition \(\mathbf{m}\cdot\boldsymbol{\omega}=0\) with nonzero \(\mathbf{m}\) is only satisfied on a countable set of surfaces in \(\mathbf{I}\) space, a set of zero Lebesgue measure. Thus, if one picks a point randomly with uniform probability in phase space, the probability is 1 that the point chosen will be on a torus for which the orbits are \(N\) frequency quasiperiodic and _fill up the torus_. Thus, for integrable systems, we can view the phase space as being completely occupied by \(N\) tori almost all of which are in turn filled by \(N\) frequency quasiperiodic orbits. In contrast with the case of \(N\) frequency quasiperiodicity is the case of periodic motion, where orbits on the \(N\) torus close on themselves (Figure 6.5). In this case \[\boldsymbol{\omega}=\mathbf{m}\omega_{0}, \tag{7.26}\] where \(\mathbf{m}\) is again a vector of integers and \(\omega_{0}\) is a scalar. The orbit in this case closes on itself after \(m_{1}\) circuits in \(\theta_{1}\), \(m_{2}\) circuits in \(\theta_{2}\), \(\ldots\)
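Whether a given frequency vector admits a resonance of the form (7.25) can be tested by brute force search over integer vectors. A minimal sketch (illustrative code; the frequency vectors chosen are arbitrary examples, and the search is of course limited to a finite range of integers):

```python
from itertools import product
from math import sqrt

def find_resonance(omega, mmax=10, tol=1e-9):
    """Search for a nonzero integer vector m with m . omega = 0, cf. (7.25)."""
    for m in product(range(-mmax, mmax + 1), repeat=len(omega)):
        if any(m) and abs(sum(mi * wi for mi, wi in zip(m, omega))) < tol:
            return m
    return None

# Commensurate frequencies: a resonance exists (2*m1 + 3*m2 = 0), so the
# orbit on the 2 torus is periodic.
print(find_resonance((2.0, 3.0)))        # -> a nonzero integer vector

# Incommensurate frequencies: no resonance within the search range, so the
# orbit is 2 frequency quasiperiodic and fills the torus.
print(find_resonance((1.0, sqrt(2.0))))  # -> None
```

A full test of quasiperiodicity would of course require all integer vectors, not just those up to `mmax`; the finite search illustrates the distinction rather than proving it.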
(Alternatively to (7.26), the condition for a periodic orbit can also be stated as requiring that \((N-1)\) independent relations of the form (7.25) hold.\({}^{4}\)) Again assuming typical smooth variation of \(\overline{H}\) with \(\mathbf{I}\), we have that for integrable systems the set of tori which satisfy (7.26), and hence have periodic orbits, while having zero Lebesgue measure (i.e., zero phase space volume), is _dense in the phase space_. Thus, arbitrarily near any torus on which there is \(N\) frequency quasiperiodicity there are tori on which the orbits are periodic. We now give an example of the procedure used for the reduction of an integrable system to action angle variables. This procedure is based on the _Hamilton Jacobi equation_ obtained by combining (7.10) and (7.11), \[H\left(\frac{\partial S(\mathbf{I},\,\mathbf{q})}{\partial\mathbf{q}},\,\mathbf{q}\right)=\overline{H}(\mathbf{I}). \tag{7.27}\] This equation may be regarded as a first order partial differential equation for the generating function \(S(\mathbf{I},\,\mathbf{q})\). Example: We consider a one degree of freedom Hamiltonian, \[H(p,\,q)=p^{2}/(2m)+V(q),\] where \(V(q)\) is of the form shown in Figure 7.5. From Eq. (7.21) we have \[I=\frac{1}{\pi}\int_{q_{1}}^{q_{2}}\{2m[E-V(q)]\}^{1/2}\,\mathrm{d}q, \tag{7.28}\] where \(q_{1}\) and \(q_{2}\) are the turning points of the motion. In the case of a harmonic oscillator, \(V(q)=\frac{1}{2}m\Omega^{2}q^{2}\), we have \(q_{2}=-q_{1}=[2E/(m\Omega^{2})]^{1/2}\), and the integral for \(I\) yields \(I=E/\Omega\).
Thus we have \[\overline{H}(I)=\Omega I.\] For this case \(\omega(I)=\mathrm{d}\overline{H}/\mathrm{d}I=\Omega\) is independent of \(I\), and (7.24) becomes \[\theta(t)=\theta(0)+\Omega t.\] From (7.27) we have for the harmonic oscillator \[\partial S/\partial q=[2m(\Omega I-\tfrac{1}{2}m\Omega^{2}q^{2})]^{1/2},\] which on integration and application of (7.22) gives \[q=(2I/m\Omega)^{1/2}\cos\theta,\] \[p=-(2mI\Omega)^{1/2}\sin\theta.\] The trajectory in (\(p\), \(q\)) phase space is an ellipse on which the orbit circulates one time every period of oscillation \(2\pi/\Omega\) (Figure 7.6(_a_)). (Since \(N=1\) we have a 'one dimensional torus', namely a closed curve. For \(N>1\) we typically have \(N\) frequencies and quasiperiodic motion.)

Figure 7.5: Particle of energy \(E\) in a potential well \(V(q)\).

The harmonic oscillator is exceptional in that \(\omega(I)\) is independent of \(I\). As an example of the more typical situation where \(\omega(I)\) depends on \(I\), consider the case of a hard wall potential: \(V(q)=0\) for \(|q|<\delta\) and \(V(q)=+\infty\) for \(|q|>\delta\). In this case the trajectory in phase space is as shown in Figure 7.6(\(b\)). The integral for \(I\), Eq. (7.28), is just \((2\pi)^{-1}\) times the phase space area in the rectangle; \(I=4(2\pi)^{-1}(2mE\delta^{2})^{1/2}\). Thus \(\overline{H}(I)=(\pi I)^{2}/8m\delta^{2}\) and \(\omega(I)=\pi^{2}I/4m\delta^{2}\), which increases linearly with \(I\).

### Perturbation of integrable systems

#### The KAM theorem

We next address a very fundamental question concerning Hamiltonian systems; namely, how prevalent is integrability? One extreme conjecture is that integrability generally applies, and whatever difficulty we might encounter in obtaining the solution to any given problem only arises because we are not clever enough to determine the \(N\) independent constants of the motion which must surely exist.
Another conjecture, which is essentially the opposite of this, is that, given any integrable Hamiltonian \(H_{0}(\mathbf{p},\,\mathbf{q})\), if we alter it slightly by the addition of a perturbation, \[H(\mathbf{p},\,\mathbf{q})=H_{0}(\mathbf{p},\,\mathbf{q})+\varepsilon H_{1}(\mathbf{p},\,\mathbf{q}), \tag{7.29}\] then we should expect that, for a typical form of the perturbation \(H_{1}(\mathbf{p},\,\mathbf{q})\), all the constants of the motion of the integrable system \(H_{0}(\mathbf{p},\,\mathbf{q})\), except for the energy constant \(E=H(\mathbf{p},\,\mathbf{q})\), are immediately destroyed as soon as \(\varepsilon\neq 0\). Presumably, if this second conjecture were to hold, then, for small \(\varepsilon\), orbits would initially approximate the orbits of the integrable system, staying close to the unperturbed \(N\) tori that exist for \(\varepsilon=0\) for some time. Eventually, however, the orbit, if followed for a long enough time, could ergodically wander anywhere on the energy surface. These two opposing views both have some support in experimental observation. On the one hand, the solar system appears to have been fairly stable. In particular, ever since its formation the Earth has been in a position relative to the position of the Sun such that its climate has been conducive to life. Thus, in spite of the perturbation caused by the gravitational pull of other planets (particularly that of Jupiter), the Earth's orbit has behaved as we would have expected had we neglected all the other planets. (In that case the system is integrable, and we obtain the elliptical Kepler orbit of the Earth around the Sun.) On the other hand, in support of the second conjecture, we have the amazing success of the predictions of statistical mechanics.
In statistical mechanics one considers a Hamiltonian system with a large number of degrees of freedom (\(N\gg 1\)), and then makes the fundamental ansatz that at any given time the system is equally likely to be located at any point on the energy surface (the motion is ergodic on the energy surface). This would not be possible if there were additional constants of the motion constraining the orbit of the system. The success of statistical mechanics in virtually every case to which it may reasonably be applied can be interpreted as evidence supporting the validity of its fundamental ansatz in a wide variety of systems with \(N\gg 1\). Given the discussion above, it should not be too surprising to find out that the true situation lies somewhere between the two extremes that we have discussed. The resolution of the basic question of how prevalent integrability is has come only with the rigorous mathematical work of Kolmogorov, Arnold and Moser (KAM) and with the subsequent extensive computer studies of chaos and integrability in Hamiltonian systems. The basic question considered by Kolmogorov, Arnold and Moser was what happens when an integrable Hamiltonian is perturbed, Eq. (7.29). The research was initiated by Kolmogorov (1954), who conjectured what would happen with the addition of the perturbation. He also outlined an ingenious method which he felt could be used to prove his conjecture. The actual carrying out of this program, accomplished by Arnold and Moser (see Arnold (1963) and Moser (1973)), was quite difficult. The result they obtained is called the KAM theorem. We shall only briefly indicate some of the sources of the difficulty, and then state the main result. We express (7.29) in the action angle variables (\(\mathbf{I}\), \(\boldsymbol{\theta}\)) of the unperturbed Hamiltonian \(H_{0}\), \[H(\mathbf{I},\,\boldsymbol{\theta})=H_{0}(\mathbf{I})+\varepsilon H_{1}(\mathbf{I},\,\boldsymbol{\theta}).
\tag{7.30}\] We are interested in determining whether this perturbed Hamiltonian has \(N\) dimensional tori to which its orbits are restricted. If there are tori, there is a new set of action angle variables (\(\mathbf{I}^{\prime}\), \(\boldsymbol{\theta}^{\prime}\)) such that \[H(\mathbf{I},\,\boldsymbol{\theta})=H^{\prime}(\mathbf{I}^{\prime}),\] where, in terms of the generating function \(S\), we have, using (7.10), \[\mathbf{I}=\frac{\partial S(\mathbf{I}^{\prime},\,\boldsymbol{\theta})}{\partial\boldsymbol{\theta}},\quad\boldsymbol{\theta}^{\prime}=\frac{\partial S(\mathbf{I}^{\prime},\,\boldsymbol{\theta})}{\partial\mathbf{I}^{\prime}}. \tag{7.31}\] The Hamilton Jacobi equation for \(S\) is \[H\left(\frac{\partial S}{\partial\boldsymbol{\theta}},\,\boldsymbol{\theta}\right)=H^{\prime}(\mathbf{I}^{\prime}). \tag{7.32}\] One approach to solving (7.32) for \(S\) might be to look for a solution in the form of a power series in \(\varepsilon\), \[S=S_{0}+\varepsilon S_{1}+\varepsilon^{2}S_{2}+\cdots. \tag{7.33}\] For \(S_{0}\) we use \(S_{0}=\mathbf{I}^{\prime}\cdot\boldsymbol{\theta}\), which when substituted in (7.31) gives \(\mathbf{I}=\mathbf{I}^{\prime}\), \(\boldsymbol{\theta}=\boldsymbol{\theta}^{\prime}\), corresponding to the original action angle variables applicable for \(\varepsilon=0\). Substituting the series (7.33) for \(S\) in (7.32) gives \[H_{0}(\mathbf{I}^{\prime}+\varepsilon\partial S_{1}/\partial\boldsymbol{\theta}+\varepsilon^{2}\partial S_{2}/\partial\boldsymbol{\theta}+\cdots)+\varepsilon H_{1}(\mathbf{I}^{\prime}+\varepsilon\partial S_{1}/\partial\boldsymbol{\theta}+\cdots,\,\boldsymbol{\theta})=H^{\prime}(\mathbf{I}^{\prime}). \tag{7.34}\] Expanding (7.34) for small \(\varepsilon\) and only retaining first order terms, we have \[H_{0}(\mathbf{I}^{\prime})+\varepsilon\frac{\partial H_{0}}{\partial\mathbf{I}^{\prime}}\cdot\frac{\partial S_{1}}{\partial\boldsymbol{\theta}}+\varepsilon H_{1}(\mathbf{I}^{\prime},\,\boldsymbol{\theta})=H^{\prime}(\mathbf{I}^{\prime}).
\tag{7.35}\] We next express \(H_{1}({\bf I}^{\prime},\,\mathbf{\theta})\) and \(S_{1}({\bf I}^{\prime},\,\mathbf{\theta})\) as Fourier series in the angle vector \(\mathbf{\theta}\), \[H_{1}=\sum_{\bf m}H_{1,{\bf m}}({\bf I}^{\prime})\exp({\rm i}{ \bf m}\cdot\mathbf{\theta}),\] \[S_{1}=\sum_{\bf m}S_{1,{\bf m}}({\bf I}^{\prime})\exp({\rm i}{ \bf m}\cdot\mathbf{\theta}),\] where **m** is an \(N\) component vector of integers. Substituting these Fourier series in (7.35), we obtain \[S_{1}=\mathrm{i}\sum_{\mathbf{m}}\frac{H_{1,\mathbf{m}}(\mathbf{I}^{\prime})}{ \mathbf{m}\cdot\boldsymbol{\omega}_{0}(\mathbf{I}^{\prime})}\exp(\mathrm{i} \mathbf{m}\cdot\boldsymbol{\theta}), \tag{7.36}\] where \(\boldsymbol{\omega}_{0}(\mathbf{I})\equiv\partial H_{0}(\mathbf{I})/\partial \mathbf{I}\) is the unperturbed \(N\) dimensional frequency vector for the torus corresponding to action **I**. One question is that of whether the infinite sum (7.36) converges. This same question also arises in taking (7.34) to higher order in \(\varepsilon\) to determine successively the other terms, \(S_{2}\), \(S_{3}\), etc., appearing in the series (7.33). This problem is precisely the 'problem of small denominators' encountered in Section 6.2, where we treated frequency locking of quasiperiodic orbits for dissipative systems. In particular, (7.36) clearly does not work for values of **I** for which \(\mathbf{m}\cdot\boldsymbol{\omega}_{0}(\mathbf{I})=0\) for some value of **m**. These **I** define _resonant tori_ of the unperturbed system. (These resonant tori are typically destroyed by the perturbation for any small \(\varepsilon>0\).) We emphasize that the resonant tori are dense in the phase space of the unperturbed Hamiltonian. On the other hand, there is still a large set of 'very nonresonant' tori.
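The severity of the small-denominator problem is easy to see numerically. In the sketch below (the choice \(N=2\) and the exponential form of the coefficients \(H_{1,\mathbf{m}}\) are illustrative assumptions, not taken from the text), the magnitudes \(|S_{1,\mathbf{m}}|=|H_{1,\mathbf{m}}|/|\mathbf{m}\cdot\boldsymbol{\omega}_{0}|\) are compared for a frequency vector with golden-mean ratio and for one very close to a resonance:

```python
import math

def s1_coeff_magnitudes(omega, m_max=10, alpha=1.0):
    """|S_{1,m}| = |H_{1,m}| / |m . omega| over integer vectors m != 0,
    with the illustrative choice |H_{1,m}| = exp(-alpha*(|m1| + |m2|))."""
    mags = []
    for m1 in range(-m_max, m_max + 1):
        for m2 in range(-m_max, m_max + 1):
            if (m1, m2) == (0, 0):
                continue
            denom = abs(m1 * omega[0] + m2 * omega[1])
            h = math.exp(-alpha * (abs(m1) + abs(m2)))
            mags.append(h / denom)
    return mags

golden = (math.sqrt(5) - 1) / 2
far = max(s1_coeff_magnitudes((1.0, golden)))       # 'very nonresonant' ratio
near = max(s1_coeff_magnitudes((1.0, 2/3 + 1e-9)))  # near the m=(2,-3) resonance
```

The largest coefficient blows up as \(\boldsymbol{\omega}_{0}\) approaches a resonant surface \(\mathbf{m}\cdot\boldsymbol{\omega}_{0}=0\), which is why the expansion fails on and near resonant tori.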
These are tori for which \(\boldsymbol{\omega}\) satisfies the condition \[|\mathbf{m}\cdot\boldsymbol{\omega}|>K(\boldsymbol{\omega})|\mathbf{m}|^{-(N+ 1)}, \tag{7.37}\] for _all_ integer vectors **m** (except the zero vector). Here \(|\mathbf{m}|\equiv|m_{1}|+|m_{2}|+\dots+|m_{N}|\), and \(K(\boldsymbol{\omega})>0\) is a number independent of **m**. The set of \(N\) dimensional vectors \(\boldsymbol{\omega}\) which do not satisfy (7.37) has zero Lebesgue measure in \(\boldsymbol{\omega}\) space, and thus the 'very nonresonant' tori are, in this sense, very common. For \(\boldsymbol{\omega}\) satisfying (7.37), the series (7.36), and others of similar form giving \(S_{2}\), \(S_{3}\),..., converge. This follows if we assume that \(H_{1}\) is analytic in \(\boldsymbol{\theta}\), which implies that \(H_{1,\mathbf{m}}\) decreases exponentially with \(|\mathbf{m}|\); i.e., \(|H_{1,\mathbf{m}}|<(\text{constant})\exp(-\alpha|\mathbf{m}|)\) for some constant \(\alpha>0\). (Refer to the discussion in Section 6.2.) Even given that all the terms \(S_{1}\), \(S_{2}\),... exist and can be found, we would still be faced with the problem of whether there is convergence of the successive approximations to \(S\) obtained by taking more and more terms in the series (7.33). Actually, the scheme we have outlined (wherein \(S\) is expanded in a straightforward series in \(\varepsilon\), Eq. (7.33)) is too crude, and the proof of the KAM theorem relies on a more sophisticated method of successive approximations which has much faster convergence properties. We shall not pursue this discussion further. Suffice it to say that the KAM theorem essentially states that under very general conditions, for small \(\varepsilon\), 'most' (in the sense of the Lebesgue measure of the phase space) of the tori of the unperturbed integrable Hamiltonian survive.
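The condition (7.37) can be probed by brute force over small \(|\mathbf{m}|\). A minimal sketch for \(N=2\) (the two frequency vectors are illustrative choices, not from the text):

```python
import math

def diophantine_margin(omega, m_max=40):
    """Smallest value of |m . omega| * |m|^(N+1) over integer vectors
    0 < |m| <= m_max, with N = 2 and |m| = |m1| + |m2|.  If this stays
    bounded away from zero, omega satisfies (7.37) with K(omega) equal
    to the margin; a resonant omega drives it to zero."""
    N = 2
    worst = float("inf")
    for m1 in range(-m_max, m_max + 1):
        for m2 in range(-m_max, m_max + 1):
            if (m1, m2) == (0, 0):
                continue
            norm = abs(m1) + abs(m2)
            worst = min(worst, abs(m1 * omega[0] + m2 * omega[1]) * norm ** (N + 1))
    return worst

golden = (1 + math.sqrt(5)) / 2
margin_irrational = diophantine_margin((1.0, golden))
margin_rational = diophantine_margin((1.0, 0.5))  # resonant: m = (1, -2) gives 0
```

For the golden-mean frequency ratio the margin stays of order one, while for the rational ratio a single resonant \(\mathbf{m}\) annihilates it.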
We say that a torus of the unperturbed system with frequency vector \(\boldsymbol{\omega}_{0}\) 'survives' perturbation if there exists a torus of the perturbed (\(\varepsilon\neq 0\)) system which has a frequency vector \(\boldsymbol{\omega}(\varepsilon)=k(\varepsilon)\boldsymbol{\omega}_{0}\), where \(k(\varepsilon)\) goes continuously to \(1\) as \(\varepsilon\to 0\), and such that the perturbed toroidal surface with frequency \(\boldsymbol{\omega}(\varepsilon)\) goes continuously to the unperturbed torus as \(\varepsilon\to 0\). Thus, writing \(\boldsymbol{\omega}=(\omega_{1},\,\omega_{2},\,\dots,\,\omega_{N})\), the unperturbed and perturbed frequency vectors \(\boldsymbol{\omega}_{0}\) and \(\boldsymbol{\omega}(\varepsilon)\) have the same frequency ratios of their components, \(\omega_{0j}/\omega_{01}=\omega_{j}(\varepsilon)/\omega_{1}(\varepsilon)\) for \(j=2,\,3,\,\dots,\,N\). According to the KAM theorem, for small \(\varepsilon\), the perturbed system's phase space volume (Lebesgue measure) not occupied by surviving tori is small and approaches zero as \(\varepsilon\) approaches zero. Note, however, that, since the resonant tori on which \(\mathbf{m}\cdot\boldsymbol{\omega}_{0}(\mathbf{I})=0\) are dense, we expect that, arbitrarily near surviving tori of the perturbed system, there are regions of phase space where the orbits are not on surviving tori. We shall, in fact, see that these regions are occupied by chaotic orbits, as well as by new tori and by elliptic and hyperbolic periodic orbits, all created by the perturbation. In the language of Section 3.10, the set in the phase space occupied by surviving perturbed tori is a fat fractal. That is, it is the same kind of set as that on which values of the parameter \(r\) yielding chaos for the logistic map (2.10) exist and on which values of the parameter \(w\) in the circle map (6.11) yield two frequency quasiperiodic orbits (for \(k<1\)).
The Poincare Birkhoff theorem discussed in the next subsection sheds light on the exceedingly complex and intricate situation which arises in the vicinity of resonant tori when an integrable system is perturbed. #### The fate of resonant tori We have seen that most tori survive small perturbation. The resonant tori, however, do not. What happens to them? To simplify the discussion of this question we consider the case of a Hamiltonian system described by a two dimensional area preserving map. We can view this map as arising from a surface of section for a time independent Hamiltonian with \(N=2\), as illustrated for the integrable case in Figure 7.7. The tori of the integrable system intersect the surface of section in a family of nested closed curves. Without loss of generality we can take these curves to be concentric circles represented by polar coordinates (\(r\), \(\phi\)). In this case we obtain a map \((r_{n+1},\,\phi_{n+1})=\mathbf{M}_{0}(r_{n},\,\phi_{n})\), \[\begin{array}{ll}r_{n+1}&=r_{n},\\ \phi_{n+1}&=[\phi_{n}+2\pi R(r_{n})]\text{ modulo }2\pi.\end{array}\biggr{\}} \tag{7.38}\] Here \(R(r)\) is the ratio of the frequencies \(\omega_{1}/\omega_{2}\), where \(\boldsymbol{\omega}_{0}=(\omega_{1},\,\omega_{2})=(\partial H_{0}/\partial I_{1},\,\partial H_{0}/\partial I_{2})\) for the torus which intersects the surface of section in a circle of radius \(r\), and we have taken the surface of section to be \(\theta_{2}=(\text{const.})\), where \(\boldsymbol{\theta}=(\theta_{1},\,\theta_{2})\) are the angle variables conjugate to the actions \(\mathbf{I}=(I_{1},\,I_{2})\). The quantity \(\phi_{n}\) is the value of \(\theta_{1}\) at the \(n\)th piercing of the surface of section by the orbit. On a resonant torus the rotation number \(R(r)\) is rational: \[R=\omega_{1}/\omega_{2}=\tilde{p}/\tilde{q};\,\tilde{q}\omega_{1}-\tilde{p} \omega_{2}=0,\] where \(\tilde{p}\) and \(\tilde{q}\) are integers which do not have a common factor.
At the radius \(r=\hat{r}(\tilde{p}/\tilde{q})\) corresponding to \(R(\hat{r})=\tilde{p}/\tilde{q}\) we have that application of the map (7.38) \(\tilde{q}\) times returns every point on the circle to its original position, \[\mathbf{M}_{0}^{\tilde{q}}(r,\,\phi)=[r,\,(\phi+2\pi\tilde{p})\,\,\text{modulo} \,\,2\pi]=(r,\,\phi).\] Now we consider a perturbation of the integrable Hamiltonian \(H_{0}\), Eq. (7.30). This will perturb the map \(\mathbf{M}_{0}\) to a new area preserving map \(\mathbf{M}_{\varepsilon}\) which differs slightly from \(\mathbf{M}_{0}\), \[\begin{array}{ll}r_{n+1}&=r_{n}+\varepsilon g(r_{n},\,\phi_{n}),\\ \phi_{n+1}&=[\phi_{n}+2\pi R(r_{n})+\varepsilon h(r_{n},\,\phi_{n})]\,\,\text{ modulo}\,\,2\pi.\end{array} \tag{7.39}\] We have seen that, on the intersection \(r=\hat{r}(\tilde{p}/\tilde{q})\) of the resonant torus with the surface of section, every point is a fixed point of \(\mathbf{M}_{0}^{\tilde{q}}\). We now inquire, what happens to this circle when we add the terms proportional to \(\varepsilon\) in (7.39)? Assume that \(R(r)\) is a smoothly increasing function of \(r\) in the vicinity of \(r=\hat{r}(\tilde{p}/\tilde{q})\). (Equation (7.38) is called a 'twist map' if \(R(r)\) increases with \(r\).) Then for the unperturbed map we can choose a circle at \(r=r_{+}>\hat{r}(\tilde{p}/\tilde{q})\) which is rotated by \(\mathbf{M}_{0}^{\tilde{q}}\) in the direction of increasing \(\phi\) (i.e., counterclockwise) and a circle at \(r=r_{-}<\hat{r}(\tilde{p}/\tilde{q})\) which is rotated by \({\bf M}_{0}^{\tilde{q}}\) in the direction of decreasing \(\phi\) (i.e., clockwise). The circle \(r=\hat{r}(\tilde{p}/\tilde{q})\) is not rotated at all. See Figure 7.8(_a_). Figure 7.7: Surface of section for an integrable system.
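The statement that every point of a resonant circle is a fixed point of \(\mathbf{M}_{0}^{\tilde{q}}\) can be checked in a few lines. In this sketch the rotation-number profile \(R(r)=r\) is an illustrative choice (not from the text), and we take \(\tilde{p}/\tilde{q}=2/5\):

```python
import math

def twist_map(r, phi, R):
    """One application of the unperturbed twist map, Eq. (7.38)."""
    return r, (phi + 2 * math.pi * R(r)) % (2 * math.pi)

R = lambda r: r      # illustrative monotone rotation-number profile
r0, phi = 2 / 5, 1.0  # resonant circle: R(r0) = 2/5, so q~ = 5, p~ = 2
for _ in range(5):
    r0, phi = twist_map(r0, phi, R)
# five applications add 2*pi*2 to phi, returning the point to its start
```

Any starting angle works equally well: on the resonant circle \(\mathbf{M}_{0}^{5}\) is the identity, while on nearby circles \(R(r)\neq 2/5\) and the points drift in \(\phi\).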
If \(\varepsilon\) is sufficiently small, then \(\mathbf{M}_{\varepsilon}^{\tilde{q}}\) still maps all the points initially on the circle \(r=r_{-}\) to new positions whose \(\phi\) coordinate is clockwise displaced from its initial position (the radial coordinate, after application of the perturbed map, will in general differ from \(r_{-}\)). Similarly, for small enough \(\varepsilon\) all points on \(r_{+}\) will be counterclockwise displaced. Given this situation, we have that for any given fixed value of \(\phi\), as \(r\) increases from \(r_{-}\) to \(r_{+}\), the value of the angle that the point (\(r\), \(\phi\)) maps to increases from below \(\phi\) to above \(\phi\). Hence, there is a value of \(r\) between \(r_{-}\) and \(r_{+}\) for which the angle is not changed. We conclude that, for the perturbed map, there is a closed curve, \(r=\hat{r}_{\varepsilon}(\phi)\), lying between \(r_{-}\) and \(r_{+}\) and close to \(r=\hat{r}(\tilde{p}/\tilde{q})\), on which points are mapped by \(\mathbf{M}_{\varepsilon}^{\tilde{q}}\) purely in the radial direction. This is illustrated in Figure 7.8(\(b\)). We now apply the map \(\mathbf{M}_{\varepsilon}^{\tilde{q}}\) to this curve, obtaining a new curve \(r=\hat{r}_{\varepsilon}^{\prime}(\phi)\). The result is shown schematically in Figure 7.9. Since \(\mathbf{M}_{\varepsilon}\) is area preserving, the areas enclosed by the curve \(\hat{r}_{\varepsilon}(\phi)\) and by the curve \(\hat{r}_{\varepsilon}^{\prime}(\phi)\) are equal. Hence, these curves must intersect. Generically these curves intersect at an even number of distinct points. (Here by use of the word generic we mean to rule out cases where the two curves are tangent or else (as in the integrable case) coincide. These nongeneric cases can be destroyed by small changes in \(\varepsilon\) or in the form of the perturbing functions \(g\) and \(h\) in Eq. (7.39).) Figure 7.8: (_a_) Three invariant circles of the unperturbed map. (_b_) The curve \(r=\hat{r}_{\varepsilon}(\phi)\).
The intersections of \(\hat{r}_{\varepsilon}\) and \(\hat{r}_{\varepsilon}^{\prime}\) correspond to fixed points of \(\mathbf{M}_{\varepsilon}^{\tilde{q}}\). Thus, the circle of fixed points \(r=\hat{r}(\tilde{p}/\tilde{q})\) for the unperturbed map \(\mathbf{M}_{0}^{\tilde{q}}\) is replaced by a finite number of fixed points when the map is perturbed. What is the character of these fixed points of \(\mathbf{M}_{\varepsilon}^{\tilde{q}}\)? Recall that for \(r>\hat{r}_{\varepsilon}\) points are rotated counterclockwise by \(\mathbf{M}_{\varepsilon}^{\tilde{q}}\). Also recall that \(\mathbf{M}_{\varepsilon}^{\tilde{q}}\) maps \(\hat{r}_{\varepsilon}\) to \(\hat{r}_{\varepsilon}^{\prime}\). Thus, we have the picture shown in Figure 7.10, where the arrows indicate the displacements experienced by points as a result of applying the map \(\mathbf{M}_{\varepsilon}^{\tilde{q}}\). We see that elliptic and hyperbolic fixed points of \(\mathbf{M}_{\varepsilon}^{\tilde{q}}\) alternate. Hence, perturbation of the resonant torus with rational rotation number \(\tilde{p}/\tilde{q}\) results in an equal number of elliptic and hyperbolic fixed points of \(\mathbf{M}_{\varepsilon}^{\tilde{q}}\). This result is known as the Poincare Birkhoff theorem (Birkhoff, 1927). Since fixed points of \(\mathbf{M}_{\varepsilon}^{\tilde{q}}\) necessarily are on period \(\tilde{q}\) orbits of \(\mathbf{M}_{\varepsilon}\), we see that there are \(\tilde{q}\) (or a multiple of \(\tilde{q}\)) elliptic fixed points of \(\mathbf{M}_{\varepsilon}^{\tilde{q}}\) and the same number of hyperbolic fixed points. Thus, for example, the two elliptic fixed points of \(\mathbf{M}_{\varepsilon}^{\tilde{q}}\) shown in Figure 7.10 might be a single periodic orbit of \(\mathbf{M}_{\varepsilon}\) of period two (and similarly for the two hyperbolic fixed points in the figure). Thus, in this case, we have \(\tilde{q}=2\). Figure 7.9: Points on the curve \(\hat{r}_{\varepsilon}(\phi)\) map under \(\mathbf{M}_{\varepsilon}^{\tilde{q}}\) purely radially to the curve \(\hat{r}_{\varepsilon}^{\prime}(\phi)\).
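The alternation of elliptic and hyperbolic fixed points can be verified concretely for the \(\tilde{q}=1\) resonance of the standard map (here written in the convention \(\theta_{n+1}=\theta_{n}+p_{n}\), \(p_{n+1}=p_{n}+K\sin\theta_{n+1}\), an assumption consistent with Eqs. (7.15) and (7.40) as quoted in this section): the unperturbed circle of fixed points at \(p=0\) is replaced by an elliptic point at \(\theta=\pi\) and a hyperbolic point at \(\theta=0\). A sketch:

```python
import math

def jacobian_trace(theta, p, K):
    """Trace of the linearized standard map
    theta' = (theta + p) mod 2*pi,  p' = p + K*sin(theta'),
    at a fixed point.  The determinant is 1 (area preservation), so the
    trace decides the type: |trace| < 2 elliptic, |trace| > 2 hyperbolic."""
    c = K * math.cos(theta + p)
    # DM = [[1, 1], [c, 1 + c]]  ->  trace = 2 + c
    return 2 + c

K = 0.5
elliptic_tr = jacobian_trace(math.pi, 0.0, K)   # theta = pi: trace = 2 - K
hyperbolic_tr = jacobian_trace(0.0, 0.0, K)     # theta = 0:  trace = 2 + K
```

For \(0<K<4\) the point at \(\theta=\pi\) is elliptic and the one at \(\theta=0\) is hyperbolic, one of each, exactly as the Poincare Birkhoff theorem requires for \(\tilde{q}=1\).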
Near each resonant torus of the unperturbed map we can expect a structure of elliptic and hyperbolic orbits to appear, as illustrated schematically in Figures 7.11(_a_) and (_b_), where we only include the \(\tilde{q}=3\) and the \(\tilde{q}=4\) resonances. Points near the elliptic fixed points rotate around them, as shown by the linear theory (cf. Section 7.1.3). _Very_ near an elliptic fixed point the linear approximation is quite good, and in such a small neighborhood the map can again be put in the form of Eq. (7.39). Thus, if we examine the small region around one of the elliptic points of a periodic orbit, such as the area indicated by the dashed box in Figure 7.11(_b_), then what we will see is qualitatively similar to what we see in Figure 7.11(_b_) itself. Thus, surrounding an elliptic point there are encircling KAM curves, between which are destroyed resonant KAM curves that have been replaced by elliptic and hyperbolic periodic orbits. Furthermore, this repeats _ad infinitum_, since any elliptic point has surrounding elliptic points of destroyed resonances, which themselves have elliptic points of destroyed resonances, and so on. What influence do the hyperbolic orbits created from the destroyed resonant tori have on the dynamics? If we follow the stable and unstable manifolds emanating from the hyperbolic points, they typically result in heteroclinic intersections, as shown in Figure 7.12. As we have seen in Chapter 4 (see Figure 4.10(_d_)), one such heteroclinic intersection between the stable and unstable manifolds of two hyperbolic points implies an infinite number of intersections between them.5 Furthermore (as we have discussed for the homoclinic case, Figure 4.11), this also implies the presence of horseshoe type dynamics and hence chaos. Figure 7.11: Perturbation of \(\tilde{q}=3\) and \(\tilde{q}=4\) resonant tori.
Thus, not only do we have a dense set of destroyed resonance regions containing elliptic and hyperbolic orbits, but now we find that these regions of destroyed resonances also have embedded within them chaotic orbits. Furthermore, this repeats on all scales as we successively magnify regions around elliptic points. A very fascinating and intricate picture indeed! ### 7.3 Chaos and KAM tori in systems describable by two-dimensional Hamiltonian maps Numerical examples clearly show the general phenomenology described for small perturbations of integrable systems in the previous section. In addition, numerical examples give information concerning what occurs when the perturbations are not small. Such information in turn points the way for theories applicable in the far from integrable regime. The clearest and easiest numerical experiments are those that result in a two dimensional map (a two dimensional Poincare surface of section). #### The standard map As an example, we consider the standard map, Eq. (7.15), which results from periodic impulsive kicking of the rotor in Figure 7.3. Setting the kicking strength to zero, \(K=0\), the standard map becomes \[\theta_{n+1} = (\theta_{n}+p_{n})\text{ modulo }2\pi, \tag{7.40a}\] \[p_{n+1} = p_{n}. \tag{7.40b}\] This represents an integrable case. The intersections of the tori in the (\(\theta\), \(p\)) surface of section are just the lines of constant \(p\) (according to (7.40b) \(p\) is a constant of the motion). On each such line the orbit is given by \(\theta_{n}=(\theta_{0}+np_{0})\) modulo \(2\pi\), and, if \(p_{0}/2\pi\) is an irrational number, a single orbit densely fills the line \(p=p_{0}\). If \(p_{0}/2\pi\) is a rational number, then orbits on the line return to themselves after a finite number of iterates (the unperturbed orbit is periodic), and we have a resonant torus. Increasing \(K\) slightly from zero introduces a small nonintegrable perturbation to the integrable case (7.40). 
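The integrable structure at \(K=0\) is easy to confirm numerically: \(p\) is a constant of the motion, and a rational \(p_{0}/2\pi\) produces a periodic (resonant) orbit. The sketch below assumes the convention \(\theta_{n+1}=\theta_{n}+p_{n}\), \(p_{n+1}=p_{n}+K\sin\theta_{n+1}\) for Eq. (7.15), consistent with Eqs. (7.40a) and (7.15b) as quoted here:

```python
import math

def standard_map(theta, p, K):
    """One iterate of the standard map:
    theta' = (theta + p) mod 2*pi,  p' = p + K*sin(theta')."""
    theta = (theta + p) % (2 * math.pi)
    return theta, p + K * math.sin(theta)

# K = 0: the integrable case, Eq. (7.40); p never changes.
theta, p = 0.3, 2 * math.pi * (3 / 7)   # resonant torus: p0/(2*pi) = 3/7
for _ in range(7):
    theta, p = standard_map(theta, p, 0.0)
# after 7 iterates theta has advanced by 7*p0 = 6*pi, i.e. back to 0.3 mod 2*pi
```

With \(p_{0}/2\pi\) irrational the same loop never closes, and the iterates fill the line \(p=p_{0}\) densely.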
Figure 7.13 shows plots of \(\vec{p}\equiv p\) modulo \(2\pi\) versus \(\theta\) modulo \(2\pi\) resulting from iterating a number of different initial conditions for a long time and for various values of \(K\). If the initial condition is on an invariant torus, it traces out the closed curve corresponding to the torus. If the initial condition yields a chaotic orbit, then it wanders throughout an area, densely filling it. We see that, for the relatively small perturbation, \(K=0.5\), Figure 7.13(_a_), there are many KAM tori running roughly horizontally from \(\theta=0\) to \(\theta=2\pi\). These tori are those that originate from the nonresonant tori of the unperturbed system (\(p=p_{0}\), \(p_{0}/2\pi\) irrational) and have survived the perturbation. Also clearly seen in Figure 7.13(_a_) are tori, created by the perturbation, nested around elliptic periodic orbits originating from resonant tori. In particular, the period one elliptic orbits, (\(\theta\), \(p\)) = (\(\pi\), 0) and (\(\theta\), \(p\)) = (\(\pi\), \(2\pi\)), and the period two elliptic orbit, (0, \(\pi\)) \(\rightleftharpoons\) (\(\pi\), \(\pi\)), are clearly visible. We call the structure surrounding a period \(\tilde{q}\) elliptic periodic orbit a _period \(\tilde{q}\) island chain_. An important property of two dimensional smooth area preserving maps is that the area bounded by two invariant KAM curves is itself invariant. This is illustrated in Figure 7.14, where we show two invariant curves (tori) bounding a shaded annular shaped region. Since the two curves are invariant and areas are preserved, the shaded region must map into itself. Thus, while there may be chaotic orbits sandwiched between KAM curves (as, for example, in the island structures surrounding elliptic orbits), these chaotic orbits are necessarily restricted to lie between the bounding KAM curves.
(As we shall discuss later, this picture is fundamentally different for systems of higher dimensionality.) As \(K\) is increased, more of the deformed survivors originating from the unperturbed tori are destroyed. At \(K=1\) (Figure 7.13(_b_)) we see that there are none left; that is, there are no tori running as continuous curves from \(\theta=0\) to \(\theta=2\pi\). In their place we see chaotic regions with interspersed island chains. As \(K\) is increased further (Figures 7.13(_c_) and (_d_)) many of the KAM surfaces associated with the island chains disappear, and the chaotic region enlarges. At \(K=4.0\), for example, we see (Figure 7.13(_d_)) that the only discernible islands are those associated with the period one orbits at \((\theta,\ p)=(\pi,\ 0)\), \((\pi,\ 2\pi)\). Increasing \(K\) further, Chirikov (1979) numerically found values of \(K\) (e.g., \(K\simeq 8\)) for which there are no discernible tori, and the entire square \(0\leq\theta,\ p\leq 2\pi\) is, to within the available numerical resolution, ergodically covered densely by a single orbit. Thus, if any island chains are present, they are very small. #### The destruction of KAM surfaces and island chains Considering the standard map, the absence of a period \(\tilde{q}\) island chain at some value \(K=K^{\prime}\) implies that the period \(\tilde{q}\) elliptic periodic orbit has become unstable as \(K\) increases from \(K=0\) to \(K=K^{\prime}\). How does this occur? The answer is that, as \(K\) increases, the eigenvalues of the \(\tilde{q}\)th iterate of the linearized map \(\mathbf{DM}^{\tilde{q}}\) evaluated on the period \(\tilde{q}\) orbit eventually change from complex and of magnitude \(1\) (i.e., \(\exp(\pm\mathrm{i}\theta)\)) to real and negative, with one eigenvalue of magnitude larger than \(1\) and one of magnitude less than \(1\) (i.e., \(\lambda\) and \(1/\lambda\) with \(|\lambda|>1\)).
That is, the periodic orbit of period \(\tilde{q}\) changes from elliptic to hyperbolic with reflection as \(K\) passes through some value \(K=K_{\tilde{q}}\). When a periodic orbit becomes hyperbolic with reflection, its eigenvalues in the elliptic range, \(\exp(\pm\mathrm{i}\theta)\), both approach \(-1\): \(\theta\) approaches \(\pi\) as \(K\) approaches \(K_{\tilde{q}}\). The migration of the eigenvalues in the complex plane as \(K\) passes through \(K_{\tilde{q}}\) is illustrated in Figure 7.15. This leads to a period doubling bifurcation and is typically followed by an infinite period doubling cascade (Bountis, 1981; Greene _et al._, 1981). In such a cascade, the period \(\tilde{q}\) elliptic orbit destabilizes (becomes hyperbolic) simultaneously with the appearance of a period \(2\tilde{q}\) elliptic orbit, which then period doubles to produce a period \(2^{2}\tilde{q}\) elliptic orbit, and so on. Eventually, at some finite amount past \(K_{\tilde{q}}\), all orbits of period \(2^{n}\tilde{q}\) have been stably created and then rendered unstable (hyperbolic) as they period double. This is a Hamiltonian version of the period doubling cascade phenomenon we have discussed for one dimensional maps in Chapter 2. As in that situation, there are universal numbers that describe the scaling properties of such cascades (cf. Chapter 8), although these numbers differ in the Hamiltonian case from those given in Chapter 2. Note that, in this period doubling cascade, whenever \(K\) is in the range where there is an elliptic period \(2^{n}\tilde{q}\) periodic orbit, there is a nested set of invariant tori surrounding that orbit (i.e., there is a period \(2^{n}\tilde{q}\) island chain). When \(K=0\) the standard map is integrable. As \(K\) is increased, chaotic regions occupy increasingly large areas, and the original KAM tori of the integrable system are successively destroyed.
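The eigenvalue migration of Figure 7.15 can be traced explicitly for the period one orbit at \((\theta,\,p)=(\pi,\,0)\). In the convention \(\theta_{n+1}=\theta_{n}+p_{n}\), \(p_{n+1}=p_{n}+K\sin\theta_{n+1}\) (an assumption consistent with the equations quoted in this section), the linearized map there has trace \(2-K\) and determinant \(1\), so the orbit becomes hyperbolic with reflection at \(K_{\tilde{q}}=4\). A sketch:

```python
import cmath

def eigenvalues(K):
    """Eigenvalues of the linearized standard map at (theta, p) = (pi, 0),
    where trace = 2 - K and det = 1 (area preservation):
    lambda = (T +/- sqrt(T^2 - 4)) / 2."""
    T = 2 - K
    disc = cmath.sqrt(T * T - 4)
    return (T + disc) / 2, (T - disc) / 2

lam1, lam2 = eigenvalues(3.9)   # elliptic: exp(+/-i*theta), |lambda| = 1
mu1, mu2 = eigenvalues(4.1)     # hyperbolic with reflection: real, negative
```

Just below \(K=4\) the pair sits on the unit circle near \(-1\); just above, it splits onto the negative real axis as \(\lambda\) and \(1/\lambda\).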
Say we identify a particular nonresonant KAM torus by its rotation number \[R=\lim_{m\to\infty}\frac{1}{2\pi m}\sum_{n=1}^{m}p_{n}\] (\(p_{n}\) is the amount by which \(\theta\) increases on each iterate; see Eq. (7.15a)). As we increase \(K\), the torus deforms from the straight horizontal line, \(p=2\pi R\), that it occupied for \(K=0\). Past some critical value \(K>K_{\rm crit}(R)\) the torus no longer exists. How does one numerically calculate \(K_{\rm crit}(R)\)? To answer this question we note the result from number theory that the irrational number \(R\) can be represented as an infinite continued fraction, \[R=a_{1}+\cfrac{1}{a_{2}+\cfrac{1}{a_{3}+\cfrac{1}{a_{4}+\cdots}}}\] where the \(a_{i}\) are integers. As a shorthand we write \(R=[a_{1},a_{2},a_{3},\ldots]\). If one cuts off the continued fraction at \(a_{n}\), \[R_{n}=[a_{1},\,a_{2},\,\ldots,\,a_{n},\,0,\,0,\,0,\,\ldots],\] then one obtains a rational approximation to \(R\) which converges to \(R\) as \(n\rightarrow\infty\), \[R=\lim_{n\rightarrow\infty}R_{n}.\] If we examine the \(K>0\) island chain with rotation number \(R_{n}\), we find that the elliptic Poincare Birkhoff periodic orbits (Figures 7.10–7.12) for the island chain approach the nonresonant torus of irrational rotation number \(R\) as \(n\) increases. This leads one to investigate the stability of these periodic orbits. As illustrated in Figure 7.15, the complex eigenvalues \(\exp(\pm\mathrm{i}\theta)\) of the Jacobian matrix corresponding to such a periodic orbit change to real negative eigenvalues \(\lambda\) and \(1/\lambda\) at some critical \(K\) value (which depends on the particular periodic orbit). It is found numerically that the critical \(K\) values of these Poincare Birkhoff periodic orbits of rational rotation number \(R_{n}\) rapidly approach the value \(K_{\rm crit}(R)\) as \(n\) increases.
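Continued fraction expansions and their rational cutoffs \(R_{n}\) are easy to compute. A minimal sketch (the use of exact rationals via the `fractions` module is an implementation choice):

```python
from fractions import Fraction

def continued_fraction(x, n_terms):
    """Leading terms [a1, a2, ...] of the continued fraction of x > 0,
    in the convention R = a1 + 1/(a2 + 1/(a3 + ...)) used in the text."""
    terms = []
    for _ in range(n_terms):
        a = int(x)
        terms.append(a)
        frac = x - a
        if frac == 0:
            break          # x was rational: expansion terminates
        x = 1 / frac
    return terms

def convergent(terms):
    """Rational cutoff R_n obtained from a finite list of terms."""
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + Fraction(1, 1) / value
    return value

# all-ones expansion: convergents are ratios of consecutive Fibonacci numbers
approx = convergent([1] * 12)   # = F_13 / F_12 = 233/144
```

The all-ones expansion converges most slowly of all, which is the sense in which the golden mean discussed next is the 'most irrational' number.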
Since efficient numerical procedures exist for finding such orbits, this provides an efficient way of accurately determining \(K_{\rm crit}(R)\). Schmidt and Bialek (1982) have used this procedure to investigate the pattern accompanying torus destruction of arbitrary irrational tori. Greene (1979) conjectured that, since the golden mean \(R_{\rm g}=(\sqrt{5}-1)/2\) is the 'most irrational' number, in the sense that it is most slowly approached by cutoffs of its continued fraction expansion, \[R_{\rm g}=[0,\,1,\,1,\,1,\,\ldots]=\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+\cdots}}}\] the torus with \(R=R_{\rm g}\) will be the last surviving torus as \(K\) is increased (i.e., \(K_{\rm crit}(R)\) is largest for \(R=R_{\rm g}\)). Using the periodic orbit technique described above, Greene finds that \(K_{\rm crit}(R_{\rm g})=0.97\ldots\). Figure 7.16 shows the standard map for \(K=0.97\) (Greene, 1979) with the \(R=R_{\rm g}\) tori and some chaotic orbits plotted (there are two such tori in \(0\leq p\leq 2\pi\)). An important result concerning the \(R=R_{\rm g}\) torus is that the phase space structure in its vicinity exhibits intricate scaling properties at and near \(K=K_{\rm crit}(R_{\rm g})\), and this phenomenon has been investigated by the renormalization group method. The \(2\pi\) periodicity of the standard map in \(p\) allowed us to use \(p\) modulo \(2\pi\) for the vertical coordinate in Figure 7.13 rather than \(p\). Thus, if we were to ask for the structure of the solutions for all \(p\), our answer would be given by pasting together an infinite string of pictures obtained by successively translating the basic unit (as in Figure 7.13) by \(2\pi\). For example, for the case of \(K=4.0\) (Figure 7.13(_d_)), we see that a single chaotic component connects regions of the line \(p=0\) with regions of the line \(p=2\pi\). By the periodicity in \(p\), this implies that this chaotic region actually runs from \(p=-\infty\) to \(p=+\infty\).
Thus, in terms of the rotor model (Figure 7.3), if we start an initial condition in this chaotic component, it can wander with time to arbitrarily large rotor energies, \(p^{2}/2\) (here we have taken the rotor's moment of inertia to be 1). On the other hand, if we were to start an initial condition for \(K=4.0\) inside the period one island surrounding one of the period one fixed points, it would remain there forever; its energy would thus be bounded for all time. Note that if we plot the actual momentum, \(+\infty>p>-\infty\), versus \(\theta\), then we are treating the phase space of the two dimensional standard map as a cylinder. On the other hand, our plot where we utilized \(p\) modulo \(2\pi\) reduced the phase space to the surface of a torus. While the toroidal surface representation is convenient for displaying the structure of intermixed chaotic and KAM regions, we emphasize that \(p\) and \(p+2\pi k\) (\(k=\) an integer) are not physically equivalent, since they generally represent different kinetic energies of the rotor. The case shown in Figure 7.16 corresponds to the largest value of \(K\) for which there are KAM curves running completely around the (\(\theta\), \(p\)) cylinder in the \(\theta\) direction. The presence of a KAM curve running around the (\(\theta\), \(p\)) cylinder implies an infinite number of such curves by translation of \(p\) by multiples of \(2\pi\). Furthermore, any orbit lying between two such curves cannot cross them (Figure 7.14) and so is restricted to lie between them forever. Thus, the energy of the rotor cannot increase without bound. When \(K\) increases past the critical value \(K_{\rm c}\simeq 0.97\), the last invariant tori encircling the cylinder are destroyed, and a chaotic area connecting \(p=-\infty\) and \(p=+\infty\) exists. This means that for \(K>K_{\rm c}\) the rotor energy can increase without bound if the initial condition lies in the chaotic component connecting \(p=-\infty\) and \(p=+\infty\).
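The contrast between confinement below \(K_{\rm c}\) and unbounded momentum excursions above it shows up immediately in simulation. In this sketch the initial condition and iteration count are arbitrary choices, and the convention \(\theta_{n+1}=\theta_{n}+p_{n}\), \(p_{n+1}=p_{n}+K\sin\theta_{n+1}\) is assumed; the true momentum is tracked, so the phase space is the cylinder:

```python
import math

def standard_map(theta, p, K):
    """theta' = (theta + p) mod 2*pi,  p' = p + K*sin(theta');
    p is NOT taken modulo 2*pi."""
    theta = (theta + p) % (2 * math.pi)
    return theta, p + K * math.sin(theta)

def max_excursion(K, n=20000, theta=2.0, p=1.0):
    """Largest |p - p0| along a single orbit of n iterates."""
    p0, biggest = p, 0.0
    for _ in range(n):
        theta, p = standard_map(theta, p, K)
        biggest = max(biggest, abs(p - p0))
    return biggest

small = max_excursion(0.5)   # K < K_c: trapped between encircling KAM curves
large = max_excursion(4.0)   # K > K_c: chaotic component spans all p
```

Below \(K_{\rm c}\) the excursion stays under one \(2\pi\) cell; above it the orbit random-walks through many cells, so the rotor energy grows without bound.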
#### Diffusion in momentum Let us now consider the case of large \(K\) such that there are no discernible KAM surfaces present, and the entire region of a plot of \(p\) modulo \(2\pi\) versus \(\theta\) appears to be densely covered by a single chaotic orbit. Referring to Eq. (7.15b), we see that the change in momentum (not taken modulo \(2\pi\)), \(\Delta p_{n}\equiv p_{n+1}-p_{n}=K\sin\theta_{n+1}\), is typically large (i.e., of the order of \(K\)). If we assume \(K\gg 2\pi\), then \(\Delta p\) will also typically be large compared to \(2\pi\). Thus, by Eq. (7.15a), we expect \(\theta\) (which is taken modulo \(2\pi\)) to vary very wildly in [0, \(2\pi\)]. We, therefore, treat \(\theta_{n}\) as effectively random, uniformly distributed, and uncorrelated for different times (i.e., different \(n\)). With these assumptions, the motion in \(p\) becomes a random walk with step size \(\Delta p_{n}=K\sin\theta_{n+1}\). Thus, over momentum scales larger than \(K\), the momentum evolves according to a diffusion process with diffusion coefficient \[D=\frac{\langle(\Delta p_{n})^{2}\rangle}{2}=\frac{K^{2}}{2}\langle\sin^{2}\theta_{n +1}\rangle, \tag{7.41}\] where the angle brackets denote a time average, and by virtue of the randomness assumption for the \(\theta_{n}\) we have \(\langle\sin^{2}\theta_{n+1}\rangle=\frac{1}{2}\). Inserting the latter in (7.41) gives the so called _quasilinear_ approximation to the diffusion coefficient, \[D\cong D_{\rm QL}=K^{2}/4. \tag{7.42}\] If we imagine that we spread a cloud of initial conditions uniformly in \(\theta\) and \(p\) in the cell \(-\pi\leq p\leq\pi\), then the momentum distribution function \(f(p,\ n)\) at time \(n\), coarse grained over intervals in \(p\) greater than \(2\pi\), is \[f(p,\ n)\simeq\frac{1}{(4\pi nD)^{1/2}}\exp\left(-\frac{p^{2}}{4nD}\right). \tag{7.43}\] That is, the distribution is a spreading Gaussian. This result follows from the fact that the process is diffusive.
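The quasilinear estimate can be tested directly by spreading a cloud of initial conditions and measuring the growth rate of \(\langle p^{2}/2\rangle\). In this sketch the ensemble size, iteration count and the value \(K=30\) are arbitrary choices, and the convention \(\theta_{n+1}=\theta_{n}+p_{n}\), \(p_{n+1}=p_{n}+K\sin\theta_{n+1}\) is assumed:

```python
import math
import random

def diffusion_estimate(K, n_particles=2000, n_steps=400, seed=1):
    """Estimate D from the growth <p^2/2> ~ D*n for a cloud of initial
    conditions spread uniformly in theta with p0 = 0."""
    rng = random.Random(seed)   # fixed seed for a reproducible run
    total = 0.0
    for _ in range(n_particles):
        theta = rng.uniform(0, 2 * math.pi)
        p = 0.0
        for _ in range(n_steps):
            theta = (theta + p) % (2 * math.pi)
            p += K * math.sin(theta)
        total += p * p / 2
    return total / (n_particles * n_steps)

K = 30.0
D = diffusion_estimate(K)
D_ql = K * K / 4   # quasilinear value, Eq. (7.42)
```

The measured ratio \(D/D_{\rm QL}\) comes out of order unity; the residual deviation from 1 reflects the correlation effects responsible for the oscillations in Rechester and White's curve.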
Taking the second moment of the distribution function, \(\int p^{2}f\,\mathrm{d}p\), we see that the average rotor energy increases linearly with time, \[\langle p^{2}/2\rangle\simeq Dn. \tag{7.44}\] The quasilinear result (7.42) is valid for very large \(K\). For moderately large, but not very large, values of \(K\), neglected correlation effects can significantly alter the diffusion coefficient from the quasilinear value. These effects have been analytically calculated by Rechester and White (1980) (see also Rechester _et al_. (1981), Karney _et al_. (1981) and Carey _et al_. (1981)). Figure 7.17 shows a plot of the diffusion coefficient \(D\) normalized to \(D_{\rm QL}\) as a function of \(K\) from the paper of Rechester and White. The solid curve is their theory, and the dots are obtained by numerically calculating the spreading of a cloud of points and obtaining \(D\) from Eq. (7.44). Note the decaying oscillations about the quasilinear value as \(K\) increases. Figure 7.17: \(D/D_{\rm QL}\) versus \(K\) for the standard map (Rechester and White, 1980). #### Other examples So far in this section we have dealt exclusively with the standard map. We now discuss some other examples, also reducible to two dimensional maps, where similar phenomena are observed. We first consider a time independent two degree of freedom system investigated by Schmidt and Chen (1994). This system, depicted in Figure 7.18, consists of two masses, a large mass \(M\) connected to a linearly behaving spring of spring constant \(k_{\rm s}\) and a small mass \(m\) which elastically bounces between a fixed wall on the left and the oscillating large mass on the right. The motion in space is purely one dimensional. This represents a time independent Hamiltonian system which Schmidt and Chen call the 'autonomous Fermi system'.
Since the Hamiltonian is time independent, the total energy of the system, consisting of the sum of the kinetic energy of the two masses and the potential energy in the spring, is conserved. (This is unlike the rotor system (Figure 7.3) which has external kicking that enables the energy to increase without bound for \(K>K_{c}\).) Schmidt and Chen numerically calculate a Poincare surface of section and plot the state of the system at the instants of time just after the masses \(m\) and \(M\) collide. Plots corresponding to three cases are shown in Figure 7.19. In this figure \(\upsilon\) is the velocity of the small mass and \(\phi\) is the phase of the large mass in the sinusoidal oscillation that it experiences between bounces. The maximum value of \(\upsilon\) is 1 (for the normalization used) and is attained if all the system energy is in the small mass. The three cases shown correspond to successively larger values of \(\omega_{0}\bar{T}\), where \(\omega_{0}=(k_{\rm s}/M)^{1/2}\) is the natural oscillation frequency of the large mass and \(\bar{T}\) is the mean time between bounces. We note that at low \(\omega_{0}\bar{T}\) (Figure 7.19(_a_)) we see many KAM surfaces as well as island chains and chaos at lower \(\upsilon\) (\(\upsilon\approx 0.25\)). At higher \(\omega_{0}\bar{T}\) (Figure 7.19(_b_)), the chaotic region enlarges substantially, while at the highest value plotted (Figure 7.19(_c_)) a single orbit appears to cover the available area of the surface of section ergodically. This latter situation corresponds to ergodic wandering of the orbit over the energy surface in the full four dimensional phase space. Accordingly, for the case of Figure 7.19(_c_) Schmidt and Chen numerically confirm that there is a time average energy equipartition between the energies of the two masses and the energy of the spring, each having on average very close to one third of the total energy of the system.
The equipartition of time averaged kinetic energy is a familiar result in the statistical mechanics of many degree of freedom systems (\(\langle p_{1}^{2}\rangle/2m_{1}=\langle p_{2}^{2}\rangle/2m_{2}=\cdots=\langle p_{N}^{2}\rangle/2m_{N}\)). Here equipartition of kinetic energy for a system of only two degrees of freedom holds because the system is essentially ergodic on the energy surface. (Indeed it is the most important fundamental assumption of statistical mechanics that typical many degree of freedom systems are ergodic on their energy surface. The justification of this assumption, however, is far from obvious, and remains an open problem.)

Figure 7.18: The system considered by Schmidt and Chen. \(L/2\) represents the distance between the left hand wall and the right hand surface of mass \(M\) when the spring is in its equilibrium position. (Courtesy of Q. Chen and G. Schmidt.)

We could go on to cite many other examples of mechanical systems displaying the type of behavior seen in the two examples of the kicked rotor (Figure 7.3) and the autonomous Fermi system (Figure 7.19). It is, perhaps, somewhat more surprising that these same phenomena apply to situations in which one is not dealing with straightforward problems of mechanics. The point is that these problems are also described by Hamilton's equations. Three examples of this type are the following. (1) Nonturbulent mixing in fluids. (2) The trajectories of magnetic field lines in plasmas. (3) The ray equations for the propagation of short wavelength waves in inhomogeneous media. In the case of mixing in fluids we restrict ourselves to the situation of a two dimensional incompressible flow: \(\mathbf{v}(\mathbf{x},\ t)=\upsilon_{x}(x,\ y,\ t)\mathbf{x}_{0}+\upsilon_{y}(x,\ y,\ t)\mathbf{y}_{0}\) with \(\partial\upsilon_{x}/\partial x+\partial\upsilon_{y}/\partial y=\nabla\cdot\mathbf{v}=0\).
The incompressibility condition, \(\nabla\cdot\mathbf{v}=0\), means that we can express \(\mathbf{v}\) in terms of a stream function \(\psi\), \[\mathbf{v}=\mathbf{z}_{0}\times\nabla\psi(x,\ y,\ t)\] or \[\upsilon_{x}=-\partial\psi/\partial y,\quad\upsilon_{y}=\partial\psi/\partial x.\] Now consider the motion of an impurity particle convected with the fluid. The location of this particle evolves according to \(\mathrm{d}\mathbf{x}/\mathrm{d}t=\mathbf{v}(\mathbf{x},\ t)\), or, using the stream function, \[\mathrm{d}x/\mathrm{d}t=-\partial\psi/\partial y, \tag{7.45a}\] \[\mathrm{d}y/\mathrm{d}t=\partial\psi/\partial x. \tag{7.45b}\] Comparing Eqs. (7.45) with Eqs. (7.1), we see that (7.45) are in the form of a one degree of freedom (\(N=1\)) time dependent Hamiltonian system, if we identify the stream function \(\psi\) with the Hamiltonian \(H\), \(x\) with the momentum \(p\), and \(y\) with the 'position' \(q\): \[\psi(x,\,y,\,t) \leftrightarrow H(p,\,q,\,t),\] \[x \leftrightarrow p,\] \[y \leftrightarrow q.\] Thus, in our fluid problem the canonically conjugate variables are \(x\) and \(y\). As an example, we consider the 'blinking vortex' flow of Aref (1984). In this flow there are two vortices of equal strength, one located at \((x,\,y)=(a,\,0)\) and the other located at \((x,\,y)=(-a,\,0)\). The vortices (which may be thought of as thin rotating stirring rods) are taken to 'blink' on and off with time with period \(2T\). That is, for \(2kT\leq t<(2k+1)T\) (\(k=0,\,1,\,2,\,3,\,\ldots\)), the vortex at \((a,\,0)\) is on, while the vortex at \((-a,\,0)\) is turned off, and, for \((2k+1)T\leq t<2(k+1)T\), the vortex at \((a,\,0)\) is off while the vortex at \((-a,\,0)\) is on.
The flow induced by a single vortex of strength \(\Gamma\) can be expressed in (\(\rho\), \(\theta\)) polar coordinates centered at the vortex as \[\upsilon_{\theta}=\Gamma/(2\pi\rho)\ \text{ and }\ \upsilon_{\rho}=0.\] Thus, the blinking vortex has the effect of alternately rotating points in concentric circles, first about one vortex center and then about the other vortex center, each time by an angle \(\Delta\theta=\Gamma T/(2\pi\rho^{2})\). Sampling the position of a particle at times \(t=2kT\) defines a two dimensional area preserving map which depends on the strength parameter \(\mu=\Gamma T/(2\pi a^{2})\). Figure 7.20 from Doherty and Ottino (1988) shows results from iterating several different initial conditions for successively larger values of the strength parameter \(\mu\). For very small \(\mu\) (Figure 7.20(_a_), \(\mu=0.1\)) the result is very close to the completely integrable case where both vortices act simultaneously and steadily in time. As \(\mu\) is increased we see that the area occupied by chaotic motion increases. The practical effect of this type of result for fluid mixing can be seen by considering a small dollop of dye in such a flow. For example, say the dye is initially placed in the location indicated by the shaded circle in Figure 7.20(_a_). In the near integrable case Figure 7.20(_a_), as time goes on, this dye would always necessarily be located between the two KAM curves that initially bound it. Due to the different rotation rates on the different KAM surfaces, the dye will mix throughout the annular region bounded by these KAM curves, but (in the absence of molecular diffusion) it can never mix with the fluid outside this annular region. In contrast, for the case \(\mu=0.4\) (Figure 7.20(_e_)), we see that there is a large single connected chaotic region, and an initial dollop of dye in the same location as before would thus mix uniformly throughout this much larger region.
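In map form, one period of the blinking vortex flow is a composition of two twists, each rotating a tracer about the active vortex center through the angle \(\Gamma T/(2\pi\rho^{2})=\mu a^{2}/\rho^{2}\) while leaving its distance \(\rho\) to that center fixed. A minimal sketch (ours; we set \(a=1\) and the function names are our own):

```python
import math

def rotate_about(x, y, cx, mu, a=1.0):
    """Rotate (x, y) about the vortex center (cx, 0) through the angle
    mu * a**2 / rho**2 (i.e., Gamma*T/(2*pi*rho**2)); rho is unchanged.
    Assumes the tracer is not exactly at the vortex center."""
    dx, dy = x - cx, y
    rho2 = dx * dx + dy * dy
    ang = mu * a * a / rho2
    c, s = math.cos(ang), math.sin(ang)
    return cx + c * dx - s * dy, s * dx + c * dy

def blink(x, y, mu, a=1.0):
    """One full period 2T: vortex at (+a, 0) on for time T, then (-a, 0)."""
    x, y = rotate_about(x, y, +a, mu, a)
    x, y = rotate_about(x, y, -a, mu, a)
    return x, y

# Iterate a tracer; each half step is a rigid rotation about a center, so
# the period map is area preserving.
x, y = 0.3, 0.7
for _ in range(1000):
    x, y = blink(x, y, mu=0.4)
```

Sampling \((x,\,y)\) once per period for several initial conditions reproduces plots of the type shown in Figure 7.20.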
Thus, we see that, for the purposes of achieving the most uniform mixing in fluids, chaos is a desirable attribute of the flow that one should strive to maximize. Several representative references on chaotic mixing in fluids are Aref and Balachandran (1986), Chaiken _et al._ (1986), Dombre _et al._ (1986), Feingold _et al._ (1988), Ott and Antonsen (1989), Rom-Kedar _et al._ (1990), and the comprehensive book on the subject by Ottino (1989).

Figure 7.20: Blinking vortex orbits for (_a_) \(\mu=0.01\), (_b_) \(\mu=0.15\), (_c_) \(\mu=0.25\), (_d_) \(\mu=0.3\) and (_e_) \(\mu=0.4\) (Doherty and Ottino, 1988).

We now discuss the second of the three applications mentioned above, namely, the trajectory of magnetic field lines in plasmas. Let \(\mathbf{B}(\mathbf{x})\) denote the magnetic field vector. The field line trajectory equation gives a parametric function \(\mathbf{x}(s)\) for the curve on which a magnetic field line lies, where \(s\) is a parameter which we can think of as a (distorted) measure of distance along the field line. The equation for \(\mathbf{x}(s)\) is \[\mathrm{d}\mathbf{x}(s)/\mathrm{d}s=\mathbf{B}(\mathbf{x}). \tag{7.46}\] (Alternatively, we can multiply the right hand side of (7.46) by any positive scalar function of \(\mathbf{x}\).) Since \(\nabla\cdot{\bf B}=0\), Eq. (7.46) represents a conservative flow, if we make an analogy between \(s\) and time. Thus, the magnetic field lines in physical space are mathematically analogous to the trajectory of a dynamical system in its phase space. In Problem 3 you are asked to establish for a simple example that (7.46) can be put in Hamiltonian form. The Hamiltonian nature of 'magnetic field line flow' means that under many circumstances we can expect that some magnetic field line trajectories fill up toroidal surfaces, while other field lines wander chaotically over a volume which may be bounded by tori. In other words, the situation can be precisely as depicted in Figure 7.12.
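The Hamiltonian character of Eq. (7.46) is easy to see numerically in the setting of Problem 3: for \({\bf B}=B_{0}{\bf z}_{0}+\nabla\times(A{\bf z}_{0})\) with \(A\) independent of \(z\), the field line equations become \({\rm d}x/{\rm d}z=(\partial A/\partial y)/B_{0}\), \({\rm d}y/{\rm d}z=-(\partial A/\partial x)/B_{0}\), so \(A/B_{0}\) is conserved along a field line. A sketch (the flux function below is an arbitrary choice of ours, for illustration only):

```python
import math

B0 = 1.0  # constant axial field strength

def A(x, y):
    # arbitrary z-independent flux function (our choice, not from the text)
    return 0.5 * (x * x + y * y) + 0.3 * math.cos(x)

def gradA(x, y):
    # analytic gradient of A: (dA/dx, dA/dy)
    return x - 0.3 * math.sin(x), y

def rhs(x, y):
    # dx/dz = B_x/B_z = (dA/dy)/B0,  dy/dz = B_y/B_z = -(dA/dx)/B0:
    # Hamilton's equations with z as 'time' and A/B0 as the Hamiltonian
    dAdx, dAdy = gradA(x, y)
    return dAdy / B0, -dAdx / B0

def trace(x, y, dz=0.01, steps=5000):
    """Follow a field line in z with classical 4th order Runge-Kutta."""
    for _ in range(steps):
        k1 = rhs(x, y)
        k2 = rhs(x + 0.5 * dz * k1[0], y + 0.5 * dz * k1[1])
        k3 = rhs(x + 0.5 * dz * k2[0], y + 0.5 * dz * k2[1])
        k4 = rhs(x + dz * k3[0], y + dz * k3[1])
        x += dz * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        y += dz * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return x, y

x1, y1 = trace(1.0, 0.0)
print(A(1.0, 0.0), A(x1, y1))  # the 'Hamiltonian' A/B0 is conserved
```

When \(A\) also depends on \(z\) the system becomes a time dependent one degree of freedom Hamiltonian, and chaotic field lines of the kind discussed above can appear.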
These considerations are of great importance in plasma physics and controlled nuclear fusion research. In the latter, the fundamental problem is to confine a hot plasma (gas of electrons and ions) for a long enough time that sufficient energy releasing nuclear fusion reactions take place. If the magnetic field is strong, then, to a first approximation, the motion of the charged particles constituting the plasma is constrained to follow the magnetic field lines. (This approximation is better for the lighter mass electrons than for the ions.) In this view, the problem of confining the plasma becomes that of creating a magnetic field line configuration such that the magnetic field lines are confined. That is, the magnetic field lines do not connect the plasma interior to the walls of the device. The most simple example of such a configuration is provided by the tokamak device, originally invented in the Soviet Union. (This device is currently the one on which most of the attention of the nuclear fusion community is focused.) Figure 7.21(_a_) illustrates the basic idea of the tokamak. An external current system (in the figure the wire with current \(I_{0}\)) creates a magnetic field \(B_{\rm T}\) running the long way (called the 'toroidal direction') around a toroid of plasma. At the same time, another current is induced to flow in the plasma in the direction running the long way around the torus. (This toroidal plasma current is typically created by transformer action wherein the plasma loop serves as the secondary coil of a transformer.) The toroidal plasma current then creates a magnetic field component \(B_{\rm p}\) which circles the short way around the torus (the 'poloidal direction'). 
Assuming that the configuration is perfectly symmetric with respect to rotations around the axis of the system, the superposition of the toroidal and poloidal magnetic fields leads to field lines that typically circle on a toroidal surface, simultaneously in both the toroidal and poloidal directions, filling the surface ergodically. Thus, the field lines are restricted to lie on a nested set of tori and never intersect bounding walls of the device. This is precisely analogous to the case of an integrable Hamiltonian system. This is the situation if there is perfect toroidal symmetry. Unfortunately, symmetry can be destroyed by errors in the external field coils, by necessary asymmetries in the walls, and, most importantly, by toroidal dependences of the current flowing in the plasma. (The latter can arise due to collective motions of the plasma as a result of a variety of instabilities that have been very extensively investigated.) Such symmetry breaking magnetic field perturbations play a role analogous to nonintegrable perturbations of an integrable Hamiltonian system. Thus, they can destroy some of the nested set of toroidal magnetic surfaces that exists in the symmetric case. If the perturbation is too strong, chaotic field lines can wander from the interior of the plasma to the wall. This leads to rapid heat and particle loss of the plasma. (Refer to Figure 7.13 and think of \(K\) as the strength of the asymmetric field perturbation.) Some representative papers which discuss chaotic magnetic field line trajectories in plasmas and their physical effects are Rosenbluth _et al._ (1966), Sinclair _et al._ (1970), Finn (1975), Rechester and Rosenbluth (1978), Cary and Littlejohn (1983), Hanson and Cary (1984) and Lau and Finn (1991). As our final example, we consider the ray equations describing the propagation of short wavelength waves in a time independent spatially inhomogeneous medium.
In the absence of inhomogeneity, we assume that the partial differential equations governing the evolution of small amplitude perturbations of the dependent quantities admit plane wave solutions in which the perturbations vary as \(\exp({\rm i}{\bf k}\cdot{\bf x}-{\rm i}\omega t)\), where \(\omega\) and \({\bf k}\) are the frequency and wavenumber of the wave. The quantities \(\omega\) and \({\bf k}\) are constrained by the governing equations (e.g., Maxwell's equations if we are dealing with electromagnetic waves) to satisfy a dispersion relation \[D({\bf k},\,\omega)=0.\] Now assume that the medium is inhomogeneous with variations occurring on a scale size \(L\) which is much longer than the typical wavelength of the wave, \(|{\bf k}|L\gg 1\). For propagation distances small compared to \(L\), waves behave approximately as if the medium were homogeneous. For propagation distances of the order of \(L\) or longer, the spatial part of the homogeneous medium expression for the phase, namely \({\bf k}\cdot{\bf x}\), is distorted. We, therefore, assume that the perturbations have a rapid (compared to \(L\)) spatial variation of the form \[\exp[{\rm i}\tilde{S}({\bf x})-{\rm i}\omega t], \tag{7.47}\] where the function \(\tilde{S}({\bf x})\) is called the _eikonal_ and replaces the homogeneous medium phase term \({\bf k}\cdot{\bf x}\). The _local_ wavenumber \({\bf k}\) is given by \[{\bf k}=\nabla\tilde{S}({\bf x}). \tag{7.48}\] We wish to find an equation for the propagation of a wave along some path (called the 'ray path'). Along this path we seek parametric equations for \({\bf x}\) and \({\bf k}\). That is, we seek \(({\bf x}(s),\,{\bf k}(s))\), where \(s\) measures the distance along the ray. In terms of these ray path functions, we can determine the function \(\tilde{S}({\bf x})\) using Eq. (7.48), \[\tilde{S}({\bf x})=\tilde{S}({\bf x}_{0})+\int_{{\bf x}_{0}}^{{\bf x}}{\bf k}\cdot{\rm d}{\bf x}^{\prime},\] where the integral is taken along the ray path. The ray path itself is determined by the dispersion relation through the ray equations \[{\rm d}{\bf k}/{\rm d}s=-\partial D/\partial{\bf x}, \tag{7.52a}\] \[{\rm d}{\bf x}/{\rm d}s=\partial D/\partial{\bf k}, \tag{7.52b}\] where \(D\) is now regarded as a function \(D({\bf k},\,\omega,\,{\bf x})\) of the local wavenumber, frequency and position. In particular, if we solve the dispersion relation for the frequency, \(\omega=\hat{\omega}({\bf k},\,{\bf x})\), and take \(D=\hat{\omega}({\bf k},\,{\bf x})-\omega\), then \({\rm d}{\bf x}/{\rm d}s=\partial\hat{\omega}/\partial{\bf k}\), which is just the group velocity of a wavepacket.
Thus, letting \(\tau\) denote time, we have \({\rm d}{\bf x}/{\rm d}\tau=\partial\hat{\omega}/\partial{\bf k}\), and hence \(s=\tau\) in this case. This yields \[{\rm d}{\bf k}/{\rm d}\tau=-\partial\hat{\omega}/\partial{\bf x}, \tag{7.53a}\] \[{\rm d}{\bf x}/{\rm d}\tau=\partial\hat{\omega}/\partial{\bf k}, \tag{7.53b}\] which can be interpreted as giving the temporal evolution of the position \({\bf x}\) and wavenumber \({\bf k}\) of a wavepacket. Both (7.52) and (7.53) are Hamiltonian with \(D\) and \(\hat{\omega}\), respectively, playing the role of the Hamiltonian, and \(({\bf k},\,{\bf x})\) being the canonically conjugate momentum (\({\bf k}\)) and position (\({\bf x}\)) variables. We now discuss a particular example where chaotic solutions of the ray equations play a key role (Bonoli and Ott, 1982; Wersinger _et al._, 1978). One of the central problems in creating a controlled thermonuclear reactor lies in raising the temperature of the confined plasma sufficiently to permit fusion reactions to take place. One way of doing this is by launching waves from outside the plasma that then propagate to the plasma interior, where they dissipate their energy to heat (similar, in principle, to a kitchen microwave oven). Clearly, conditions must be such that the wave is able to reach the plasma interior. In this case, in the terminology of the field, the wave is said to be 'accessible.' Of the various types of waves that can be used for plasma heating, the so called 'lower hybrid' wave is one of the most attractive from a technological point of view. The accessibility problem for this wave was originally considered by Stix (1965) for the case in which there are two symmetry directions. For example, in a straight circular cylinder, \(k_{z}\) and \(m=k_{\theta}r\) are constants of the ray equations due to the translational symmetry along the axis of the cylinder (the \(z\) axis) and to the rotational symmetry around the cylinder (in \(\theta\)).
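These conservation properties can be illustrated with a toy ray tracing sketch (ours; the model dispersion function \(\hat{\omega}\) below is invented for illustration). In a time independent medium \(\hat{\omega}\), being the Hamiltonian of (7.53), is conserved along a ray, and an ignorable coordinate (here \(z\)) yields a conserved conjugate wavenumber, just as \(k_{z}\) is conserved in the straight cylinder:

```python
import math

def wp2(x):
    # model 'plasma frequency' profile; depends on x only (z is ignorable)
    return 1.0 + 0.5 * x * x

def omega_hat(x, kx, kz):
    return math.sqrt(kx * kx + kz * kz + wp2(x))

def rhs(state):
    x, z, kx, kz = state
    w = omega_hat(x, kx, kz)
    # Eqs. (7.53): dx/dtau = d(omega)/dk,  dk/dtau = -d(omega)/dx
    return (kx / w, kz / w, -0.5 * x / w, 0.0)  # -d(omega)/dz = 0

def rk4(state, dt, steps):
    """Integrate the ray equations with classical 4th order Runge-Kutta."""
    for _ in range(steps):
        k1 = rhs(state)
        k2 = rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
        k3 = rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
        k4 = rhs(tuple(s + dt * k for s, k in zip(state, k3)))
        state = tuple(s + dt * (a + 2 * b + 2 * c + d) / 6
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state

s0 = (0.5, 0.0, 0.8, 0.6)        # (x, z, kx, kz)
s1 = rk4(s0, dt=0.005, steps=4000)
# the frequency (the ray Hamiltonian) and k_z are both conserved
print(omega_hat(s0[0], s0[2], s0[3]), omega_hat(s1[0], s1[2], s1[3]))
```

Breaking the \(z\) symmetry (letting `wp2` depend on \(z\) as well) destroys the conservation of \(k_{z}\), which is the analog of the loss of the constant \(m\) in the toroidal geometry discussed next.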
The accessibility situation for this case is illustrated in Figure 7.22 for a cylinder with an applied magnetic field \({\bf B}=B_{0}{\bf z}_{0}+B_{\theta}\boldsymbol{\theta}_{0}\), and a wave launched from vacuum with \(m=0\). Let \(n_{\parallel}=k_{z}c/\omega\) (where \(c\) is the speed of light) and \(n_{\perp}=k_{r}c/\omega\). We assume that the plasma density \(N_{0}\) increases with decreasing radius \(r\) from \(N_{0}=0\) at the plasma edge (\(r=a\)) to its maximum value in the center of the cylinder (\(r=0\)). Figure 7.22 shows plots of \(n_{\perp}^{2}\) (obtained from the dispersion relation) as a function of \(N_{0}\). For \(n_{\parallel}<n_{a}\), \(n_{\parallel}=n_{a}\), and \(n_{\parallel}>n_{a}\), Figures 7.22(\(a\)), (\(b\)) and (\(c\)) apply, respectively, where \(n_{a}\) is a certain critical value (Stix, 1965). Between \(N_{0}=0\) and \(N_{0}=N_{S}\), there is a narrow cutoff region through which a slow wave (i.e., lower hybrid wave), launched from the vacuum region, typically has little trouble in tunneling. Figure 7.22(\(a\)) shows that for \(n_{\parallel}<n_{a}\) an additional, effectively much wider, cutoff region between \(N_{0}=N_{\rm T1}\) and \(N_{0}=N_{\rm T2}\) exists. This cutoff region presents a barrier for propagation to the plasma center and prevents accessibility. Figure 7.22(\(c\)) shows that for \(n_{\parallel}>n_{a}\) this barrier is absent, and the lower hybrid (slow) wave becomes accessible. Now we consider heating a circular cross section toroidally symmetric plasma (a tokamak). We use toroidal coordinates wherein \(r\), \(\theta\) are circular polar coordinates centered in the circular cross section of the tokamak plasma such that the distance of a point from the major axis of the torus is \(R=R_{0}+r\cos\theta\), where \(R_{0}\) is the distance from the major axis of the torus to the center of the plasma cross section (Figure 7.21(\(b\))).
We refer to \(\theta\) as the poloidal angle, and we denote by \(\phi\) the toroidal angle (i.e., the angle running the long way around the plasma torus). Let \(\varepsilon=a/R_{0}\), where \(r=a\) denotes the plasma boundary. As \(\varepsilon\to 0\) with \(a\) fixed, the straight cylinder limit is approached. However, for finite \(\varepsilon\) the plasma equilibrium depends on \(\theta\). Thus, it is no longer expected that \(m=rk_{\theta}\) is a constant of the motion, although the toroidal symmetry still guarantees that a constant of the motion analogous to \(k_{z}\) in the cylinder still exists; namely, \(n=Rk_{\phi}\) is a constant. The questions that now arise are what happens to the constant \(m\), and how is the accessibility condition for lower hybrid waves affected? For finite \(\varepsilon\) there may still be some other constant \(\tilde{m}=\tilde{m}(r,\,\theta,\,k_{r},\,m)\), which takes the place of \(m\). For small \(\varepsilon\), regions where \(\tilde{m}\) exists (KAM tori) occupy most of the phase space. As \(\varepsilon\) increases the regions occupied by chaotic trajectories increase, until almost all regions where KAM tori exist are gone. A ray in the region with no tori may eventually approach the plasma interior and be absorbed even if \(n_{\parallel}\) at launch does not satisfy the straight cylinder accessibility condition. Thus we need to know at what value of \(\varepsilon\) most of the tori are gone. Figure 7.23 shows numerical results (Bonoli and Ott, 1982) testing for the existence of tori by the surface of section method with \(\theta=0\pmod{2\pi}\) as the surface of section. Figure 7.23(\(a\)) shows that for \(\varepsilon=0.10\), most tori are not destroyed, and initially inaccessible rays (i.e., \(n_{\parallel}<n_{a}\) at launch) do not reach the plasma interior.
Figures 7.23(\(b\)) and (\(c\)) show a case for \(\varepsilon=0.15\), illustrating the coexistence of chaotic and integrable orbits including (Figure 7.23(\(c\))) higher order island structures. For \(\varepsilon=0.25\) all appreciable KAM surfaces are numerically found to be completely destroyed, and even waves launched with \(n_{\parallel}\) substantially below \(n_{a}\) are absorbed in the plasma interior after a few piercings of the surface of section.

Figure 7.23: Surface of section (\(\theta=0\)) plots for several different initial conditions. \(n_{a}=2.0\). (\(a\)) \(1.3\leq n_{\parallel}\leq 1.4\), \(a/R_{0}=0.10\); (\(b\)) \(1.25\leq n_{\parallel}\leq 1.4\), \(a/R_{0}=0.15\); (\(c\)) same as (\(b\)) but with a different initial condition (Bonoli and Ott, 1982).

### 7.4 Higher-dimensional systems

There is a very basic topological distinction to be made between the case of time independent Hamiltonians with \(N=2\) degrees of freedom, on the one hand, and \(N\geq 3\) degrees of freedom, on the other. (For a time periodic Hamiltonian the same distinction applies for the cases \(N=1\) and \(N\geq 2\).) In particular, since the energy is a constant of the motion for a time independent system, the motion is restricted to the \((2N-1)\) dimensional energy surface \(H({\bf p},\,{\bf q})=E\). Thus, we can regard the dynamics as taking place in an effectively \((2N-1)\) dimensional space. In general, in order for a closed surface to divide a \((2N-1)\) dimensional space into two distinct parts, one inside the closed surface and another outside, the closed surface must have a dimension one less than the dimension of the space; i.e., its dimension is \(2N-2\). Thus, KAM surfaces, which are \(N\) dimensional tori, only satisfy this condition for \(N=2\). In particular, for \(N=2\), the energy surface has dimension 3, and a two dimensional toroidal surface in a three dimensional space has an inside and an outside.
As an example of a toroidal 'surface' which does not divide the space in which it lies, consider a circle (which can be regarded as a 'one dimensional torus') in a three dimensional Cartesian space. For \(N>2\) the situation for KAM tori in the energy surface is similar (e.g., for \(N=3\) we have \(2N-1=5\) and \(2N-2=4>3=N\)). Now consider the situation where an integrable system is perturbed. In this case tori begin to break up and are replaced by chaotic orbits. For the case \(N=2\) these chaotic regions are necessarily sandwiched between surviving KAM tori (Figure 7.15). In particular, if such an orbit is outside (inside) a particular torus it remains outside (inside) that torus forever. Because of this sandwiching effect, the chaotic orbit of a slightly perturbed integrable two degree of freedom system must lie close to the orbit on a torus of the unperturbed integrable system for all time. Hence, two degree of freedom integrable systems are relatively stable to perturbations. The situation for \(N\geq 3\) is different because chaotic orbits are not enclosed by tori, and hence their motions are not restricted as in the case \(N=2\). In fact, it is natural to assume that all the chaos created by destroyed tori can form a single connected ergodic chaotic region which is dense in the phase space. Under this assumption a chaotic orbit can, in principle, come arbitrarily close to _any_ point in phase space, if we wait long enough. This phenomenon was first demonstrated for a particular example by Arnold (1964) and is known as 'Arnold diffusion.' For further discussion of Arnold diffusion and other aspects of chaos in Hamiltonian systems with more than two degrees of freedom we refer the reader to the exposition of these topics in the book by Lichtenberg and Lieberman (1983).
### 7.5 Strongly chaotic systems

We have seen in Sections 7.2 and 7.3 that when elliptic periodic orbits are present there is typically an exceedingly intricate mixture of chaotic regions and KAM tori: surrounding each elliptic orbit are KAM tori, between which are chaotic regions and other elliptic orbits, which are themselves similarly surrounded, and so on _ad infinitum_. It would seem that the situation would be much simpler if there were no elliptic periodic orbits (i.e., all were hyperbolic). In such a case one would expect that the whole phase space would be chaotic and no KAM tori would be present at all. As a model of such a situation, one can consider two dimensional area preserving maps which are hyperbolic. One example is the cat map, Eq. (4.29), discussed in Chapter 4. In that case we saw that the map took the picture of the cat and stretched it out (chaos) and reassembled it in the square (Figure 4.13). (Recall that the square is an unwrapping of a two dimensional toroidal surface.) More iterations mix the striations of the unfortunate cat more and more finely within the square. Given any small fixed region \(\mathcal{B}\) within the square, as we iterate more and more times, the fraction of the area of the region \(\mathcal{B}\) occupied by black striations that were originally part of the face of the cat approaches the fraction of area of the entire square that was originally occupied by the cat's face (the black region in Figure 4.13). We say that the cat map is _mixing_ on the unit square with essentially the same meaning that we use when we describe the mixing of cream as a cup of coffee is stirred.
More formally, an area preserving map \(\mathbf{M}\) of a compact region \(S\) is mixing on \(S\) if, given any two subsets \(\sigma\) and \(\sigma^{\prime}\) of \(S\) where \(\sigma\) and \(\sigma^{\prime}\) have positive Lebesgue measure (\(\mu_{\mathrm{L}}(\sigma)>0\), \(\mu_{\mathrm{L}}(\sigma^{\prime})>0\)), then \[\frac{\mu_{\mathrm{L}}(\sigma)}{\mu_{\mathrm{L}}(S)}=\lim_{m\to\infty}\frac{\mu_{\mathrm{L}}[\sigma^{\prime}\cap\mathbf{M}^{m}(\sigma)]}{\mu_{\mathrm{L}}(\sigma^{\prime})}. \tag{7.54}\] As another example of an area preserving mixing two dimensional hyperbolic map, we mention the generalized baker's map (Figure 3.4) in the area preserving case, \(\lambda_{a}=\alpha\), \(\lambda_{b}=\beta\). (It is easy to check that the Jacobian determinant, Eq. (4.28), is one in this case.) Another group of strongly chaotic systems can be constructed from certain classes of 'billiard' problems. A billiard is a two dimensional planar domain in which a point particle moves with constant velocity along straight line orbits between specular bounces ((angle of incidence) = (angle of reflection)) from the boundary of the domain, Figure 7.24(_a_). Figures 7.24(_b_)-(_g_) show several shapes of billiards. The circle (_b_) and the rectangle (_c_) are completely integrable. The two constants of the motion for the circle are the particle energy and the angular momentum, and the two frequencies of the associated action angle variables are the inverses of the time between bounces and of the time to make a complete rotation around the center of the circle. For the rectangle the two constants are the vertical and horizontal kinetic energies, and the two frequencies are the inverses of twice the time between successive bounces off the vertical walls and twice the time between successive bounces off the horizontal walls.
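Returning to the cat map, the limit in (7.54) can be observed numerically (a sketch of ours; the sets \(\sigma=[0,\,0.1]^{2}\) and \(\sigma^{\prime}=\{x<1/2\}\) are arbitrary choices): the fraction of an initially tiny set that lies in \(\sigma^{\prime}\) after \(m\) iterates approaches \(\mu_{\mathrm{L}}(\sigma^{\prime})/\mu_{\mathrm{L}}(S)=1/2\).

```python
import random

def cat(x, y):
    # Arnold cat map on the unit square (Eq. (4.29)):
    # hyperbolic and area preserving
    return (x + y) % 1.0, (x + 2.0 * y) % 1.0

def mixed_fraction(m, n_pts=50000, seed=2):
    """Fraction of points started in sigma = [0, 0.1]^2 that lie in
    sigma' = {x < 0.5} after m iterates of the cat map."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_pts):
        x, y = 0.1 * rng.random(), 0.1 * rng.random()
        for _ in range(m):
            x, y = cat(x, y)
        hits += (x < 0.5)
    return hits / n_pts

print(mixed_fraction(0))    # 1.0: sigma starts entirely inside sigma'
print(mixed_fraction(25))   # approaches mu_L(sigma') = 0.5: mixing
```

The same experiment applied to the rigid rotation discussed below never settles to a limit, which is one way to see that ergodicity alone does not imply mixing.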
The billiard shapes shown in Figures 7.24(_d_)-(_g_) are strongly chaotic (Bunimovich, 1979; Sinai, 1970) in the same sense as the cat map and the area preserving generalized baker's map: almost every initial condition yields a chaotic orbit which eventually comes arbitrarily close to every point in the phase space. (In Figures 7.24(_d_)-(_g_) the curved line segments are arcs of circles.) In particular, for these chaotic billiard problems the orbit generates an ergodic and mixing dynamics on the energy surface. Another classical example of a strongly chaotic system is the free motion of a point particle along the geodesics of a closed surface of negative Gaussian curvature (the two principal curvature vectors at each point on the surface point to opposite sides of the surface, Figure 7.25) (Hadamard, 1898). Two geodesics on such a surface that are initially close and parallel separate exponentially as they are followed forward in time. Note, however, that a closed surface of negative curvature cannot be embedded in a three dimensional Cartesian space (four dimensions are required), so this example is somewhat nonphysical, although it has proven to be very fruitful for mathematical study.

### 7.6 The succession of increasingly random systems

In the previous section we have discussed 'strongly chaotic' systems, by which we meant chaotic systems that were mixing throughout the phase space. One often encounters in the literature various terms used to describe the degree of randomness of a Hamiltonian system. In particular, one can make the following list in order of 'increasing randomness':

ergodic systems,
mixing systems,
\(K\) systems,
Bernoulli systems.

We now discuss and contrast these terms, giving some examples of each. Ergodicity is defined in Section 2.3.3 for maps. For an ergodic invariant measure of a dynamical system, phase space averages are the same as time averages.
That is, for the case of a continuous time system, ergodicity implies \[\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}f(\tilde{\bf x}(t))\,\mbox{d}t=\langle f(\tilde{\bf x})\rangle, \tag{7.55}\] where \(f(\tilde{\bf x})\) is any smooth function of the phase space variable \(\tilde{\bf x}\), \(\tilde{\bf x}(t)\) represents a trajectory in phase space, \(\langle f(\tilde{\bf x})\rangle\) represents the average of \(f(\tilde{\bf x})\) over the phase space weighted by the invariant measure under consideration, and (7.55) holds for almost every initial condition with respect to the invariant measure. As an example, consider the standard map, Eqs. (7.15). In the case \(K=0\), we have (Eq. (7.40)) \(\theta_{n+1}=(\theta_{n}+p_{n})\) modulo \(2\pi\), \(p_{n+1}=p_{n}\). Thus the lines \(p=\mbox{const.}\) are invariant. Any region \(p_{a}>p>p_{b}\) is also invariant. Orbits are not ergodic in \(p_{a}>p>p_{b}\). Orbits are, however, ergodic on the lines \(p=\mbox{const.}\), provided that \(p/2\pi\) is irrational. Ergodicity is the weakest form of randomness and does not necessarily imply chaos. This is clear since the example, \(\theta_{n+1}=(\theta_{n}+p)\) modulo \(2\pi\), \(p/2\pi\) irrational, is ergodic on the line \(p=\mbox{const.}\) but is nonchaotic (its Lyapunov exponent is zero). In the case of \(K>0\) there are connected regions of positive Lebesgue measure in the \((\theta,\ p)\) space over which orbits wander chaotically (see Figure 7.13). In this case, for each such region, ergodicity applies (where the relevant measure of a set \(A\) is just the fraction of the area (Lebesgue measure) of the ergodic chaotic region in \(A\)). Mixing is defined by (7.54). An example of a nonmixing system is the map, \(\theta_{n+1}=(\theta_{n}+p)\) modulo \(2\pi\), \(p/2\pi\) irrational, which is just a rigid rotation of the circle by the angle increment \(p\).
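The rigid rotation example can be used to see (7.55) at work (our sketch; the observable \(\cos^{2}\theta\) and the golden mean rotation number are arbitrary choices): the time average along a single orbit converges to the space average over the circle, even though the dynamics is nonchaotic.

```python
import math

def time_average(f, p, theta0=0.3, n=200000):
    """Time average of f along the orbit of the circle rotation
    theta -> (theta + p) mod 2*pi."""
    theta, total = theta0, 0.0
    for _ in range(n):
        total += f(theta)
        theta = (theta + p) % (2.0 * math.pi)
    return total / n

# p/(2*pi) equal to the golden mean, hence irrational
p = 2.0 * math.pi * (math.sqrt(5.0) - 1.0) / 2.0
t_avg = time_average(lambda th: math.cos(th) ** 2, p)
space_avg = 0.5   # (1/2pi) * integral of cos^2(theta) over [0, 2*pi)
print(t_avg, space_avg)
```

If instead \(p/2\pi\) is rational the orbit is periodic, visits only finitely many points, and the time average generally differs from the space average, in accord with the requirement of irrationality stated above.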
An example, which is chaotic but not mixing, occurs when we have a \(\tilde{p}/\tilde{q}\) island chain (\(\tilde{q}>1\)) for the standard map. In this case, there are typically chaotic sets consisting of component areas (within the \(\tilde{p}/\tilde{q}\) island chain), each of which map successively one to another, returning to themselves after \(\tilde{q}\) iterates. To see that these chaotic sets are not mixing according to (7.54), let \(\sigma^{\prime}\) be one of the \(\tilde{q}\) components, and \(\sigma\) be another. Then, as \(m\) (time) increases, the quantity \(\mu_{\rm L}[\sigma^{\prime}\cap{\bf M}^{m}(\sigma)]/\mu_{\rm L}(\sigma^{\prime})\) is equal to one once every \(\tilde{q}\) iterates and is equal to zero for the other iterates. Hence, the limit in (7.54) does not exist. A system is said to be a \(K\) system if every partition (see Section 4.5) has positive metric entropy. Basically, in terms of our past terminology, this is the same as saying that the system is chaotic (possesses a positive Lyapunov exponent for typical initial conditions). A \(C\) system is one which is chaotic and is hyperbolic at every point in the phase space (not just on the invariant set). Examples of \(C\) systems are the cat map, geodesic motion on a compact surface of negative curvature, and the billiard of Figure 7.24(\(e\)). An example which is a \(K\) system, but not a \(C\) system, is the stadium billiard, Figure 7.24(\(f\)). A Bernoulli system is a system which can be represented as a symbolic dynamics consisting of a full shift on a finite number of symbols (see Section 4.1). An example of such a system, for the case of an area preserving map, is the generalized baker's map with \(\lambda_{a}=\alpha\) and \(\lambda_{b}=\beta\).

## Problems

1.
(_a_) Show that the change of variables specified by \[{\bf q}=\frac{\partial\hat{S}({\bf p},\,\overline{{\bf q}},\,t)}{\partial{\bf p}},\quad\overline{{\bf p}}=\frac{\partial\hat{S}({\bf p},\,\overline{{\bf q}},\,t)}{\partial\overline{{\bf q}}}\] is symplectic. (_b_) Find a function \(\hat{S}(p_{n},\,\theta_{n+1})\) in terms of which the map (7.15) is given by \(\theta_{n}=\partial\hat{S}/\partial p_{n}\), \(p_{n+1}=\partial\hat{S}/\partial\theta_{n+1}\). 2. Consider the following four-dimensional map (Ding _et al._, 1990a), \[x_{n+1} = 2\alpha x_{n}-p_{x,n}-\rho x_{n}^{2}+y_{n}^{2},\] \[p_{x,n+1} = x_{n},\] \[y_{n+1} = 2\beta y_{n}-p_{y,n}+2x_{n}y_{n},\] \[p_{y,n+1} = y_{n}.\] Is it volume preserving? Using Eq. (7.13) test to see whether the map is symplectic. 3. Consider a magnetic field in a plasma given by \[{\bf B}(x,\,y,\,z)=B_{0}{\bf z}_{0}+\nabla\times{\bf A},\] where \(B_{0}\) is a constant and the vector potential \({\bf A}\) is purely in the \(z\)-direction, \({\bf A}=A(x,\,y,\,z){\bf z}_{0}\). Denote the path followed by a field line as \({\bf r}(z)=x(z){\bf x}_{0}+y(z){\bf y}_{0}+z{\bf z}_{0}\). Show that the equations for \(x(z)\) and \(y(z)\) are in the form of Hamilton's equations, where \(z\) plays the role of time and \(A(x,\,y,\,z)/B_{0}\) plays the role of the Hamiltonian. 4. Consider the motion of a charged particle in an electrostatic wave field in which the electric field is given by \({\bf E}(x,\,t)=E_{x}(x,\,t){\bf x}_{0}\) with \[E_{x}(x,\,t)=\sum_{\kappa,\omega}E_{\kappa,\omega}\exp({\rm i}\kappa x-{\rm i}\omega t).\] (This situation arises in plasma physics where the wave field \(E_{x}\) is due to collective oscillations of the plasma.)
In the special case where there is only one wavenumber, \(\kappa=\pm k_{0}\), the frequencies \(\omega\) form a discrete set, \(\omega=2\pi n/T\) (where \(T\) is the fundamental period and \(n\) is an integer; \(n=\ldots,\,-2,\,-1,\,0,\,1,\,2,\,\ldots\)), and the amplitudes \(E_{\kappa,\omega}\) are real and independent of \(\omega\) and \(\kappa\), \(E_{\kappa,\omega}=E_{0}/2\), the above expression for \(E_{x}\) reduces to \[E_{x}(x,\,t)=E_{0}\cos(k_{0}x)\sum_{n}\,\exp(2\pi{\rm i}nt/T)=E_{0}T\cos(k_{0}x)\sum_{m}\,\delta(t-mT).\] Show that the motion of a charged particle is described by a map which is of the same form as the standard map Eq. (7.15). 5. Find the fixed points of the standard map (7.15) that lie in the strip \(\pi>p>-\pi\). Determine their stability as a function of \(K\). In what range of \(K\) is there an elliptic fixed point (assume \(K\geq 0\))? 6. Write a computer program to iterate the standard map, Eqs. (7.15). (_a_) Plot \(p\) modulo \(2\pi\) versus \(\theta\) for orbits with \(K=1\) and the following five initial conditions, \((\theta_{0},\ p_{0})=(\pi,\ \pi/5)\), \((\pi,\ 4\pi/5)\), \((\pi,\ 6\pi/5)\), \((\pi,\ 8\pi/5)\), \((\pi,\ 2\pi)\). (_b_) For \(K=21\) plot versus iterate number the average value of \(p^{2}\) averaged over 100 different initial conditions, \((\theta_{0},\ p_{0})=(2\pi n/11,\ 2\pi m/11)\) for \(n=1,\ 2,\ \dots,\ 10\) and \(m=1,\ 2,\ \dots,\ 10\), and hence estimate the diffusion coefficient \(D\). How well does your numerical result agree with the quasilinear value Eq. (7.42)? 7. The 'sawtooth map' is obtained from the standard map, Eqs. (7.15), by replacing the function \(\sin\theta_{n+1}\) in (7.15b) by the sawtooth function, \(\mbox{saw}\,\theta_{n+1}\), where \[\mbox{saw}\,\theta\equiv\cases{\theta,&for $0\leq\theta<\pi$,\cr\theta-2\pi,&for $\pi<\theta\leq 2\pi$,}\] and \(\mbox{saw}\,\theta\equiv\mbox{saw}\,(\theta+2\pi)\).
Show that the sawtooth map is an example of a \(C\)-system if \(K>0\) or \(K<-2\) and calculate the Lyapunov exponents. 8. A two-degree-of-freedom system has the following Hamiltonian in action-angle variables, \(H(J_{1},\,J_{2},\,\theta_{1},\,\theta_{2})=H_{0}(J_{1},\,J_{2})+\epsilon V(\theta_{1},\,\theta_{2})\), where \[H_{0}(J_{1},\,J_{2})=\Lambda J_{1}^{3/2}+\Omega J_{2},\ V(\theta_{1},\,\theta_{2})=\cos\theta_{1}\sum_{n=-\infty}^{\infty}\ V_{n}\exp(\mbox{i}n\theta_{2}),\] \(\Lambda\) and \(\Omega\) are constants, and \(\epsilon\) is small. (_a_) Obtain an expression for the trajectory \(J_{1}(t)\) to first order in \(\epsilon\). (_b_) Which tori in the phase space are destroyed by the perturbation? (_c_) What does the KAM theorem tell us about the phase space for small \(\epsilon\)? ## Notes 1. Additional useful material on chaos in Hamiltonian systems can be found in the texts by Sagdeev _et al._ (1990), by Ozorio de Almeida (1988), by Lichtenberg and Lieberman (1983) and by Arnold and Avez (1968), in the review articles by Berry (1978), by Chirikov (1979) and by Helleman (1980), and in the reprint selection edited by MacKay and Meiss (1987). 2. See books which cover the basic formulation and analysis of Hamiltonian mechanics, such as Ozorio de Almeida (1988) and Arnold (1978, 1982). 3. Our review in Section 7.1 is meant to refresh the memory, rather than to be a self-contained first-principles exposition. Thus, the reader who wishes more detail or clarification should refer to one of the texts cited above. 4. If only \(k\) independent relations of the form \({\bf m}\cdot\mathbf{\omega}=0\) hold with \(1<k<N-1\), then orbits on the \(N\)-torus are \((N-k)\)-frequency quasiperiodic and do not fill the \(N\)-torus. Rather, individual orbits fill \((N-k)\)-tori which lie in the \(N\)-torus. 5.
In the area preserving case, the areas of lobes bounded by stable and unstable manifold segments must be the same if these lobes map to each other under iteration of the map. For example, consider one of the finger shaped areas bounded by stable and unstable manifold segments in Figure 4.10(_c_). This area must be the same as the areas of the regions shown in the figure to which it successively maps. 6. Long-time power law correlations of orbits have been observed numerically in two-dimensional maps (Karney, 1983; Chirikov and Shepelyanski, 1984) and in higher-dimensional systems (Ding _et al._, 1991a). This comes about due to the 'stickiness' of KAM surfaces: an orbit in a chaotic component which comes near a KAM surface bounding that component tends to spend a longer time there, and this time is typically longer the nearer the orbit comes. This behavior has been examined theoretically using self-similar random walk models (Hanson _et al._, 1985; Meiss and Ott, 1985). This type of behavior has also been shown to result in anomalous diffusion wherein the average of the square of a map variable increases as \(n^{\alpha}\) with \(\alpha>1\) (in contrast to ordinary diffusive behavior where \(\alpha=1\) as in Eq. (7.44)). See Geisel _et al._ (1990), Zaslavski _et al._ (1989) and Ishizaki _et al._ (1991). 7. Note that near the peaks of the graph in Figure 7.17 it appears that the numerically computed \(D\) values can be much larger than the analytical estimate. In fact, it was subsequently found that \(D\) diverges to infinity in these regions, and the actual behavior of \(\langle p^{2}/2\rangle\) is anomalous in that \(\langle p^{2}/2\rangle\sim n^{\alpha}\) with \(\alpha>1\). This behavior is due to the presence of 'accelerator modes' in the range of \(K\)-values near the peaks of the graph.
Accelerator modes are small KAM island chains such that, when an orbit originates in an island, it returns periodically to that island but is displaced in \(p\) by an integer multiple of \(2\pi\). Hence, the orbit experiences a free acceleration, \(p\sim n\). Orbits in the large chaotic region can stick close to the outer bounding KAM surfaces of these accelerator islands, thus leading to the above mentioned anomalous behavior (Ishizaki _et al._, 1991). 8. We note, however, that, if we consider the \(\tilde{q}\) times iterated map, then there may be mixing regions in \(\sigma\) for the map \({\bf M}^{\tilde{q}}\), since \({\bf M}^{\tilde{q}}(\sigma)=\sigma\). ## Chapter 8 Chaotic transitions A central problem in nonlinear dynamics is that of discovering how the qualitative dynamical properties of orbits change and evolve as a dynamical system is continuously changed. More specifically, consider a dynamical system which depends on a single scalar parameter. We ask, what happens to the orbits of the system if we examine them at different values of the parameter? We have already met this question and substantially answered it for the case of the logistic map, \(x_{n+1}=rx_{n}(1-x_{n})\). In particular, we found in Chapter 2 that as the parameter \(r\) is increased there is a period doubling cascade, terminating in an accumulation of an infinite number of period doublings, followed by a parameter domain in which chaos and periodic 'windows' are finely intermixed. Another example of a context in which we have addressed this question is our discussion in Chapter 6 of Arnold tongues and the transition from quasiperiodicity to chaos. Still another aspect of this question is the types of generic bifurcations of periodic orbits which can occur as a parameter is varied. In this regard recall our discussions of the generic bifurcations of periodic orbits of one dimensional maps (Section 2.3) and of the Hopf bifurcation (Chapter 6).
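The period doubling cascade of the logistic map recalled above is easy to reproduce numerically. The following sketch (the function names, the initial condition \(x_{0}=0.4\), and the sampled \(r\) values are our own choices) detects the period of the attracting orbit at a few parameter values below the accumulation point:

```python
def logistic(x, r):
    """One iterate of the logistic map x_{n+1} = r x_n (1 - x_n)."""
    return r * x * (1.0 - x)

def attractor_period(r, transient=2000, tol=1e-8):
    """Smallest period (among 1, 2, 4, 8) of the attracting orbit
    reached from x0 = 0.4, or None if none of these periods fits."""
    x = 0.4
    for _ in range(transient):   # let the orbit settle onto the attractor
        x = logistic(x, r)
    for period in (1, 2, 4, 8):
        y = x
        for _ in range(period):
            y = logistic(y, r)
        if abs(y - x) < tol:
            return period
    return None

for r in (2.9, 3.2, 3.5):
    print(r, attractor_period(r))   # periods 1, 2 and 4, respectively
```

Each successive doubling occupies a shorter parameter interval, which is the geometric convergence characterized by \(\hat{\delta}\) in the next section.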
In this chapter we shall be interested in transitions of the system behavior with variation of a parameter such that the transitions involve chaotic orbits. Some changes of this type are the following: (1) as the system parameter is changed, a chaotic attractor appears; (2) as the system parameter is changed, a chaotic transient is created from a situation where there were only nonchaotic orbits; (3) as the system parameter is changed, a formerly nonfractal basin boundary becomes fractal; (4) as the system parameter is changed, a scattering problem changes from being nonchaotic to chaotic; (5) as the system parameter is changed, a chaotic set experiences a sudden jump in its size in phase space (the set may or may not be an attractor, e.g., it could be a fractal basin boundary). Of the above types of chaotic transitions, the one which initially received the most interest was the first, namely, the question of characterizing the various 'scenarios' by which chaotic attractors can appear with variation of a system parameter. One such scenario is the period doubling cascade to chaos, which is so graphically illustrated by the logistic map (see Chapter 2). Furthermore, we have seen in Chapter 2 that the period doubling cascade route to a chaotic attractor occurs in many other dynamical systems (Section 2.4). ### 8.1 The period doubling cascade route to chaotic attractors Perhaps the most notable aspect of the period doubling cascade is the existence of the 'universal' numbers \(\hat{\delta}\) and \(\hat{\alpha}\) (cf. Chapter 2). These numbers apply not only for the logistic map, but typically give a _quantitative_ characterization near the accumulation of period doublings for any dissipative system undergoing a period doubling cascade _independent of the details of the system_. A similar universality applies in statistical mechanics in the study of critical phenomena near phase transitions.
In that case 'critical exponents' governing the behavior near the phase transition point have been derived which are universal in the same sense; _viz._, their numerical values are precisely the same for a large class of physical systems. The general mathematical technique which has allowed the calculation of critical exponents in statistical mechanics has been used by Feigenbaum (1978, 1980a) to derive the values of \(\hat{\delta}\) and \(\hat{\alpha}\) for period doubling cascades of dissipative systems. This general technique is called the _renormalization group_ and has also been used for other problems in nonlinear dynamics. These include period doubling cascades in conservative systems (Section 7.3.2), the study of the scaling properties of the last surviving KAM surface (Section 7.3.2), the destruction of two frequency quasiperiodic attractors (Section 6.2), and the transition from Hamiltonian to dissipative systems (Chen _et al._, 1987). In this section we supplement our previous discussion in Chapter 2 of period doubling cascades by summarizing part of Feigenbaum's use of the renormalization group technique for studying period doubling cascades. Since we only wish to indicate briefly the spirit of the renormalization group technique, we shall limit our discussion to the derivation of the universal constant \(\hat{\alpha}\). (The reader is referred to Feigenbaum (1980a) and Schuster (1988, Section 3.2) for nice expositions of the further treatment giving the constant \(\hat{\delta}\).) To focus the discussion we consider the logistic map, \[M(x,\ r)=rx(1-x).\] Let \(\tilde{r}_{n}\) denote the value of \(r\) at which the periodic orbit of period \(2^{n}\) is superstable; i.e., the period \(2^{n}\) orbit passes through the critical point where \(M^{\prime}(x,\ r)=0\) (namely, \(x=\frac{1}{2}\)). Figures 8.1-8.3 show \(M(x,\ r)\) and iterates of \(M\) for \(r=\tilde{r}_{0}\), \(\tilde{r}_{1}\) and \(\tilde{r}_{2}\).
In Figure 8.1 we show \(M\) at \(r=\tilde{r}_{0}\). Figure 8.2(\(a\)) shows \(M(x,\ \tilde{r}_{1})\) and the superstable period two orbit. Figure 8.2(\(b\)) shows \(M^{2}(x,\ \tilde{r}_{1})\). Note that the elements of the superstable period two orbit of Figure 8.2(\(b\)) fall on critical points of \(M^{2}\) (i.e., points where \((M^{2})^{\prime}=0\)). Also note the square box in Figure 8.2(\(b\)) whose upper right hand corner is located at the (now unstable) period one fixed point of \(M(x,\ \tilde{r}_{1})\). The important point is that, if we reflect this square about the point \((\frac{1}{2},\frac{1}{2})\), then the appearance of the resulting graph in the square is nearly the same as that for Figure 8.1, although on a smaller scale. Figure 8.3(\(a\)) shows \(M^{2}(x,\ \tilde{r}_{2})\). Now the period two orbit shown in Figure 8.2 is unstable. In its place the attractor is now the period four orbit given by the four critical points on the dashed line in Figure 8.3(\(b\)). Note that we have again blocked out a small square. This square has at its center the element of the period four orbit which falls on \((\frac{1}{2},\frac{1}{2})\), the critical point of \(M^{4}(x,\ \tilde{r}_{2})\). The lower left hand corner of this square is the element of the unstable period two orbit which was at the critical point (\(\frac{1}{2}\), \(\frac{1}{2}\)) when \(r\) was \(\tilde{r}_{1}\). Again note that the graph of \(M^{4}(x,\,\tilde{r}_{2})\) in the small square is similar to the reflection about the point (\(\frac{1}{2}\), \(\frac{1}{2}\)) of the graph in the blocked out square of Figure 8.2(\(b\)) and is similar to the entire graph of \(M(x,\ \tilde{r}_{0})\) shown in Figure 8.1. The situation with respect to the small squares shown in Figures 8.2(\(b\)) and 8.3(\(b\)) repeats as we consider higher period \(2^{n}\) superstable cycles.
Furthermore, for large \(n\), the edge length of the square reduces geometrically at some limiting factor, and this factor is the universal constant \(\hat{\alpha}\) defined in Chapter 2. In fact, not only the size of the square scales, but also the functional form of \(M^{2^{n}}(x,\,\tilde{r}_{n})\) within the square. Thus, shifting the coordinate \(x\) so that the critical point of \(M\) now falls on \(x=0\) (rather than \(x=\frac{1}{2}\)), we have that near \(x=0\) \[M^{2^{n}}(x,\,\tilde{r}_{n})\simeq(-\hat{\alpha})M^{2^{n+1}}(-x/\hat{\alpha},\,\tilde{r}_{n+1}) \tag{8.1}\] for large \(n\). The minus sign accounts for the reflection of the square about the point \((\frac{1}{2},\,\frac{1}{2})\) on each increase of \(n\) by 1. This implies that the limit \[\lim_{n\to\infty}(-\hat{\alpha})^{n}M^{2^{n}}(x/(-\hat{\alpha})^{n},\,\tilde{r}_{n})=g_{0}(x) \tag{8.2}\] exists and gives the large \(n\) behavior of \(M^{2^{n}}(x,\,\tilde{r}_{n})\) near the critical point. In analogy with (8.2), we define \[g_{m}(x)\equiv\lim_{n\to\infty}(-\hat{\alpha})^{n}M^{2^{n}}(x/(-\hat{\alpha})^{n},\,\tilde{r}_{n+m}). \tag{8.3}\] The functions \(g_{m}(x)\) are related by the doubling 'transformation' \(\hat{T}\), \[g_{m-1}(x)=(-\hat{\alpha})g_{m}[g_{m}(-x/\hat{\alpha})]\equiv\hat{T}[g_{m}(x)], \tag{8.4}\] which can be verified from (8.3). This transformation has a fixed point in function space which we denote \(g(x)\), \[g(x)=\hat{T}[g(x)]. \tag{8.5}\] Here \(g(x)\) is just the \(m\to\infty\) limit of \(g_{m}\), \[g(x)=\lim_{m\to\infty}g_{m}(x), \tag{8.6}\] and (8.5) follows by letting \(m\to\infty\) in (8.4). The objective is now to solve (8.5) for the function \(g(x)\) and the constant \(\hat{\alpha}\). (Equation (8.5), although nonlinear, may be regarded as similar to an eigenvalue equation, with \(g(x)\) playing the role of the eigenfunction, and \(\hat{\alpha}\) playing the role of the eigenvalue.)
If \(g(x)\) is a solution of (8.5), then, by the definition of \(\hat{T}\) given in (8.4), for any constant \(k\), the function \(k^{-1}g(kx)\) is also a solution. Thus, \(g(x)\) is arbitrary to within changes of the scale in \(x\). To remove this ambiguity we arbitrarily specify that \[g(0)=1. \tag{8.7}\] Since the original \(M(x,\ r)\) was quadratic about its maximum we have that \(g(x)\) must also be quadratic. Indeed it follows that it is even. One way of solving the fixed point equation, Eq. (8.5), is to expand \(g(x)\) and \(\hat{T}[g(x)]\) in a Taylor series in powers of \(x^{2n}\) and equate the coefficients of \(x^{2n}\) on each side of the equation \(g=\hat{T}[g]\). As an illustration, we follow this procedure retaining only the first two terms, \[g(x)=1-ax^{2}+\mathrm{O}(x^{4}). \tag{8.8}\] Substituting in \(\hat{T}[g]\equiv-\hat{\alpha}g[g(-x/\hat{\alpha})]\), we obtain for (8.5), \[1-ax^{2}\simeq-\hat{\alpha}(1-a)-(2a^{2}/\hat{\alpha})x^{2}.\] Thus, at this (rather crude) level of approximation, we have \(-\hat{\alpha}(1-a)=1\) and \(a=2a^{2}/\hat{\alpha}\). Solution of these equations yields \(\hat{\alpha}=1+\sqrt{3}\cong 2.73\), a rough approximation to the precise value \(\hat{\alpha}=2.5029\ldots\) obtained by retaining more terms in the expansion. The only property of the original map that we made use of was that it was quadratic about its maximum. (Note that that property was used in the solution of (8.5) and not in its derivation.) Thus, the value of \(\hat{\alpha}\) obtained by Feigenbaum from (8.5) must apply for period doubling cascades of all one dimensional maps which have quadratic maxima. That is, the value of \(\hat{\alpha}\) obtained is universal within this class of dynamical systems. In fact, subsequent work has extended this universality class to higher dimensional systems. Renormalization group theory has also been utilized to calculate the effect of noise on the period doubling cascade (Shraiman _et al._, 1981; Crutchfield _et al._, 1981; Feigenbaum and Hasslacher, 1982).
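The crude two-term calculation above can be verified in a few lines. The code below simply confirms that \(a=(1+\sqrt{3})/2\), \(\hat{\alpha}=2a\) solve the truncated equations, and compares with Feigenbaum's constant \(\hat{\alpha}=2.502907875\ldots\), which the truncation overestimates by roughly 9%:

```python
import math

# Truncated fixed-point conditions from Eq. (8.5) with g(x) = 1 - a*x**2:
#   -alpha_hat*(1 - a) = 1   and   a = 2*a**2/alpha_hat.
# The second gives alpha_hat = 2*a; substituting into the first gives
# 2*a**2 - 2*a - 1 = 0, whose positive root is a = (1 + sqrt(3))/2.
a = (1.0 + math.sqrt(3.0)) / 2.0
alpha_hat = 2.0 * a                    # = 1 + sqrt(3) = 2.732...

# Verify that both truncated equations are satisfied.
assert abs(-alpha_hat * (1.0 - a) - 1.0) < 1e-12
assert abs(a - 2.0 * a**2 / alpha_hat) < 1e-12

# Compare with the precise value of Feigenbaum's constant.
alpha_exact = 2.502907875
rel_error = (alpha_hat - alpha_exact) / alpha_exact
print(alpha_hat, rel_error)   # about 2.732, roughly 9% too large
```

Retaining further even powers of \(x\) in (8.8) and matching more coefficients drives the estimate toward the exact value.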
A principal result is that noise terminates the period doubling cascade after a finite number of period doublings, past which chaos ensues. If \(\sigma\) denotes the noise level, and \(n(\sigma)\) denotes the number of doublings in the noisy cascade, then \(n(\sigma)\) scales with \(\sigma\) according to \[\sigma\sim\hat{\mu}^{-n(\sigma)},\] where \(\hat{\mu}\) is a universal constant. Another result of the period doubling universality is that the frequency power spectra of orbits near the accumulation of period doublings have universal features (Feigenbaum, 1980b; Nauenberg and Rudnick, 1981). ### 8.2 The intermittency transition to a chaotic attractor In the intermittency transition to a chaotic attractor (Pomeau and Manneville, 1980) the phenomenology is as follows. For values of the parameter (call it \(p\)) less than a critical transition value \(p_{\rm T}\) the attractor is a periodic orbit. For \(p\) slightly larger than \(p_{\rm T}\) there are long stretches of time ('laminar phases') during which the orbit appears to be periodic and closely resembles the orbit for \(p<p_{\rm T}\), but this regular (approximately periodic) behavior is intermittently interrupted by a finite duration 'burst' in which the orbit behaves in a decidedly different manner. These bursts occur at seemingly random times, but one can define a mean time \(\bar{T}(p)\) between the bursts. As \(p\) approaches \(p_{\rm T}\) from above, the mean time between bursts approaches infinity, \[\lim_{p\to p_{\rm T}^{+}}\bar{T}(p)=+\infty,\] and the attractor orbit thus becomes always 'laminar' so that the motion is periodic. As \(p\) increases substantially above \(p_{\rm T}\), the bursts become so frequent that the regular oscillation (laminar phase) can no longer be distinguished.
The above phenomenology is nicely illustrated in Figure 8.4 by the numerical solution, from the paper of Pomeau and Manneville (1980), of the Lorenz system for four successively larger values of the parameter \(\tilde{r}\) in Eq. (2.30b). The smallest value, \(\tilde{r}=166\), labeled \(a\) in the figure, corresponds to stable periodic motion. As \(\tilde{r}\) is increased through \(\tilde{r}_{\mathrm{T}}\cong 166.06\), the plots labeled \(b1\), \(b2\) and \(b3\) are obtained. We see that the bursts become more frequent as \(\tilde{r}\) increases. In the intermittency transition one has a simple periodic orbit which is replaced by chaos as \(p\) passes through \(p_{\mathrm{T}}\). This necessarily implies that the stable attracting periodic orbit either becomes unstable or is destroyed as \(p\) increases through \(p_{\mathrm{T}}\). Furthermore, when this happens, the orbit is not replaced by another nearby stable periodic orbit, as occurs, for example, in the forward period doubling bifurcation (Figure 2.15); this is implied by the fact that during the bursts the orbit goes far from the vicinity of the original periodic orbit. Three kinds of generic bifurcations which meet these requirements are: (1) the saddle node bifurcation, in which stable and unstable orbits coalesce and obliterate each other, as illustrated in Figure 8.5 (in the context of one dimensional maps this is also called a tangent bifurcation; see the diagram labeled backward tangent bifurcation in Figure 2.15); (2) the inverse period doubling bifurcation, in which an unstable periodic orbit collapses onto a stable periodic orbit of one half its period and the two are replaced by an unstable periodic orbit of the lower period (Figure 2.15); and (3) the subcritical Hopf bifurcation of a periodic orbit.
In the Hopf bifurcation of a periodic orbit the orbit goes unstable by having a complex conjugate pair of eigenvalues of its linearized surface of section map pass through the unit circle; in the saddle node bifurcation, a stable eigenvalue (inside the unit circle) and an unstable eigenvalue (from outside the unit circle) come together and coalesce at the point \(+1\); in the inverse period doubling bifurcation an eigenvalue goes from inside to outside the unit circle by passing through \(-1\). The word subcritical applied to the Hopf bifurcation of a periodic orbit signifies that, as the parameter is increased, an unstable two frequency quasiperiodic orbit (a closed curve in the surface of section) collapses on to the stable periodic orbit, and the latter is rendered unstable as \(p\) passes through the bifurcation point. Pomeau and Manneville distinguish three types of intermittency transitions corresponding to the three types of generic bifurcations mentioned above: Type I: saddle node, Type II: Hopf, Type III: inverse period doubling. These are illustrated in Figure 8.6 where we schematically show the orbits of the surface of section map as a function of \(p\), using heavy solid lines for stable orbits and dashed lines for unstable orbits. Each of the three types of intermittency transitions displays distinct characteristic behavior near \(p_{\rm T}\). For example, the scaling of the average time between bursts is given by \[\bar{T}(p)\sim\left\{\begin{array}{ll}(p-p_{\rm T})^{-1/2}&\mbox{for Type I,}\\ (p-p_{\rm T})^{-1}&\mbox{for Type II,}\\ (p-p_{\rm T})^{-1}&\mbox{for Type III.}\end{array}\right. \tag{8.9}\] We note that, while the scaling behavior of the average interburst time \(\bar{T}(p)\) is the same for Type II and Type III, the characteristic probability distributions of the interburst times are quite different in the two cases. As in the case of the period doubling route to chaos, intermittency has been studied extensively in experiments.
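The Type I scaling \(\bar{T}(p)\sim(p-p_{\rm T})^{-1/2}\) in Eq. (8.9) can be checked by iterating the local quadratic model of the tangent bifurcation, \(x_{n+1}=x_{n}+\varepsilon+x_{n}^{2}\), through its narrow tunnel. In this sketch the entry and exit points \(\pm 0.3\) are arbitrary stand-ins for the reinjection dynamics:

```python
def tunnel_time(eps, x_start=-0.3, x_exit=0.3, max_iter=10**6):
    """Number of iterates of x_{n+1} = x_n + eps + x_n**2 needed to
    traverse the narrow 'tunnel' from x_start to x_exit."""
    x, n = x_start, 0
    while x < x_exit and n < max_iter:
        x = x + eps + x * x
        n += 1
    return n

# The continuum estimate gives a traversal time ~ pi / sqrt(eps), so
# reducing eps by a factor of 4 should roughly double the laminar time.
t_small = tunnel_time(1.0e-4)
t_large = tunnel_time(4.0e-4)
print(t_small, t_large, t_small / t_large)   # ratio close to 2
```

The iteration count is dominated by the slow passage near \(x=0\), where the step per iterate is of order \(\varepsilon\), which is why the laminar phase is so long close to the transition.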
An example is the experiment of Berge _et al._ (1980) on Rayleigh Benard convection, data from which are shown in Figure 8.7. In order to clarify the nature of the intermittency transition to chaos, we now give a derivation of Eq. (8.9) for the case of the Type I transition. To simplify matters we assume that the dynamics is well described by a one dimensional map just before (Figure 8.8(_a_)) and just after (Figure 8.8(_b_)) the saddle node (or 'tangent') bifurcation to Type I intermittency. We see from Figure 8.8(_b_) that, for \(p\) just slightly greater than \(p_{\rm T}\), the orbit takes many iterates to traverse the narrow tunnel between the map function and the 45\({}^{\circ}\) line. While in the tunnel, the orbit is close to the value of \(x\) that applies for the stable fixed point for \(p\) slightly less than \(p_{\rm T}\). Thus, we identify the average time to traverse the tunnel with \(\bar{T}(p)\). After traversing the tunnel, the orbit undergoes chaotic motion determined by the specific form of the map (not shown in Figure 8.8) away from the vicinity of the tunnel, and is then reinjected into the tunnel when 'by chance' the chaotic orbit lands in the tunnel. To calculate the typical time to traverse the tunnel we approximate the map as being quadratic to lowest order, \[x_{n+1}=x_{n}+\varepsilon+x_{n}^{2}, \tag{8.10}\] where \(\varepsilon\sim(p-p_{\rm T})\). Note that (8.10) undergoes a saddle node bifurcation as \(\varepsilon\) increases through zero. Considering \(\varepsilon\) as small and positive, we utilize the fact that the steps in \(x\) with successive iterates in the tunnel are small. This allows us to approximate (8.10) as a differential equation. Replacing \(x_{n}\) by \(x(n)\) and considering \(n\) as a continuous variable, we have \(x_{n+1}-x_{n}\simeq\mathrm{d}x(n)/\mathrm{d}n\).
Equation (8.10) thus becomes \[\mathrm{d}x/\mathrm{d}n=x^{2}+\varepsilon.\] For an orbit reinjected into the tunnel by landing at a point \(x_{0}<0\), \(-x_{0}\gg\varepsilon^{1/2}\), this yields an approximate time to traverse the tunnel given by \[\int_{x_{0}}^{+\infty}\frac{\mathrm{d}x}{x^{2}+\varepsilon}\simeq\int_{-\infty}^{+\infty}\frac{\mathrm{d}x}{x^{2}+\varepsilon}=\pi/\varepsilon^{1/2}\sim\varepsilon^{-1/2},\] in agreement with Eq. (8.9). It is interesting to note that Type I intermittency transitions to chaos are already present in the simple quadratic logistic map example that we have extensively examined in Chapter 2. In particular, we noted there that each periodic window is initiated by a tangent bifurcation (e.g., see Figure 2.13 for the case of the period three). For example, referring to Figure 2.12(\(a\)), which shows the period three window, we see that just above \(r_{3}^{*}\) we have a stable period three orbit, while just below \(r_{3}^{*}\) there is chaos. Thus, as the parameter \(r\) is _decreased_ through \(r_{3}^{*}\) we have an intermittency transition from a periodic to a chaotic attractor. Examination of an orbit for \(r\) just below \(r_{3}^{*}\) shows that it has long stretches where it approximately follows the period three orbit that exists above \(r_{3}^{*}\). Some examples of further works on the theory of intermittency transitions are the following: Hu and Rudnick (1982), who use the renormalization group; Hirsch _et al_. (1982), who treat the effect of noise; and Ben Mizrachi _et al_. (1985), who discuss the low frequency power spectra of chaotic orbits near an intermittency transition. ### 8.3 Crises Sudden changes in chaotic attractors with parameter variation are seen very commonly (two early examples are Simo (1979) and Ueda (1980)). Such changes, caused by the collision of the chaotic attractor with an unstable periodic orbit or, equivalently,1 its stable manifold, have been called crises and were first extensively studied by Grebogi _et al_.
(1982, 1983c). Three types of crisis can be distinguished according to the nature of the discontinuous change that the crisis induces in the chaotic attractor: in the first type a chaotic attractor is suddenly destroyed as the parameter passes through its critical value; in the second type the size of the attractor in phase space suddenly increases; in the third type (which can occur in systems with symmetries) two or more chaotic attractors merge to form one chaotic attractor. The inverse of these processes (i.e., sudden creation, shrinking or splitting of a chaotic attractor) occurs as the parameter is varied in the other direction. The sudden destruction of a chaotic attractor occurs when the attractor collides with a periodic orbit on its basin boundary and is called a _boundary crisis_. The sudden increase in the size of a chaotic attractor occurs when the periodic orbit with which the chaotic attractor collides is in the interior of its basin and is called an _interior crisis_. In an _attractor merging crisis_ two (or more) chaotic attractors _simultaneously_ collide with a periodic orbit (or orbits) on the basin boundary which separates them. #### Boundary crises In a boundary crisis, as a parameter \(p\) is raised, the distance between the chaotic attractor and its basin boundary decreases until at a critical value \(p=p_{\rm c}\) they touch (the crisis). At this point the attractor also touches an unstable periodic orbit that was on the basin boundary before the crisis.1 For \(p>p_{\rm c}\) the chaotic attractor no longer exists but is replaced by a chaotic transient. In particular, for \(p\) just slightly greater than \(p_{\rm c}\), consider an initial condition placed in the phase space region occupied by the basin of attraction of the chaotic attractor that existed for \(p<p_{\rm c}\).
This initial condition will typically move toward the region of the \(p<p_{\rm c}\) attractor, bounce around in an orbit that looks like an orbit on the \(p<p_{\rm c}\) chaotic attractor, and then, after what could be a relatively long time (for \(p\) sufficiently close to \(p_{\rm c}\)), the orbit rather suddenly starts to move off toward some other distant attractor. (Note that our assumption that the attractor exists for \(p<p_{\rm c}\) and the chaotic transient exists for \(p>p_{\rm c}\) is merely an arbitrary convention that might equally well be reversed.) As an example, Figure 8.9 shows an orbit for the Ikeda map (Hammel _et al._, 1985) \[z_{n+1}=a+bz_{n}\exp\biggl{(}\mathrm{i}\kappa-\frac{\mathrm{i}\eta}{1+|z_{n}|^{2}}\biggr{)}, \tag{8.11}\] where \(z=x+\mathrm{i}y\) is a complex number. (Taking real and imaginary parts, (8.11) may be regarded as a two dimensional real map.) The physical origin of this map is illustrated in Figure 8.10.
Figure 8.9: Orbit for the Ikeda map Eq. (8.11) for parameter values \(a=1.0027\), \(b=0.9\), \(\kappa=0.4\) and \(\eta=6.0\) (Grebogi _et al._, 1986a).
The orbit shown in Figure 8.9 corresponds to a parameter value \(p\) just slightly past that yielding a boundary crisis. The numerals in the figure label the number of iterates to reach the corresponding point, with 1 denoting the initial condition. We see that the orbit bounces around, appearing to fill out a chaotic attractor, for over 86 000 iterates. Then at iterate 86 431 the orbit point 'by chance' lands near (and just to the right of) a stable manifold segment of an unstable period one (fixed point) saddle on the basin boundary of the \(p<p_{\mathrm{c}}\) attractor. The orbit then moves towards this fixed point (points 86 432; 86 433; \(\ldots\)), following the direction of its stable manifold, and then gets ejected to the right along its unstable manifold (points \(\ldots\); 86 442; 86 443; 86 444; \(\ldots\)), moving off to some other attractor.
Basically, the attractor becomes 'leaky' (hence no longer an attractor), developing a region from which orbits can escape. The length of a chaotic transient depends sensitively on the initial condition. However, if one looks at many randomly chosen initial conditions in the \(p<p_{\mathrm{c}}\) basin region, then one typically sees that the chaotic transient lengths, \(\tau\), have an exponential distribution for large \(\tau\), \[P(\tau)\sim\exp(-\tau/\langle\tau\rangle) \tag{8.12}\] (see Chapter 5), where \(\langle\tau\rangle\) is the characteristic transient lifetime. Note that while the above discussion was in the context of a crisis destroying a chaotic attractor, one may equally well consider that, as the parameter is varied in the other direction, the crisis creates a chaotic attractor. Thus, along with period doubling and Pomeau-Manneville intermittency, crises represent one of several routes to the creation of chaotic attractors.
Figure 8.10: The Ikeda map can be viewed as arising from a string of light pulses of amplitude \(a\) entering at the partially transmitting mirror \(M_{1}\). The time interval between the pulses is adjusted to the round trip travel time in the system. Let \(|z_{n}|\) be the amplitude and \(\arg(z_{n})\) the phase of the \(n\)th pulse just to the right of mirror \(M_{1}\). Then the terms in (8.11) have the following meaning: \(1-b\) is the fraction of energy in a pulse transmitted or absorbed in the four reflections from \(M_{1}\), \(M_{2}\), \(M_{3}\) and \(M_{4}\); \(\kappa\) is the round trip phase shift experienced by the pulse in the vacuum region; and \(-\eta/(1+|z_{n}|^{2})\) is the phase shift in the nonlinear medium.
The scaling of the characteristic transient lifetime \(\langle\tau\rangle\) with \(p-p_{\rm c}\) is of great interest. For cases like that shown in Figure 8.9, it is found that \[\langle\tau\rangle\sim(p-p_{\rm c})^{-\gamma} \tag{8.13}\] for \(p\) just past \(p_{\rm c}\).
The quantity \(\gamma\) is called the critical exponent of the crisis. As a simple, almost trivial, example of the scaling Eq. (8.13), consider the logistic map, \(M(x,\ r)=rx(1-x)\). The basin of attraction of the attractor for finite \(x\) is the interval \([0,1]\), and the basin for the attractor \(x=-\infty\) is its complement. The point \(x=0\) is an unstable (for \(r>1\)) fixed point on the basin boundary. Looking at the bifurcation diagram, Figure 2.11(_a_), we see that the chaotic attractor is destroyed as \(r\) increases through \(r=4\) (for \(r>4\) the only attractor is \(x=-\infty\)). Furthermore, we see that the size of the chaotic attractor increases with increasing \(r\), colliding with the fixed point, \(x=0\), at \(r=4\). Thus, this is a simple example of a boundary crisis. For \(r>4\), the chaotic attractor is replaced by a chaotic transient. To estimate \(\langle\tau\rangle\) for \(r\) just slightly greater than 4, we note that \(M(x,\ r)>1\) for a small range of \(x\) values about the maximum point \(x=\frac{1}{2}\), and that this range has a width of order \((r-4)^{1/2}\) (this square root dependence of the width is a general consequence of the maximum of the map function being quadratic). An orbit which falls in this narrow range about \(x=\frac{1}{2}\) is mapped on one iterate to \(x>1\), on the next to \(x<0\), and on subsequent iterates to increasingly negative \(x\) values. Thus, this interval represents a loss region through which orbits initially in \([0,1]\) can 'leak' out. For \(r\) just slightly greater than 4 the orbits in \([0,1]\) will be similar to those for \(r=4\) (until they fall in the loss region). For \(r=4\) the probability per iterate of an orbit falling in the small region of length \(\varepsilon\) about \(x=\frac{1}{2}\), namely \([(1-\varepsilon)/2,\ (1+\varepsilon)/2]\), is simply proportional to \(\varepsilon\), the length of the interval.
(This is because the natural measure varies smoothly through \(x=\frac{1}{2}\); see Figure 2.7.) Identifying the loss rate \(1/\langle\tau\rangle\) with the probability per iterate of falling in the loss region, whose length is of the order of \((r-4)^{1/2}\), we obtain \(\langle\tau\rangle\sim(r-4)^{-1/2}\). That is, the critical exponent \(\gamma\) in Eq. (8.13) is \[\gamma=\frac{1}{2}.\] This result is general for crises of one dimensional maps with quadratic maxima and minima. We emphasize, however, that \(\gamma\) is typically greater than \(\frac{1}{2}\) for higher dimensional systems. A theory yielding the critical exponents for crises in a large class of two dimensional map systems has been given by Grebogi _et al_. (1986a, 1987b). The class of systems considered is two dimensional maps in which the crisis is due to a tangency of the stable manifold of a periodic orbit on the basin boundary with the unstable manifold of an unstable periodic orbit on the attractor. These types of crisis appear to be the only kinds that can occur for two dimensional maps that are strictly dissipative (i.e., the magnitude of their Jacobian determinant is less than 1 everywhere2), and they are a very common feature of many systems such as the forced damped pendulum, the Duffing equation and the Henon map. For these systems, crises occur in either one of the following two typical ways: (1) _Heteroclinic tangency crisis._ In this case, the stable manifold of an unstable periodic orbit (_B_) becomes tangent with the unstable manifold of an unstable periodic orbit (_A_) (Figure 8.11(_a_)). Before (and also at) the crisis \(A\) was on the attractor and \(B\) was on the boundary. (2) _Homoclinic tangency crisis._ In this case the stable and unstable manifolds of an unstable periodic orbit \(B\) are tangent (Figure 8.11(_b_)). In both cases, both at and before the crisis, the basin boundary is the closure of the stable manifold of \(B\).
At \(p=p_{\rm c}\), again in both cases, the chaotic attractor is the closure of the unstable manifold of \(B\). For \(p=p_{\rm c}\) (and also \(p<p_{\rm c}\)) in the heteroclinic case, the attractor is also the closure of the unstable manifold of \(A\). (Note that in both cases \(B\) is on the attractor for \(p=p_{\rm c}\) but not for \(p<p_{\rm c}\).) Grebogi _et al._ (1986a, 1987b) obtain the following formulae for the critical exponent \(\gamma\), \[\gamma=\frac{1}{2}+\frac{\ln|\alpha_{1}|}{|\ln|\alpha_{2}||}, \tag{8.14a}\] for the heteroclinic case, and \[\gamma=\frac{\ln|\beta_{2}|}{\ln|\beta_{1}\beta_{2}^{2}|}, \tag{8.14b}\] for the homoclinic case. Here \(\alpha_{1}\) and \(\alpha_{2}\) are the expanding and contracting eigenvalues of the Jacobian matrix of the map evaluated at \(A\), and \(\beta_{1}\) and \(\beta_{2}\) are the same quantities for the matrix evaluated at \(B\). (If \(A\) and \(B\) have a period \(P>1\), then we use the Jacobian of the \(P\)th iterate of the map.) The one dimensional map result, \(\gamma=\frac{1}{2}\), is recovered from (8.14) by going to the limit of infinitely strong contraction, \(\alpha_{2}\to 0\), \(\beta_{2}\to 0\). We now give a derivation of the formula (8.14a) for the heteroclinic tangency case. (The interested reader is referred to Grebogi _et al._ (1986a, 1987b) for the derivation of (8.14b).) As \(p\) passes \(p_{\rm c}\) the unstable manifold of \(A\) crosses the stable manifold of \(B\) as shown in Figure 8.12. If an orbit lands in the shaded region \(ab\) of the figure it is rapidly lost from the region of the old attractor by being attracted to \(B\) along the stable manifold of \(B\) and then repelled along the unstable manifold segment of \(B\) that goes away from the attractor.
For \(p\) near \(p_{\rm c}\) the dimensions of the shaded region \(ab\) are of order \((p-p_{\rm c})\) by \((p-p_{\rm c})^{1/2}\) as shown in the figure. Iterating the region \(ab\) backward \(n\) steps, we have the shaded region \(a^{\prime}b^{\prime}\) whose dimensions are of order \((p-p_{\rm c})^{1/2}/\alpha_{1}^{n}\) by \((p-p_{\rm c})/\alpha_{2}^{n}\). These estimates follow from the fact that, for large enough \(n\), except for the first few backwards iterates, the iterated region is near \(A\) and its evolution is thus governed by the linearized map at \(A\). The region \(a^{\prime}b^{\prime}\) is to be regarded as similar to the loss region of our previous logistic map example. As in that case, we estimate the loss rate \(1/\langle\tau\rangle\) as the attractor measure in the region \(a^{\prime}b^{\prime}\), with the attractor measure evaluated at \(p=p_{\rm c}\). We denote this measure \(m(p-p_{\rm c})\) and, in accord with the above assumption, we assume that \(m(p-p_{\rm c})\sim(p-p_{\rm c})^{\gamma}\). Now, say that we reduce \(p-p_{\rm c}\) by the factor \(\alpha_{2}\) (i.e., \(p-p_{\rm c}\rightarrow\alpha_{2}(p-p_{\rm c})\)). Iterating the reduced \(ab\) region backward by \(n+1\) steps (instead of \(n\) steps), the long dimension of the preiterated region is again of order \((p-p_{\rm c})/\alpha_{2}^{n}\), but the short dimension is of order \([\alpha_{2}(p-p_{\rm c})]^{1/2}/\alpha_{1}^{n+1}\).

Figure 8.12: Illustration for the derivation of \(\gamma\) in the heteroclinic case. The long dimension of the shaded region \(ab\) is of order \((p-p_{\rm c})^{1/2}\) because we assume that the unstable manifold of \(A\) is smooth and that the original tangency (Figure 8.11(\(a\))) is quadratic.
Presuming the attractor measure to be smoothly varying along the direction of the unstable manifold of \(A\), we then obtain \[\frac{m(p-p_{\rm c})}{m[\alpha_{2}(p-p_{\rm c})]}\sim\frac{(p-p_{\rm c})^{1/2}/\alpha_{1}^{n}}{[\alpha_{2}(p-p_{\rm c})]^{1/2}/\alpha_{1}^{n+1}}=\frac{\alpha_{1}}{\alpha_{2}^{1/2}}.\] Now assuming \(m(p-p_{\rm c})\sim(p-p_{\rm c})^{\gamma}\), the above gives \(\alpha_{2}^{-\gamma}=\alpha_{1}\alpha_{2}^{-1/2}\), which, upon taking logs, is the desired result, Eq. (8.14a). An important aspect of the exponent \(\gamma\) is that larger \(\gamma\) makes the chaotic transient phenomena somewhat easier to observe. As an example, say \(\langle\tau\rangle=[(p-p_{\rm c})/p_{\rm c}]^{-\gamma}\), and consider two values of the exponent, \(\gamma=\frac{1}{2}\) (as in one dimensional maps) and \(\gamma=2\). For these hypothetical situations, we see that the range of \(p\) yielding transients of length \(\langle\tau\rangle>100\) is only \(0<(p-p_{\rm c})/p_{\rm c}<0.0001\) for \(\gamma=\frac{1}{2}\) but is \(0<(p-p_{\rm c})/p_{\rm c}<0.1\) for \(\gamma=2\). As an example where the power law dependence, Eq. (8.13), does not apply, Grebogi _et al._ (1983b, 1985a) consider a crisis in which the touching of the attractor and its basin boundary occurs as a result of the coalescence and annihilation of an unstable repelling orbit \(B\) on the basin boundary and an unstable saddle orbit \(A\) on the attractor. The bifurcation of these two orbits at the crisis is illustrated in Figure 8.13. (Note that the map is necessarily expanding in two directions at \(B\), so that this situation falls outside the strictly contractive class for which the tangency crises in Figures 8.11(\(a\)) and (\(b\)) are claimed to be the only possible cases.)
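The results above lend themselves to direct numerical checks. The sketch below (our own construction, not from the references) measures mean transient lifetimes for the logistic map at two values of \(r>4\), where Eq. (8.13) with \(\gamma=\frac{1}{2}\) predicts that reducing \(r-4\) by a factor of 10 lengthens \(\langle\tau\rangle\) by about \(\sqrt{10}\approx 3.2\); it also encodes the formulae (8.14a) and (8.14b) as small helper functions:

```python
import math
import random

def transient_time(r, x, max_iter=1_000_000):
    # iterate M(x, r) = r x (1 - x); return the iterate at which the
    # orbit first leaves [0, 1] (after which it runs off toward -infinity)
    for n in range(max_iter):
        x = r * x * (1.0 - x)
        if x < 0.0 or x > 1.0:
            return n + 1
    return max_iter

def mean_lifetime(r, samples=500, seed=1):
    # average transient length over random initial conditions in (0, 1)
    rng = random.Random(seed)
    return sum(transient_time(r, rng.random()) for _ in range(samples)) / samples

tau_near = mean_lifetime(4.001)  # closer to the crisis at r = 4
tau_far = mean_lifetime(4.01)    # gamma = 1/2 predicts tau_near/tau_far ~ sqrt(10)

def gamma_heteroclinic(alpha1, alpha2):
    # Eq. (8.14a): alpha1 expanding (|alpha1| > 1), alpha2 contracting, at A
    return 0.5 + math.log(abs(alpha1)) / abs(math.log(abs(alpha2)))

def gamma_homoclinic(beta1, beta2):
    # Eq. (8.14b): eigenvalues evaluated at B
    return math.log(abs(beta2)) / math.log(abs(beta1 * beta2 ** 2))
```

Both formulae return values approaching \(\frac{1}{2}\) in the strong contraction limit, as noted in the text.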
The crisis mediated by this type of bifurcation has been called an 'unstable unstable pair bifurcation crisis' and results in a characteristic scaling of \(\langle\tau\rangle\) with \(p\) given by \[\langle\tau\rangle\sim\exp[k/(p-p_{\rm c})^{1/2}], \tag{8.15}\] where \(k\) is a system dependent constant. Comparing (8.15) with (8.13), one sees that \(\langle\tau\rangle\) from (8.15) increases faster than any power of \(1/(p-p_{\rm c})\) as \((p-p_{\rm c})\to 0\). Thus, in a sense, \(\gamma=\infty\) for (8.15).

#### Crisis-induced intermittency

Following a boundary crisis we have chaotic transients. In contrast, following an interior crisis or an attractor merging crisis, we have characteristic temporal behaviors that may be characterized as 'crisis induced intermittency.' In particular, for an interior crisis, as the parameter \(p\) increases through \(p_{\rm c}\) the chaotic attractor suddenly widens. For \(p\) slightly larger than \(p_{\rm c}\), the orbit on the attractor spends long stretches of time in the region to which the attractor was confined before the crisis. At the end of one of these long stretches the orbit bursts out of the old region and bounces around chaotically in the new enlarged region made available to it by the crisis. It then returns to the old region for another stretch of time, followed by a burst, and so on. The times \(\tau\) between bursts appear to be random (i.e., 'intermittent') and again have a long time exponential distribution, Eq. (8.12), with an average value which we again denote \(\langle\tau\rangle\). As before, \(\langle\tau\rangle\to\infty\) as \(p\) approaches \(p_{\rm c}\) from above. We further note that the critical exponent theory and formulae (Eqs. (8.14)) discussed for boundary crises also apply for interior (and attractor merging) crises, with the difference that, in the interior crisis case, the periodic orbit \(B\) of Figure 8.11 is not on the basin boundary.
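Extracting the times between bursts from an orbit record is straightforward. The sketch below (ours, not from the references) applies the procedure to a synthetic intermittent signal with a known per-step burst probability, a hypothetical stand-in for real orbit data; the measured residence times should then exhibit the exponential statistics of Eq. (8.12):

```python
import random

def residence_times(series, in_band):
    # lengths of maximal stretches with in_band(x) True, i.e. the
    # times between successive bursts out of the confining region
    times, run = [], 0
    for x in series:
        if in_band(x):
            run += 1
        else:
            if run > 0:
                times.append(run)
            run = 0
    return times

# synthetic intermittent record: 'laminar' value 0.0, with independent
# bursts to 1.0 occurring with probability 0.01 per step (hypothetical data)
rng = random.Random(2)
series = [1.0 if rng.random() < 0.01 else 0.0 for _ in range(200_000)]

times = residence_times(series, lambda x: x < 0.5)
mean_tau = sum(times) / len(times)                              # should be near 100
frac_long = sum(1 for t in times if t > mean_tau) / len(times)  # near exp(-1) if exponential
```

For an exponential distribution, the fraction of intervals exceeding the mean should be close to \(e^{-1}\approx 0.37\), which this synthetic record reproduces.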
In an attractor merging crisis of, say, two attractors, each of the two exists for \(p<p_{\rm c}\), and each has its own basin, with a basin boundary separating the two. At \(p=p_{\rm c}\) the two attractors both _simultaneously_ collide with this boundary. For \(p\) slightly greater than \(p_{\rm c}\), an orbit typically spends long stretches of time moving chaotically in the region of one of the old attractors, after which it abruptly switches to the region of the other old attractor, intermittently switching between the two. Again the times between switches have a long time exponential distribution with an average \(\langle\tau\rangle\) which approaches infinity as \(p\) approaches \(p_{\rm c}\) from above. One may think of the term intermittency as signifying an episodic switching between different types of behavior. Thus, we can schematically contrast crisis induced intermittency with the Pomeau Manneville intermittency of Section 8.2, as follows. Pomeau Manneville intermittency: \[\mbox{(chaos)}\rightarrow\mbox{(approximately periodic)}\rightarrow\mbox{(chaos)}\rightarrow\mbox{(approximately periodic)}\rightarrow\cdots.\] Crisis induced intermittency: \[\mbox{(chaos)}_{1}\rightarrow\mbox{(chaos)}_{2}\rightarrow\mbox{(chaos)}_{1}\rightarrow\mbox{(chaos)}_{2}\rightarrow\cdots.\] For the case of intermittent bursting (interior crises), (chaos)\({}_{1}\) might denote orbit segments during the bursts, and (chaos)\({}_{2}\) might denote orbit segments between the bursts.

Figure 8.13: Schematic illustration of the unstable unstable pair bifurcation. The two orbits \(A\) and \(B\) move towards each other as \(p\) increases, coalesce at \(p=p_{\rm c}\), and cease to exist for \(p>p_{\rm c}\). Note the similarity with the saddle node bifurcation (Figure 8.5).
For the case of intermittent switching (attractor merging crises), (chaos)\({}_{1}\) and (chaos)\({}_{2}\) would denote chaotic behaviors in the regions of each of the two attractors that exist before the crisis. As an example, we again consider the Ikeda map, Eq. (8.11), but now for a different parameter set such that there is an interior crisis (Grebogi _et al._, 1987b). We take \(a=0.85\), \(b=0.9\), \(\kappa=0.4\) and vary the parameter \(\eta\) in a small range about its crisis value \(\eta_{\rm c}=7.26884894\ldots\). Figure 8.14 shows \(y_{n}=\mbox{Im}(z_{n})\) versus \(n\) for a value of \(\eta<\eta_{\rm c}\) and for three successively larger values of \(\eta\) with \(\eta>\eta_{\rm c}\). The first plot shows the precrisis chaos to be well confined in the range \(-0.2\lesssim y\lesssim 0.6\). As \(\eta\) is increased above \(\eta_{\rm c}\) we see that there are occasional bursts of the orbit outside this range, and these bursts become more frequent with increasing \(\eta\). For \(\eta\) slightly above \(\eta_{\rm c}\), a numerical plot of a long orbit in the \((x,\ y)\) space of the system reveals that the attractor consists of a high orbit point density inner core region surrounded by a low density halo representing the region explored during bursting. Comparison of the core region with the precrisis attractor reveals that they are essentially identical.

Figure 8.14: \(y_{n}={\rm Im}(z_{n})\) versus \(n\) for different values of \(\eta\): (_a_) \(\eta=7.26<\eta_{\rm c}\); (_b_) \(\eta=7.33>\eta_{\rm c}\); (_c_) \(\eta=7.35\); (_d_) \(\eta=7.38\) (Grebogi _et al._, 1987b).

Figure 8.15 shows results from numerical experiments (Grebogi _et al._, 1987b) determining \(\langle\tau\rangle\) as a function of \(\eta-\eta_{\rm c}\). We see that a log log plot of these data is fairly well fit by a straight line, in accord with a power law dependence as given by Eq. (8.13).
The slope of this straight line gives \(\gamma\) 1.24 which has been demonstrated to be in agreement with the theory Eq. (8.14a). This is done in Grebogi _et al._ (1987b) where it is shown that this crisis is a heteroclinic tangency crisis and the relevant periodic orbits (_A_ and \(B\) of Figure 8.11(_a_)) and their eigenvalues are determined. As another example of an attractor widening crisis, refer to the bifurcation diagram for the logistic map in the range of the period three window, Figure 2.12(_a_). We see that, as \(r\) is increased through the value denoted _r_c3 in the figure, the attractor undergoes a sudden change. Namely, for \(r\) slightly less than _r_c3, the attractor is chaotic and consists of three narrow intervals through which the orbit successively cycles; for \(r\) slightly greater than _r_c3, the attractor is chaotic, but now consists of one much larger interval which includes the three small intervals of the attractor for \(r\) < _r_c3. The event causing this change is an interior crisis. To see how this occurs, we note that at the beginning of the window (i.e., at the value \(r\) = \(r\) * 3 in Figure 2.12(a)) there is a tangent bifurcation creating both a period three attractor _and_ an unstable period three orbit. As \(r\) increases from \(r\) * 3 to _r_c3 the unstable period three created at \(r\) * 3 continues to exist and lies outside the attractor. At the crisis point \(r\) = _r_c3, the unstable period three collides with the three piece chaotic attractor (each of the three points in the period three orbit collides with one of the three pieces of the chaotic attractor). Examination of a typical orbit on the attractor for \(r\) slightly greater than _r_c3 shows crisis induced intermittency in which the orbit stays in and cycles through the three precrisis intervals for long stretches of time, followed by intermittent bursts in which the orbit explores the much wider region in the full interval that the attractor now occupies. 
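The behavior just described is easy to reproduce numerically. The sketch below compares logistic map orbits just below and just above the crisis at \(r_{\rm c3}\) (numerically near 3.857); the particular band and gap boundaries used are rough numerical choices of ours, not values from the text:

```python
def logistic_orbit(r, x0=0.3, n_transient=10_000, n_keep=200_000):
    # discard a long transient, then record n_keep iterates of x -> r x (1 - x)
    x = x0
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    orbit = []
    for _ in range(n_keep):
        x = r * x * (1.0 - x)
        orbit.append(x)
    return orbit

pre = logistic_orbit(3.855)   # just below the interior crisis
post = logistic_orbit(3.86)   # just above it

def in_gap(x):
    # a region lying between two of the three precrisis bands (rough choice)
    return 0.6 < x < 0.9

pre_bursts = sum(1 for x in pre if in_gap(x))    # confined orbit: none expected
post_bursts = sum(1 for x in post if in_gap(x))  # bursts enter the gap
```

Below the crisis the recorded orbit never enters the gap between the three narrow bands; above it, the intermittent bursts carry the orbit into the widened interval.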
Thus, we see that the period three window is initiated by a Pomeau Manneville intermittency transition (at \(r=r_{*3}\)) and terminated by an interior crisis (at \(r=r_{\rm c3}\)) causing crisis induced intermittency. (The same is true for all windows of the logistic map.) We now discuss an experimental study of crisis induced intermittency (Sommerer _et al._, 1991a,b). The physical system studied, Figure 8.16, is a vertically oriented thin magnetoelastic ribbon clamped at its bottom. Due to gravity, the ribbon will tend to buckle to the left or the right. For the particular material that the ribbon is made of, the Young's modulus is a strong function of magnetic field. Dynamical motion of the ribbon is induced by applying a time dependent vertical magnetic field \(H(t)\) with a sinusoidally varying component, \[H(t)=H_{\rm dc}+H_{\rm ac}\sin(2\pi ft).\] The frequency \(f\) is taken as the control parameter that is varied, and it is found that \(f=f_{\rm c}\approx 0.97\) Hz corresponds to an interior crisis. Using the voltage on the photonic sensor (Figure 8.16), sampled at the period \(1/f\) of the driver, Figure 8.17 shows experimental delay coordinate Poincare surfaces of section for the precrisis (\(f=0.9760\) Hz) attractor (Figure 8.17(\(a\))) and for the postcrisis (\(f=0.9630\) Hz) attractor (Figure 8.17(\(b\))). We see that the crisis induces a significant expansion of the phase space extent of the attractor.

Figure 8.16: Schematic of the magnetoelastic ribbon experiment. The inset shows the Young's modulus \(E\) versus magnetic field \(H\). (Sommerer _et al._, 1991a.)

Figure 8.18 shows a log log plot of the experimentally determined average time between bursts (denoted \(\hat{\tau}\) in the figure) versus distance \(f-f_{\rm c}\) from the crisis. Again, a straight line in accord with Eq. (8.13) is obtained; the measured slope of this line gives a critical exponent of \(\gamma\approx 1.1\).
Furthermore, the authors were able to determine experimentally the unstable orbit mediating the crisis and its eigenvalues. Using these, they accurately verified that the theoretical formula for the critical exponent, Eq. (8.14a), agreed well with the experimental value. Another aspect of this work was that they experimentally studied the effect of noise on crises. Using a noise generator, they added a noise component of controlled strength, denoted \(\sigma\), to the magnetic field \(H(t)\). According to a recent theory (Sommerer _et al._, 1991c), the scaling of the average time between bursts in the presence of noise should be (instead of (8.13)) \[\langle\tau\rangle\sim\sigma^{-\gamma}g(|f-f_{\rm c}|/\sigma), \tag{8.16}\] where \(\gamma\) denotes the noiseless critical exponent, and the function \(g\) depends on the system and the particular form of the noise.

Figure 8.17: Delay coordinate embedding of time series taken (\(a\)) before and (\(b\)) after the crisis (Sommerer _et al._, 1991a).

Figure 8.19(_a_) shows plots of raw data for four different noise levels. Note that, due to the noise, \(\hat{\tau}\) can differ by several orders of magnitude at the same value of the control parameter \(f\). Using the noiseless exponent \(\gamma\) to scale the variables as prescribed by Eq. (8.16), Figure 8.19(_b_) results. We see that the four sets of data collapse onto a single curve (the function \(g\)), thus verifying (8.16).

### The Lorenz system: An example of the creation of a chaotic transient

A particularly interesting example is provided by the Lorenz equations (Chapter 2), \[{\rm d}X/{\rm d}t=-\tilde{\sigma}X+\tilde{\sigma}Y,\] \[{\rm d}Y/{\rm d}t=-XZ+\tilde{r}X-Y,\] \[{\rm d}Z/{\rm d}t=XY-\tilde{b}Z.\] This system has three possible steady states (i.e., solutions with \({\rm d}X/{\rm d}t={\rm d}Y/{\rm d}t={\rm d}Z/{\rm d}t=0\)). One is \(X=Y=Z=0\), which we denote \(O\).
There are two others, which we denote \(C\) and \(C^{\prime}\), which exist for \(\tilde{r}>1\) and are given by \[C=(X,\,Y,\,Z)=([\tilde{b}(\tilde{r}-1)]^{1/2},\,[\tilde{b}(\tilde{r}-1)]^{1/2},\,\tilde{r}-1),\] \[C^{\prime}=(X,\,Y,\,Z)=(-[\tilde{b}(\tilde{r}-1)]^{1/2},\,-[\tilde{b}(\tilde{r}-1)]^{1/2},\,\tilde{r}-1).\] Say we fix \(\tilde{\sigma}\) and \(\tilde{b}\) at the values used by Lorenz (\(\tilde{\sigma}=10\), \(\tilde{b}=\frac{8}{3}\)) and examine the behavior of the Lorenz system as \(\tilde{r}\) increases from zero. The stability of the steady state \(O\) is given by the eigenvalues of the Jacobian matrix \[\begin{bmatrix}-\tilde{\sigma}&\tilde{\sigma}&0\\ \tilde{r}&-1&0\\ 0&0&-\tilde{b}\end{bmatrix},\] which are all negative for \(0<\tilde{r}<1\), indicating stability (Figure 8.20(\(a\))). In this case \(O\) is the only attractor of the system. As \(\tilde{r}\) passes through 1, one of the eigenvalues of \(O\) becomes positive, with the other two remaining negative. This indicates that \(O\) has a two dimensional stable manifold and a one dimensional unstable manifold. Simultaneous with the loss of stability of \(O\), the two fixed points \(C\) and \(C^{\prime}\) are born. \(C\) and \(C^{\prime}\) are stable at birth with three real eigenvalues. Thus, as \(\tilde{r}\) increases through 1, \(C\) and \(C^{\prime}\) become the attractors of the system. The basin boundary separating the basins of attraction for the attractors \(C\) and \(C^{\prime}\) is the two dimensional stable manifold of \(O\). Figure 8.20(_b_) shows the situation for \(\tilde{r}\) slightly past 1. Following the unstable manifold of \(O\), we see that it goes to the steady states \(C\) and \(C^{\prime}\). As \(\tilde{r}\) increases further, a point \(\tilde{r}_{\rm s}\) is reached past which two of the real stable (negative) eigenvalues of \(C\) coalesce and become complex conjugate eigenvalues with negative real parts (by symmetry the same happens for \(C^{\prime}\)).
In this regime, orbits approach \(C\) and \(C^{\prime}\) by spiraling around them. This situation is shown in Figure 8.20(_c_), while Figure 8.20(_d_) shows the unstable manifold of \(O\) projected onto the _XZ_ plane for the same situation. For the pictures shown in Figures 8.20(_a_)–(_d_) there is no chaotic dynamics at all, neither transient chaos nor a chaotic attractor. To see how the chaos observed by Lorenz at \(\tilde{r}=28\) is formed, we continue to increase \(\tilde{r}\). As this is done, the initial spiral of the unstable manifold of \(O\) about \(C\) and \(C^{\prime}\) increases in size, until at some critical value \(\tilde{r}=\tilde{r}_{0}\approx 13.96\) (Kaplan and Yorke, 1979a), we obtain the situation shown in Figure 8.20(_e_). In this case the orbit leaving \(O\) along its unstable manifold comes back to \(O\) (i.e., the unstable manifold of \(O\) is on the stable manifold of \(O\)). The orbit in Figure 8.20(_e_) is called a homoclinic orbit for \(O\) (points on this orbit approach \(O\) for both \(t\rightarrow+\infty\) and \(t\rightarrow-\infty\)). Kaplan and Yorke (1979a) and Afraimovich _et al_. (1977) prove that as \(\tilde{r}\) increases past \(\tilde{r}_{0}\) chaos must be present. However, as emphasized by Kaplan and Yorke (1979a), and by Yorke and Yorke (1979), this chaos is not attracting (it is a chaotic transient), and, just past \(\tilde{r}_{0}\), \(C\) and \(C^{\prime}\) continue to be the only attractors. As \(\tilde{r}\) increases, however, the transient chaos is converted to a chaotic attractor by a crisis at \(\tilde{r}=\tilde{r}_{1}\approx 24.06\). Increasing \(\tilde{r}\) still further, the steady states \(C\) and \(C^{\prime}\) become unstable at \(\tilde{r}=\tilde{r}_{2}\approx 24.74\) as the real parts of their complex conjugate eigenvalues pass from negative to positive (a Hopf bifurcation).
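The transient chaos regime \(\tilde{r}_{0}<\tilde{r}<\tilde{r}_{1}\) is easy to explore numerically. The sketch below (our own minimal implementation) integrates the Lorenz equations at \(\tilde{r}=20\) with a classical fourth order Runge Kutta step: after a possibly long chaotic transient, the orbit must settle onto \(C\) or \(C^{\prime}\), since these are the only attractors in this regime:

```python
def lorenz_deriv(s, sigma=10.0, b=8.0 / 3.0, r=20.0):
    # right-hand side of the Lorenz equations at Lorenz's sigma and b
    x, y, z = s
    return (sigma * (y - x), r * x - y - x * z, x * y - b * z)

def rk4_step(s, dt):
    # one classical fourth order Runge-Kutta step
    def shift(u, v, c):
        return tuple(ui + c * vi for ui, vi in zip(u, v))
    k1 = lorenz_deriv(s)
    k2 = lorenz_deriv(shift(s, k1, dt / 2.0))
    k3 = lorenz_deriv(shift(s, k2, dt / 2.0))
    k4 = lorenz_deriv(shift(s, k3, dt))
    return tuple(si + dt / 6.0 * (p + 2.0 * q + 2.0 * w + v)
                 for si, p, q, w, v in zip(s, k1, k2, k3, k4))

s = (1.0, 1.0, 1.0)
for _ in range(200_000):   # integrate to t = 2000 with dt = 0.01
    s = rk4_step(s, 0.01)

xc = (8.0 / 3.0 * 19.0) ** 0.5            # [b(r - 1)]^(1/2) at r = 20
C, Cp = (xc, xc, 19.0), (-xc, -xc, 19.0)
dist = min(sum((si - ci) ** 2 for si, ci in zip(s, P)) ** 0.5 for P in (C, Cp))
```

Recording the early part of the trajectory instead would show the chaotic wandering that precedes the final spiral into one of the two steady states; which of \(C\) and \(C^{\prime}\) captures the orbit depends sensitively on the initial condition.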
Thus, for the relatively narrow range \(\tilde{r}_{1}<\tilde{r}<\tilde{r}_{2}\), there are three possible attractors (each with its respective basin), namely \(C\), \(C^{\prime}\) and a chaotic attractor, while for \(\tilde{r}>\tilde{r}_{2}\) there is only a chaotic attractor (i.e., that found by Lorenz). The situation is illustrated schematically in Figure 8.21. The main interest of this example is the creation of transient chaos by the formation of the homoclinic orbit at \(\tilde{r}=\tilde{r}_{0}\). For the topological arguments leading to this result the reader is referred to the original papers of Kaplan and Yorke (1979a) and Afraimovich _et al_. (1977) and to the book by Sparrow (1982).

Figure 8.21: Major bifurcations of the Lorenz attractor as a function of \(\tilde{r}\) (not to scale). In the regime \(0<\tilde{r}<1\) the only attractor is \(O\). In terms of the physical Rayleigh Benard systems that the Lorenz equations are meant (very crudely) to model, this represents a situation in which the fluid is at rest and thermal conduction is the only mechanism which transports heat from the bottom to the top plate (Figure 1.4). The steady states \(C\) and \(C^{\prime}\) both represent time independent convective flow patterns as shown in Figure 1.4. (The direction of flow is reversed for \(C^{\prime}\) as compared to \(C\).)

In the Lorenz case the homoclinic orbit results for a situation where all the eigenvalues of the fixed point are real. Another important situation occurs when two of the eigenvalues of an unstable fixed point are complex conjugates with the other being real. In this case the fixed point is said to be a saddle focus. The formation of a homoclinic orbit of a saddle focus is illustrated in Figure 8.22, which shows the stable and unstable manifolds for values of the parameter \(p\) below, at, and above the critical homoclinic value \(p=p_{\rm h}\).
Shilnikov (1970) proves that chaos results for \(p\) values in the neighborhood of \(p_{\rm h}\), and Arneodo _et al_. (1982) demonstrate an example of a chaotic attractor with spiral shape based on such a homoclinic structure. In the Lorenz and Shilnikov cases chaos arises as a result of orbits homoclinic to a single steady state. Another related situation arises when there are heteroclinic orbits connecting two steady states. See Lau and Finn (1992) for a discussion of how chaos is created in this case.

### Basin boundary metamorphoses

As a system parameter is varied the character of a basin boundary can change. For example, as the parameter passes through a critical value, a nonfractal boundary can become fractal and simultaneously experience a jump in its extent in phase space. In this section we discuss these changes, called basin boundary 'metamorphoses' (Grebogi _et al_., 1986b, 1987c), specializing to the case of two dimensional maps. It is found that metamorphoses occur at the formation of homoclinic crossings of stable and unstable manifolds (Chapter 4) (Grebogi _et al_., 1986b, 1987c; Moon and Li, 1985; Guckenheimer and Holmes, 1983, p. 114). Furthermore, as we shall see, a key role is played by certain unstable periodic orbits in the basin boundary that are 'accessible' from one basin or the other (Grebogi _et al_., 1986b, 1987c). **Definition:** A point on the boundary of a region is _accessible_ from that region if one can construct a finite length curve connecting the point on the boundary to a point in the interior of the region such that no point on the curve, other than the accessible boundary point, lies in the boundary. For the case of a smooth nonfractal boundary all points are accessible. However, if the basin boundary is fractal, then there are typically inaccessible points3 in the boundary. As a specific illustrative example we consider the basin boundary for the Henon map, Eq.
(1.14), with \(B=0.3\), and we examine the basin structure as the remaining parameter \(A\) is varied (Grebogi _et al_., 1986b, 1987c). As \(A\) increases, there is a saddle node bifurcation at \(A_{1}=(1-B)/4\). This bifurcation creates a saddle fixed point and an attracting node fixed point. For \(A\) just past \(A_{1}\) there are two attractors; one is the attracting node and the other is an attractor at infinity, \((|x|,\,|y|)=\infty\). (For \(A<A_{1}\) the only attractor is the attractor at infinity.) Figure 8.23(_a_) shows the basins of the node attractor (blank) and of the attractor at infinity (black) for \(A=1.150\). We see that the period one (fixed point) saddle created at \(A=A_{1}\) is on the basin boundary of the period one attractor. In fact, the boundary is the stable manifold of the period one saddle (cf. Chapter 5). This boundary is apparently a smooth curve. Figure 8.23(_b_) shows the situation for a larger value of \(A\), namely \(A=1.395\) (the period two attractor in this figure results from a period doubling of the attractor in Figure 8.23(_a_)). The boundary in Figure 8.23(_b_) is fractal, as demonstrated by a numerical application of the uncertainty exponent technique (Chapter 5), which yields a fractal dimension of about 1.53. Evidently, as \(A\) increases from 1.150 to 1.395, the boundary changes from smooth to fractal. We denote the critical value of \(A\) where this occurs \(A_{\rm sf}\), and it is found that \(A_{\rm sf}\approx 1.315\). The basin boundary not only becomes fractal as \(A\) passes through \(A_{\rm sf}\), but it also experiences a discontinuous jump, in a sense to be discussed below. Figure 8.23(_c_) shows the basin boundary at \(A=1.405\), which is just slightly larger than the value \(A=1.395\) used for Figure 8.23(_b_).
Comparing the two figures, it is evident that the basin of infinity (black region) has enlarged by the addition of a set of thin filaments that are well within the interior of the blank region of Figure 8.23(_b_); note in particular the region \(-1.0\leq x\leq 0.3\), \(2.0\leq y\leq 5.0\). This jump in the basin boundary occurs at a value \(A=A_{\rm ff}\approx 1.396\) (the subscripts ff stand for fractal fractal, in accord with the fact that the boundary is fractal both before and after the metamorphosis at \(A_{\rm ff}\)). It is important to note that, as \(A\) is decreased toward \(A_{\rm ff}\), the set of thin black filaments sent into the old basin become ever thinner, their area going to zero, but they remain essentially fixed in position, _not_ contracting to the position of the basin boundary shown in Figure 8.23(_b_). Both the metamorphosis at \(A_{\rm sf}\) and that at \(A_{\rm ff}\) are accompanied by a change in the periodic orbit on the boundary that is 'accessible' from the basin of the finite attractor. In particular, for \(A_{1}<A<A_{\rm sf}\), the periodic orbit accessible from the basin of the finite attractor is the period one saddle created at \(A_{1}\) and labeled in Figure 8.23(_a_). For \(A_{\rm sf}<A<A_{\rm ff}\), the periodic orbit accessible from the basin of the finite attractor is the period four saddle orbit labeled in Figure 8.23(_b_) (\(1\to 2\to 3\to 4\to 1\to\cdots\)). As \(A\) passes through \(A_{\rm sf}\) the period one saddle becomes inaccessible from the basin of the finite attractor. (It is still accessible from the basin of the attractor at infinity.) This inaccessibility of the period one saddle is a result of the fact that the passage of \(A\) through \(A_{\rm sf}\) corresponds with the formation of a homoclinic intersection of the stable and unstable manifolds of the period one saddle. This is illustrated in Figure 8.24.
As a result of the homoclinic intersection, a series of progressively longer and thinner tongues of the basin of the attractor at infinity accumulates on the right hand side of the stable manifold segment through the period one saddle. A finite length curve connecting a point in the interior of the blank region with the period one saddle cannot now be drawn, since it would have to circumvent all these tongues, and the length of the _n_th tongue approaches infinity as \(n\to\infty\). Thus, the period one saddle is not accessible from the blank region. We also note that, as discussed in Chapter 5, the fractal boundary contains an invariant chaotic set, embedded in which there is an infinite number of unstable periodic orbits. All of these, except for the period one and the period four shown in the figure, are, however, inaccessible from either basin. They are each essentially 'buried' under an infinite number of thin alternating striations of the two basins accumulating on them from both sides. Thus, as \(A\) increases through \(A_{\rm sf}\), the boundary saddle accessible from the blank region suddenly changes from a period one to a period four. Both orbits exist before and after the metamorphosis. We emphasize, however, that the period four remains well within the interior of the blank region as we let \(A\) approach \(A_{\rm sf}\) from below. Thus, as \(A\) increases through \(A_{\rm sf}\), the basin boundary suddenly jumps inward to the period four saddle. The situation with respect to the metamorphosis at \(A=A_{\rm ff}\) is similar: the metamorphosis is due to a homoclinic crossing of stable and unstable manifolds of the boundary saddle accessible from the basin of the finite attractor (now the period four saddle), which then becomes inaccessible and is replaced by the period three saddle.
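Basin structures like those of Figure 8.23 can be generated numerically by brute force. The following is a minimal sketch, assuming for Eq. (1.14) a Henon-type quadratic form \(x_{n+1}=A-x_{n}^{2}+By_{n}\), \(y_{n+1}=x_{n}\) (the sign conventions of the text's map may differ, but the procedure is identical): each initial condition on a grid is classified by whether its orbit escapes toward the attractor at infinity (black) or remains bounded (blank).

```python
# Minimal basin-classification sketch for a Henon-type quadratic map,
#   x' = A - x^2 + B*y,  y' = x   (assumed stand-in for Eq. (1.14)).
def iterate(x, y, A, B, n_max=1000, escape=10.0):
    """True if the orbit escapes toward the attractor at infinity."""
    for _ in range(n_max):
        x, y = A - x * x + B * y, x
        if abs(x) > escape:
            return True
    return False

def basin_fraction(A, B=0.3, grid=60, lim=2.5):
    """Fraction of a grid of initial conditions in the basin of infinity."""
    escaped = 0
    for i in range(grid):
        for j in range(grid):
            x0 = -lim + 2.0 * lim * i / (grid - 1)
            y0 = -lim + 2.0 * lim * j / (grid - 1)
            escaped += iterate(x0, y0, A, B)
    return escaped / grid**2
```

Comparing the escaped fraction at parameter values just below and just above a metamorphosis would exhibit the discontinuous enlargement of the black region described above.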
To sum up, we see that basin boundary metamorphoses of two dimensional maps typically result in changes of the accessible boundary saddles and jumps of the basin boundary location, with a possible transition of the boundary character from smooth to fractal. Furthermore, these metamorphoses are induced by the formation of homoclinic intersections of manifolds of certain special (i.e., accessible) periodic saddles in the basin boundary. ### Bifurcations to chaotic scattering Referring to Figure 5.17, which shows the scattering angle \(\phi\) as a function of the impact parameter \(b\) for a particular two dimensional potential scattering problem, we see that the \(\phi\) versus \(b\) curve is smooth when the particle energy is large (Figure 5.17(\(a\))), but becomes complicated on an arbitrarily fine scale at lower energy (Figures 5.17(\(b\)) and 5.18). As discussed in Chapter 5, the latter indicates the presence of chaotic dynamics. The transition from regular (Figure 5.17(\(a\))) to chaotic (Figure 5.17(\(b\))) scattering with decrease of the particle energy is a typical feature of scattering from smooth finite potential scatterers. Let \(V(\mathbf{x})\) denote the potential; we assume that \(V(\mathbf{x})\) is smooth, bounded, and rapidly approaches zero with increasing \(|\mathbf{x}|\) outside the scattering region. The problem we wish to address is how the scattering goes from being regular at large incident particle energies to being chaotic at smaller energies. For the case of two dimensional (\(\mathbf{x}=(x,\,y)\)) potential scattering it has been found that this can occur in one of two possible ways (Bleher _et al._, 1989, 1990; Ding _et al._, 1990b). We shall illustrate these two routes to chaotic scattering by reference to a simple model situation. As background for the analysis, we now review some relevant facts concerning scattering from a single circularly symmetric monotonic potential hill.
Figure 8.25 shows trajectories incident on such a hill for \(E>E_{\text{m}}\), where \(E\) is the particle energy and \(E_{\text{m}}\) is the potential maximum at the hilltop. As illustrated in the figure, we see that, as the impact parameter decreases from large values, the scattering angle increases, reaches a maximum value \(\phi_{\text{m}}(E)<90^{\circ}\), and then decreases (\(\phi\) is zero at both \(b=\infty\) and \(b=0\)). Furthermore, it can be shown that \(\phi_{\text{m}}(E)\) increases as \(E\) is decreased, approaching \(90^{\circ}\) as \(E\) approaches \(E_{\text{m}}\) from above, \[\lim_{E\to E_{\text{m}}^{+}}\phi_{\text{m}}(E)=90^{\circ}.\] As soon as \(E\) drops below \(E_{\text{m}}\), \(\phi_{\text{m}}\) jumps to \(\phi_{\text{m}}=180^{\circ}\), since now the orbit with \(b=0\) is backscattered. For \(E<E_{\text{m}}\), the scattering angle increases monotonically from \(\phi=0\) to \(\phi=180^{\circ}\) as \(b\) decreases from \(b=\infty\) to \(b=0\). We now consider the following situation. We take the potential to consist of three monotonic hills whose separation is large compared to their widths. Further, we assume that the potential is locally circularly symmetric about each of the three hill tops, Figure 8.26. (The case of noncircularity is discussed in Bleher _et al_. (1990).) We label these hills 1, 2 and 3 and denote the potential maxima at the hilltops by \(E_{\text{m1}}\), \(E_{\text{m2}}\) and \(E_{\text{m3}}\), where, by convention, \(E_{\text{m3}}>E_{\text{m2}}>E_{\text{m1}}\). We distinguish two cases, as shown in Figure 8.27: (\(a\)) case 1 and (\(b\)) case 2. In case 1, the hill of lowest maximum potential energy (hill 1 with maximum potential energy \(E_{\text{m1}}\)) is outside the circle whose diameter is the line joining the two hills of larger maximum potential energy. In case 2, hill 1 is inside this circle.
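The single-hill deflection behavior just reviewed is easy to reproduce by direct integration of the equations of motion. A minimal sketch, assuming a Gaussian hill \(V=E_{\rm m}\exp(-r^{2}/\sigma^{2})\) (an illustrative choice of smooth circularly symmetric monotonic hill, not the text's specific potential) and unit mass:

```python
import math

def scatter(E, b, E_m=1.0, sigma=1.0, dt=1e-3, x0=-8.0, r_exit=9.0):
    """Deflection angle phi (degrees) for a unit-mass particle of energy E
    and impact parameter b incident on the hill V = E_m exp(-r^2/sigma^2),
    integrated with the velocity-Verlet scheme."""
    def force(x, y):
        # F = -grad V = (2/sigma^2) V(r) (x, y)
        V = E_m * math.exp(-(x * x + y * y) / sigma**2)
        return 2.0 * x * V / sigma**2, 2.0 * y * V / sigma**2
    x, y = x0, b
    vx, vy = math.sqrt(2.0 * E), 0.0
    fx, fy = force(x, y)
    while x * x + y * y < r_exit**2:
        vx += 0.5 * dt * fx
        vy += 0.5 * dt * fy
        x += dt * vx
        y += dt * vy
        fx, fy = force(x, y)
        vx += 0.5 * dt * fx
        vy += 0.5 * dt * fy
    return math.degrees(math.atan2(vy, vx))
```

For \(E<E_{\rm m}\) the \(b=0\) orbit is backscattered (\(\phi=180^{\circ}\)), while for \(E>E_{\rm m}\) it passes over the hilltop (\(\phi=0\)), in accord with the jump in \(\phi_{\rm m}\) noted above.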
(In both cases we presume that hill 1 is far from the circle compared with the width of a hill.) We now consider each case in turn. ##### Case 1. Say \(E < E_{\text{m2}} \leq E_{\text{m3}}\) and assume that our orbit is deflected from hill 2 (or hill 3) and travels toward hill 1. In order for this orbit to remain trapped it must be deflected back toward hill 2 or hill 3. Since hill 1 lies outside the circle, the minimum required deflection angle \(\phi_{\text{m}*}\) is greater than \(90^{\circ}\). Thus, recalling the result for a single hill, we see that for \(E > E_{\text{m1}}\) there are no bounded orbits reflecting from hill 1. Consequently, the only periodic orbit that can exist is the one bouncing back and forth between hills 2 and 3. Recalling that there is an infinite number of periodic orbits embedded in the chaotic invariant set, we see that there is no chaos for case 1, when \(E > E_{\text{m1}}\). When \(E\) drops below \(E_{\text{m1}}\), chaos is immediately created, since now the number of unstable periodic orbits increases exponentially with period: we can represent the periodic orbits as a sequence of symbols representing the order in which each hill is visited, and any sequence is possible. Furthermore, when chaos is created the bounded invariant set in phase space may be shown to be hyperbolic (e.g., there are no KAM tori present). Bleher _et al_. (1989, 1990) have called this transition an _abrupt_ bifurcation to chaotic scattering. They also investigate how the fractal dimension characterizing the set of singularities of the scattering function behaves near the bifurcation point. They find that it increases continuously from \(d = 0\) at \(E = E_{\text{m1}}\) as \(E\) decreases below \(E_{\text{m1}}\). ##### Case 2. We distinguish two subcases within case 2: \(E_{\rm m1}\) is small enough that \(\phi_{\rm m*}>\phi_{\rm m1}(E_{\rm m2})\), or \(\phi_{\rm m*}<\phi_{\rm m1}(E_{\rm m2})\).
In the latter case, as soon as \(E\) drops below \(E_{\rm m2}\) it can be shown that there is a transition to hyperbolic chaotic scattering; i.e., this is an abrupt bifurcation to chaotic scattering as in case 1. On the other hand, for \(\phi_{\rm m*}>\phi_{\rm m1}(E_{\rm m2})\), there is a qualitatively different kind of bifurcation to chaotic scattering. We therefore restrict our consideration of case 2 in what follows to \(\phi_{\rm m*}>\phi_{\rm m1}(E_{\rm m2})\). In this case, as \(E\) decreases from \(E_{\rm m2}\), \(\phi_{\rm m1}(E)\) will increase until at some particle energy \(E\equiv E_{\rm m*}>E_{\rm m1}\), we have \(\phi_{\rm m*}=\phi_{\rm m1}(E_{\rm m*})\). For \(E_{\rm m1}<E\leq E_{\rm m*}\), we can have orbits traveling back and forth between hills 2 and 3 in two possible ways: either the path between hills 2 and 3 can pass through the region of hill 1 or it can bypass hill 1 going directly between hills 2 and 3. This is illustrated schematically in Figure 8.28. Thus, we expect that for \(E\) below (but not too close to) \(E_{\rm m*}\) there will be unstable periodic orbits made up of all possible combinations of the two types of paths between hills 2 and 3 shown in Figure 8.28. Thus, there are an infinite number of periodic orbits and hence chaotic scattering. The way in which this situation arises in this case is very different from what we have for case 1 and is _not_ an abrupt bifurcation. In particular, as \(E\) decreases, producing chaos, there is no change in the topology of the energy surface. How can the infinite number of unstable periodic orbits necessary for chaos be created in this case? In the abrupt bifurcation it is the change in the energy surface topology that occurs when \(E\) passes through one of the \(E_{{\rm m}i}\) which creates the infinity of periodic orbits.
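The infinity of orbits created in the abrupt case can be counted explicitly: once all three hills backscatter, itineraries are arbitrary symbol sequences on \(\{1,2,3\}\) with no symbol immediately repeated, and the number of period-\(n\) itineraries is \({\rm tr}(T^{n})\) for the corresponding transition matrix \(T\). A minimal sketch (plain Python, no external libraries):

```python
# Counting periodic orbits of the three-hill system via symbol sequences:
# T[i][j] = 1 for i != j (a bounce may go to either of the other two hills),
# and tr(T^n) counts the closed itineraries of period n.
def mat_mul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def count_period_n(n, hills=3):
    T = [[0 if i == j else 1 for j in range(hills)] for i in range(hills)]
    P = T
    for _ in range(n - 1):
        P = mat_mul(P, T)
    return sum(P[i][i] for i in range(hills))  # tr(T^n) = 2^n + 2(-1)^n
```

The count grows like \(2^{n}\); i.e., the topological entropy of the invariant set created at the abrupt bifurcation is \(\ln 2\).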
In the absence of such a change in topology, the only mechanisms available for the creation of unstable periodic orbits are the standard generic bifurcations of smooth Hamiltonian systems with two degrees of freedom. In this case chaotic scattering first appears when a saddle node bifurcation occurs in the scattering region (Ding _et al._, 1990b). This bifurcation produces two periodic orbits bouncing between hills 2 and 3 by way of hill 1; one of these periodic orbits is stable (the node) and the other is unstable (the saddle). Following the bifurcation there is a set of nested KAM tori surrounding the node periodic orbit. Moving outward from the node, we eventually encounter the last KAM torus enclosing the node orbit. Past this last KAM torus there is a region of phase space where chaotic trajectories occur, and which can be reached by scattering particles (the region enclosed by the last KAM surface is inaccessible to scattering particles since they must originate from outside it). It is the chaos in the region surrounding the last KAM torus around the node orbit which is responsible for the appearance of chaotic scattering in this case. Note that in this case the chaotic invariant set is nonhyperbolic since it is bounded by a KAM curve (for which the Lyapunov exponents are necessarily zero). Decreasing \(E\) still further, for the situation of narrow hills, all the KAM surfaces will be destroyed, and we obtain a situation where the chaotic set is hyperbolic. In this case all bounded orbits are composed of legs as shown in Figure 8.28. ## Problems 1. Consider a one-dimensional map with a nonquadratic maximum, \(x_{n+1}=a-|x_{n}|^{\varepsilon}\), and use the renormalization group analysis of Section 8.1 to obtain an approximate value for the Feigenbaum number \(\delta\) for this case. 2. Obtain Eq. (8.4) from Eq. (8.3). 3.
The second iterate of a map with an inverse period doubling bifurcation at \(\varepsilon=0\) may be put in the normal form \(x_{n+1}=x_{n}^{3}+(1+\varepsilon)x_{n}\) (why is this reasonable?). Using this normal form, verify the result \(\overline{T}\sim\varepsilon^{-1}\) for Type III intermittency. 4. Using a computer, demonstrate Pomeau-Manneville Type I intermittency by plotting orbits for the logistic map \(x_{n+1}=rx_{n}(1-x_{n})\) for a value of \(r\) just below the value at the bottom of the period three window. Also plot \(x_{3n}\) versus \(n\) (i.e., plot every third iterate), and comment on the result. 5. Repeat Problem 4 but for a value of \(r\) just above the value at the top of the period three window, thus demonstrating crisis-induced intermittency. 6. Consider the map \(M(x,\,r)\) shown in Figure 8.29. As the parameter \(r\) changes continuously from \(r_{1}\) to \(r_{2}\), the map function \(M(x,\,r)\) changes continuously from the shape shown in Figure 8.29(\(a\)) to that in Figure 8.29(\(b\)). Show that there is a basin boundary metamorphosis for some \(r\)-value between \(r_{1}\) and \(r_{2}\) and verify that the basin boundary experiences a discontinuous jump. Also discuss how the metamorphosis affects the accessible periodic boundary orbits for this case. Show that the fixed point of \(M(x,\,r_{2})\) at \(x=0\) is inaccessible. 7. Consider a smooth \(D\)-dimensional map, and assume that there is an attractor containing an unstable periodic orbit \(A\) which has \(D-1\) unstable directions with eigenvalues \(\lambda_{1},\,\lambda_{2},\ldots,\lambda_{D-1}\) and one stable eigenvalue \(\lambda_{D}\). Assume that the unstable manifold of \(A\) is a \((D-1)\)-dimensional surface through \(A\), and that a crisis occurs when this surface pokes through a basin boundary as a parameter \(p\) increases through a critical value \(p_{c}\). See Figure 8.30.
Obtain a formula for the crisis exponent \(\gamma\) in terms of these eigenvalues, and show that for dissipative systems (magnitude of the Jacobian determinant \(<1\) everywhere) the exponent lies in the range \[\frac{D+1}{2}\geq\gamma\geq\frac{D-1}{2}.\] ## Notes 1. Since any point on its stable manifold maps forward to the periodic boundary orbit as time tends to \(+\infty\), and since the attractor is an invariant closed set, the attractor must touch the unstable periodic boundary orbit if it touches its stable manifold. 2. Subsequently we shall see a two-dimensional map example where the map is not strictly dissipative (it has a local region in which there is expansion in two dimensions). In this case, a power law dependence, Eq. (8.13), does not hold, and the crisis is not due to a tangency of stable and unstable manifolds. 3. This seems to be true for many typical nonlinear systems. It is not, however, true for Julia set basin boundaries of complex analytic polynomial maps (discussed, for example, in Devaney (1986)). In the latter case, even though the boundary is fractal, all its points can be accessible. By taking real and imaginary parts, complex analytic maps may be regarded as two-dimensional real maps, and we adopt the point of view that, within the space of two-dimensional maps, these are nontypical, since the complex analyticity condition imposes a special restriction on these maps that endows them with (fascinating) special properties. See McDonald _et al_. (1985) for further discussion. ## Chapter 9 Multifractals In Chapters 3 and 5 we have discussed the fractal dimension of strange attractors, as well as the fractal dimension of nonattracting chaotic sets. We have found that, not only are these sets fractal, but the measures associated with them can also have fractal-like properties. To characterize such fractal measures we have discussed the dimension spectrum \(D_{q}\).
A measure for which \(D_{q}\) varies with \(q\) is called a _multifractal_ measure.1 In this chapter we shall extend our discussion, begun in Chapter 3, of multifractals. In particular, we shall treat several more advanced developments concerning this topic. ### 9.1 The singularity spectrum \(f(\alpha)\) Imagine that we cover an invariant set \(A\) with cubes from a grid of edge length \(\varepsilon\). Let \(\mu_{i}=\mu(C_{i})\), where \(C_{i}\) denotes the \(i\)th cube, and \(\mu\) is a probability measure on \(A\), \(\mu(A)\equiv 1\). We assume that \(\varepsilon\) is small so that the number of cubes is very large. To each cube we associate a _singularity index_ \(\alpha_{i}\) via \[\mu_{i}\equiv\varepsilon^{\alpha_{i}}. \tag{9.1}\] We then count the number of cubes for which \(\alpha_{i}\) is in a small range between \(\alpha\) and \(\alpha+\Delta\alpha\). For very small \(\varepsilon\), and, hence, large numbers of cubes, we can, as an idealization, pass to the continuum limit and replace \(\Delta\alpha\) by a differential \(\mathrm{d}\alpha\). We then assume that the number of cubes with singularity index in the range \(\alpha\) to \(\alpha+\mathrm{d}\alpha\) is of the form \[\rho(\alpha)\varepsilon^{-f(\alpha)}\,\mathrm{d}\alpha. \tag{9.2}\] This form can be motivated by the following (see Benzi _et al._ (1984) and Frisch and Parisi (1985), whose considerations included the fractal properties of the spatial distribution of viscous energy dissipation of a turbulent fluid in the limit of large Reynolds number (i.e., small viscosity)). For every point **x** on the invariant set \(A\) we calculate its pointwise dimension \(D_{\rm p}({\bf x})\) (refer back to Section 3.6 for the definition of the pointwise dimension). We then consider the set of all points with a particular value \(\alpha\) of the pointwise dimension, \(D_{\rm p}({\bf x})=\alpha\), and we calculate the box counting dimension of this set. We denote this dimension \(\hat{f}(\alpha)\).
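The definitions (9.1) and (9.2) can be made concrete with the simplest multifractal, a binomial measure on \([0,1]\) that puts weights \(p\) and \(1-p\) on the two halves of each dyadic interval (an illustrative model, not from the text). At level \(n\) the box measures and their multiplicities are known exactly, so the singularity indices \(\alpha_{i}\) and the counting exponent \(f\) can be read off directly:

```python
import math

# Direct box counting for a binomial measure: at level n the 2^n dyadic
# boxes of size eps = 2^(-n) carry measures p^m (1-p)^(n-m), with
# multiplicity C(n, m).  From (9.1), alpha = ln(mu)/ln(eps); from (9.2),
# f is read off as ln(count)/ln(1/eps).
def f_alpha_histogram(p=0.3, n=2000):
    eps_log = -n * math.log(2.0)                 # ln(eps)
    pairs = []
    for m in range(n + 1):
        log_mu = m * math.log(p) + (n - m) * math.log(1.0 - p)
        alpha = log_mu / eps_log
        log_count = (math.lgamma(n + 1) - math.lgamma(m + 1)
                     - math.lgamma(n - m + 1))   # ln C(n, m)
        pairs.append((alpha, log_count / (-eps_log)))
    return pairs
```

The resulting \((\alpha,\,f)\) pairs trace out a concave curve whose maximum \(f\) equals the dimension of the support (here 1).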
If we interpret (9.1) as resulting from the point in the center of the box having a pointwise dimension \(\alpha_{i}\), then we might be tempted to identify the quantity \(f(\alpha)\) defined by (9.2) with \(\hat{f}(\alpha)\), the dimension of the set \(D_{\rm p}({\bf x})=\alpha\). This is a justification for the assumed form in (9.2). Work by Bohr and Rand (1987) and by Collet _et al._ (1987) shows that \(f(\alpha)\) is the dimension of the set \(D_{\rm p}({\bf x})=\alpha\) (i.e., \(f(\alpha)=\hat{f}(\alpha)\)) in the case of hyperbolic invariant sets. We caution, however, that this interpretation is not always correct. For a nonhyperbolic attractor (e.g., the Henon attractor), it has been found that \(f(\alpha)\) can indeed be the fractal dimension of the set of **x** for which \(D_{\rm p}({\bf x})=\alpha\), but only when \(\alpha\) is in a certain range, while when \(\alpha\) is outside that range, \(f(\alpha)\) is not the fractal dimension of the set \(D_{\rm p}({\bf x})=\alpha\) (see, for example, Ott _et al._ (1989a)). Following Halsey _et al._ (1986), we now relate the quantity \(f(\alpha)\) defined by (9.2) to the dimension spectrum \(D_{q}\). Again using a covering from a grid of edge length \(\varepsilon\) we have (Eq. (3.14)) \[D_{q}=\frac{1}{1-q}\lim_{\varepsilon\to 0}\frac{\ln I(q,\,\varepsilon)}{\ln(1/\varepsilon)}, \tag{9.3}\] where \[I(q,\,\varepsilon)=\sum_{i}\mu_{i}^{q}, \tag{9.4}\] and \(\mu_{i}=\mu(C_{i})\). Making use of (9.2) and (9.1), we have \[I(q,\,\varepsilon) = \int{\rm d}\alpha^{\prime}\rho(\alpha^{\prime})\varepsilon^{-f(\alpha^{\prime})}\varepsilon^{q\alpha^{\prime}} = \int{\rm d}\alpha^{\prime}\rho(\alpha^{\prime})\exp\{[f(\alpha^{\prime})-q\alpha^{\prime}]\ln(1/\varepsilon)\}. \tag{9.5}\] Since we are interested in the limit as \(\varepsilon\) goes to zero, we can regard \(\ln(1/\varepsilon)\) as large. In this case, the main contribution to the integral over \(\alpha^{\prime}\) comes from the neighborhood of the maximum value of the function \(f(\alpha^{\prime})-\alpha^{\prime}q\).
Assuming \(f(\alpha)\) is smooth, the maximum is located at \(\alpha^{\prime}=\alpha(q)\) given by \[\frac{\mathrm{d}}{\mathrm{d}\alpha^{\prime}}\left[f(\alpha^{\prime})-\alpha^{\prime}q\right]|_{\alpha^{\prime}=\alpha(q)}=0,\] provided that \[\frac{\mathrm{d}^{2}}{\mathrm{d}(\alpha^{\prime})^{2}}[f(\alpha^{\prime})-\alpha^{\prime}q]|_{\alpha^{\prime}=\alpha(q)}<0,\] or \[f^{\prime}(\alpha(q))=q, \tag{9.6a}\] \[f^{\prime\prime}(\alpha(q))<0. \tag{9.6b}\] We then have from (9.5) that \[I(q,\,\varepsilon)\simeq\exp\{[f(\alpha(q))-q\alpha(q)]\ln(1/\varepsilon)\}\int{\rm d}\alpha^{\prime}\rho(\alpha^{\prime})\varepsilon^{-\frac{1}{2}f^{\prime\prime}(\alpha(q))[\alpha^{\prime}-\alpha(q)]^{2}}\sim\exp\{[f(\alpha(q))-q\alpha(q)]\ln(1/\varepsilon)\},\] which, when inserted in (9.3), yields \[D_{q}=\frac{1}{q-1}\left[q\alpha(q)-f(\alpha(q))\right]. \tag{9.7}\] Thus, if we know \(f(\alpha)\), then we can determine \(D_{q}\) from Eqs. (9.6a) and (9.7). In particular, for each value of \(\alpha\), Eq. (9.6a) determines the corresponding \(q\). Substituting these \(q\) and \(\alpha\) values in (9.7) gives the value of \(D_{q}\) corresponding to the determined value of \(q\). Varying \(\alpha\), we therefore obtain a parametric specification of \(D_{q}\). To proceed in the other direction, we multiply (9.7) by \(q-1\), differentiate with respect to \(q\), and use (9.6a), to obtain \[\alpha(q)=\frac{\mathrm{d}}{\mathrm{d}q}[(q-1)D_{q}]=\tau^{\prime}(q), \tag{9.8a}\] where we have introduced \[\tau^{\prime}(q)\equiv\mathrm{d}\tau/\mathrm{d}q\] and \[\tau(q)=(q-1)D_{q}. \tag{9.8b}\] Thus, if \(D_{q}\) is given, \(\alpha(q)\) can be determined from (9.8a), and then \(f(\alpha)\) can be determined from (9.7), \[f(\alpha(q))=q\frac{\mathrm{d}}{\mathrm{d}q}[(q-1)D_{q}]-(q-1)D_{q}=q\tau^{\prime}(q)-\tau(q). \tag{9.9}\] For each value of \(q\), (9.8) and (9.9) give a value of \(\alpha\) and the corresponding \(f(\alpha)\), thus parametrically specifying the function \(f(\alpha)\).
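The parametric prescription (9.8a), (9.9) is easily implemented numerically: differentiate \(\tau(q)\) by central differences to get \(\alpha(q)\), then form \(f=q\alpha-\tau\). As a check we can use a measure with a closed-form \(\tau(q)\), here a binomial measure on \([0,1]\) with weights \(p\) and \(1-p\) on scale-\(\frac{1}{2}\) pieces (an illustrative model, not from the text):

```python
import math

def legendre_point(tau, q, h=1e-5):
    """alpha(q) = tau'(q) by central difference (Eq. (9.8a));
    f(alpha(q)) = q*alpha - tau(q) (Eq. (9.9))."""
    alpha = (tau(q + h) - tau(q - h)) / (2.0 * h)
    return alpha, q * alpha - tau(q)

def tau_binomial(q, p=0.3):
    # For the binomial measure, sum_i mu_i^q = (p^q + (1-p)^q)^n over the
    # 2^n boxes of size 2^(-n), so tau(q) = (q-1) D_q has the closed form:
    return -math.log(p**q + (1.0 - p)**q) / math.log(2.0)
```

At \(q=0\) this returns \(f=D_{0}=1\) (the dimension of the support), and at \(q=1\) it returns \(f=\alpha=D_{1}\), the tangency point of the \(f(\alpha)\) curve with the line \(f(\alpha)=\alpha\).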
As an example, consider the attractor of the generalized baker's map (Figure 3.4). In this case \(D_{q}\) is the solution of the transcendental equation,2 \[\tilde{\alpha}^{q}\lambda_{a}^{(1-q)(D_{q}-1)}+\tilde{\beta}^{q}\lambda_{b}^{(1-q)(D_{q}-1)}=1. \tag{3.23}\] This gives (we assume for definiteness that \(\ln\tilde{\beta}/\ln\lambda_{b}>\ln\tilde{\alpha}/\ln\lambda_{a}\)) \[D_{\infty} = 1+\frac{\ln\tilde{\alpha}}{\ln\lambda_{a}},\] \[D_{1} = 1+\frac{\tilde{\alpha}\ln\tilde{\alpha}+\tilde{\beta}\ln\tilde{\beta}}{\tilde{\alpha}\ln\lambda_{a}+\tilde{\beta}\ln\lambda_{b}},\] \[D_{-\infty} = 1+\frac{\ln\tilde{\beta}}{\ln\lambda_{b}}.\] Figure 9.1(_a_) shows a plot of \(D_{q}\) versus \(q\) (recall that \(D_{q}\) is a nonincreasing function of \(q\) (Eq. (3.16))). This dependence is typical for a hyperbolic attractor with a multifractal measure. Figure 9.1(_b_) shows the corresponding \(f(\alpha)\). From (9.8), \(\alpha(q)\) increases from \(\alpha_{\min}=D_{\infty}\) to \(\alpha_{\max}=D_{-\infty}\) as \(q\) decreases from \(+\infty\) to \(-\infty\). From (9.1) we associate \(\alpha=\alpha_{\max}\) with the most rarefied regions of the measure, while we associate \(\alpha=\alpha_{\min}\) with the most concentrated regions of the measure. By (9.6a) the slope of \(f(\alpha)\) versus \(\alpha\) is infinite at \(\alpha=D_{\pm\infty}\). We also note the concave down shape of the \(f(\alpha)\) curve, in accordance with (9.6b). The maximum value of \(f(\alpha)\) occurs at \(q=0\) (Eq. (9.6a)), and at this point \(f(\alpha)=D_{0}\) (Eq. (9.9)). That the maximum value of \(f(\alpha)\) is \(D_{0}\) is very reasonable since this is the dimension of the set \(A\) which supports the measure; any of the subsets on which \(D_{\rm p}({\bf x})=\alpha\) are contained within \(A\) and hence cannot have a dimension that exceeds that of \(A\).
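The transcendental equation above is easily solved numerically for \(D_{q}\), e.g., by bisection. A minimal sketch; the parameter values \(\tilde{\alpha}=0.6\), \(\tilde{\beta}=0.4\), \(\lambda_{a}=0.2\), \(\lambda_{b}=0.35\) are illustrative choices satisfying the convention \(\ln\tilde{\beta}/\ln\lambda_{b}>\ln\tilde{\alpha}/\ln\lambda_{a}\):

```python
import math

def Dq_baker(q, a_t=0.6, b_t=0.4, lam_a=0.2, lam_b=0.35, tol=1e-12):
    """Solve a_t^q lam_a^((1-q)(D-1)) + b_t^q lam_b^((1-q)(D-1)) = 1
    for D = D_q of the generalized baker's map (q != 1) by bisection."""
    def g(D):
        t = (1.0 - q) * (D - 1.0)
        return a_t**q * lam_a**t + b_t**q * lam_b**t - 1.0
    lo, hi = 0.0, 3.0            # bracket for the dimension
    if g(lo) < 0.0:              # orient so that g(lo) > 0 > g(hi)
        lo, hi = hi, lo
    while abs(hi - lo) > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Scanning \(q\) reproduces the nonincreasing \(D_{q}\) curve of Figure 9.1(_a_), with \(D_{q}\to 1+\ln\tilde{\alpha}/\ln\lambda_{a}\) as \(q\to+\infty\) and \(D_{q}\to 1+\ln\tilde{\beta}/\ln\lambda_{b}\) as \(q\to-\infty\).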
Finally, we note that at \(q=1\) we have from (9.7) that \(f(\alpha)=\alpha\), from (9.6a) that \(f^{\prime}(\alpha)=1\), and from (9.8) that \(\alpha=D_{1}\). Thus, the straight line \(f(\alpha)=\alpha\) (shown dashed in Figure 9.1) is tangent to the \(f(\alpha)\) curve at the point \(f(\alpha)=\alpha=D_{1}\). The fact that \(\alpha\) ranges from \(D_{\infty}=1+\ln\tilde{\alpha}/\ln\lambda_{a}\) to \(D_{-\infty}=1+\ln\tilde{\beta}/\ln\lambda_{b}\) can be understood by noting that iteration of the unit square \(n\) times produces \(2^{n}\) vertical strips of varying widths \(\lambda_{a}^{m}\lambda_{b}^{n-m}\), each of which contains a measure \(\tilde{\alpha}^{m}\tilde{\beta}^{n-m}\) (see Section 3.5). Thus, using (9.1), we obtain a singularity index, \[\alpha=1+\frac{\ln(\tilde{\alpha}^{m}\tilde{\beta}^{n-m})}{\ln(\lambda_{a}^{m}\lambda_{b}^{n-m})}=1+\frac{(m/n)\ln\tilde{\alpha}+(1-m/n)\ln\tilde{\beta}}{(m/n)\ln\lambda_{a}+(1-m/n)\ln\lambda_{b}}, \tag{9.10}\] for a square of edge length \(\varepsilon=\lambda_{a}^{m}\lambda_{b}^{n-m}\) centered in one of the vertical strips of width \(\lambda_{a}^{m}\lambda_{b}^{n-m}\). As \(m/n\) increases from \(0\) to \(1\), the singularity index \(\alpha\) given by (9.10) decreases monotonically from \(1+\ln\tilde{\beta}/\ln\lambda_{b}=D_{-\infty}\) to \(1+\ln\tilde{\alpha}/\ln\lambda_{a}=D_{\infty}\) (recall that we assume \(\ln\tilde{\beta}/\ln\lambda_{b}>\ln\tilde{\alpha}/\ln\lambda_{a}\)). The \(f(\alpha)\) spectrum has proven useful for characterizing multifractal chaotic attractors and has been determined experimentally in a number of studies. For example, Jensen _et al._ (1985) determine \(f(\alpha)\) for the attractor of an experimental forced Rayleigh-Bénard system. The experiment is done at parameter values corresponding to quasiperiodic motion at the golden mean rotation number and a critical nonlinearity strength such that the golden mean torus is at the borderline of being destroyed.
Jensen _et al._ compare their experimental results to results for the circle map, Eq. (6.11), choosing \(k=1\) (corresponding to the borderline of torus destruction) and adjusting \(w\) to obtain the golden mean rotation number. They find that \(f(\alpha)\) is, to within experimental accuracy, the same for the Rayleigh-Bénard experiment and for the circle map. They take this as evidence that there is a type of universal global orbit behavior for systems at the borderline of torus destruction. The relationships between \(f(\alpha)\) and \(\alpha\), on the one hand, and \(D_{q}\) and \(q\), on the other hand, are suggestive of the relationship between the free energy and the entropy in thermodynamics. Indeed, a rather complete formal analogy, based upon the partition function formalism to be discussed in the next section, can be constructed (e.g., see Badii, 1987; Fujisaka and Inoue, 1987; Bohr and Tel, 1988; Tel, 1988 and Mori _et al._, 1989 and references therein). Introducing the standard quantities of thermodynamics, \[\begin{array}{l}{\cal F}\equiv\mbox{the free energy,}\\ {\cal T}\equiv\mbox{the temperature (in energy units),}\\ {\cal U}\equiv\mbox{the internal energy per unit volume,}\\ {\cal S}\equiv\mbox{the entropy,}\end{array}\] we have the analogy given by Table 9.1, where \(\hat{\beta}=1/{\cal T}\). In thermodynamics, phase transitions are manifested as nonanalytic dependences of the thermodynamic quantities. In particular, at a first order phase transition, the free energy \({\cal F}\) has a discontinuous derivative with variation of \(\hat{\beta}\) (i.e., with variation of the temperature \({\cal T}\)). From Table 9.1 we see that the analogous phenomenon in the multifractal formalism3 would thus be a discontinuity of \({\rm d}\tau(q)/{\rm d}q\) (equivalently, \({\rm d}D_{q}/{\rm d}q\); see Eq. (9.8)) with variation of \(q\).
A very simple example of a phase transition in the multifractal formalism comes from considering the logistic map (2.10) at \(r=4\) (Ott _et al._, 1984b). In this case, we have seen that the natural invariant measure results in a density (Eq. (2.13)), \[\rho(x)=\frac{1}{\pi[x(1-x)]^{1/2}}\,, \tag{9.11}\] \(0\leq x\leq 1\). Dividing the interval \([0,\,1]\) into \(k\) intervals, \((i/k,\,(i+1)/k]\), of equal size \(\varepsilon_{i}=1/k\), we have \[\mu_{i}=\int_{i/k}^{(i+1)/k}\rho(x)\,{\rm d}x.\] Using this in \(\sum\mu_{i}^{q}\), it can be shown from the definition of \(D_{q}\), Eq. (3.14), that (Problem 1) \[D_{q}=\cases{1&for $q\leq 2$,\cr q/[2(q-1)]&for $q\geq 2$} \tag{9.12}\] Thus, \(D_{q}\) is continuous but has a discontinuous derivative at \(q=q_{\rm T}\equiv 2\), which we refer to as the phase transition point (Figure 9.2(\(a\))). Utilizing (9.8) and (9.9) we see that for \(q<q_{\rm T}\), Eq. (9.12) yields \(\alpha=1\) and \(f=1\), while for \(q>q_{\rm T}\) we have \(\alpha=\frac{1}{2}\) and \(f=0\) (see Figure 9.3(\(a\))). The result \(\alpha=1\), \(f=1\) for \(q<q_{\rm T}\) corresponds to the fact that the singularity index for points in \(0<x<1\) (a one dimensional set) is 1 by virtue of the smooth variation of \(\rho(x)\) in this range. \begin{table} \begin{tabular}{l l} Thermodynamics & Multifractal formalism \\ \hline \(\hat{\beta}\mathcal{F}\) & \(\tau(q)\) \\ \(\mathcal{U}(\hat{\beta})\) & \(\alpha(q)\) \\ \(\mathcal{S}(\mathcal{U})\) & \(f(\alpha)\) \\ \(\mathcal{U}=\frac{\mathrm{d}(\hat{\beta}\mathcal{F})}{\mathrm{d}\hat{\beta}}\) & \(\alpha(q)=\frac{\mathrm{d}\tau(q)}{\mathrm{d}q}\) \\ \(\mathcal{S}=\hat{\beta}(\mathcal{U}-\mathcal{F})\) & \(f(\alpha)=q\alpha-\tau(q)\) \\ \(\frac{\mathrm{d}\mathcal{S}}{\mathrm{d}\mathcal{U}}=\hat{\beta}\) & \(\frac{\mathrm{d}f(\alpha)}{\mathrm{d}\alpha}=q\) \\ \end{tabular} \end{table} Table 9.1: _Analogy between thermodynamics and the multifractal formalism._
The result \(\alpha=\frac{1}{2}\) and \(f=0\) for \(q>q_{\rm T}\) corresponds to the fact that \(\rho(x)\) has a singularity index of \(\alpha=\frac{1}{2}\) at the two points \(x=0\), \(1\) (a set of dimension zero; hence \(f=0\)). In Figure 9.3(_a_) we have joined the points \((\alpha,\,f)=(\frac{1}{2},\,0)\) and \((1,\,1)\) by a straight line. The derivations of Eqs. (9.7)-(9.9) assume smooth \(f(\alpha)\) with \(f^{\prime\prime}<0\) (Eq. (9.6b)); this is not so for this case (and other phase transition cases as well), for which a more careful treatment of the evaluation of the integral (9.5) yields the dependence shown in Figure 9.3(_a_). For hyperbolic two dimensional maps \({\rm d}D_{q}/{\rm d}q\) is expected to be real analytic (cf. Ruelle, 1978) and hence continuous (e.g., Figure 9.1), so that phase transitions do not occur in the hyperbolic case. We do, however, expect phase transitions for nonhyperbolic maps, such as the Henon map. Indeed, the quadratic maximum of the logistic map (which is responsible for the \(\alpha=\frac{1}{2}\) singularities at \(x=0\), \(1\)) can, in some sense, be thought of as corresponding to the points of tangencies of the stable and unstable manifolds of the Henon map (or similar such maps), and it is these tangencies which are responsible for the nonhyperbolicity. Figure 9.3 shows schematics of the dependences of \(f(\alpha)\) on \(\alpha\) for the case of (_a_) the logistic map at \(r=4\) and (_b_) the Henon map. The logistic map at \(r=4\) is not, however, an entirely satisfactory indication of what to expect for typical nonhyperbolic two dimensional maps. A detailed consideration of the phase transition behavior of the Henon map has been carried out by Cvitanovic _et al._ (1988), Grassberger _et al._ (1988) and Ott _et al._ (1989a) with the results indicated in Figures 9.2(_b_) and 9.3(_b_).
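For the exactly solvable logistic case above, the transition (9.12) can be checked directly, since the density (9.11) integrates in closed form, \(\int_{0}^{x}\rho\,{\rm d}x=(2/\pi)\arcsin\sqrt{x}\), giving exact bin measures \(\mu_{i}\). A minimal sketch:

```python
import math

# Numerical check of Eq. (9.12) directly from the definition (3.14),
# using the closed-form cumulative measure M(x) = (2/pi) arcsin(sqrt(x))
# of the density (9.11) to get the exact bin measures mu_i.
def Dq_logistic(q, k=200000):
    M = lambda x: (2.0 / math.pi) * math.asin(math.sqrt(x))
    s, prev = 0.0, 0.0
    for i in range(1, k + 1):
        cur = M(i / k)
        s += (cur - prev) ** q      # sum of mu_i^q
        prev = cur
    return math.log(s) / ((q - 1.0) * math.log(1.0 / k))
```

Finite-\(k\) estimates approach \(D_{q}=1\) for \(q<2\) and \(D_{q}=q/[2(q-1)]\) for \(q>2\), with the slowest convergence near the transition point \(q_{\rm T}=2\).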
The phase transition point occurs at approximately \(q_{\rm T}=2.3\), at which the following relationship is satisfied \[(q_{\rm T}-1)D_{\rm T}=2q_{\rm T}-3 \tag{9.13}\] (Grassberger _et al._, 1988; Ott _et al._, 1989a). We see (Figure 9.2(_b_)) that \(D_{q}\) decreases smoothly with increasing \(q\) in both \(q<q_{\rm T}\) and \(q>q_{\rm T}\) but has a discontinuous decrease in its derivative at \(q=q_{\rm T}\). The singularity spectrum \(f(\alpha)\) in Figure 9.3(_b_) has the same characteristic concave down shape (\(f^{\prime\prime}(\alpha)<0\)) as for the generalized baker's map (Figure 9.1(_b_)), but only in the range \(\alpha>\alpha_{\rm T}\). For \(\alpha\) just below \(\alpha_{\rm T}\), there is a range where \(f(\alpha)\) is a straight line joining the curve for \(\alpha>\alpha_{\rm T}\) with \({\rm d}f(\alpha)/{\rm d}\alpha\) continuous. The behavior of \(f(\alpha)\) at still smaller \(\alpha\) is not at present understood. The range \(q<q_{\rm T}\) in Figure 9.2(_b_) corresponds to the range \(\alpha>\alpha_{\rm T}\) in Figure 9.3(_b_). As in the case of hyperbolic attractors, \(f(\alpha)\) in \(\alpha>\alpha_{\rm T}\) (but _not_ in \(\alpha<\alpha_{\rm T}\)) may be interpreted as the box counting dimension of the set of points with \(D_{\rm p}({\bf x})=\alpha\). We, therefore, refer to \(\alpha>\alpha_{\rm T}\) and \(q<q_{\rm T}\) as the 'hyperbolic range.' ### 9.2 The partition function formalism For the purposes of this section an understanding of the Hausdorff definition of the dimension of a set will be essential. Therefore, those readers not already familiar with the Hausdorff dimension should refer to the appendix to Chapter 3 where it is described.
As in that appendix (and using the same notation), we consider a set \(A\) and imagine covering it by a collection of subsets, \(S_{1}\), \(S_{2}\),..., \(S_{N}\), such that the diameters of these subsets are all less than or equal to some number \(\delta\), \[|S_{i}|\leq\delta.\] We denote this collection of subsets \(\{S_{i}\}\). Let \(\mu\) be a measure on the set \(A\), and let \(\mu_{i}\) be the \(\mu\) measure of the subset \(S_{i}\), \[\mu_{i}=\mu(S_{i}).\] We define the _partition function_ (Grassberger, 1985; Halsey _et al._, 1986), \[\Gamma_{q}(\tilde{\tau},\,\{S_{i}\},\,\delta)\equiv\sum_{i=1}^{N}\mu_{i}^{q}/\varepsilon_{i}^{\tilde{\tau}}, \tag{9.14}\] where \(\varepsilon_{i}\) denotes the diameter of \(S_{i}\) or \(\varepsilon_{i}=|S_{i}|\). Note that (9.14) reduces to the quantity \(\Gamma_{\rm H}^{d}\) in Eq. (3.51) if we set \(q=0\) and \(\tilde{\tau}=-d\) in (9.14). (The term 'partition function' is motivated by the thermodynamic analogy.) It will later result that, in the limit of large \(N\), the partition function behaves as in Figure 3.17; i.e., it makes an abrupt transition from \(+\infty\) to zero with variation of \(\tilde{\tau}\). Furthermore, we use the transition point \(\tilde{\tau}=\tilde{\tau}(q)\) to define a dimension quantity \(\tilde{D}_{q}\) via \(\tilde{\tau}(q)=(q-1)\tilde{D}_{q}\). Thus, the partition function given by (9.14) is a natural generalization of the quantity \(\Gamma_{\rm H}^{d}\) used in defining the Hausdorff dimension of a set. In particular, for \(q=0\), our Eq. (9.14) shows that the quantity \(\tilde{\tau}\) is \(-D_{\rm H}\) at the transition, where \(D_{\rm H}\) is the Hausdorff dimension of \(A\). Thus, \(\tilde{D}_{0}=D_{\rm H}\).
Two regions will be considered, \[\mbox{Region I: }q\geq 1,\ \tilde{\tau}\geq 0, \tag{9.15}\] \[\mbox{Region II: }q\leq 1,\ \tilde{\tau}\leq 0. \tag{9.16}\] In region I we choose the set of coverings \(\{S_{i}\}\) so as to maximize the partition function (subject to the constraint that \(|S_{i}|=\varepsilon_{i}<\delta\)). In region II we choose the covering to minimize the partition function (again subject to the constraint that \(\varepsilon_{i}<\delta\)). Thus, we define \[\Gamma_{q}(\tilde{\tau},\,\delta)=\sup_{S_{i}}\Gamma_{q}(\tilde{\tau},\,\{S_{i}\},\,\delta)\quad\mbox{(in region I)}, \tag{9.17a}\] \[\Gamma_{q}(\tilde{\tau},\,\delta)=\inf_{S_{i}}\Gamma_{q}(\tilde{\tau},\,\{S_{i}\},\,\delta)\quad\mbox{(in region II)}. \tag{9.17b}\] Next, we take the limit of \(\delta\) going to zero and define \[\Gamma_{q}(\tilde{\tau})=\lim_{\delta\to 0}\Gamma_{q}(\tilde{\tau},\,\delta). \tag{9.18}\] As in the definition of the Hausdorff dimension, given in the appendix to Chapter 3, the quantity \(\Gamma_{q}(\tilde{\tau})\) makes a transition between \(+\infty\) and zero as \(\tilde{\tau}\) varies through some critical value which we denote \(\tilde{\tau}(q)\), \[\Gamma_{q}(\tilde{\tau})=\left\{\begin{array}{ll}+\infty&\mbox{for }\tilde{\tau}>\tilde{\tau}(q),\\ 0&\mbox{for }\tilde{\tau}<\tilde{\tau}(q).\end{array}\right. \tag{9.19}\] (The transition as \(\tilde{\tau}\) _increases_ through \(\tilde{\tau}(q)\) is from zero to \(+\infty\) (rather than from \(+\infty\) to zero) because, as is evident from (9.14), \(\Gamma_{q}(\tilde{\tau},\,\{S_{i}\},\,\delta)\) increases monotonically with \(\tilde{\tau}\).) We define the dimension \(\tilde{D}_{q}\) as \[(q-1)\tilde{D}_{q}=\tilde{\tau}(q). \tag{9.20}\] Figure 9.4 shows a schematic illustrating Eqs. (9.19). (For comparison with Figure 3.17, recall that the quantity \(d\) in Figure 3.17 is analogous to \(-\tilde{\tau}\).)
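As a concrete illustration of the transition (9.19) (an example added here, using a set not discussed in the text), consider the middle-third Cantor set with equal weight \(2^{-n}\) assigned to each of the \(2^{n}\) level \(n\) intervals of length \(3^{-n}\), and use this natural level \(n\) covering rather than carrying out the optimization (9.17). Every term of (9.14) is then identical, and \(\Gamma_{q}\) diverges or vanishes as \(n\to\infty\) (i.e., \(\delta=3^{-n}\to 0\)) according to whether \(\tilde{\tau}\) lies above or below \(\tilde{\tau}(q)=(q-1)\ln 2/\ln 3\), so that \(\tilde{D}_{q}=\ln 2/\ln 3\) for every \(q\), as expected for this uniform measure. A sketch:

```python
import math

def gamma(q, tau, n):
    """Partition function (9.14) for the level-n covering of the middle-third
    Cantor set with the uniform measure: 2**n subsets, each carrying measure
    mu_i = 2**-n and having diameter eps_i = 3**-n."""
    mu, eps = 2.0 ** -n, 3.0 ** -n
    return (2 ** n) * mu ** q / eps ** tau   # 2**n identical terms mu^q / eps^tau

q = 2.0
tau_c = (q - 1) * math.log(2) / math.log(3)   # transition value (q - 1) * D_q
for n in (5, 10, 20):
    # just above tau_c, Gamma grows without bound; just below it, Gamma -> 0
    print(n, gamma(q, tau_c + 0.1, n), gamma(q, tau_c - 0.1, n))
```

The covering here happens to be the optimal one, so the transition point reproduces the true \(\tilde{D}_{q}\); for a general measure the sup/inf in (9.17) cannot be sidestepped.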
In the appendix to Chapter 3 we have found that \(D_{0}\) is an upper bound on \(D_{\rm H}\), \[D_{0}\geq D_{\rm H}. \tag{3.53}\] However, we have also found an example (namely, the generalized baker's map) for which (3.53) holds with the equality applying. It is conjectured on this basis and on the basis of other examples that \(D_{0}=D_{\rm H}\) for typical invariant sets arising in dynamics. For the quantities \(\tilde{D}_{q}\) an analogous result may be shown (Problem 4). Namely, \(D_{q}\) (the dimension defined by Eq. (3.14) using a covering from a grid of _equal_ size cubes) provides an upper bound on \(\tilde{D}_{q}\), \[D_{q}\geq\tilde{D}_{q}. \tag{9.21}\] As in the case of the Hausdorff dimension and \(D_{0}\), it is found that (9.21) is satisfied with the equality applying for calculable examples. On the assumption that this holds in general for the invariant sets of interest to us in dynamics, we shall henceforth drop the tildes on \(\tilde{D}_{q}\) and on \(\tilde{\tau}\) and \(\tilde{\tau}(q)\). Granted the above assumption, we now have two methods of obtaining \(D_{q}\). This can be a great advantage, since one method or the other may be easier to apply depending on the situation. As an example, we now use the partition function formalism to calculate \(D_{q}\) for the natural measure of the chaotic attractor of the generalized baker's map. As before (Eq. (3.18)), we write \(D_{q}\) as \[D_{q}=1+\hat{D}_{q},\] where \(\hat{D}_{q}\) represents the dimension of the measure projected onto the \(x\) axis.
Let \(\hat{\Gamma}_{q}\) represent the partition function for this projected measure, and express \(\hat{\Gamma}_{q}(\tau,\,\delta)\) as \[\hat{\Gamma}_{q}(\tau,\,\delta)=\hat{\Gamma}_{qa}(\tau,\,\delta)+\hat{\Gamma}_{qb}(\tau,\,\delta), \tag{9.22}\] where \(\hat{\Gamma}_{qa}\) is the contribution to \(\hat{\Gamma}_{q}\) from the interval \(0\leq x\leq\lambda_{a}\), and \(\hat{\Gamma}_{qb}\) is the contribution to \(\hat{\Gamma}_{q}\) from \((1-\lambda_{b})\leq x\leq 1\). If we magnify the interval \(0\leq x\leq\lambda_{a}\) by a factor \(1/\lambda_{a}\) we get a replica of the attractor measure in the entire interval \(0\leq x\leq 1\). Furthermore, since the attractor measure\({}^{2}\) in \(0\leq x\leq\lambda_{a}\) is \(\tilde{\alpha}\), using the definition Eq. (9.14), we have the following scaling \[\hat{\Gamma}_{qa}(\tau,\,\delta)=\tilde{\alpha}^{q}\lambda_{a}^{-\tau}\hat{\Gamma}_{q}(\tau,\,\delta/\lambda_{a}). \tag{9.23a}\] Similarly, we have for the interval \((1-\lambda_{b})\leq x\leq 1\), \[\hat{\Gamma}_{qb}(\tau,\,\delta)=\tilde{\beta}^{q}\lambda_{b}^{-\tau}\hat{\Gamma}_{q}(\tau,\,\delta/\lambda_{b}). \tag{9.23b}\] To proceed, we adopt the ansatz that for small \(\delta\) \[\hat{\Gamma}_{q}(\tau,\,\delta)\approx K\delta^{\hat{\tau}(q)-\tau}, \tag{9.24}\] where \(\hat{\tau}(q)=(q-1)\hat{D}_{q}=(q-1)(D_{q}-1)\). (Equation (9.24) is motivated by an analysis of the case of coverings by a grid of cubes, and is analogous to the argument leading to Eq. (3.52). Note, in particular, that, in the limit \(\delta\to 0\), Eq. (9.24) yields the dependence shown in Figure 9.4.) Combining (9.22)-(9.24), we obtain the following transcendental equation for \(\hat{\tau}(q)\) \[1=\tilde{\alpha}^{q}\lambda_{a}^{-\hat{\tau}(q)}+\tilde{\beta}^{q}\lambda_{b}^{-\hat{\tau}(q)}, \tag{9.25}\] which, using \(\hat{\tau}(q)=(q-1)\hat{D}_{q}\), is identical with Eq. (3.23). Thus, for this example, we have confirmed that the partition function Eq.
(9.14) yields the same \(D_{q}\) as the box counting definition of \(D_{q}\) given by Eq. (3.14).

### 9.3 Lyapunov partition functions

We have previously seen that it is possible to derive relations between the Lyapunov exponents and the information dimension \(D_{1}\) of both chaotic attractors (Eqs. (4.36)-(4.38)) and nonattracting chaotic sets (Eqs. (5.14) and (5.16)). It is natural to ask whether a similar (possibly not as simple) relationship can be obtained for the spectrum of dimensions \(D_{q}\). In this section we shall show that, using the partition function formalism of Section 9.2, this can indeed be done. We consider a hyperbolic _nonattracting_ chaotic invariant set of an invertible two dimensional map \({\bf M}({\bf x})\). We regard this nonattracting chaotic set as the intersection of a Cantor set of one dimensional stable manifold lines with a similar Cantor set of unstable manifold lines. We call such an invariant set a 'strange saddle' (see the schematic illustration in Figure 5.24). The results for a chaotic _attractor_ (Badii and Politi, 1987; Morita _et al._, 1987) can be obtained as a special case of the strange saddle results (Ott _et al._, 1989b) by letting \(\langle\tau\rangle\), the average decay time\({}^{4}\) for orbits on the strange saddle (defined by (5.12)), approach infinity, \(\langle\tau\rangle\to\infty\). (See also Kovacs and Tel (1990) for the case of chaotic scattering.) We begin by recalling the situation studied in Section 5.6 where we considered a strange saddle and defined natural measures, \(\mu_{\rm s}\) and \(\mu_{\rm u}\), on the stable and unstable manifolds of the strange saddle\({}^{5}\) (Eqs. (5.13) and (5.15)). We shall be interested in developing results for the dimension spectra \(D_{q{\rm s}}\) of the measure \(\mu_{\rm s}\) and \(D_{q{\rm u}}\) of the measure \(\mu_{\rm u}\). We first derive a formula for the unstable measure \(\mu_{\rm u}\) and then use it to estimate the partition function (9.14).
As in Section 5.6, we imagine that the strange saddle is contained in some box \(B\) (see Figure 5.24). Let \(\lambda_{1}({\bf x},\,n)>1>\lambda_{2}({\bf x},\,n)\) denote the magnitudes of the eigenvalues of \({\bf DM}^{n}({\bf x})\). Note that the \(n\) dependence of \(\lambda_{1}({\bf x},\,n)\) is an approximate exponential increase with \(n\), while that of \(\lambda_{2}({\bf x},\,n)\) is an exponential decrease with \(n\). An approximation to the unstable manifold in \(B\) is obtained by iterating points in \(B\) forward in time \(n\) steps and seeing which of these forward iterates have not yet left \(B\). This approximation to the unstable manifold is (assuming that points which leave \(B\) never return) \[B\cap{\bf M}^{n}(B),\] and is illustrated in Figure 9.5(\(a\)) where we take \(B\) to be a rectangle of dimensions \(l_{1}\) by \(l_{2}\) and assume the stable manifold to run vertically and the unstable manifold to run horizontally. Iteration of \(B\) forward in time many iterates (\(n\gg 1\)) results in many long thin horizontal strips which contain the unstable manifold. These rectangles are the set \(B\cap{\bf M}^{n}(B)\). Those initial points \({\bf x}_{i}\) which remain in \(B\) for \(n\) iterates are contained in the \(n\)th preimage of the set \(B\cap{\bf M}^{n}(B)\), \[{\bf M}^{-n}(B\cap{\bf M}^{n}(B))={\bf M}^{-n}(B)\cap B. \tag{9.26}\] Recall that, from the definition of the decay time \(\langle\tau\rangle\), the Lebesgue measure (area) of \({\bf M}^{-n}(B)\cap B\) decays as \[\mu_{\rm L}({\bf M}^{-n}(B)\cap B)\sim\exp(-n/\langle\tau\rangle), \tag{9.27}\] where \(\mu_{\rm L}\) denotes Lebesgue measure. We cover the set \(B\cap{\bf M}^{n}(B)\) by a covering \(\{S_{i}\}\) of squares of side \(\varepsilon_{i}\approx l_{2}\lambda_{2}({\bf x}_{i},\,n)\), where \({\bf x}_{i}\) denotes the initial point whose \(n\)th iterate \({\bf M}^{n}({\bf x}_{i})\) is in the center of \(S_{i}\). See Figure 9.5(\(a\)).
Here we have chosen \(\varepsilon_{i}\) to be equal to the local width of the horizontal strip containing \({\bf x}_{i}\) (this is indicated in the magnification in Figure 9.5(\(a\))). Now taking the small square \(S_{i}\) and iterating it backward \(n\) iterates (to see where it came from), it goes to a very narrow vertical strip \({\bf M}^{-n}(S_{i})\) contained within one of the many narrow strips constituting \({\bf M}^{-n}(B)\cap B\) (see the magnification shown in Figure 9.5(\(b\))). The width of this very narrow strip \({\bf M}^{-n}(S_{i})\) is \[\varepsilon_{i}/\lambda_{1}({\bf x}_{i},\,n)\approx l_{2}\lambda_{2}({\bf x}_{i},\,n)/\lambda_{1}({\bf x}_{i},\,n),\] and its length is \(l_{2}\). The unstable measure inside the small square \(S_{i}\) is seen from the definition (5.15) to be \[\mu_{\rm u}(S_{i})\equiv\frac{\mu_{\rm L}({\bf M}^{-n}(S_{i})\cap B)}{\mu_{\rm L}({\bf M}^{-n}(B)\cap B)}\approx\exp\biggl(\frac{n}{\langle\tau\rangle}\biggr)\lambda_{2}({\bf x}_{i},\ n)[\lambda_{1}({\bf x}_{i},\ n)]^{-1},\] where use has been made of (9.27). Inserting \(\varepsilon_{i}\cong l_{2}\lambda_{2}({\bf x}_{i},\ n)\) and the estimate for the measure \(\mu_{\rm u}(S_{i})\) in (9.14) we obtain the Lyapunov partition function for the unstable manifold measure \(\mu_{\rm u}\) \[\Gamma^{\rm u}_{q{\rm L}}=\sum_{i}\left[\exp\biggl(\frac{n}{\langle\tau\rangle}\biggr)\frac{\lambda_{2}({\bf x}_{i},\ n)}{\lambda_{1}({\bf x}_{i},\ n)}\right]^{q}\lambda_{2}^{-\tau}({\bf x}_{i},\ n)=\exp(qn/\langle\tau\rangle)\sum_{i}[\lambda_{2}({\bf x}_{i},\ n)]^{q-\tau}[\lambda_{1}({\bf x}_{i},\ n)]^{-q}.\] This can be regarded as a Riemann sum (whose area elements are squares of side \(\lambda_{2}({\bf x}_{i},\ n)\)) for the integral

Figure 9.5: Schematic of the derivation of the Lyapunov partition function for \(D_{q{\rm u}}\).
\[\exp(qn/\langle\tau\rangle)\int_{B\cap{\bf M}^{n}(B)}\lambda_{2}({\bf x},\;n)^{q-\tau-2}\lambda_{1}^{-q}({\bf x},\;n)\,{\rm d}^{2}{\bf y},\] where \({\bf y}\equiv{\bf M}^{n}({\bf x})\). Noting that the magnitude of the determinant of the Jacobian matrix of \({\bf M}^{n}({\bf x})\) is \(\lambda_{1}({\bf x},\;n)\lambda_{2}({\bf x},\;n)\), we can change the variable of integration in the above integral from \({\bf y}\) to \({\bf x}\) to obtain \[\Gamma_{q{\rm L}}^{\rm u}(D,\;n)=\exp(qn/\langle\tau\rangle)\int_{{\bf M}^{-n}(B)\cap B}[\lambda_{2}^{D-1}({\bf x},\;n)\lambda_{1}({\bf x},\;n)]^{1-q}\,{\rm d}^{2}{\bf x}, \tag{9.28}\] where we have substituted \(\tau\equiv(q-1)D\). For the case of a hyperbolic attractor \(\langle\tau\rangle=\infty\), and we may replace the domain of integration by any finite area subset \(\vec{B}\) of the basin of attraction. Thus, in this case (9.28) can be replaced by \[\Gamma_{q{\rm L}}(D,\;n)=\int_{\vec{B}}[\lambda_{2}^{D-1}({\bf x},\;n)\lambda_{1}({\bf x},\;n)]^{1-q}\,{\rm d}^{2}{\bf x}. \tag{9.29a}\] Alternatively, we can also write \[\Gamma_{q{\rm L}}(D,\;n)=\langle[\lambda_{2}^{D-1}({\bf x},\;n)\lambda_{1}({\bf x},\;n)]^{1-q}\rangle, \tag{9.29b}\] where the angle brackets indicate an average with respect to Lebesgue measure on \(\vec{B}\) or, equivalently, an average over the natural measure on the attractor. (In obtaining (9.29b) from (9.29a) we have dropped a factor \([\mu_{\rm L}(\vec{B})]^{-1}\), since this does not affect the result for \(D_{q}\).) Also, note that in (9.29), we have dropped the superscript u with the understanding that \(\Gamma_{q{\rm L}}\) refers to the chaotic attractor (which we assume is the same as its unstable manifold; see Chapter 4). To find \(D_{q{\rm u}}\), we recall that \(\varepsilon_{i}\approx l_{2}\lambda_{2}({\bf x}_{i},\;n)\) goes exponentially to zero as \(n\to+\infty\).
Thus, we let \(n\to+\infty\) in the Lyapunov partition function and set \(D_{q{\rm u}}\) equal to that value of \(D\) at which the resulting limit transitions from 0 to \(+\infty\). Note that our procedure in obtaining the Lyapunov partition function has not involved the optimization with respect to the covering set specified by Eqs. (9.17). Thus, at this stage, we can only conclude that the \(q\) dimension value obtained from the Lyapunov partition function is an upper bound on the true \(D_{q{\rm u}}\). Nevertheless, as before, we shall assume that the true \(D_{q{\rm u}}\) assumes this upper bound, and we shall see that this is supported by application of the Lyapunov partition function to the example of the generalized baker's map. An important special case occurs when the map in question has a Jacobian determinant that is constant; i.e., \(\det{\bf DM}({\bf x})\) is independent of \({\bf x}\) (e.g., for the Henon map \(\det{\bf DM}({\bf x})=-B\)). Letting \(J\) denote this constant, we have \[\lambda_{1}({\bf x},\;n)\lambda_{2}({\bf x},\;n)=|J|^{n},\] which can be used to eliminate \(\lambda_{2}({\bf x},\;n)\) from (9.28) and (9.29), \[\Gamma^{\rm u}_{q{\rm L}}(D,\;n)=\exp\biggl\{n\biggl[\frac{q}{\langle\tau\rangle}+(D-1)(1-q)\ln|J|\biggr]\biggr\}\int_{{\bf M}^{-n}(B)\cap B}[\lambda_{1}({\bf x},\;n)]^{\sigma}\,{\rm d}^{2}{\bf x} \tag{9.28$'$}\] and \[\Gamma_{q{\rm L}}(D,\;n)=|J|^{n(D-1)(1-q)}\langle[\lambda_{1}({\bf x},\;n)]^{\sigma}\rangle, \tag{9.29$'$}\] respectively. Here \[\sigma\equiv(2-D)(1-q).\] One of the advantages of the Lyapunov partition function formulation is that it allows the dimension spectrum to be calculated numerically without many of the problems associated with box counting (in particular, the large memory requirements, and the necessity of computing long orbits, when small box sizes are considered). Numerical estimation of (9.28) proceeds as follows.
Take \(N(0)\) points uniformly chosen from \(B\), and iterate each \(n\) times. Take the \(N(n)\) points remaining in \(B\) after \(n\) iterations, and, for each one, identify its initial condition \({\bf x}_{i}\). Then calculate \(\lambda_{1i}\) and \(\lambda_{2i}\), the magnitudes of the eigenvalues of \({\bf DM}^{n}({\bf x}_{i})\). We then estimate \(\Gamma^{\rm u}_{q{\rm L}}\) as \[\Gamma^{\rm u}_{q{\rm L}}(D,\;n)\approx\frac{[N(0)]^{q-1}}{[N(n)]^{q}}\sum_{i=1}^{N(n)}[\lambda_{1i}\lambda_{2i}^{D-1}]^{1-q}. \tag{9.30}\] This numerical procedure and its obvious modification for the case of attractors have been carried out with good results for \(D_{q{\rm u}}\) in a number of papers. We now argue heuristically that (9.29) reduces to the Kaplan-Yorke formula in the limit that \(q\to 1\). Expanding (9.29') for small \((1-q)\) we have \(\Gamma_{q{\rm L}}\approx 1+(1-q)\{(D-1)\langle\ln\lambda_{2}({\bf x},\;n)\rangle+\langle\ln\lambda_{1}({\bf x},\;n)\rangle\}+O[(1-q)^{2}]\). Recall the definition of the Lyapunov exponents, \[h_{1,2}({\bf x})=\lim_{n\to\infty}\frac{1}{n}\ln\lambda_{1,2}({\bf x},\;n),\] and that \(h_{1,2}({\bf x})\) assumes the same value denoted \(h_{1,2}\) (i.e., with the argument \({\bf x}\) deleted) for almost every \({\bf x}\) with respect to the natural measure. Since \(h_{1,2}({\bf x})=h_{1,2}\) for almost every \({\bf x}\), its average with respect to the natural measure must also be \(h_{1,2}\), \[h_{1,2}=\langle h_{1,2}({\bf x})\rangle=\lim_{n\to\infty}\frac{1}{n}\langle\ln\lambda_{1,2}({\bf x},\;n)\rangle.\] This yields \[\Gamma_{q{\rm L}}\approx 1+n(1-q)\{(D-1)h_{2}+h_{1}\}+O[(1-q)^{2}].\] For large \(n\) the second term becomes large, unless we set the term contained within the curly brackets to zero. Thus, we suspect that the value of \(D\) at which this occurs coincides with the value of \(D\) at which \(\Gamma_{q{\rm L}}\) makes a transition from 0 to infinity in the \(q\to 1\) limit.
Setting the curly bracketed term to zero we obtain \[D_{1}=1+(h_{1}/|h_{2}|),\] which is our previously obtained Eq. (4.38). (We assume \(h_{1}>0>h_{2}\).) So far we have only been discussing the partition function for the dimension \(D_{q{\rm u}}\) of the measure \(\mu_{\rm u}\) on the unstable manifold. For the case of the stable manifold, refer to Figure 9.5(_b_), and note that the vertical strips converge to the stable manifold as \(n\to\infty\). The width of a vertical strip of initial points which do not leave \(B\) on \(n\) iterates is of the order of \(\lambda_{1}^{-1}({\bf x}_{i},\,n)\) for \({\bf x}_{i}\) in the vertical strip. Thus, taking \(\varepsilon_{i}\approx\lambda_{1}^{-1}({\bf x}_{i},\,n)\) and \(\mu_{\rm s}(S_{i})\approx\varepsilon_{i}^{2}\exp(n/\langle\tau\rangle)\) (with the \(S_{i}\) now covering the vertical strips), and proceeding as in the case of the unstable manifold, we obtain \[\Gamma_{q{\rm L}}^{\rm s}(D,\,n)=\exp(nq/\langle\tau\rangle)\int_{{\bf M}^{-n}(B)\cap B}[\lambda_{1}({\bf x},\,n)]^{(D-2)(q-1)}\,{\rm d}^{2}{\bf x}. \tag{9.31}\] For the case of a chaotic attractor we see that \(\Gamma_{q{\rm L}}^{\rm s}=1\) for \(D=2\) and thus \(D_{q{\rm s}}\equiv 2\). This corresponds to the fact that almost all points in the basin of attraction go to the attractor; i.e., the basin of attraction is two dimensional. As an example, we now apply the Lyapunov partition function formalism to the case of the chaotic attractor of the generalized baker's map. As shown in Chapter 3, application of the map \(n\) times to the unit square results in \(2^{n}\) vertical strips of varying widths \(\lambda_{a}^{m}\lambda_{b}^{n-m}\) (\(m=0,\,1,\,2,\,\dots,\,n\)), and the number of strips of width \(\lambda_{a}^{m}\lambda_{b}^{n-m}\) is the binomial coefficient \(Z(n,\,m)\).
A point \({\bf x}_{i}\) for which \({\bf M}^{n}({\bf x}_{i})\) is in a strip of width \(\lambda_{a}^{m}\lambda_{b}^{n-m}\) has \[\lambda_{1}({\bf x}_{i},\,n)=\left(\frac{1}{\tilde{\alpha}}\right)^{m}\left(\frac{1}{\tilde{\beta}}\right)^{n-m},\qquad\lambda_{2}({\bf x}_{i},\,n)=\lambda_{a}^{m}\lambda_{b}^{n-m}.\] Applying \({\bf M}^{-n}\) to the vertical strip of width \(\lambda_{a}^{m}\lambda_{b}^{n-m}\), we find the region of initial conditions that yield this vertical strip. This region is a horizontal strip of width \(\tilde{\alpha}^{m}\tilde{\beta}^{n-m}\); see Figure 9.6. If we choose \({\bf x}_{i}\) randomly in the unit square, the probability of it falling in the initial strip of width \(\tilde{\alpha}^{m}\tilde{\beta}^{n-m}\) is just the area of that strip, which is also \(\tilde{\alpha}^{m}\tilde{\beta}^{n-m}\). Thus, using (9.29b), we have \[\langle[\lambda_{2}^{D-1}({\bf x},\;n)\lambda_{1}({\bf x},\;n)]^{1-q}\rangle=\sum_{m=0}^{n}Z(n,\;m)\tilde{\alpha}^{m}\tilde{\beta}^{n-m}\left[(\lambda_{a}^{m}\lambda_{b}^{n-m})^{D-1}\left(\frac{1}{\tilde{\alpha}}\right)^{m}\left(\frac{1}{\tilde{\beta}}\right)^{n-m}\right]^{1-q}=\left(\frac{\tilde{\alpha}^{q}}{\lambda_{a}^{\hat{\tau}}}+\frac{\tilde{\beta}^{q}}{\lambda_{b}^{\hat{\tau}}}\right)^{n},\] where \(\hat{\tau}=(q-1)(D-1)\) and we have used the basic property of the binomial coefficient \(\sum_{m=0}^{n}Z(n,\;m)x^{m}y^{n-m}\equiv(x+y)^{n}\). Letting \(n\to\infty\), we see that \(\Gamma_{q{\rm L}}\) goes from zero to \(+\infty\) at precisely the point where the previous result for the dimension, Eq. (9.25), holds. In this section we have utilized the stretching properties of the dynamical system to obtain a partition function for the dimension spectrum on a chaotic attractor. It may be of interest to note that a very similar situation arises in the mixing of an impurity in an incompressible fluid undergoing large scale smooth chaotic flow.
In that case the gradient of the impurity density tends to concentrate on a fractal, and the multifractal properties of the gradient measure can be obtained from a Lyapunov partition function formulation related to that presented here. (Note that, since the fluid is incompressible, there is no chaotic attractor for the impurity _particles_; it is the _gradient_, and not the particles themselves, which concentrates on a fractal.) See Ott and Antonsen (1988, 1989) and Varosi _et al._ (1991). An experiment displaying this phenomenon has been carried out by Ramshankar and Gollub (1991). This topic is treated in Section 9.7.

### 9.4 Distribution of finite time Lyapunov exponents

Define the _finite time Lyapunov exponent_ for an initial condition **x** as \[h_{i}(\textbf{x},\ n)=\frac{1}{n}\ln\lambda_{i}(\textbf{x},\ n). \tag{9.32}\] In this section we shall only consider chaotic (\(h_{1}>0\)) invariant sets of two dimensional maps, and we will restrict attention to the largest exponent, \(i=1\). Fujisaka (1983) introduced a '\(q\) order entropy spectrum' \[H_{q}=\frac{1}{1-q}\lim_{n\rightarrow\infty}\frac{1}{n}\ln\langle\exp[n(1-q)h_{1}(\textbf{x},\ n)]\rangle, \tag{9.33}\] where \(\langle\cdots\rangle\) denotes an average over the relevant invariant measure. For \(q\to 1\), by expanding the exponential and the logarithm for small \(1-q\), we have \(\langle\exp[n(1-q)h_{1}(\textbf{x},\ n)]\rangle\simeq 1+n(1-q)\langle h_{1}(\textbf{x},\ n)\rangle\) and \(\ln\langle\exp[n(1-q)h_{1}(\textbf{x},\ n)]\rangle\simeq n(1-q)\langle h_{1}(\textbf{x},\ n)\rangle\). Thus, \(H_{1}=\lim_{n\rightarrow\infty}\langle h_{1}(\textbf{x},\ n)\rangle\), and \(H_{1}\) is just the (infinite time) Lyapunov exponent applying for almost every **x** with respect to the measure,\({}^{6}\) \[H_{1}=h_{1}.\] For hyperbolic attractors of two dimensional maps with \(h_{1}>0>h_{2}\), we have that \(h_{1}\) is the metric entropy of the measure, \(h_{1}=h(\mu)\) (see Section 4.5).
In addition, one can argue on the basis of the result by Newhouse (1986) that for \(q=0\), Eq. (9.33) gives the topological entropy \(h_{\rm T}\), \[H_{0}=h_{\rm T}.\] Thus, for general values of the index \(q\), we may regard the quantity \(H_{q}\) as generalizing the topological and metric entropies to a continuous spectrum of entropies. Note that the scaling with \(n\) of the Lyapunov partition function for the constant Jacobian determinant case is directly implied by a knowledge of \(H_{q}\). In particular, from (9.29') the relevant quantity is the average of a power of the largest Lyapunov number, \[\langle[\lambda_{1}({\bf x},\ n)]^{\sigma}\rangle=\langle\exp[n\sigma h_{1}({\bf x},\ n)]\rangle\sim\exp[n\sigma H_{1-\sigma}]. \tag{9.34}\] Among other reasons, it is useful for calculating averages, such as that in (9.33), to introduce a distribution function for the finite time Lyapunov exponents \(h_{1}({\bf x},\ n)\). We denote this distribution function \(P(h,\ n)\). For **x** randomly chosen with respect to the invariant measure under consideration, the quantity \(P(h,\ n){\rm d}h\) is, by definition, the probability that the finite time exponent \(h_{1}({\bf x},\ n)\) falls in the range \(h\) to \(h+{\rm d}h\). Thus, for example, \[\langle\exp[n\sigma h_{1}({\bf x},\ n)]\rangle=\int\exp(n\sigma h)P(h,\ n)\,{\rm d}h. \tag{9.35}\] For hyperbolic sets with a one dimensional unstable manifold through each point on the set, we can argue that \(\lambda_{1}({\bf x},\ n)\) is produced by the multiplication of \(n\) scalar numbers (as opposed to matrix multiplications (4.31)), the \(n\) numbers being the expansion factors separating infinitesimally nearby points on the unstable manifold on each of the \(n\) iterates of the map. Thus, by (9.32), \(h_{1}({\bf x},\ n)\) may be regarded as an average over \(n\) numbers (the logarithms of the expansion factors mentioned in the previous sentence).
In accordance with the chaotic nature of the orbits, we regard these numbers as, in some sense, random. In this case, it follows that, for large \(n\), the distribution function \(P(h,\ n)\) is asymptotically of the general form (e.g., Ellis (1985)), \[P(h,\ n)\approx[nG''(\bar{h})/2\pi]^{1/2}\exp[-nG(h)], \tag{9.36}\] where the minimum value of the function \(G\) is zero and occurs at \(h=\bar{h}\); \(G(\bar{h})=0\), \(G'(\bar{h})=0\), \(G''(\bar{h})>0\). See Figure 9.7. Note that expansion of \(G\) around \(\bar{h}\) yields a normal distribution (this is the 'central limit theorem'), \[P(h,\ n)\approx[nG''(\bar{h})/2\pi]^{1/2}\exp\Bigl[-\frac{1}{2}\,nG''(\bar{h})(h-\bar{h})^{2}\Bigr], \tag{9.36$'$}\] where the standard deviation of \(h\) is \(\xi=[nG''(\bar{h})]^{-1/2}\). Thus, for large \(n\), Eq. (9.36') is an excellent approximation to \(P(h,\ n)\) for values of \(h\) that are within several standard deviations from \(\bar{h}\). We emphasize, however, that in calculating integrals such as (9.35), we shall see that the dominant contribution comes from values of \(h\) that deviate from \(\bar{h}\) by an amount independent of \(n\). Thus, for these \(h\) values \((h-\bar{h})/\xi\sim n^{1/2}\). Hence, the dominant contribution comes from values of \(h\) that deviate from \(\bar{h}\) by a number of standard deviations which approaches infinity like \(n^{1/2}\) as \(n\) goes to infinity. Under these circumstances, the normal distribution (9.36') is inadequate for use in (9.35), and we must use the more accurate so called 'large deviation' form, Eq. (9.36). Note that the function \(G(h)\) depends on the particular map and measure under consideration. Note that for \(n\to+\infty\), the distribution \(P(h,\ n)\) becomes more and more peaked about \(h=\bar{h}\) and becomes a delta function, \(\delta(h-\bar{h})\), at \(n=\infty\).
Thus, almost every initial point with respect to the measure yields the exponent \(\bar{h}\), and we therefore have that \[\bar{h}=h_{1}.\] We emphasize that, while \(P(h,\ n)\) approaches a delta function, the finite \(n\) deviations of \(P(h,\ n)\) from the delta function (as well as from the normal approximation (9.36')) are crucial for the multifractal properties of the measure. Let us now use (9.36) to obtain the large \(n\) scaling of \(\langle[\lambda_{1}({\bf x},\,n)]^{\sigma}\rangle\). Substituting into (9.35), we have \[\langle[\lambda_{1}({\bf x},\,n)]^{\sigma}\rangle\simeq\int\exp\{-n[G(h)-\sigma h]\}[nG''(\bar{h})/2\pi]^{1/2}\,{\rm d}h.\] For large \(n\), the dominant contribution to this integral comes from the vicinity of the minimum of the function \(G(h)-\sigma h\) occurring at \(h=h_{\sigma}\) given by (cf. Figure 9.7) \[G^{\prime}(h_{\sigma})=\sigma. \tag{9.37}\] Thus, we have that \[\langle[\lambda_{1}({\bf x},\,n)]^{\sigma}\rangle\sim\exp\{-n[G(h_{\sigma})-\sigma h_{\sigma}]\}. \tag{9.38}\] In terms of the quantity \(H_{q}\), Eqs. (9.38) and (9.33) yield \[H_{q}=[h_{\sigma}-\sigma^{-1}G(h_{\sigma})]|_{\sigma=1-q}.\] We can also use (9.38) to obtain the dimension spectrum \(D_{q}\) from (9.29') (Grassberger _et al._, 1988). The Lyapunov partition function becomes \[\Gamma_{q{\rm L}}(D,\,n)\approx\exp\{n[(D-1)(1-q)\ln|J|-G(h_{\sigma})+\sigma h_{\sigma}]\},\] with \(\sigma=(2-D)(1-q)\). Since \(\Gamma_{q{\rm L}}\) goes to \(+\infty\) or zero as \(n\to\infty\) according to whether the term in square brackets in the exponential is positive or negative, we have that \(D_{q}\) is determined by requiring this term to be zero. Hence, we obtain the following transcendental equation for \(D_{q}\), \[(D_{q}-1)(1-q)\ln|J|=[G(h_{\sigma})-\sigma h_{\sigma}]|_{\sigma=(2-D_{q})(1-q)}, \tag{9.39}\] for the case where the determinant of the Jacobian matrix is a constant.

Figure 9.7: Schematic of the function \(G(h)\).
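The saddle point evaluation (9.37) and (9.38) is easy to carry out numerically for a given \(G\). As a check (using an invented quadratic model for \(G\), not a \(G\) computed from any particular map), note that for \(G(h)=\frac{1}{2}g(h-\bar{h})^{2}\) Eq. (9.37) gives \(h_{\sigma}=\bar{h}+\sigma/g\), and the relation \(H_{1-\sigma}=h_{\sigma}-\sigma^{-1}G(h_{\sigma})\) obtained from (9.37) and (9.38) reduces to the closed form \(H_{q}=\bar{h}+(1-q)/(2g)\). A sketch comparing a brute force minimization against this closed form:

```python
h_bar, g = 0.7, 25.0                       # invented model parameters
G = lambda h: 0.5 * g * (h - h_bar) ** 2   # quadratic model large deviation function

def H(q, h_lo=-2.0, h_hi=3.0, m=200001):
    """H_q via the saddle point prescription: minimize G(h) - sigma*h over a
    grid of h values (Eq. (9.37)) with sigma = 1 - q, then form
    H_q = h_sigma - G(h_sigma)/sigma, as implied by Eq. (9.38)."""
    sigma = 1.0 - q
    grid = (h_lo + i * (h_hi - h_lo) / (m - 1) for i in range(m))
    h_s = min(grid, key=lambda h: G(h) - sigma * h)   # h_sigma of Eq. (9.37)
    return h_s - G(h_s) / sigma

for q in (0.0, 0.5, 2.0):
    print(q, H(q), h_bar + (1 - q) / (2 * g))   # grid minimization vs closed form
```

(The case \(q=1\) must be taken as a limit, since \(\sigma=1-q\) appears in a denominator; as in the text, \(H_{1}=\bar{h}=h_{1}\).)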
Equations (9.37)-(9.39) show that a knowledge of the function \(G\) is sufficient to obtain \(H_{q}\) and \(D_{q}\). Thus, it is of interest to discuss how \(G\) can be determined numerically. In particular, we consider the case of a chaotic attractor and its natural measure. A possible procedure might be as follows. We randomly choose many initial conditions in the basin of attraction and iterate each one, say 100 times, so that the orbits are now on the attractor and are distributed in accord with the attractor's natural measure. We treat these as our new initial conditions and iterate each one, along with the corresponding tangent map, \(n\) times to obtain \(h_{1}({\bf x}_{i},\,n)\). We then make a histogram approximation to \(P(h,\,n)\). From (9.36) we can then estimate \(G(h)\approx-(1/n)\ln P(h,\,n)\) for large \(n\). For nonhyperbolic systems, however, this procedure can be complicated; for example, in a system with KAM tori, orbits that spend a very long time near the tori wind up having abnormally small values of \(h_{1}({\bf x}_{i},\;n)\). This, in turn, results in an enhancement of \(P(h,\;n)\) at low \(h\), and this enhancement has a temporal power law behavior qualitatively different from the time dependence in (9.36). We note, however, that this modification of (9.36) occurs only at low \(h\) and that the form (9.36) apparently remains valid for \(h>\bar{h}=h_{1}\).

### 9.5 Unstable periodic orbits and the natural measure

In this section we show that the natural measure on a chaotic attractor of an invertible, hyperbolic, two dimensional map can be expressed in terms of the infinite number of unstable periodic orbits embedded within the attractor. More specifically the principal result can be stated as follows. Let \({\bf x}_{jn}\) denote the fixed points of the \(n\) times iterated map, \({\bf M}^{n}({\bf x}_{jn})={\bf x}_{jn}\), and let \(\lambda_{1}({\bf x}_{jn},\;n)\) denote the magnitude of the expanding eigenvalue of the Jacobian matrix \({\bf DM}^{n}({\bf x}_{jn})\). (Note that each \({\bf x}_{jn}\) is on a periodic orbit whose period is either \(n\) or a factor of \(n\).)
Then, the natural measure of an area \(S\) is given by \[\mu(S)=\lim_{n\to\infty}\sum_{{\bf x}_{jn}\in S}\frac{1}{\lambda_{1}({\bf x}_{jn},\;n)}, \tag{9.41}\] with the summation taking place over all fixed points of \({\bf M}^{n}\) in \(S\). The interpretation of (9.41) is that, for large \(n\), there is a small region about each \({\bf x}_{jn}\) which covers a fraction \(1/\lambda_{1}({\bf x}_{jn},\;n)\) of the natural measure, such that orbits originating from this small region closely follow the orbit originating from \({\bf x}_{jn}\) for \(n\) iterates. Before deriving (9.41) and demonstrating the above interpretation of it, we state some of the resulting consequences.

(i) If we let \(S\) cover the whole attractor (\(\mu(S)=1\)), then we obtain the result (Hannay and Ozorio de Almeida, 1984) \[\lim_{n\to\infty}\sum_{j}\frac{1}{\lambda_{1}({\bf x}_{jn},\;n)}=1, \tag{9.42}\] where the sum is over all fixed points on the attractor. (This result is important for certain considerations of quantum chaos which we discuss in Chapter 11.)

(ii) The Lyapunov exponents are given by \[h_{1,2}=\lim_{n\to\infty}\frac{1}{n}\sum_{j}\frac{1}{\lambda_{1}({\bf x}_{jn},\;n)}\ln\lambda_{1,2}({\bf x}_{jn},\;n). \tag{9.43}\]

(iii) The quantity \(H_{q}\) defined in Eq. (9.33) may be expressed as \[H_{q}=\frac{1}{1-q}\lim_{n\to\infty}\frac{1}{n}\ln\Big\{\sum_{j}[\lambda_{1}({\bf x}_{jn},\,n)]^{-q}\Big\}. \tag{9.44}\] In particular, for \(q=0\) we obtain a result for the topological entropy (Katok, 1980) \[h_{\rm T}=\lim_{n\to\infty}\frac{1}{n}\ln\overline{N}(n), \tag{9.45}\] where \(\overline{N}(n)\) denotes the number of fixed points of \({\bf M}^{n}\). Thus, the topological entropy gives the exponential increase of the number of fixed points of \({\bf M}^{n}\); i.e., \(\overline{N}(n)\sim\exp(nh_{\rm T})\). (Referring to Eq.
(9.42), we see that the number of terms in the sum in (9.42) increases exponentially with \(n\), but that this exponential increase is exactly compensated for by the general exponential decrease of the terms \(1/\lambda_{1}({\bf x}_{jn},\,n)\), so that the resulting sum is 1.)

(iv) The Lyapunov partition function for a chaotic attractor, Eq. (9.29'), can be expressed in terms of the fixed points of \({\bf M}^{n}\) as \[\Gamma_{qP}(D,\,n)=\sum_{j}\frac{[\lambda_{2}({\bf x}_{jn},\,n)]^{(D-1)(1-q)}}{[\lambda_{1}({\bf x}_{jn},\,n)]^{q}}, \tag{9.46}\] where we have replaced the subscript L (for Lyapunov) in (9.29') by the subscript P (for periodic orbits), and we call \(\Gamma_{qP}(D,\,n)\) the periodic orbits partition function. Again we claim that \(D_{q}\) is determined by the transition of the \(n\to\infty\) limit of the periodic orbits partition function from 0 to \(\infty\).

The above results show that a knowledge of the periodic orbits yields a great deal of information on the properties of the attractor. For more information on this topic see Katok (1980), Auerbach _et al._ (1987), Morita _et al._ (1987), Grebogi _et al._ (1987e, 1988b) and Cvitanovic and Eckhardt (1991). Equations (9.41)–(9.46) apply for chaotic attractors. Similar results can be obtained for nonattracting chaotic sets (Grebogi _et al._, 1988b). For example, the result analogous to (9.42) is \[\lim_{n\to\infty}\exp\Big(\frac{n}{\langle\tau\rangle}\Big)\sum_{j}\frac{1}{\lambda_{1}({\bf x}_{jn},\,n)}=1,\] which yields an expression for the average decay time in terms of the periodic orbits (Kadanoff and Tang, 1984), \[\frac{1}{\langle\tau\rangle}=-\lim_{n\to\infty}\frac{1}{n}\ln\Big\{\sum_{j}\frac{1}{\lambda_{1}({\bf x}_{jn},\,n)}\Big\}. \tag{9.47}\]

We now give a derivation of the result (Eq. (9.41)) for the measure of an attractor in terms of the periodic orbits (equivalently, the fixed points of \({\bf M}^{n}\)).
Our treatment follows that of Grebogi _et al._ (1988b). Imagine that we partition the space into cells \(C_{i}\), where each cell has as its boundaries stable and unstable manifold segments (Figure 9.8(\(a\))). If the cells are taken to be small, the curvature of the boundaries will be slight and we can regard the cells as being parallelograms (Figure 9.8(\(b\))). Consider a given cell \(C_{k}\) and sprinkle within it a large number of initial conditions distributed according to the natural measure of the attractor. Now imagine that we iterate these initial conditions \(n\) times. After \(n\) iterates, a small fraction of the initial conditions return to the small cell \(C_{k}\). We assume the attractor to be ergodic and mixing. Thus, in the large \(n\) limit, the fraction of initial conditions that return is just the natural measure of the cell \(C_{k}\), denoted \(\mu(C_{k})\). Let \({\bf x}_{0}\) be an initial condition that returns and \({\bf x}_{n}\) its \(n\)th iterate. This is illustrated in Figure 9.9(\(a\)), where we take the stable direction as horizontal and the unstable direction as vertical. The line \(ab\) (\(c^{\prime}d^{\prime}\)) through \({\bf x}_{0}\) (\({\bf x}_{n}\)) is a stable (unstable) manifold segment traversing the cell. Now take the \(n\)th forward iterate of \(ab\) and the \(n\)th backward iterate of \(c^{\prime}d^{\prime}\). These map to \(a^{\prime}b^{\prime}\) and \(cd\) as shown in Figure 9.9(\(b\)). Now consider a rectangle constructed by passing unstable manifold segments \(e^{\prime}f^{\prime}\) through \(a^{\prime}\) and \(g^{\prime}h^{\prime}\) through \(b^{\prime}\). By the construction, the \(n\)th preimages of these segments are the unstable manifold segments \(ef\) and \(gh\) shown in Figure 9.9(\(c\)). Thus, we have constructed a rectangle \(efgh\) in \(C_{k}\) such that all the points in \(efgh\) return to \(C_{k}\) in \(n\) iterates. That is, \(efgh\) maps to \(e^{\prime}f^{\prime}g^{\prime}h^{\prime}\) in \(n\) iterates. The intersection of these two rectangles must contain a single saddle fixed point of the \(n\) times iterated map (cf. Figure 9.9(\(d\))).
Conversely, given a saddle fixed point, we can construct a rectangle of initial conditions \(efgh\) which returns to \(C_{k}\) by closely following the periodic orbit which goes through the given fixed point of \({\bf M}^{n}\) (the construction is the same as in Figures 9.9(\(a\))–(\(c\)) except that \({\bf x}_{0}={\bf x}_{n}\)). Thus, all initial conditions which return after \(n\) iterates lie in some long thin horizontal strip (like \(efgh\)) which contains a fixed point of the \(n\) times iterated map. We label this fixed point \({\bf x}_{j}\).

Figure 9.9: Schematic of the construction of the region \(efgh\) in \(C_{k}\) which returns to \(C_{k}\) after \(n\) iterates. In (\(d\)), \(\lambda_{1j}\equiv\lambda_{1}({\bf x}_{j},\,n)\) and \(\lambda_{2j}\equiv\lambda_{2}({\bf x}_{j},\,n)\), where \({\bf x}_{j}\) is the fixed point shown in the figure.

Denoting the horizontal and vertical lengths of the sides of the cell \(C_{k}\) by \(\xi_{k}\) and \(\eta_{k}\) (cf. Figure 9.9(\(b\))), we see that the initial strip \(efgh\) has dimensions \(\xi_{k}\) by \([\eta_{k}/\lambda_{1}({\bf x}_{j},\ n)]\) and the final strip has dimensions \(\xi_{k}\lambda_{2}({\bf x}_{j},\ n)\) by \(\eta_{k}\) (cf. Figure 9.9(\(d\))). Since the dynamics is expanding in the vertical direction, the attractor measure varies smoothly in this direction. Since the cell is assumed small, we can treat the attractor as if it were essentially uniform along the vertical direction. Thus, the fraction of the measure of \(C_{k}\) occupied by the strip \(efgh\) is \(1/\lambda_{1j}\). Since, for \(n\to\infty\), the fraction of initial conditions starting in \(C_{k}\) which return to it is \(\mu(C_{k})\), we have \[\mu(C_{k})=\lim_{n\to\infty}\sum_{{\bf x}_{j}\in C_{k}}\frac{1}{\lambda_{1}({\bf x}_{j},\ n)}. \tag{9.48}\] Since we imagine that we can make the partition into cells as small as we wish, we can approximate any subset \(S\) of the phase space with reasonably smooth boundaries by a covering of such cells. Thus, we obtain the desired result,7 Eq. (9.41).
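The sum rules above can be checked on an example where everything is known in closed form. For the cat map \((x,\,y)\to(x+y,\ x+2y)\ {\rm mod}\ 1\) (our choice of a standard hyperbolic, area preserving example, not one treated at this point in the text; its natural measure is Lebesgue measure), every point stretches uniformly, so \(\lambda_{1}({\bf x},\,n)=\lambda^{n}\) with \(\lambda\) the expanding eigenvalue of \(A=\left(\begin{smallmatrix}1&1\\ 1&2\end{smallmatrix}\right)\), and the fixed points of \({\bf M}^{n}\) can be counted exactly as \(\overline{N}(n)=|\det(A^{n}-I)|\).

```python
import numpy as np

# Cat map (x, y) -> (x + y, x + 2y) mod 1: linear, hyperbolic, with uniform
# stretching lambda_1(x, n) = lam**n and exactly countable fixed points of
# the n-times iterated map, N(n) = |det(A^n - I)| = lam**n + lam**(-n) - 2.
A = np.array([[1, 1], [1, 2]], dtype=np.int64)
lam = (3 + np.sqrt(5)) / 2                      # expanding eigenvalue of A

An = np.eye(2, dtype=np.int64)
for n in range(1, 21):                          # int64 is exact for n <= 20
    An = An @ A
    M = An - np.eye(2, dtype=np.int64)
    N_fixed = abs(int(M[0, 0]) * int(M[1, 1]) - int(M[0, 1]) * int(M[1, 0]))
    sum_inv = N_fixed / lam ** n                # sum_j 1/lambda_1(x_jn, n)
    h_T = np.log(N_fixed) / n                   # estimate from (9.45)
print(sum_inv, h_T, np.log(lam))
```

As expected, the sum in (9.42) approaches 1, and \((1/n)\ln\overline{N}(n)\) approaches \(h_{\rm T}=\ln\lambda\approx 0.96\).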
### Validity of the Lyapunov and periodic orbits partition functions for nonhyperbolic attractors

The arguments leading to our results for the Lyapunov partition function and for the periodic orbits partition function make use of the assumed hyperbolicity of the chaotic set. We note, however, that most attractors encountered in practice are not hyperbolic (e.g., the Henon attractor; Figures 1.12 and 4.14), since they typically display tangencies between their stable and unstable manifolds.8 The question then arises as to what extent, if at all, the Lyapunov and periodic orbits partition functions can be used to obtain the \(D_{q}\) dimension spectra of typical nonhyperbolic two dimensional maps such as the Henon map. In this regard it has been conjectured (e.g., Grassberger _et al._ (1988), Ott _et al._ (1989a,b)) that, for typical nonhyperbolic attractors of two dimensional maps, the Lyapunov and periodic orbits partition functions give the true \(D_{q}\) (i.e., that determined from the box counting formula (3.14) or the partition function (9.14)) for values of \(q\) below the phase transition value, \(q<q_{\rm T}\). See Figure 9.2. (We have previously referred to \(q<q_{\rm T}\) as the 'hyperbolic range.')

An analytical example illustrating the above has been given by Ott _et al._ (1989b), who consider the invertible two dimensional map \[x_{n+1}=\cases{\lambda x_{n},&for $y_{n}>\frac{1}{2}$,\cr\frac{1}{2}+\lambda x_{n},&for $y_{n}\le\frac{1}{2}$,\cr} \tag{9.49a}\] \[y_{n+1}=4y_{n}(1-y_{n}), \tag{9.49b}\] where \(\lambda<\frac{1}{2}\). For this map the true \(D_{q}\) is \[D_{q}=\frac{\ln 2}{\ln(1/\lambda)}+\left\{\begin{array}{ll}1,&\mbox{for $q\le q_{\rm T}=2$,}\\ \frac{q/2}{q-1},&\mbox{for $q>q_{\rm T}=2$}\end{array}\right.
\tag{9.50}\] Thus, \(D_{q}\) is the sum of the dimension of the Cantor set in \(x\) generated by (9.49a) (namely, \(\ln 2/\ln(1/\lambda)\)) and the logistic map \(D_{q}\) shown in Figure 9.2(\(a\)). Ott _et al._ (1989a) also evaluate the dimension spectrum predicted by the periodic orbits partition function (we denote this prediction \(D_{q}^{\prime}\)) and the dimension spectrum predicted by the Lyapunov partition function (we denote this prediction \(D_{q}^{\prime\prime}\)). They obtain \[D_{q}^{\prime}=\frac{\ln 2}{\ln(1/\lambda)}+1, \tag{9.51}\] for all \(q\), and \[D_{q}^{\prime\prime}=\left\{\begin{array}{ll}\frac{\ln 2}{\ln(1/\lambda)}+1,&\mbox{for $q\le q_{\rm T}$,}\\ \frac{1}{q-1}\,\frac{\ln 2}{\ln(1/\lambda)}+1,&\mbox{for $q>q_{\rm T}$.}\end{array}\right. \tag{9.52}\] These results are illustrated in Figure 9.10. In accord with the conjecture, we see that (9.50)–(9.52) agree for \(q<q_{\rm T}\) but that they disagree outside this range. See, for example, Grassberger _et al._ (1988) and Ott _et al._ (1989a,b) for further discussion.

Figure 9.10: Schematic plots of \(D_{q}\), \(D_{q}^{\prime}\) and \(D_{q}^{\prime\prime}\) versus \(q\) for the map (9.49).

### Fractal aspects of fluid advection by Lagrangian chaotic flows

In Section 7.3.4 we discussed how consideration of the advection of tracer particles in a fluid may lead to a system of ordinary differential equations whose solution is chaotic. In this section we apply the analytical tools introduced in Section 9.4 to this problem. In particular, we find that chaos leads to multifractal properties of the spatial distribution of tracer particles, and that the function \(G(h)\) (see Eq. (9.36)) characterizing the probability distribution function of finite time Lyapunov exponents plays a key role.
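As a concrete illustration of how the statistics entering \(G(h)\) can be gathered in practice (the histogram procedure described in Section 9.4), the sketch below samples finite time Lyapunov exponents for the Henon map with the standard parameters \(a=1.4\), \(b=0.3\); the choice of map and all numerical settings are our illustrative assumptions, not prescriptions from the text.

```python
import numpy as np

# Henon map x' = a - x^2 + b*y, y' = x (standard parameters assumed).
a, b = 1.4, 0.3
rng = np.random.default_rng(1)

def henon(x, y):
    return a - x * x + b * y, x

num, n = 5000, 50
x = rng.uniform(-0.1, 0.1, num)
y = rng.uniform(-0.1, 0.1, num)
for _ in range(100):                     # relax onto the attractor first
    x, y = henon(x, y)

# iterate tangent vectors along each orbit, renormalizing every step
u, v = np.ones(num), np.zeros(num)
log_stretch = np.zeros(num)
for _ in range(n):
    # Jacobian of the Henon map at (x, y) is [[-2x, b], [1, 0]]
    u, v = -2.0 * x * u + b * v, u
    norm = np.hypot(u, v)
    log_stretch += np.log(norm)
    u, v = u / norm, v / norm
    x, y = henon(x, y)

h1 = log_stretch / n                     # finite time exponents h1(x_i, n)
hist, edges = np.histogram(h1, bins=40, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
# up to an n-independent normalization, (9.36) gives G(h) ~ -(1/n) ln P(h, n);
# the minimum of the estimated G sits near the typical exponent h1
G_est = -np.log(hist[mask]) / n
print(centers[mask][np.argmin(G_est)], h1.mean())
```

The minimum of the estimated \(G\) lies close to the mean finite time exponent, which for these parameters is near the Henon Lyapunov exponent \(h_{1}\approx 0.42\).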
In the following we discuss two situations: (_a_) the multifractal gradient field of a tracer distribution that results from chaotic advection of an initially smooth tracer distribution (Ott and Antonsen, 1988, 1989 and Varosi _et al._, 1991); and (_b_) the diffusive exponential decay of tracer distributions to homogeneity (Pierrehumbert, 1994; Antonsen _et al._, 1995a; Rothstein _et al._, 1999). Related considerations apply to the multifractal properties of the magnetic field in the kinematic dynamo problem (see Section 4.6; Finn and Ott, 1988; Ott and Antonsen, 1989 and Ott, 1998), to the multifractal vorticity field in the stability problem for smooth fluid flow at high Reynolds number (Reyl _et al._, 1998), and to the wavenumber spectrum of a tracer distribution when there is a tracer source at long wavelength and a finite lifetime for exponential tracer decay (Nam _et al._, 1999). We begin by considering a velocity field \({\bf v}({\bf x},\ t)\) which is chaotic in the sense discussed in Section 7.3.4; i.e., solutions of the equation describing the trajectory of fluid elements, \({\rm d}{\bf x}(t)/{\rm d}t={\bf v}({\bf x}(t),\ t)\), are chaotic. This situation is commonly called _Lagrangian chaos_. In what follows we restrict our discussion to the case of two dimensional incompressible flows (\({\bf x}=(x,\ y)\), \({\bf v}=(v_{x},\ v_{y})\) and \(\nabla\cdot{\bf v}=0\)). We consider tracer particles advected by the fluid, where \(\rho({\bf x},\ t)\) represents the density of tracer particles. Alternatively, \(\rho({\bf x},\ t)\) may be thought of as the density of a dye or chemical in the fluid, or, indeed, any quantity (e.g., temperature) advected by the fluid. If the density \(\rho({\bf x},\ t)\) has no effect on the fluid motion, the quantity \(\rho({\bf x},\ t)\) is called a _passive scalar_.
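A minimal numerical illustration of Lagrangian chaos: sampling a time periodic incompressible flow once per period yields an area preserving map. Below we use a composition of two sinusoidal shears (an illustrative model of our own choosing, not a flow from the text), each of which preserves area, and follow two tracers initially \(10^{-8}\) apart.

```python
import numpy as np

# Pulsed model flow: alternate a horizontal and a vertical sinusoidal shear
# on the 2*pi-periodic plane.  Each shear preserves area, so the composition
# models an incompressible flow sampled once per period.  K = 3.0 is an
# illustrative choice for which the dynamics is strongly chaotic.
K = 3.0

def advect(x, y):
    x = (x + K * np.sin(y)) % (2 * np.pi)
    y = (y + K * np.sin(x)) % (2 * np.pi)
    return x, y

x1, y1 = 1.0, 1.0            # two tracers, 1e-8 apart initially
x2, y2 = 1.0 + 1e-8, 1.0
for _ in range(100):
    x1, y1 = advect(x1, y1)
    x2, y2 = advect(x2, y2)
d = np.hypot(x2 - x1, y2 - y1)
print(d)   # the 1e-8 offset has been amplified by many orders of magnitude
```

The exponential separation of nearby fluid elements is exactly the finite time stretching \(\lambda_{1}({\bf x},\,t)\) that enters the constructions below.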
In general, a passive scalar \(\rho({\bf x},\ t)\) obeys an equation of the form \[\partial\rho({\bf x},\ t)/\partial t+{\bf v}({\bf x},\ t)\cdot\nabla\rho({\bf x},\ t)=Q({\bf x},\ t), \tag{9.53}\] where \(\nabla\cdot{\bf v}=0\) has been used (otherwise the second term on the left hand side of Eq. (9.53) should be replaced by \(\nabla\cdot(\rho{\bf v})\)) and \(Q\) represents effects such as diffusion and sources and sinks of the passive scalar. For situation (_a_) referred to above, we will consider \(Q=0\). For situation (_b_) we consider the effect of diffusion, \[Q=\kappa\nabla^{2}\rho, \tag{9.54}\] where \(\kappa\) is the diffusion coefficient of the passive scalar in the fluid.

#### Multifractal gradient field

In the frame moving with a fluid trajectory, \({\rm d}{\bf x}(t)/{\rm d}t={\bf v}({\bf x}(t),\,t)\), Eq. (9.53) becomes \[{\rm d}\rho({\bf x}(t),\,t)/{\rm d}t=Q({\bf x}(t),\,t). \tag{9.55}\] Thus, the rate of change of \(\rho\) seen following a fluid element is \(Q\). In the absence of sources and sinks, \(Q\) is given by (9.54), and we consider the problem of finding \(\rho({\bf x},\,t)\) for \(t>0\) given some initial passive scalar distribution \(\rho({\bf x},\,0)\). We are particularly interested in the case where \(\kappa\) is very small, and \(\rho({\bf x},\,0)\) does not vary on spatial scales that are too small. For example, we consider a fluid confined in a container of size \(L\), where \(|\nabla\rho({\bf x},\,0)|\sim\rho({\bf x},\,0)/L\) for typical points in the fluid, and at \(t=0\) we have \(|{\bf v}\cdot\nabla\rho|\gg\kappa|\nabla^{2}\rho|\). As we shall shortly demonstrate, as time increases, chaotic mixing causes \(\rho\) to develop variation on finer and finer spatial scales. Thus, at some time, \(t_{*}\), the inequality \(|{\bf v}\cdot\nabla\rho|\gg\kappa|\nabla^{2}\rho|\) is violated.
For the considerations in this subsection, we consider \(t\) to be in an intermediate range, where it is large in the sense that the chaotic mixing has created spatial variation of \(\rho\) on scales very much smaller than \(L\) (i.e., \(|\nabla\rho({\bf x},\,t)|\gg|\nabla\rho({\bf x},\,0)|\)), but diffusion is still negligible, \(t<t_{*}\). (In the next subsection we consider \(t>t_{*}\).) Thus, we can take \(Q=0\). In this time range Eq. (9.55) indicates that \(\rho\) _is constant following a fluid element_. We define a measure based on the gradient of the passive scalar, \[\mu(S,\,t)=\int_{S}|\nabla\rho|\,{\rm d}^{2}x\Big/\int_{R_{0}}|\nabla\rho|\,{\rm d}^{2}x, \tag{9.56}\] where \(\mu(S,\,t)\) is the measure of the two dimensional region \(S\) at time \(t\), and \(R_{0}\) is the entire region in which the fluid is contained (we assume \(R_{0}\) is bounded). As the time \(t\) increases (with \(t<t_{*}\)) the gradient measure \(\mu\) approaches a multifractal measure in the sense that \(I(q,\,\epsilon)=\sum_{i}\mu_{i}^{q}\) (Eq. (9.4)) has a range of \(\epsilon\) values, \(L\gg\epsilon\gg\epsilon_{*}\), where \(I(q,\,\epsilon)\) has a power law scaling with \(\epsilon\), \(I(q,\,\epsilon)\sim\epsilon^{\tau}\), corresponding to \(D_{q}=(q-1)^{-1}\tau\) (Eq. (9.3)). As we shall see, \(\epsilon_{*}\), the small \(\epsilon\) end of the scaling range, decreases exponentially with \(t\) (as long as \(t<t_{*}\)). For numerical and experimental examples of power law scaling of \(I(q,\,\epsilon)\) for the gradient measure see Varosi _et al._ (1991) and Ramshankar and Gollub (1991). We now seek to obtain \(D_{q}\) from the function \(G(h)\) characterizing the distribution function of finite time Lyapunov exponents (Eq. (9.36)). Suppose we start at \(t=0\) with a spatially uniform gradient \(\nabla\rho({\bf x},\,0)\) (the final result is independent of this assumption).
Imagine that we divide the space by a grid of small squares of edge length \(\delta\). Now advance time forward to a time \(t<t_{*}\). For small enough \(\delta\) the action of the flow on a given square is linear. Thus, a small \(\delta\times\delta\) square at \(t=0\) is stretched into a long, thin parallelogram, as illustrated in Figure 9.11. In Figure 9.11 the width of the parallelogram is the reciprocal of its length due to the incompressibility of the flow, and \(\lambda_{1}(\mathbf{x}_{i},\,t)\) is the finite time stretching experienced for the initial location \(\mathbf{x}_{i}\) of box \(i\) of the \(\delta\) grid. We now cover the parallelogram with smaller square boxes of edge length \(\epsilon_{i}=\delta/\lambda_{i}\), where \(\lambda_{i}=\lambda_{1}(\mathbf{x}_{i},\,t)\). There are of the order of \(\lambda_{i}^{2}\) such small boxes. Since \(\rho\) is constant following fluid elements (for \(Q=0\)), the compression of the original box to a width \(\lambda_{i}^{-1}\) smaller than the original width implies an increase of the gradient by the same factor, \[|\nabla\rho(\mathbf{x},\,t)|_{\mathbf{x}=\mathbf{x}_{i}}\sim\lambda_{i}|\nabla\rho(\mathbf{x},\,0)|_{\mathbf{x}=\mathbf{x}_{i}}.\] Thus, we can estimate the integral \(\mathcal{T}_{i}=\int|\nabla\rho|\,\mathrm{d}^{2}x\) over one of the small \(\epsilon_{i}\times\epsilon_{i}\) boxes as \(\mathcal{T}_{i}\sim|\nabla\rho(\mathbf{x},\,0)|\lambda_{i}(\delta/\lambda_{i})^{2}=(\delta^{2}/\lambda_{i})|\nabla\rho(\mathbf{x},\,0)|\), and the same integral over the whole fluid region \(R_{0}\) as \[\sum_{i}\lambda_{i}^{2}\mathcal{T}_{i},\] where the factor \(\lambda_{i}^{2}\) results from the fact that there are \(\lambda_{i}^{2}\) boxes of size \(\epsilon_{i}\times\epsilon_{i}\) in the parallelogram. Using these estimates in Eq.
(9.56) gives the following result for the measure \(\mu_{i}\) of one of the small \(\epsilon_{i}\times\epsilon_{i}\) boxes: \[\mu_{i}\sim\lambda_{i}^{-1}\Big/\sum_{j}\lambda_{j}. \tag{9.57}\] At time \(t\), we have a covering of the space by small boxes of varying sizes \(\epsilon_{i}\). To make use of our knowledge of the measures \(\mu_{i}\) in these boxes of varying size, we utilize the partition function (9.14). Equation (9.57) then leads to the estimate \[\Gamma_{q}\sim\sum_{i}\lambda_{i}^{\tilde{\tau}+2-q}\Big/\Big(\sum_{i}\lambda_{i}\Big)^{q}\sim\big\langle[\lambda_{1}(\mathbf{x},\ t)]^{\tilde{\tau}+2-q}\big\rangle\Big/\big\langle\lambda_{1}(\mathbf{x},\ t)\big\rangle^{q}. \tag{9.58}\] Employing Eq. (9.38) to evaluate the spatial averages, and using arguments similar to those giving Eq. (9.39), we obtain our desired result for the multifractal dimension spectrum of the gradient measure in terms of \(G(h)\), \[D_{q}=2-\frac{q-\alpha}{q-1}, \tag{9.59}\] where \(\alpha\) is given by \[\min_{h}[G(h)-\alpha h]=q\min_{h}[G(h)-h]. \tag{9.60}\]

#### Long-time exponential decay of the passive scalar distribution to homogeneity

In a real fluid, diffusion of the passive scalar is always present (\(Q=\kappa\nabla^{2}\rho\)), and \(t_{*}\) (although large for \(\kappa\) small) is still finite. As \(t\) increases past \(t_{*}\), diffusion comes into play and acts to remove gradients. Thus, it is expected that, at long time, the passive scalar distribution will approach homogeneity. It is of interest to consider the nature of this approach. Numerical results (Pierrehumbert, 1994 and Antonsen _et al._, 1995b), as well as experimental ones (Rothstein _et al._, 1999), show that this approach is exponential. That is, the spatially averaged scalar variance, \(C(t)=\langle[\rho(\mathbf{x},\ t)-\langle\rho(\mathbf{x},\ t)\rangle]^{2}\rangle\), decays exponentially, \(C(t)\sim\exp(-\nu t)\).
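The decay to homogeneity can be exhibited with a pulsed model flow (an illustrative construction of our own, not taken from the text): the alternating shears are applied spectrally, so advection alone conserves the scalar variance exactly, and a diffusion step then damps each Fourier mode. We make no attempt to extract the rate \(\nu\) here, only to show the decay of the variance.

```python
import numpy as np

# Pulsed advection--diffusion on a 2*pi-periodic box (illustrative choices:
# alternating sinusoidal shears of strength K, diffusivity kappa).
N, K, kappa, steps = 128, 3.0, 1e-3, 40
m = np.fft.fftfreq(N) * N                    # integer wavenumbers
grid = 2 * np.pi * np.arange(N) / N

def shift_rows(f, s):
    # advect row j by displacement s[j]:  f(x) -> f(x - s[j]), in Fourier space
    return np.real(np.fft.ifft(np.fft.fft(f, axis=1)
                               * np.exp(-1j * np.outer(s, m)), axis=1))

kx, ky = np.meshgrid(m, m)
damp = np.exp(-kappa * (kx ** 2 + ky ** 2))  # one diffusion step per iterate

rho = np.sin(grid)[None, :] * np.ones((N, 1))   # smooth initial scalar
C0 = np.var(rho)
for _ in range(steps):
    rho = shift_rows(rho, K * np.sin(grid))      # x-shear, displacement ~ y
    rho = shift_rows(rho.T, K * np.sin(grid)).T  # y-shear, displacement ~ x
    rho = np.real(np.fft.ifft2(np.fft.fft2(rho) * damp))
print(np.var(rho) / C0)   # variance has decayed by a large factor
```

Chaotic stretching transfers the scalar variance to high wavenumbers, where even the weak diffusion removes it; without the shears, the \(k=1\) mode alone would decay only by the factor \(\exp(-2\kappa\,{\tt steps})\approx 0.92\) over the same time.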
Furthermore, the numerical results also show that, for small \(\kappa\), the decay rate \(\nu\) is independent of \(\kappa\). Thus, although the long time exponential damping of \(C(t)\) is due to diffusion, the damping rate does not depend on the strength of the diffusion. Note, however, that diffusion does influence the time dependence of \(C(t)\) through \(t_{*}\), which increases with decreasing \(\kappa\). Thus, for smaller \(\kappa\), exponential decay of \(C(t)\) starts later. An analysis determining \(\nu\) is given in Antonsen _et al._ (1995b). The result is \[\nu=\min_{h}[h+G(h)], \tag{9.61}\] which is independent of \(\kappa\) and depends only on the stretching properties of the flow as reflected in \(G(h)\).

## Problems

1. Derive Eq. (9.12).
2. Find \(D_{q}\) for the measure described in Problem 10 of Chapter 3 by using the partition function formalism. Sketch the corresponding \(f(\alpha)\), labeling significant values on the vertical and horizontal coordinate axes.
3. Repeat Problem 2 above for the measure described in Problem 11 of Chapter 3.
4. Derive Eq. (9.21).
5. Taking the limit \(q\to 1\), obtain Eq. (5.16) from Eq. (9.28). Similarly, use (9.31) to obtain (5.14).
6. Calculate \(H_{q}\) for the generalized baker's map.
7. Consider the attractor of the generalized baker's map with \(\lambda_{a}=\lambda_{b}=\frac{1}{2}\), and, using Eq. (9.10), show that the set whose singularity index is \(\alpha\) (in the notation of Section 9.1) is given by those \(x\) values whose binary representation \(0.a_{1}a_{2}a_{3}\ldots\) (where \(a_{i}=0\) or 1) is such that the fraction of ones ...

Footnote 4: Note that \(\langle\tau\rangle\), the average decay time, is very different from the notationally similar quantity \(\tau(q)\) appearing in (9.8).

Footnote 5: It is suggested, at this point, that the reader refresh his memory by reviewing the material in Section 5.6.

Footnote 6:
Recall that we use the notation \(h_{1}\) (i.e., without any arguments) to denote the value of \(\lim_{n\to\infty}h_{1}(\mathbf{x},\,n)\) assumed for almost every \(\mathbf{x}\) with respect to the measure under consideration.

Footnote 7: In the construction which we used in arriving at Eq. (9.48), we have made two implicit assumptions. Namely, we have assumed that the segment \(ab\) maps to a segment \(a^{\prime}b^{\prime}\) which lies entirely within \(C_{k}\), and we have assumed that the preimage of \(c^{\prime}d^{\prime}\) is entirely within \(C_{k}\). These situations might conceivably not hold if \(\mathbf{x}_{n}\) is too close to a stable boundary or if \(\mathbf{x}_{0}\) is too close to an unstable boundary. The point we wish to make here is that, for hyperbolic systems, the partition into cells can be chosen in such a way that \(a^{\prime}b^{\prime}\) and \(c^{\prime}d^{\prime}\) are always in \(C_{k}\). See Grebogi _et al._ (1988b) for further discussion.

Footnote 8: We note, however, that it is quite common for nonattracting chaotic sets encountered in practice to be hyperbolic. For example, this is apparently the case for the invariant chaotic set relevant to chaotic scattering shown in Figure 5.21(\(c\)). Note, from this figure, that the chaotic set appears to be the intersection of a Cantor set of stable manifold lines with a Cantor set of unstable manifold lines, and that these appear to intersect at angles that are well bounded away from zero, thus implying the absence of tangencies.

## Chapter 10 Control and synchronization of chaos

In the preceding chapters we have been mainly concerned with studying the properties of chaotic dynamical systems. In this chapter we adopt a different, more active, point of view. In particular, we ask, can we use our knowledge of chaotic systems to achieve some desired goal? Two general areas where this point of view has proven useful are the control of chaos and the synchronization of chaotic systems.
By control we shall generally mean feedback control. That is, we have some control variable that we can vary as a function of time, and we decide how to do this variation on the basis of knowledge (perhaps limited) of the system's past history and/or current state. In the synchronization of chaos, we generally shall be considering two (or more) systems that are coupled. The evolution is chaotic, and we are interested in the conditions such that the component systems execute the same motion. Both control of chaos and synchronization of chaos have potential practical applications, and we shall indicate these as this chapter proceeds.

### 10.1 Control of chaos

Two complementary attributes sometimes used to define chaos are (i) _exponentially sensitive dependence_ and (ii) _complex orbit structure_. A quantifier for attribute (i) is the largest Lyapunov exponent, while a quantifier for attribute (ii) is the entropy (e.g., the metric entropy or the topological entropy, Section 4.5). These attributes of chaos can be exploited to fashion chaos control strategies. Control is done with some goal in mind. Three possible goals that have received attention in the context of chaotic systems are the following: improving the time averaged performance of a steadily running chaotic process (Goal 1, Section 10.2); rapidly directing ('targeting') orbits to a desired state (Goal 2, Section 10.3); and keeping the orbit away from some undesirable region of state space (Goal 3).

For Goal 1, the chaos attribute of the existence of a complex orbit structure is most relevant. For Goals 2 and 3, the chaos attribute of exponentially sensitive dependence is the more relevant. For all three goals perhaps the most interesting aspect of the chaos control problem is that the control goals can, in principle, be achieved with only small controls. This is not so for typical nonchaotic situations. For example, consider Goal 1 and suppose the system is nonchaotic; e.g., it is on a steady state attractor.
If one wanted to improve the performance by a substantial amount, one would typically need to make some nontrivial change in the system, for example, by moving the steady state attractor to a new location in state space substantially different from its original location. In Sections 10.2 and 10.3 we discuss work done on Goals 1 and 2, respectively. Goal 3 has been much less studied, and we limit ourselves to giving only a brief discussion of it below. The reason for interest in Goal 3 is to avoid some catastrophic event that is known to occur whenever the chaotic orbit wanders into a particular region of state space. As an example, we mention the work of In _et al_. (1997) who considered a thermal combustor, a device whose purpose is to burn an incoming fuel and air mixture, thus producing at the output of its combustion chamber an outflowing hot gas. It is found that as the fuel/air ratio is decreased (a regime that has potential practical advantages), the combustor pulses chaotically. Upon further decrease, the chaotic attractor undergoes a boundary crisis (Section 8.3.1) whereby it is replaced by a chaotic transient followed by 'flameout.' By flameout we mean that the flame in the combustion chamber goes out, and the device ceases to operate. In _et al_. (1997) discuss how the orbit in the chaotic transient can be controlled by small perturbations to avoid coming to the state space region on the flameout side of the former basin boundary. If this is done, then flameout is avoided and the chaotic transient is converted to sustained chaotic motion with practically advantageous properties. Other examples of Goal 3 include preventing the capsizing of a ship in a rough sea (Ding _et al_., 1994), and a possible intervention strategy for preventing epileptic seizures (Yang _et al_., 1995). ### 10.2 Controlling a steadily running chaotic process (Goal 1) To make the situation concrete, consider Figure 10.1, which shows a schematic of a chemical reactor. 
On the left hand side of the figure, several pipes carrying different chemicals flow into the reactor tank, where they are rapidly stirred to create a homogeneous mixture within the tank. The inflow from the pipes on the left is balanced by outflow from the pipe on the right. As discussed in Section 2.4.3, it is possible for the chemical rate equations for such a situation to yield chaotic time dependences of the various concentrations of chemical species within the tank. In such a situation, the chemical concentrations in the outflow pipe will also vary chaotically with time. Suppose we wish to maximize the output of some desired chemical. We can then regard the time average of the flux of that chemical out of the tank as a measure of the system performance. Further suppose that there is accessible to us some valve controlling the flow rate in one of the input pipes (see Figure 10.1). Can we make a large improvement in the performance of the system by making time dependent adjustments to the valve setting? We shall see that substantial improvement is often possible even if we are only allowed to turn the valve handle within a small angular range. In fact, in the ideal, noiseless, case, we can, in principle, typically make big performance improvements even for an _arbitrarily small_ allowed control range. The key point for this problem is the presence of complex orbit structure in chaos. By this we mean that there are present within a chaotic invariant set many topologically distinct orbits. One aspect of this is that there are typically many different unstable periodic orbits (UPOs) embedded in a chaotic attractor. In fact, for hyperbolic attractors, the number, \(\#(P)\), of UPOs with period less than or equal to \(P\) increases exponentially with \(P\), \[\#(P)\sim\exp(h_{\rm T}P),\] where \(h_{\rm T}\) is the topological entropy (Katok, 1980).
Because we envision making only small control perturbations, we cannot create new orbits with very different properties from the existing ones. Thus, we seek to exploit the UPOs that exist in the absence of control. The approach is as follows (Ott _et al_., 1990a,b):

(_a_) Examine the uncontrolled dynamics of the system and determine some of the low period UPOs embedded in the chaotic attractor.

(_b_) Examine these UPOs and find the system performance that would result from the time dynamics of each of these UPOs. Compare the performances for the UPOs and choose the UPO with the best performance. (From the results of Section 9.5, the uncontrolled performance can be expressed as an average over the performances of UPOs (e.g., in the case of a two dimensional map the UPO performances are weighted by \(\lambda_{1}^{-1}({\bf x}_{jn},\,n)\), Eq. (9.41)). Thus, some of the UPO performances should be better than the uncontrolled performance, and some should be worse.)

(_c_) Formulate a control algorithm that stabilizes the selected UPO in some small neighborhood of the UPO. Thus, we can envision that, as the uncontrolled orbit wanders ergodically over the attractor, it will eventually come near the selected UPO, and, when this happens, it takes only a small kick (control perturbation) to place it on the UPO. Thereafter, if the orbit is displaced from the UPO (e.g., by noise), it can be kicked back on.

This general strategy (Ott _et al_., 1990a,b) has proven to have wide potential applicability. Some issues concerning it are the choice of the control algorithm referred to in step (_c_), implementation using delay coordinates, application to systems where a known model is lacking, the effect of noise, and the possibility of using techniques from Goal 2 to reduce the waiting time for the orbit to enter the stabilizing neighborhood. In the rest of this section we address some of these issues. We begin by discussing techniques for the stabilization of UPOs embedded in the chaotic attractor.
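Step (_a_) can be sketched numerically. Below, low period UPOs of the Henon map (standard parameters \(a=1.4\), \(b=0.3\); our illustrative choice of system) are located by Newton's method applied to \(F({\bf x})={\bf M}^{k}({\bf x})-{\bf x}\); the starting guesses are hypothetical.

```python
import numpy as np

# Locate low period UPOs of the Henon map by Newton's method on
# F(x) = M^k(x) - x (a = 1.4, b = 0.3 assumed for illustration).
a, b = 1.4, 0.3

def M(z):
    return np.array([a - z[0] ** 2 + b * z[1], z[0]])

def Mk(z, k):
    for _ in range(k):
        z = M(z)
    return z

def newton_upo(z, k, steps=50):
    for _ in range(steps):
        F = Mk(z, k) - z
        # numerical Jacobian of F by forward finite differences
        J = np.empty((2, 2))
        eps = 1e-8
        for i in range(2):
            dz = np.zeros(2)
            dz[i] = eps
            J[:, i] = (Mk(z + dz, k) - (z + dz) - F) / eps
        z = z - np.linalg.solve(J, F)
    return z

fp = newton_upo(np.array([0.8, 0.8]), 1)       # period-1 orbit (fixed point)
p2 = newton_upo(np.array([-0.7, 1.4]), 2)      # one point of a period-2 orbit
print(fp, p2)
```

The fixed point comes out near \((0.88,\,0.88)\); the period-2 starting guess is chosen close to the expected orbit point so that Newton's method converges to it rather than back to the fixed point (which is also a root of \({\bf M}^{2}({\bf x})={\bf x}\)).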
Linear control theory offers a very general technique for the stabilization of a periodic orbit. This method, called _pole placement_, will be discussed subsequently. However, for illustrative purposes, we begin by discussing a less general method and restrict consideration to two dimensional maps. We consider a two dimensional map **M**(**x**, _p_) where \(p\) is a system parameter. At \(p=\bar{p}\) the map **M** has a chaotic attractor which has embedded within it an unstable period one orbit (fixed point) \(\mathbf{x}_{\mathrm{F}}(\bar{p})\). (The extension to higher period will be given subsequently.) Say we want to convert the chaotic motion to a stable orbit \(\mathbf{x}=\mathbf{x}_{\mathrm{F}}(\bar{p})\) by small variations of the parameter \(p\). Since we vary \(p\) at each step we replace \(p\) by \(p_{n}=\bar{p}+q_{n}\), where we restrict the perturbation \(q_{n}\) to satisfy \(|q_{n}|<q_{*}\). That is, the maximum allowed perturbation of \(p\) is \(q_{*}\), which we regard as small. Linearizing the map \(\mathbf{x}_{n+1}=\mathbf{M}(\mathbf{x}_{n},\,p)\) about \(\mathbf{x}=\mathbf{x}_{\mathrm{F}}(\bar{p})\) and \(p=\bar{p}\), we have \[\mathbf{x}_{n+1}\simeq q_{n}\mathbf{g}+[\lambda_{\mathrm{u}}\mathbf{e}_{ \mathrm{u}}\mathbf{f}_{\mathrm{u}}+\lambda_{\mathrm{s}}\mathbf{e}_{\mathrm{s}} \mathbf{f}_{\mathrm{s}}]\cdot[\mathbf{x}_{n}-q_{n}\mathbf{g}], \tag{10.1}\] where we have chosen coordinates so that \(\mathbf{x}_{\mathrm{F}}(\bar{p})=0\), and the quantities appearing in (10.1) are as follows: \(\mathbf{g}=\partial\mathbf{x}_{\mathrm{F}}/\partial p|_{p=\bar{p}}\), \(\lambda_{\mathrm{s}}\) and \(\lambda_{\mathrm{u}}\) are the stable and unstable eigenvalues, \(\mathbf{e}_{\mathrm{s}}\) and \(\mathbf{e}_{\mathrm{u}}\) are the stable and unstable eigenvectors, and \(\mathbf{f}_{\mathrm{s}}\) and \(\mathbf{f}_{\mathrm{u}}\) are contravariant basis vectors (defined by \(\mathbf{f}_{\mathrm{s}}\cdot\mathbf{e}_{\mathrm{s}}=\mathbf{f}_{\mathrm{u}} \cdot\mathbf{e}_{\mathrm{u}}=1\),
\(\mathbf{f}_{\mathrm{s}}\cdot\mathbf{e}_{\mathrm{u}}=\mathbf{f}_{\mathrm{u}} \cdot\mathbf{e}_{\mathrm{s}}=0\)). Assume that \(\mathbf{x}_{n}\) falls near the fixed point so that (10.1) applies. We then attempt to pick \(q_{n}\) so that \(\mathbf{x}_{n+1}\) falls approximately on the stable manifold of \(\mathbf{x}_{\mathrm{F}}(\,\bar{p})=0\). That is, we choose \(q_{n}\) so that \(\mathbf{f}_{\mathrm{u}}\cdot\mathbf{x}_{n+1}=0\). If \(\mathbf{x}_{n+1}\) falls on the stable manifold of \(\mathbf{x}=0\), we can then set \(q_{n}=0\), and the orbit will approach the fixed point at the geometrical rate \(\lambda_{\mathrm{s}}\). Dotting (10.1) with \(\mathbf{f}_{\mathrm{u}}\) we obtain the following equation for \(q_{n}\), \[q_{n}=q(\mathbf{x}_{n})\equiv\lambda_{\mathrm{u}}(\lambda_{\mathrm{u}}-1)^{-1} (\mathbf{x}_{n}\cdot\mathbf{f}_{\mathrm{u}})/(\mathbf{g}\cdot\mathbf{f}_{ \mathrm{u}}), \tag{10.2}\] which we use for \(q_{n}\) when \(|q(\mathbf{x}_{n})|<q_{*}\). When \(|q(\mathbf{x}_{n})|>q_{*}\), we set \(q_{n}=0\). Thus, for small \(q_{*}\), a typical initial condition will generate a chaotic orbit which is the same as for the uncontrolled case until \(\mathbf{x}_{n}\) falls within a narrow slab \(|x_{n}^{\mathrm{u}}|<x_{*}\), where \(x_{n}^{\mathrm{u}}=\mathbf{f}_{\mathrm{u}}\cdot\mathbf{x}_{n}\) and \(x_{*}=q_{*}|(1-\lambda_{\mathrm{u}}^{-1})\mathbf{g}\cdot\mathbf{f}_{\mathrm{u }}|\). At this time the control (10.2) will be activated. Even then, however, the orbit may not be brought to the fixed point because of the nonlinearities not included in (10.2). In this event the orbit will leave the slab and continue to move chaotically as if there were no control. Eventually (due to ergodicity of the uncontrolled attractor) the orbit will fall near enough to the desired fixed point that attraction to it is obtained. See Figure 10.2 for the extension of this technique (Ott _et al._, 1990b) to a periodic orbit of period \(k>1\). 
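To make the procedure concrete, here is a minimal numerical sketch of the control law (10.2) applied to the Hénon map (used here purely as a stand-in for a generic two dimensional chaotic map; the parameter values, the choice of \(p=a\) as the control parameter, and the threshold \(q_{*}\) are illustrative assumptions, not taken from the text):

```python
import numpy as np

# Hénon map x_{n+1} = a - x_n^2 + b*y_n, y_{n+1} = x_n, controlled through
# the parameter p = a (illustrative choice; nominal a_bar = 1.4, b = 0.3).
a_bar, b = 1.4, 0.3

def henon(v, a):
    return np.array([a - v[0]**2 + b * v[1], v[0]])

# Fixed point x_F = y_F on the attractor, from x = a - x^2 + b*x.
xf = (-(1 - b) + np.sqrt((1 - b)**2 + 4 * a_bar)) / 2
xF = np.array([xf, xf])

# Eigen-decomposition of the Jacobian at x_F; rows of inv(V) are the
# contravariant vectors f_i (they satisfy f_i . e_j = delta_ij).
J = np.array([[-2 * xf, b], [1.0, 0.0]])
w, V = np.linalg.eig(J)
iu = np.argmax(np.abs(w))
lam_u, f_u = w[iu].real, np.linalg.inv(V)[iu].real

# g = d(x_F)/da, from differentiating the fixed point condition.
g = np.full(2, 1.0 / (1 + 2 * xf - b))

q_star = 0.05                      # maximum allowed parameter perturbation
v = np.array([0.1, 0.1])
for n in range(200_000):
    dx = v - xF
    if np.linalg.norm(dx) < 1e-12:
        break                      # locked onto the fixed point
    q = lam_u / (lam_u - 1) * (f_u @ dx) / (f_u @ g)   # Eq. (10.2)
    v = henon(v, a_bar + (q if abs(q) < q_star else 0.0))

print(np.linalg.norm(v - xF))      # ~0 once the orbit has locked on
```

Note that the normalization of the eigenvectors cancels out of (10.2), since \(\mathbf{f}_{\mathrm{u}}\) enters in the ratio \((\mathbf{f}_{\mathrm{u}}\cdot\mathbf{x}_{n})/(\mathbf{f}_{\mathrm{u}}\cdot\mathbf{g})\); as the text describes, the orbit wanders chaotically until it enters the control slab, after which it is captured.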
The periodic orbit (shown as \(1\to 2\to 3\to\cdots\to k\to 1\) in the figure) is assumed to be embedded in the chaotic attractor of a two dimensional map with one positive and one negative Lyapunov exponent. The linearized directions of the stable manifold at the element points (\(1,\,2,\,\ldots,\,k\)) of the periodic orbit are denoted \(\mathbf{e}_{\mathrm{s}1}\), \(\mathbf{e}_{\mathrm{s}2}\), \(\ldots,\,\mathbf{e}_{\mathrm{s}k}\) in Figure 10.2. When the chaotic orbit \(\mathbf{x}_{n}\) comes close to the period \(k\) orbit, the control is applied to put \(\mathbf{x}_{n+1}\) on the component \(\mathbf{e}_{\mathrm{s}j}\) of the linearized stable manifold that is nearest to \(\mathbf{x}_{n+1}\). Thus, we create a stable orbit, but it is preceded by a chaotic transient. The length of such a chaotic transient depends sensitively on the initial conditions and, for randomly chosen initial conditions, it has an average \(\langle\tau\rangle\) which scales as \[\langle\tau\rangle\sim q_{*}^{-\gamma},\] where the exponent \(\gamma\) is given by \[\gamma=1+\frac{1}{2}\,\frac{\ln|\lambda_{\mathrm{u}}|}{\ln(1/|\lambda_{\mathrm{s}}|)} \tag{10.3}\] (see Ott _et al._, 1990a). The above procedure specifying the control \(q_{n}\) is a special case of the general technique known as 'pole placement' in the theory of control systems (see Ogata, 1990).
For an \(N\) dimensional map \(\mathbf{M}(\mathbf{x},\ p)\), linearization around a fixed point \(\mathbf{x}_{\mathrm{F}}(\bar{p})\) and the nominal parameter value \(\bar{p}\) yields \[\Delta_{n+1}=\mathbf{A}\Delta_{n}+\mathbf{B}q_{n}, \tag{10.4}\] where \(\Delta=\mathbf{x}-\mathbf{x}_{\mathrm{F}}(\bar{p})\), \(\mathbf{A}\) is the \(N\times N\) matrix of partial derivatives of \(\mathbf{M}\) with respect to \(\mathbf{x}\), \(\mathbf{D}_{\mathbf{x}}\mathbf{M}(\mathbf{x}_{\mathrm{F}}(\bar{p}),\ \bar{p})\), and \(\mathbf{B}\) is the \(N\) vector of partial derivatives of \(\mathbf{M}\) with respect to \(p\), \(\mathbf{D}_{p}\mathbf{M}(\mathbf{x}_{\mathrm{F}}(\bar{p}),\ \bar{p})\). For a linear control we have in general \[q_{n}=\mathbf{K}\Delta_{n}, \tag{10.5}\] where \(\mathbf{K}\) is an \(N\) dimensional row vector. Thus, we have \(\Delta_{n+1}=(\mathbf{A}+\mathbf{B}\mathbf{K})\Delta_{n}\), and we desire to choose \(\mathbf{K}\) so that the matrix \(\mathbf{A}^{\prime}=(\mathbf{A}+\mathbf{B}\mathbf{K})\) is stable (i.e., has eigenvalues of magnitude less than 1). If \(\mathbf{A}\) and \(\mathbf{B}\) satisfy a certain condition called 'controllability' (this condition is typically satisfied), then the pole placement technique allows one to determine a control vector \(\mathbf{K}\) which yields _any_ set of eigenvalues that we may choose for the matrix \(\mathbf{A}^{\prime}\). Equation (10.2) corresponds to the choice wherein the unstable eigenvalue of \(\mathbf{A}\) is made zero, while the stable eigenvalue is unaltered by the control. See Romeiras _et al._ (1992) for implementation of the pole placement technique for controlling a chaotic system governed by a four dimensional map (this system, called the kicked double rotor, is described in Chapter 5). For this map and the chosen parameter values there are 36 unstable fixed points embedded within the chaotic attractor.

Figure 10.2: Illustration of control to stabilize a period \(k>1\) orbit of a two dimensional map.
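For a single control parameter, the pole placement computation can be sketched with Ackermann's formula (a standard single-input method, not spelled out in the text; the matrices below reuse the Hénon-map Jacobian as an illustrative stand-in for \(\mathbf{A}\) and \(\mathbf{B}\)):

```python
import numpy as np

# Linearization (10.4) at a fixed point: A = D_x M, B = D_p M. Here the
# Hénon map with p = a serves as a concrete stand-in.
b = 0.3
xf = (-(1 - b) + np.sqrt((1 - b)**2 + 4 * 1.4)) / 2
A = np.array([[-2 * xf, b], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])

lam = np.linalg.eigvals(A)
lam_s = lam[np.argmin(np.abs(lam))].real   # stable eigenvalue, left unaltered

# Ackermann's formula: K places the eigenvalues of A' = A + B K at {0, lam_s},
# i.e., the unstable eigenvalue is made zero, the choice behind Eq. (10.2).
C = np.hstack([B, A @ B])                  # controllability matrix [B, AB]
phi = A @ A - lam_s * A                    # desired char. polynomial, at A
K = -(np.array([[0.0, 1.0]]) @ np.linalg.inv(C) @ phi)

print(np.linalg.eigvals(A + B @ K))        # -> {0, lam_s}
```

The controllability condition mentioned above is exactly the requirement that the matrix `C` be invertible.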
Choosing one of the unstable fixed points for control, \(q_{n}\) was set equal to \(\mathbf{K}\cdot\Delta_{n}\) whenever \(|\mathbf{K}\cdot\Delta_{n}|<q_{*}\) and \(q_{n}=0\) otherwise. Figure 10.3 shows results for one component of \(\mathbf{x}\) versus iterate number for the case where we first control one fixed point (labeled (1)) for iterates 0 to 1000 and then successively switch the control, at iterate numbers 2000, 3000 and 4000, to stabilize three other fixed points (labeled (2), (3) and (4)). We note that, after switching the control, the orbit goes into chaotic motion, but eventually approaches close enough to the desired fixed point orbit, after which time it locks into it. Thus, not only can a small control stabilize a chaotic system in a desired motion but it also provides the flexibility of being able to switch the system behavior from one type of periodic motion to another. This method was first implemented in an experiment on a periodically forced magnetoelastic ribbon by Ditto _et al._ (1990a). It is especially noteworthy that no reliable mathematical model is available for this system, but that the method was nevertheless applied by making use of the experimental delay coordinate embedding technique. In particular, this technique yielded a surface of section map as well as estimates of the locations of periodic orbits, their eigenvalues, and the vectors \(\mathbf{e}_{\mathrm{u,s}}\) and \(\mathbf{f}_{\mathrm{u,s}}\). Furthermore, the control achieved is quite insensitive to errors in the estimates of these quantities and is also not greatly affected by noise provided \(q_{*}\) is not too small. Subsequent to the experiment of Ditto _et al_. (1990a), many other experiments using the above principles have been performed on a variety of fluid, optical, biological, chemical, electrical and mechanical systems. See for example the reviews by Shinbrot _et al_. (1993), Chen and Dong (1993), Boccaletti _et al_.
(2000), and Ott and Spano (1995), and the books by Chen and Dong (1998) and by Sauer _et al_. (1994). In addition, there have been many subsequent papers discussing other methods and contexts for the stabilization of unstable periodic orbits embedded in the chaotic attractor. For example, our discussion above assumed the ability to measure the state vector **x**. In many cases of interest it may only be possible to obtain the time series of a single measured function of the system state. In such cases, one would like to utilize the techniques of delay coordinate embedding discussed in Sections 1.6 and 3.9. The theory of how to stabilize an unstable periodic orbit from delay coordinate observations has been formulated by Dressler and Nitsche (1992), who adapt the technique of Eqs. (10.1) and (10.2) to delay coordinates, and by So and Ott (1995), who adapt the pole placement technique to delay coordinates. As an example, we consider the simple case of a two component delay coordinate vector \(\mathbf{x}_{n}=(y_{n},\ y_{n-1})\), where \(y_{n}\) is a scalar measurement of some quantity related to the system state at time \(n\). Then the relationship between \(\mathbf{x}_{n}\) and \(\mathbf{x}_{n+1}\) is \(\mathbf{x}_{n+1}=\mathbf{M}(\mathbf{x}_{n},\ p_{n},\ p_{n-1})\). The essential difference is that **M** depends on the parameter value at time \(n-1\) as well as that at time \(n\). (In the previous analysis, Eqs. (10.1) and (10.2), **M** depended only on \(p_{n}\).) The dependence on \(p_{n-1}\) reflects the fact that, since the delay coordinate vector \(\mathbf{x}_{n}\) involves \(y_{n-1}\), the system state at time \(n+1\) must depend on the system evolution from time \(n-1\) to time \(n\), and hence on both \(p_{n}\) and \(p_{n-1}\). We can also write \(\mathbf{x}_{n+1}=\mathbf{M}(\mathbf{x}_{n},\ p_{n},\ p_{n-1})\), \(\mathbf{x}_{n}=(y_{n},\ y_{n-1})\) as \(y_{n+1}=N(y_{n},\ y_{n-1},\ p_{n},\ p_{n-1})\).
Linearizing about an assumed period one orbit, \(y=y_{\mathrm{F}}(\bar{p})\) and \(p=\bar{p}\), we obtain \[\eta_{n+1}=\alpha\eta_{n}+\beta\eta_{n-1}+\gamma q_{n}+\delta q_{n-1}, \tag{10.6}\] where \(p_{n}=\bar{p}+q_{n}\), \(y_{n}=y_{\mathrm{F}}(\bar{p})+\eta_{n}\), \(\alpha=\partial N/\partial y_{n}\), \(\beta=\partial N/\partial y_{n-1}\), \(\gamma=\partial N/\partial p_{n}\), \(\delta=\partial N/\partial p_{n-1}\) and all partial derivatives are evaluated at \(y_{n}=y_{n-1}=y_{\mathrm{F}}(\bar{p})\), \(p_{n}=p_{n-1}=\bar{p}\). Again adopting a linear control law, we choose \(q_{n}\) based on knowledge of \(y\) at times \(n\) and \(n-1\) and knowledge of the previous control perturbation \(q_{n-1}\), \[q_{n}=a\eta_{n}+b\eta_{n-1}+cq_{n-1}, \tag{10.7}\] where we are free to choose the control, i.e., to choose the constants \(a\), \(b\) and \(c\). We wish to do so in such a way that the fixed point \(y_{\mathrm{F}}(\bar{p})\) is stable. That is, we desire that (10.6) and (10.7) yield \(\eta_{n}\to 0\) and \(q_{n}\to 0\) as \(n\to\infty\). To explore the conditions under which this is true, we set \(\eta_{n+1}=\lambda\eta_{n}\) and \(q_{n}=\lambda q_{n-1}\). Stability then corresponds to \(|\lambda|<1\). Substituting into (10.6) and (10.7) yields a cubic equation for \(\lambda\) with coefficients that depend on \(a\), \(b\) and \(c\), \[\lambda^{3}-A\lambda^{2}+B\lambda+C=0, \tag{10.8}\] where \(A=c+\alpha+a\gamma\), \(B=c\alpha-\beta-a\delta-b\gamma\) and \(C=\beta c-b\delta\). Thus, by choice of the control law (i.e., by choice of the constants \(a\), \(b\) and \(c\)) we can produce any desired set of coefficients \(A\), \(B\) and \(C\), and hence any set of roots of the cubic (10.8). Choosing these roots to have \(|\lambda|<1\) yields stability. (For example, a triple root of (10.8) at \(\lambda=0\) corresponds to \(A=B=C=0\).)
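The algebra above is easy to check numerically. The sketch below (with hypothetical values for \(\alpha\), \(\beta\), \(\gamma\), \(\delta\)) solves the three linear equations \(A=B=C=0\) for \((a, b, c)\), placing a triple root of (10.8) at \(\lambda=0\); the closed loop is then nilpotent, so the perturbation is annihilated in three steps:

```python
import numpy as np

# Hypothetical linearization coefficients for Eq. (10.6); alpha > 1 makes
# the uncontrolled fixed point unstable.
alpha, beta, gam, delta = 2.0, 0.5, 1.0, 0.3

# A = c + alpha + a*gam = 0, B = c*alpha - beta - a*delta - b*gam = 0,
# C = beta*c - b*delta = 0, written as a linear system in (a, b, c).
M = np.array([[gam, 0.0, 1.0],
              [-delta, -gam, alpha],
              [0.0, -delta, beta]])
a, b, c = np.linalg.solve(M, np.array([-alpha, beta, 0.0]))

# Simulate the closed loop (10.6)-(10.7): "deadbeat" control.
eta_prev, eta, q_prev = 0.0, 0.2, 0.0
for n in range(6):
    q = a * eta + b * eta_prev + c * q_prev
    eta, eta_prev = alpha * eta + beta * eta_prev + gam * q + delta * q_prev, eta
    q_prev = q

print(abs(eta))   # ~0 (roundoff level) after three steps
```

Any other choice of roots inside the unit circle works equally well; the deadbeat choice is merely the easiest to verify.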
In general, the stability achieved is robust to perturbations, and there is a wide class of control laws (i.e., choices of \(\mathbf{K}\) in (10.5) or choices of (\(a\), \(b\), \(c\)) in (10.7)) that yield stability. For example, there will be a volume of (\(a\), \(b\), \(c\)) space for which all roots have magnitude less than one. Under such conditions it is feasible to adopt an empirical approach that does not require direct knowledge of the linearized dynamics: implement feedback (e.g., \(q_{n}=\mathbf{K}\Delta_{n}\), Eq. (10.5)), and vary the feedback parameters (e.g., in (10.5) vary \(\mathbf{K}\)) until stability is obtained. This empirical approach has proven particularly useful in experiments. So far we have been discussing stabilization of a periodic orbit in the case of a discrete time (map) formulation of the dynamics. While continuous time dynamics can be viewed in discrete time via a Poincare surface of section, it is also of interest to consider controlling chaos directly in continuous time. Again we assume that we can determine periodic orbits embedded in the chaotic attractor. Let \(\mathbf{x}_{\mathrm{p}}(t)=\mathbf{x}_{\mathrm{p}}(t+T_{\mathrm{p}})\) denote the continuous time periodic orbit that we wish to stabilize, and say the uncontrolled continuous time dynamical system is \(\mathrm{d}\mathbf{x}(t)/\mathrm{d}t=\mathbf{F}(\mathbf{x}(t))\). Pyragas (1992) introduced two possible forms for continuous time controllers, \[\mathrm{d}\mathbf{x}/\mathrm{d}t=\mathbf{F}(\mathbf{x})+\mathbf{K}_{a}[\mathbf{x}_{\mathrm{p}}(t)-\mathbf{x}(t)], \tag{10.9}\] and \[\mathrm{d}\mathbf{x}/\mathrm{d}t=\mathbf{F}(\mathbf{x})+\mathbf{K}_{b}[\mathbf{x}(t-T_{\mathrm{p}})-\mathbf{x}(t)], \tag{10.10}\] where \(\mathbf{K}_{a,b}\) are matrices specifying the control laws. The control (10.9) requires that the time dependence \(\mathbf{x}_{\mathrm{p}}(t)\) of the periodic orbit be obtained. For the control law (10.10) only the period \(T_{\mathrm{p}}\) of the desired orbit is required.
Since other periodic orbits will typically have periods that differ from \(T_{\rm p}\), one could hope that Eq. (10.10) might preferentially stabilize the desired orbit. Note that for both (10.9) and (10.10) the applied control perturbation becomes zero when \({\bf x}(t)={\bf x}_{p}(t)\). Whether the periodic orbit is or is not stabilized by the control depends on the choice of the control matrix (\({\bf K}_{a}\) for (10.9) or \({\bf K}_{b}\) for (10.10)), and this choice could potentially be made empirically, as discussed in the previous paragraph. Pyragas and Tamasevicius (1993) experimentally demonstrated the continuous time control of chaos by applying a control of the delay form (10.10) to a circuit. Another situation of great interest is the case where knowledge of the map \({\bf M}\) is not available. In such cases it turns out that the idea of control by stabilization of unstable periodic orbits (Ott _et al._, 1990a,b) is still very useful. The point is that the minimum relevant input data for application of the method is knowledge of the unstable periodic orbit to be stabilized. This is much less information than is contained in the full map \({\bf M}\) and, in many cases, it has proven experimentally feasible to determine this more limited information directly from data taken from the chaotic evolution of an experimental system. Methods for the determination from experimental noisy data of unstable periodic orbits embedded in the attractor are discussed by So _et al._ (1997), and by Pierson and Moss (1995). Some representative controlling chaos experiments where periodic orbits or steady states embedded in a chaotic attractor are determined from data occur in a driven magnetoelastic system (Ditto _et al._, 1990a), lasers (Gills _et al._, 1992), electric circuits (Hunt, 1991), chemical reactors (Petrov _et al._, 1994) and cardiac tissue (Garfinkel _et al._, 1992), among others. 
We now describe one technique for the determination of a periodic orbit from experimental data. Imagine that we have a three dimensional flow, \(\mathbf{Z}(t)=[x(t),\ y(t),\ z(t)]\), as shown in Figure 10.4, and assume that we experimentally measure and record a long time series consisting of the coordinates of points, \(\mathbf{u}_{1}\), \(\mathbf{u}_{2}\), \(\mathbf{u}_{3},\ldots,\) as the orbit crosses a Poincare surface of section \(z=\) (constant) in the upward direction \(\mathrm{d}z/\mathrm{d}t>0\). (Here \(\mathbf{u}_{n}=(x(t_{n}),\ y(t_{n}))\) where \(t_{n}\) is the time at the \(n\)th piercing of the surface of section.) For simplicity we restrict consideration to the determination of a period one orbit (fixed point) of the surface of section map. To do this we look for close returns. That is, we sift through the time series \(\mathbf{u}_{n}\) looking for two time consecutive \(\mathbf{u}\)'s that are close to each other. Say we find that \(\mathbf{u}_{100}\) and \(\mathbf{u}_{101}\) are close (see Figure 10.4). If this is so, we can presume that a period one periodic orbit is nearby. To find its upward intersection \(\mathbf{u}_{*}\) with the surface of section we construct a restricted neighborhood containing \(\mathbf{u}_{100}\) and \(\mathbf{u}_{101}\). This restricted neighborhood is shown as the small rectangle in Figure 10.4. We then search through our time series for other close returns (\(\mathbf{u}_{m}\), \(\mathbf{u}_{m+1}\)) that occur in the restricted neighborhood. Since we are dealing with a small neighborhood of the periodic point \(\mathbf{u}_{*}\), we can treat the restricted return dynamics as approximately linear. That is, we can model our close return data by \[\mathbf{u}_{m+1}=\mathbf{C}\mathbf{u}_{m}+\mathbf{b}, \tag{10.11}\] where \(\mathbf{C}\) is a \(2\times 2\) constant matrix, and \(\mathbf{b}\) is a two dimensional constant vector.

Figure 10.4: Finding an unstable periodic orbit in a chaotic attractor by use of experimental measurement of the time series \(\mathbf{u}_{n}\).
Supposing that we have obtained many close return pairs (\(\mathbf{u}_{m}\), \(\mathbf{u}_{m+1}\)), we can use them to obtain a best least squares fit of the model coefficients in (10.11) (i.e., the four matrix elements of \(\mathbf{C}\) and the two components of \(\mathbf{b}\)) to this data. Note that least squares fitting with many return pairs (\(\mathbf{u}_{m}\), \(\mathbf{u}_{m+1}\)) is advantageous for reducing the effect of random measurement errors in the \(\mathbf{u}_{m}\) time series. Having determined estimates of \(\mathbf{C}\) and \(\mathbf{b}\), we now use them to obtain \(\mathbf{u}_{*}\), the periodic orbit of the surface of section map. From (10.11) \(\mathbf{u}_{*}=\mathbf{C}\mathbf{u}_{*}+\mathbf{b}\), which yields \(\mathbf{u}_{*}=(\mathbf{1}-\mathbf{C})^{-1}\mathbf{b}\). Thus, we have \(\mathbf{u}_{*}\). As an added bonus, we also have an estimate of the stability matrix of the period one periodic orbit \(\mathbf{u}_{*}\). In particular, \(\mathbf{C}\) in (10.11) is \(\mathbf{A}\) in (10.4). As the above discussion indicates, the main analysis effort in implementing chaos control may often be in the determination of unstable periodic orbits embedded in the chaotic set. Since there are an infinite number of such orbits, only a limited subset of them can be determined. Assume that we have determined several low period periodic orbits, say all orbits of period four or less. We then find which of those orbits gives the best system performance. A natural question that arises is the following. By choosing an orbit of period four or less we may be able to achieve a substantial improvement in system performance. Can we achieve a further substantial improvement over that by determining many more periodic orbits of period higher than four? If this is so, then it might be worth the considerable effort to find such high period orbits. Hunt and Ott (1996) have investigated this question. They found evidence that performance typically peaks at low period.
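The close-return bookkeeping and the least squares fit can be sketched as follows, using a Hénon-map orbit as surrogate 'experimental' surface of section data (the map, the thresholds, and the orbit length are ad hoc illustrative choices; for the Hénon map the fixed point is known exactly, which lets the estimate be checked):

```python
import numpy as np

# Surrogate surface-of-section data: 200 000 iterates of the Hénon map.
a, b = 1.4, 0.3
traj = np.empty((200_000, 2))
x, y = 0.1, 0.1
for n in range(len(traj)):
    traj[n] = x, y
    x, y = a - x * x + b * y, x

# Close returns: consecutive points that nearly coincide; keep the pairs
# lying in a small neighborhood of the tightest such return.
d = np.linalg.norm(traj[1:] - traj[:-1], axis=1)
center = traj[np.argmin(d)]
keep = (d < 0.1) & (np.linalg.norm(traj[:-1] - center, axis=1) < 0.1)
um, um1 = traj[:-1][keep], traj[1:][keep]

# Least squares fit of u_{m+1} = C u_m + b_vec, Eq. (10.11).
X = np.hstack([um, np.ones((len(um), 1))])
coef, *_ = np.linalg.lstsq(X, um1, rcond=None)
C, b_vec = coef[:2].T, coef[2]

# Periodic orbit estimate u* = (1 - C)^{-1} b_vec.
u_star = np.linalg.solve(np.eye(2) - C, b_vec)
print(u_star)   # near the true fixed point of the map
```

As the text notes, the fitted `C` also doubles as an estimate of the stability matrix \(\mathbf{A}\) of (10.4), so the same data feed directly into the control computation.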
Furthermore, even if one has not looked at periods high enough to attain the optimally performing periodic orbit, the potential improvement that the optimal orbit offers is often rather small as compared to the best performance for orbits of period lower than that of the optimal orbit. Thus, the message is rather comforting: determining several low period orbits will often be sufficient to attain optimal or near optimal chaos control by use of the periodic orbit stabilization method. Finally, we note that it is also of interest to stabilize periodic orbits in a chaotic attractor under conditions where the parameters of the system vary slowly with time. Such slow parameter changes can arise through unavoidable circumstances, such as slow changes in the environment in which an experiment is performed (e.g., slow changes of the ambient temperature in a room), or else may be purposely induced (e.g., see Gills _et al._, 1992). In either case the phase space location of the periodic orbit being stabilized changes slowly with time. To maintain its stabilization, this phase space location must be continuously tracked. See Schwartz _et al_. (1997) for a discussion of issues presented by such situations.

### 10.3 Control Goal 2: targeting

The second of the three control goals mentioned in Section 10.1 is that of rapidly directing the orbit of a chaotic system to some desired location in the state space. We call this _targeting_. The basic concept (Kostelich _et al._, 1993; Shinbrot _et al._, 1992; Shinbrot _et al._, 1990) originates from the idea that chaotic orbits are exponentially sensitive to small perturbations. Thus a small perturbation to the orbit has a large effect in a relatively short time. If the perturbations are unknown noise, then this makes the chaotic orbit unpredictable. But, if the inherent noise is small, and we can very carefully and cleverly apply control perturbations, then we have the hope that we can direct the orbit to a target with small controls.
As an example of one of several possible control strategies for targeting, we mention the _forward backward method_ (Shinbrot _et al._, 1990) illustrated in Figure 10.5 for the case of a two dimensional map with one expanding direction (one positive Lyapunov exponent) and one contracting direction (one negative Lyapunov exponent). As shown in the figure we start at an initial point, labeled 'source', and we wish to hit a region labeled 'target region'. By small variation of a scalar control parameter, \(p\), we can perturb the source point to a one parameter set of locations in state space. This set is shown as the short curve segment through the source point. Each point on this curve segment corresponds to a particular value of \(p\). We imagine that we only activate the control initially, and that no other control perturbations are subsequently applied. (This will need to be modified in the presence of noise, error in the observation of the system state, etc.) We now use our presumed knowledge of an accurate system model to evolve the line segment forward in time by an amount \(T\). Simultaneously, we evolve the target region backward in time by \(T\). As \(T\) is increased, the forward image of the initial control segment and the backward image of the target region both expand exponentially in length. The control segment expands along the one dimensional unstable manifold of the chaotic set, while the preimage of the target region expands along the one dimensional stable manifold. Eventually, as \(T\) is increased, the forward image of the control segment and the preimage of the target region will intersect. Such an intersection point evolves forward to the target region and backward to the initial control segment, specifying an orbit connecting the two. Thus, evolving an intersection point backward, we determine a point on the initial control segment.
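The forward half of this construction is easy to demonstrate numerically. In the sketch below (Hénon map as a stand-in; the source point, the perturbation range \(|q|\le 10^{-4}\) applied to \(p=a\) at the first step only, and the iterate count are all illustrative assumptions), the one-parameter control segment stretches by orders of magnitude in a handful of iterates, which is what allows it to intersect the backward image of a distant target:

```python
import numpy as np

b = 0.3
# Put the source point on the attractor by discarding a transient.
src = np.array([0.1, 0.1])
for n in range(100):
    src = np.array([1.4 - src[0]**2 + b * src[1], src[0]])

# One-parameter family: the first iterate under p = 1.4 + q, |q| <= 1e-4.
q = np.linspace(-1e-4, 1e-4, 4001)
seg = np.stack([1.4 + q - src[0]**2 + b * src[1],
                np.full_like(q, src[0])], axis=1)

def arclength(s):
    return np.linalg.norm(np.diff(s, axis=0), axis=1).sum()

L0 = arclength(seg)                  # initial segment length: exactly 2e-4
for t in range(15):                  # subsequent iterates use p = 1.4
    seg = np.stack([1.4 - seg[:, 0]**2 + b * seg[:, 1], seg[:, 0]], axis=1)

print(L0, arclength(seg))            # length grows by orders of magnitude
```

The stretched segment traces out (a piece of) the unstable manifold, exactly as the text describes; the backward half of the method would evolve the target region under the inverse map in the same way.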
This point in turn determines a value of the initial control parameter \(p=p_{\text{int}}\) that yields an orbit going to the target region. Having done such a computation we can now apply the control perturbation \(p=p_{\text{int}}\) to the actual system. Ideally, the orbit would now evolve to hit the target region at time \(2T\). If, however, there is small noise, modeling error, inaccuracy in determining or generating \(p_{\text{int}}\), etc., then the orbit would initially closely follow its computed trajectory, but with exponentially building error. In such a case, the error could be counteracted by repeatedly applying the forward backward method as the orbit evolves toward the target region. See Shinbrot _et al._ (1990) for an illustration of how the above computational procedure is applied in a specific example. In higher dimensional situations, where the unstable (stable) manifold has dimension \(D_{\text{u}}\) (\(D_{\text{s}}\)), the forward backward method becomes difficult to apply due to the necessity of following a surface of dimension \(D_{\text{u}}\) (\(D_{\text{s}}\)) forward (backward) in time. A computationally much less demanding alternate approach that is suitable for such higher dimensional situations has been formulated by Kostelich _et al._ (1993) who illustrate their method using the four dimensional kicked double rotor map described in Chapter 5. The basic idea of Kostelich _et al._ is to precompute a tree of controllable orbit paths leading to the target. This is illustrated in Figure 10.6, where we label the target point \(x_{T}\) and its \(T\) preimages \(x_{T-1}\), \(x_{T-2},\ldots,\ x_{0}\). These points, the 'root path' in the figure, form the trunk of the tree, from which sprout branches (labeled 'secondary path' in the figure), which, in turn, may have their own branches. Figure 10.6 shows one branch, \(z_{n}\), \(z_{n-1},\ldots,\ z_{n-T}\), where \(z_{n}\) is slightly perturbed from \(x_{T-15}\).
Going backwards in time this branch diverges exponentially from the root path. In this way one can construct and store in the computer a road like network of paths permeating the chaotic set and leading to the target. Now say that we start from the specified initial point. We iterate it forward until it comes close to one of the points on the tree. We then apply controls to put it on the stable manifold of the branch through that tree orbit point. When the controlled orbit reaches a junction (e.g., \(z_{n}\) in Figure 10.6) we apply a control to put it on the stable manifold of the nearby next branch in the path to the target, etc. The time to reach the target can be further shortened by making a forward tree from the initial point and extending it until one of its points comes close enough for the control to transfer it to the backward tree from the target.

Figure 10.6: Schematic illustration of the hierarchy of paths leading to the target point (from Kostelich _et al._, 1993).

An actual instance of targeting in a chaotic system was carried out by NASA in achieving the first encounter of a spacecraft with a comet in 1985. The key point is that the success of this project was critically dependent on the existence of chaos in the motion of a spacecraft in the gravitational fields of the Earth and Moon. (The existence of chaos for the gravitational three body problem was originally shown by Poincare (Section 1.1).) The situation was as follows. The Giacobini Zinner comet was due to travel through the solar system in mid 1985. In 1982 NASA scientists began to consider how they might achieve an encounter with the comet by a spacecraft for the purpose of taking scientific measurements. Rather than launching a space probe from the surface of the Earth, they were able to propose a much cheaper way to accomplish their goal.
A previous space probe, the _International Sun Earth Explorer 3_ (ISEE 3), having completed its mission, was parked in the vicinity of the Sun Earth Lagrange point, the point where the gravitational attractions of the Earth and Sun cancel. This point, although an equilibrium, is an unstable orbit embedded in a chaotic region of state space. The spacecraft still carried a small amount of fuel. The problem was how to use this fuel to achieve a flyby of the comet as it passed through the solar system. NASA scientists were indeed able to do this (Farquhar _et al._, 1985). The orbit that they came up with is shown in Figure 10.7, and is clearly rather complicated, as befits a chaotic situation. In the final part of the trajectory the orbit passed close to the surface of the Moon (the point \(S_{5}\) in the figure), after which it was slung outwards, leaving the Earth Moon system. The spacecraft, now renamed the _International Cometary Explorer 3_ (ICE 3), traveled halfway across the solar system to make the first close observations of a comet. That this mission was possible using only the relatively small amount of fuel in the parked ISEE 3 spacecraft is due to the chaotic nature of the gravitational three body problem.

### 10.4 Synchronization of chaotic systems

The problem of synchronization of chaotic systems was first analyzed by Yamada and Fujisaka (1983), with subsequent early work by Afraimovich, Verichev and Rabinovich (1986) and by Pecora and Carroll (1990). Since about 1990 there has been a great deal of interest in this topic due to a variety of applications where it is relevant. To introduce the subject of synchronization of chaos, we recall the concept of sensitive dependence on initial conditions. In particular, one way of demonstrating sensitive dependence on initial conditions is to consider two identical chaotic systems. Call these two systems system \(a\) and system \(b\).
System \(a\) is described by \(\mathrm{d}\mathbf{x}_{a}/\mathrm{d}t=\mathbf{F}(\mathbf{x}_{a})\) and system \(b\) is described by \(\mathrm{d}\mathbf{x}_{b}/\mathrm{d}t=\mathbf{F}(\mathbf{x}_{b})\), where the function \(\mathbf{F}\) is the same for both systems. Thus, if at \(t=0\) we have \(\mathbf{x}_{a}(0)=\mathbf{x}_{b}(0)\), then \(\mathbf{x}_{a}(t)=\mathbf{x}_{b}(t)\) for all times \(t>0\). If, however, the initial values, \(\mathbf{x}_{a}(0)\) and \(\mathbf{x}_{b}(0)\), differ slightly, then the chaotic nature of the dynamics of the two systems typically leads to exponential divergence of \(\mathbf{x}_{a}(t)\) and \(\mathbf{x}_{b}(t)\) with increasing time. Typically, at large time, although the attractors in \(\mathbf{x}_{a}\) space and \(\mathbf{x}_{b}\) space are identical, the locations of \(\mathbf{x}_{a}\) and \(\mathbf{x}_{b}\) on their respective attractors are totally uncorrelated with each other. In the context above, one version of the problem of chaos synchronization is to design a coupling between the two systems such that the chaotic time evolutions, \(\mathbf{x}_{a}(t)\) and \(\mathbf{x}_{b}(t)\), become identical. That is, if \(\mathbf{x}_{a}(0)\) and \(\mathbf{x}_{b}(0)\) start close to each other, then \(|\mathbf{x}_{a}(t)-\mathbf{x}_{b}(t)|\to 0\) as \(t\to\infty\).
For example, if \(\mathbf{x}_{a}\) and \(\mathbf{x}_{b}\) are three dimensional, \(\mathbf{x}_{a,b}=[x_{a,b}^{(1)},\ x_{a,b}^{(2)},\ x_{a,b}^{(3)}]^{\dagger}\) and \(\mathbf{F}(\mathbf{x})=[f_{1}(\mathbf{x}),\ f_{2}(\mathbf{x}),\ f_{3}(\mathbf{x})]^{\dagger}\) (where \(\dagger\) denotes transpose), then one example of a coupling might be \[{\rm d}x_{a}^{(1)}/{\rm d}t=f_{1}({\bf x}_{a})+k_{a1}(x_{b}^{(1)}-x_{a}^{(1)}), \tag{10.12a}\] \[{\rm d}x_{a}^{(2)}/{\rm d}t=f_{2}({\bf x}_{a})+k_{a2}(x_{b}^{(1)}-x_{a}^{(1)}), \tag{10.12b}\] \[{\rm d}x_{a}^{(3)}/{\rm d}t=f_{3}({\bf x}_{a})+k_{a3}(x_{b}^{(1)}-x_{a}^{(1)}), \tag{10.12c}\] \[{\rm d}x_{b}^{(1)}/{\rm d}t=f_{1}({\bf x}_{b})+k_{b1}(x_{a}^{(1)}-x_{b}^{(1)}), \tag{10.12d}\] \[{\rm d}x_{b}^{(2)}/{\rm d}t=f_{2}({\bf x}_{b})+k_{b2}(x_{a}^{(1)}-x_{b}^{(1)}), \tag{10.12e}\] \[{\rm d}x_{b}^{(3)}/{\rm d}t=f_{3}({\bf x}_{b})+k_{b3}(x_{a}^{(1)}-x_{b}^{(1)}), \tag{10.12f}\] where the \(k\)s are _coupling constants_. If \(k_{ai}=0\) for \(i=1\), 2, 3, then we say there is one way coupling from \(a\) to \(b\), since the state of \(a\) influences \(b\), but \(b\) has no influence on \(a\). If \(k_{ai}\neq 0\) and \(k_{bj}\neq 0\) for at least one value of \(i=1\), 2, 3 and at least one value of \(j=1\), 2, 3, then we say there is two way coupling. This is shown schematically in Figure 10.8. The system given by Eqs. (10.12) is a six dimensional dynamical system resulting from the coupling of the two originally uncoupled three dimensional systems. Note that if synchronization is achieved (i.e., \(\mathbf{x}_{a}(t)=\mathbf{x}_{b}(t)\)), then the coupling terms involving the \(k\)s are identically zero and \(\mathbf{x}_{a,b}(t)\) is a trajectory of the uncoupled three dimensional system, \({\rm d}\mathbf{x}_{a,b}/{\rm d}t=\mathbf{F}(\mathbf{x}_{a,b})\). (In this example we have, somewhat arbitrarily, chosen to couple only the \(x_{a}^{(1)}\) component of \(\mathbf{x}_{a}\) to \(b\) and only the \(x_{b}^{(1)}\) component of \(\mathbf{x}_{b}\) to \(a\).)
In the six dimensional state space of the system (10.12) the synchronized state \({\bf x}_{a}={\bf x}_{b}\) represents a three dimensional invariant manifold. It is invariant because, if the state of the full six dimensional system is initialized on the invariant manifold (\({\bf x}_{a}(0)={\bf x}_{b}(0)\)), then it remains there for all time (\({\bf x}_{a}(t)={\bf x}_{b}(t)\)). On this manifold Eqs. (10.12) reduce to \({\rm d}{\bf x}_{a}/{\rm d}t={\bf F}({\bf x}_{a})\), \({\rm d}{\bf x}_{b}/{\rm d}t={\bf F}({\bf x}_{b})\). The basic problem of synchronization of chaos is the stability of the synchronized state. In the context of Eqs. (10.12), we can consider an orbit in the six dimensional state space to be represented by the six dimensional vector \[{\bf z}(t)=\begin{bmatrix}{\bf x}_{a}(t)\\ {\bf x}_{b}(t)\end{bmatrix}.\] Assume that \({\bf z}(t)\) is undergoing synchronized chaotic motion on the invariant manifold \({\bf x}_{a}(t)={\bf x}_{b}(t)\), and then imagine that we perturb \({\bf z}\) slightly from synchronism, i.e., \({\bf z}\) is given a small perturbation that is transverse to the three dimensional invariant manifold of Eqs. (10.12). If the synchronism is stable, then \({\bf z}\) relaxes back to the invariant manifold. In particular, to examine this stability problem we consider the difference perturbation \({\bf x}_{a}-{\bf x}_{b}\) to be infinitesimal, set \({\bf x}_{a}-{\bf x}_{b}=\boldsymbol{\delta}\), subtract Eqs. (10.12d)–(10.12f) from Eqs. (10.12a)–(10.12c), and linearize with respect to the infinitesimal transverse perturbation. This gives an evolution equation for \(\boldsymbol{\delta}\), \[{\rm d}\boldsymbol{\delta}/{\rm d}t={\bf DF}({\bf x}(t))\boldsymbol{\delta}-{\bf k}\,\delta^{(1)}, \tag{10.13}\] where \({\bf x}(t)={\bf x}_{a}(t)={\bf x}_{b}(t)\) is the synchronized chaotic motion satisfying \({\rm d}{\bf x}/{\rm d}t={\bf F}({\bf x})\), and \({\bf k}=[k_{a1}+k_{b1},\,k_{a2}+k_{b2},\,k_{a3}+k_{b3}]^{\dagger}\).
If for all \(({\bf x}_{a}(0)-{\bf x}_{b}(0))\) satisfying \(|{\bf x}_{a}(0)-{\bf x}_{b}(0)|<\epsilon\), with \(\epsilon\) some positive (possibly small) constant, Eqs. (10.12) yield decay to the invariant manifold (i.e., \(|{\bf x}_{a}(t)-{\bf x}_{b}(t)|\to 0\) with increasing \(t\)), then we say that the synchron is stable. This will be so if for all orbits \({\bf x}(t)\) on the chaotic attractor of the synchronized system \({\rm d}{\bf x}/{\rm d}t={\bf F}({\bf x})\) we have negative Lyapunov ex \({}_{0}\) transverse to the invariant manifold. That is, if \(({\bf x}_{0},\quad_{0})<0\), where \[({\bf x}_{0},\quad_{0})=\lim_{t\to\infty}\frac{1}{t}{\rm ln}[|\quad(t)|/|\quad _{0}|], \tag{10.14}\] \({\bf x}_{0}\) is the initial condition \({\bf x}(0)={\bf x}_{0}\) yielding the orbit \({\bf x}(t)\) experiencing the infinitesimal perturbation \((0)=\quad_{0}\), and \((t)\) is calculated using Eq. (10.13). In general, for Eqs. (10.12) with a specific choice of \({\bf x}_{0}\) on the chaotic attractor there will be three possible values of \(({\bf x}_{0},\quad_{0})\), corre \({}_{0}\). We will be interested in the largest (in the case where all the's are negative, the least negative) of the three values of \(({\bf x}_{0},\quad_{0})\), which we denote \(({\bf x}_{0})\) \[({\bf x}_{0})={\rm max}\quad_{0}({\bf x}_{0},\quad_{0}), \tag{10.15}\] we refer to \(({\bf x}_{0})\) as the maximal transverse Lyapunov exponent. If \(({\bf x}_{0})<0\), then an infinitesimal perturbation away from the invariant manifold that is applied at \({\bf x}_{0}\) will lead to an orbit that relaxes back to the invariant manifold. We return to the discussion of the stability of synchro \({}_{0}\) of chaos in Section 10.5. In addition to the coupling of two identical chaotic systems as in Eqs. (10.12), other types of chaos synchronization are also possible. One type, introduced by Pecora and Carroll (1990), is called _replacement synchro nization_. 
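Equations (10.13)–(10.15) translate directly into a numerical recipe: integrate the synchronized orbit together with the linearized perturbation \(\boldsymbol{\delta}\), renormalizing \(\boldsymbol{\delta}\) periodically and averaging the logarithmic growth. The sketch below is a hedged illustration with an assumed Rössler-type \({\bf F}\); with \({\bf k}=0\) the transverse exponent reduces to the largest Lyapunov exponent of the uncoupled system, which should come out positive.

```python
import math

A, B, C = 0.2, 0.2, 5.7   # assumed Rössler parameters for the example F
K = (0.0, 0.0, 0.0)       # k = 0: with no coupling, h is just the largest
                          # Lyapunov exponent of the uncoupled system

def F(x):
    return (-x[1] - x[2], x[0] + A * x[1], B + x[2] * (x[0] - C))

def DF(x):
    # Jacobian of F, entering the variational equation (10.13)
    return ((0.0, -1.0, -1.0),
            (1.0, A, 0.0),
            (x[2], 0.0, x[0] - C))

def rhs(s):
    # s = (x, delta): chaotic orbit plus infinitesimal transverse perturbation
    x, d = s[:3], s[3:]
    J = DF(x)
    ddelta = tuple(sum(J[i][j] * d[j] for j in range(3)) - K[i] * d[0]
                   for i in range(3))
    return F(x) + ddelta

def rk4(s, dt):
    k1 = rhs(s)
    k2 = rhs(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = rhs(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = rhs(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + (dt / 6.0) * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

dt = 0.01
s = (1.0, 2.0, 3.0, 1.0, 0.0, 0.0)
for _ in range(10000):                        # transient: settle onto attractor
    s = rk4(s, dt)
norm = math.sqrt(sum(c * c for c in s[3:]))
s = s[:3] + tuple(c / norm for c in s[3:])    # renormalize delta before averaging
log_growth, t_total = 0.0, 0.0
for _ in range(500):                          # accumulate over t = 500, cf. (10.14)
    for _ in range(100):
        s = rk4(s, dt)
    norm = math.sqrt(sum(c * c for c in s[3:]))
    log_growth += math.log(norm)
    t_total += 100 * dt
    s = s[:3] + tuple(c / norm for c in s[3:])
h = log_growth / t_total                      # finite-time estimate of h(x0)
```

Nonzero couplings \({\bf k}\) can be substituted into `K` to explore when the estimated \(h\) turns negative, i.e., when the synchronism becomes stable.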
To illustrate replacement synchronization, consider an \(m\) dimensional system whose state is \[{\bf w}(t)=\begin{bmatrix}{\bf x}(t)\\ {\bf y}(t)\end{bmatrix}, \tag{10.16}\] where \({\bf x}\) is \(m_{1}\) dimensional and \({\bf y}\) is \(m_{2}\) dimensional, \(m=m_{1}+m_{2}\). That is, we divide the state variables (the components of \({\bf w}\)) into two groups signified by \({\bf x}\) and \({\bf y}\). The \(m\) dimensional dynamical system is \({\rm d}{\bf w}/{\rm d}t={\bf F}({\bf w})\). It is presumed that the evolution \({\bf w}(t)\) is chaotic. In terms of our \({\bf x}\) and \({\bf y}\) variables, the dynamical system is decomposed into two so called subsystems, \[{\rm d}{\bf x}/{\rm d}t={\bf G}({\bf x},\,{\bf y}), \tag{10.17a}\] \[{\rm d}{\bf y}/{\rm d}t={\bf H}({\bf x},\,{\bf y}), \tag{10.17b}\] where \[{\bf F}({\bf w})=\begin{bmatrix}{\bf G}({\bf x},\,{\bf y})\\ {\bf H}({\bf x},\,{\bf y})\end{bmatrix}. \tag{10.18}\] We now introduce a driven replica subsystem, \[{\rm d}\hat{\bf y}/{\rm d}t={\bf H}({\bf x},\,\hat{\bf y}). \tag{10.19}\] Here we call \(\hat{\bf y}\) the subsystem response. According to Eq. (10.19), we duplicate the \({\bf y}\) subsystem (10.17b). We then take the time series \({\bf x}(t)\) from the \({\bf x}\) subsystem (Eq. (10.17a)), feed it into the replica subsystem (10.19), and examine the response of the replica subsystem, \(\hat{\bf y}(t)\). This is illustrated schematically in Figure 10.9, where we utilize the common terminology whereby Eqs. (10.17) are referred to as the 'drive system' and Eq. (10.19) is referred to as the 'response'. We say that the response subsystem synchronizes to the chaotic evolution of the drive system, Eqs. (10.17), if \[\lim_{t\to\infty}|{\bf y}(t)-\hat{\bf y}(t)|=0.\] The state of the full dynamical system, Eqs. (10.17) and (10.19), is given by \({\bf x}\), \({\bf y}\) and \(\hat{\bf y}\), and hence is \((m_{1}+2m_{2})\) dimensional.
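A classic concrete instance of this construction, due to Pecora and Carroll, drives a replica \((\hat{y},\hat{z})\) subsystem with the \(x(t)\) signal of a Lorenz system; this particular system and its parameters are standard choices assumed here for illustration, not taken from the text. The replica converges onto the drive's \((y(t),z(t))\) even from a distant start:

```python
SIG, RHO, BET = 10.0, 28.0, 8.0 / 3.0   # classic Lorenz parameters (assumed)

def rhs(s):
    # s = (x, y, z, yh, zh): the Lorenz drive system plus the replica
    # (y, z)-subsystem driven by the drive's x(t), as in Eq. (10.19).
    x, y, z, yh, zh = s
    return (SIG * (y - x),
            x * (RHO - z) - y,
            x * y - BET * z,
            x * (RHO - zh) - yh,    # replica: same equations, same x(t)
            x * yh - BET * zh)

def rk4(s, dt):
    k1 = rhs(s)
    k2 = rhs(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = rhs(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = rhs(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + (dt / 6.0) * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

s = (1.0, 1.0, 1.0, 6.0, -4.0)          # replica starts far from (y, z)
d0 = abs(s[3] - s[1]) + abs(s[4] - s[2])
for _ in range(12000):                   # integrate to t = 60 with dt = 0.005
    s = rk4(s, 0.005)
d1 = abs(s[3] - s[1]) + abs(s[4] - s[2])
```

Here the transverse Lyapunov exponents of the \((y,z)\) subsystem happen to be negative, so `d1` collapses toward zero: the response has synchronized to the chaotic drive.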
The synchronized state \(\hat{\bf y}={\bf y}\) represents an \((m_{1}+m_{2})\) dimensional manifold embedded in the state space of the full \((m_{1}+2m_{2})\) dimensional system. As for Eqs. (10.12), we can investigate the stability of synchronization for the system of Eqs. (10.17) and (10.19) by use of the transverse Lyapunov exponents. That is, we introduce an infinitesimal perturbation transverse to the invariant synchronization manifold, \(\hat{\bf y}-{\bf y}=\boldsymbol{\delta}\). Linearization of (10.19) about \(\hat{\bf y}={\bf y}\) then yields the evolution equation for \(\boldsymbol{\delta}\), \[{\rm d}\boldsymbol{\delta}/{\rm d}t={\bf D}_{\bf y}{\bf H}({\bf x},\,{\bf y})\,\boldsymbol{\delta}, \tag{10.20}\] where \({\bf x}\) and \({\bf y}\) are solutions of (10.17).

Figure 10.9: Schematic of replacement synchronization.

Using the time evolution of \(\boldsymbol{\delta}\) obtained from (10.20) we can then define the maximal transverse Lyapunov exponent as in Eqs. (10.14) and (10.15). Some systems of the replacement synchronization form (Eqs. (10.17), (10.19), Figure 10.9) will yield stable synchronization while others will not. This depends on the specific system (i.e., on the vector functions \({\bf H}\) and \({\bf G}\)). As an example of a replacement synchronization system, for the drive system we consider the following set of ordinary differential equations, \[{\rm d}x/{\rm d}t=b+x(y^{(1)}-c)\equiv G(x,\,y^{(1)},\,y^{(2)}), \tag{10.21a}\] \[{\rm d}y^{(1)}/{\rm d}t=-x-y^{(2)}\equiv H_{1}(x,\,y^{(1)},\,y^{(2)}), \tag{10.21b}\] \[{\rm d}y^{(2)}/{\rm d}t=y^{(1)}+ay^{(2)}\equiv H_{2}(x,\,y^{(1)},\,y^{(2)}). \tag{10.21c}\] Thus, for this example, in Eqs. (10.17) the dimension of \({\bf x}\) is \(m_{1}=1\) and the dimension of \({\bf y}\) is \(m_{2}=2\), \({\bf y}=(y^{(1)},\,y^{(2)})^{\dagger}\). (Equations (10.21) constitute a class of systems that are potentially chaotic (depending on \(a\), \(b\) and \(c\)) originally considered by Rössler (1976).)
The response system is \[{\rm d}\hat{y}^{(1)}/{\rm d}t=-x-\hat{y}^{(2)}, \tag{10.21d}\] \[{\rm d}\hat{y}^{(2)}/{\rm d}t=\hat{y}^{(1)}+a\hat{y}^{(2)}. \tag{10.21e}\] Equations (10.21d,e) are of a particularly simple form (namely, they are linear), which allows an analytical calculation of the transverse Lyapunov exponents. From Eqs. (10.21) we have \[{\rm d}\delta^{(1)}/{\rm d}t=-\delta^{(2)}, \tag{10.22a}\] \[{\rm d}\delta^{(2)}/{\rm d}t=\delta^{(1)}+a\delta^{(2)}, \tag{10.22b}\] which has solutions \[\boldsymbol{\delta}\sim\exp(\gamma_{1,2}t),\ \gamma_{1,2}=(a/2)\pm\sqrt{(a/2)^{2}-1}.\] Thus, the maximal transverse Lyapunov exponent is \(h={\rm Re}(\gamma_{1})\), and stability applies (\(h<0\)) if \(a<0\). Note that for this simple example \(h\) is independent of the initial conditions \({\bf x}_{0}\) and \({\bf y}_{0}\). This is an exceptional circumstance that does not hold in more general situations. (The consequences of the existence of points on the attractor whose maximal transverse Lyapunov exponent is different from that for points that are typical with respect to the natural measure are discussed in the next section.) Figure 10.10 shows a numerical example where the stability condition for Eqs. (10.21) is violated. Although \({\bf y}\) and \(\hat{\bf y}\) start close to each other, the figure shows that the response \(\hat{y}^{(1)}\) diverges from the chaotic evolution of the drive \(y^{(1)}\). (For the case \(a<0\), \(\hat{y}^{(1)}\) stays close to \(y^{(1)}\) and relaxes to it.) We also remark that the chaos synchronization schemes illustrated in Figures 10.8 and 10.9 by no means exhaust the possible system synchronization architectures, and indeed a variety of other schemes have been investigated. All possess the common feature of having synchronized chaotic motion corresponding to system states on an invariant manifold embedded in the full state space of the coupled system.
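The root formula above is elementary to check numerically. For \(|a|<2\) the discriminant \((a/2)^{2}-1\) is negative, the roots are complex conjugates, and \({\rm Re}\,\gamma_{1,2}=a/2\), so the sign of \(a\) alone decides stability (a quick sketch; the values \(a=\pm 0.2\) are illustrative choices):

```python
import cmath

def transverse_exponents(a):
    # gamma_{1,2} = a/2 +/- sqrt((a/2)^2 - 1), the exponents of Eqs. (10.22)
    disc = cmath.sqrt(complex((a / 2.0) ** 2 - 1.0))
    return (a / 2.0 + disc, a / 2.0 - disc)

g1, g2 = transverse_exponents(-0.2)   # a < 0: h = Re(gamma_1) < 0, stable
u1, u2 = transverse_exponents(0.2)    # a > 0: h = Re(gamma_1) > 0, unstable
```

For \(a=-0.2\) both real parts equal \(-0.1\) (synchronization stable), while for \(a=0.2\) they equal \(+0.1\) (the divergent case shown in Figure 10.10).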
Why might we be interested in the synchronization of chaotic systems? In the remainder of this section we give three examples of applications involving the synchronization of chaos. We first discuss the application of synchronization of chaos to the problem of secure communication (e.g., see Cuomo and Oppenheim, 1993). By secure communication we mean that, if \(A\) sends \(B\) an information signal and \(C\) intercepts this signal, then \(C\) will have difficulty extracting the transmitted information from the signal, while \(B\) will not have difficulty. The idea of Cuomo and Oppenheim (1993) was to mask an information bearing signal \(s(t)\) by adding to it a large noiselike chaotic signal \(x(t)\). The transmitted signal \(S(t)=s(t)+x(t)\) appears noiselike. Here \(s(t)\) is a digital signal consisting of a string of ones (\(s(t)=s_{0}\)) and zeros (\(s(t)=0\)). Cuomo and Oppenheim feed \(S(t)\) into a synchronizing circuit at the receiver, such that if \(s(t)=0\) (i.e., \(S(t)=x(t)\)), then the receiver system response \(\hat{x}(t)\) is synchronized to the system creating the transmitted masking chaotic signal (i.e., \(\hat{x}(t)=x(t)\)). Because of the presence of the signal, the difference \(S(t)-\hat{x}(t)\) may not be zero. This difference can be sensed at the receiver, since \(S(t)\) is known at the receiver and the receiver has the proper synchronizing system. In particular, when a zero is sent (\(s(t)=0\)), \(S(t)-\hat{x}(t)\) will tend to be small, while, when a one is sent (\(s(t)=s_{0}\)), it will be larger. The particular synchronization scheme of Cuomo and Oppenheim makes use of the replacement synchronization idea of Figure 10.9 with \(m_{1}=1\) (\({\bf x}\) is a scalar).
The transmitter is described by equations of the form \[{\rm d}x/{\rm d}t=G(x,\,{\bf y}), \tag{10.23a}\] \[{\rm d}{\bf y}/{\rm d}t={\bf H}(x,\,{\bf y}), \tag{10.23b}\] generating the chaotic time series \(x(t)\), which is added to \(s(t)\) to mask the information signal, \(S(t)=x(t)+s(t)\). The receiver then has the form \[{\rm d}\hat{x}/{\rm d}t=G(\hat{x},\,\hat{\bf y}), \tag{10.23c}\] \[{\rm d}\hat{\bf y}/{\rm d}t={\bf H}(S,\,\hat{\bf y}). \tag{10.23d}\] Thus, if \(s(t)\equiv 0\) for all time, then \(S(t)=x(t)\), and synchronization of Eqs. (10.23c,d) with Eqs. (10.23a,b) implies \(S(t)-\hat{x}(t)\to 0\). Because \(s(t)\) is not zero for all time, but switches to zero when a zero is sent (\(s(t)=s_{0}\) when a one is sent), \(S(t)-\hat{x}(t)\) will not be zero when a zero is sent but, depending on \(G\) and \({\bf H}\) and the bit rate, it should be small. Thus, ones can be distinguished from zeros. A schematic of the Cuomo Oppenheim scheme is shown in Figure 10.11. Someone who intercepts the signal \(S(t)\) as it is transmitted from the transmitter to the receiver, but who has no knowledge of the chaotic system producing \(x(t)\), will not be able to construct a synchronizing receiver, and hence will have a hard time extracting \(s(t)\). Many variations on this theme (e.g., configurations other than that shown in Figure 10.11) have been studied. See, for example, the paper by Kocarev and Parlitz (1995), and the review of synchronization of chaos by Pecora _et al_. (1997). In addition, other applications of synchronization of chaos to communications have also been discussed (Sharma and Ott, 2000; Tsimring and Suschik, 1996). Another application of synchronization of chaos is to construct what control engineers call an _observer_. Say we have some physical system that is behaving chaotically, and we know the equations describing the system, which are of the form \({\rm d}{\bf x}/{\rm d}t={\bf F}({\bf x})\).
We wish to know the state \({\bf x}(t)\) of this physical system, but, due to practical limitations, we cannot measure \({\bf x}(t)=[x^{(1)}(t),\,x^{(2)}(t),\ldots,\,x^{(N)}(t)]^{\dagger}\) but only one component, say \(x^{(1)}(t)\).

Figure 10.11: Schematic of the Cuomo Oppenheim scheme.

The observer problem is to deduce the chaotic time dependence \({\bf x}(t)\) of the state of the physical system from limited observations of it, in this case \(x^{(1)}(t)\) (e.g., see So _et al._, 1993). One potential way of doing this is to use synchronization of chaos. For example, consider Figure 10.8 and Eqs. (10.12) in the case of one way coupling (\(k_{a1}=k_{a2}=k_{a3}=0\)), identify system \(a\) with the actual physical device from which the measured time series \(x_{a}^{(1)}(t)\) is observed, and consider system \(b\) to be a set of model equations being numerically solved in real time on a computer with the measured \(x_{a}^{(1)}(t)\) as input. If systems \(a\) and \(b\) synchronize, then the computed time series for \(x_{b}^{(2)}(t)\) and \(x_{b}^{(3)}(t)\) will be the same as the time series \(x_{a}^{(2)}(t)\) and \(x_{a}^{(3)}(t)\) for the unobserved components of the state of the physical system. Thus, we can determine the full system state from measurement of the single component \(x_{a}^{(1)}(t)\). As a final example of the application of synchronization of chaos, we refer to the situation of particles floating on the surface of a fluid whose flow is varying chaotically with time. In this case, to a good approximation, a particle obeys the equation \({\rm d}{\bf y}/{\rm d}t={\bf v}_{\rm s}({\bf y},\,t)\), where \({\bf y}\) is a two dimensional vector giving the position of the particle on the fluid surface, and \({\bf v}_{\rm s}\) is the fluid velocity tangential to the fluid surface. The fluid velocity is determined by the Navier Stokes equations, the fluid forcing (e.g., stirring), and the boundary conditions.
Considering two particles with positions \({\bf y}\) and \(\hat{\bf y}\), this situation may be thought of as a degenerate case of replacement synchronization in which, in Eqs. (10.17)–(10.19), the function \({\bf G}\) is independent of \({\bf y}\). In this case, Eqs. (10.17)–(10.19) become \({\rm d}{\bf x}/{\rm d}t={\bf G}({\bf x})\), \({\rm d}{\bf y}/{\rm d}t={\bf H}({\bf x},\,{\bf y})\) and \({\rm d}\hat{\bf y}/{\rm d}t={\bf H}({\bf x},\,\hat{\bf y})\). With respect to the situation of floating particles, we identify \({\bf x}\) with the state of the fluid and \({\rm d}{\bf x}/{\rm d}t={\bf G}({\bf x})\) with the Navier Stokes equations (perhaps via a modal expansion). The surface velocity field \({\bf v}_{\rm s}\) depends on the fluid state \({\bf x}\), so that \({\bf H}({\bf x}(t),\,{\bf y})={\bf v}_{\rm s}({\bf y},\,t)\). If synchronization holds, then two particles are driven to the same point on the fluid surface and move together in an irregular manner under the influence of the chaotic time dependence of the surface velocity field (Yu _et al._, 1991). In the case where synchronization does not hold, the particles move in a seemingly independent fashion. Which of these two possibilities applies depends on \({\bf v}_{\rm s}\), which in turn depends on such factors as the strength of the stirring force driving the fluid, the fluid viscosity, etc. Experimental results (Section 3.8) for the unsynchronized case show that a cloud of many particles distributes itself on a fractal set within the fluid surface. The dimension of this fractal was experimentally confirmed to satisfy the Kaplan Yorke formula, Eq. (4.38), using Lyapunov exponents based on the equation \({\rm d}\boldsymbol{\delta}/{\rm d}t={\bf D}_{\bf y}{\bf H}({\bf x},\,{\bf y})\,\boldsymbol{\delta}={\bf D}_{\bf y}{\bf v}_{\rm s}({\bf y},\,t)\,\boldsymbol{\delta}\). In the case where synchronization holds, the two Lyapunov exponents are both negative and an initial cloud of many particles clumps together (Yu _et al._, 1991).
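The clumping mechanism can be caricatured with an invented, deliberately simple surface velocity field that is linear in \({\bf y}\) (an assumption for illustration; a real surface flow obeys the Navier–Stokes equations). The separation \(\Delta={\bf y}-\hat{\bf y}\) of two particles then obeys \({\rm d}\Delta/{\rm d}t={\bf A}(t)\Delta\), and if the symmetric part of \({\bf A}\) is negative definite, both transverse Lyapunov exponents are negative and the particles converge onto a common trajectory regardless of the time dependent 'stirring' term:

```python
import math

def vfield(y, t):
    # Toy surface velocity field v_s(y, t): a rotation whose rate varies
    # aperiodically in time plus a uniform contraction. The symmetric part
    # of the velocity gradient is -I, so particle separations decay as e^{-t}.
    s = math.sin(2.3 * t) + 0.5 * math.cos(5.1 * t)   # aperiodic "stirring"
    return (-y[0] + s * y[1], -s * y[0] - y[1])

def rk4(y, t, dt):
    # RK4 step for the non-autonomous particle equation dy/dt = v_s(y, t).
    k1 = vfield(y, t)
    k2 = vfield((y[0] + 0.5 * dt * k1[0], y[1] + 0.5 * dt * k1[1]), t + 0.5 * dt)
    k3 = vfield((y[0] + 0.5 * dt * k2[0], y[1] + 0.5 * dt * k2[1]), t + 0.5 * dt)
    k4 = vfield((y[0] + dt * k3[0], y[1] + dt * k3[1]), t + dt)
    return (y[0] + dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y[1] + dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

ya, yb = (1.0, 0.0), (0.0, 1.0)     # two well-separated floating particles
dt = 0.01
for n in range(1000):                # integrate to t = 10
    t = n * dt
    ya = rk4(ya, t, dt)
    yb = rk4(yb, t, dt)
gap = math.hypot(ya[0] - yb[0], ya[1] - yb[1])
```

Both particles follow irregular spiralling paths set by the stirring term, yet `gap` shrinks by the factor \(e^{-10}\): the synchronized (clumping) case.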
### Stability of a chaotic set on an invariant manifold As we have seen in the previous section, the key consideration for determining whether synchronization of chaos actually occurs reduces to the problem of determining whether the chaotic motion in an invariant manifold is stable. More specifically, we consider the case where, for typical initial conditions _in_ the invariant manifold, the resulting orbits go to a chaotic attractor and move ergodically on it. The question that arises is whether this attractor for points on the invariant manifold attracts points not on the invariant manifold, and, if it does, how this is affected by small deterministic or random perturbations to the system. This problem is more general than the specific context of chaos synchronization, and applies to other situations as well. One example, discussed in Section 5.7, is that of systems with symmetry. Our discussion here will be general in that it is not limited to the consideration of chaos synchronization. As indicated in the previous section, the key quantity is the maximal transverse Lyapunov exponent \(h({\bf x}_{0})\), defined in Eqs. (10.14) and (10.15). This quantity characterizes the exponential convergence (\(h({\bf x}_{0})<0\)) or divergence (\(h({\bf x}_{0})>0\)) toward the invariant manifold of infinitesimal transverse perturbations from an initial condition \({\bf x}_{0}\) on the chaotic set in the invariant manifold. It is natural to assign stability or instability to the transverse motion based on whether the transverse Lyapunov exponent is negative or positive. But the transverse Lyapunov exponent depends on \({\bf x}_{0}\). Which \({\bf x}_{0}\) should we use to decide stability? In particular, there is the \({\bf x}_{0}\) which makes \(h({\bf x}_{0})\) maximum (least stable), and there is the set of \({\bf x}_{0}\) in the chaotic set that is typical with respect to the natural measure.
In terms of these two choices we define two transverse Lyapunov exponents, \[\hat{h}=\max_{{\bf x}_{0}\in A}h({\bf x}_{0}), \tag{10.24a}\] \[h_{*}=h({\bf x}_{0})\ \mbox{for}\ {\bf x}_{0}\in A\ \mbox{typical}, \tag{10.24b}\] where \(A\) is the chaotic attractor for initial conditions in the invariant manifold. Note that if \(\hat{h}<0\), then \(h({\bf x}_{0})<0\) for all \({\bf x}_{0}\in A\). By Eq. (10.24b) we mean that the set of \({\bf x}_{0}\) on \(A\) for which \(h({\bf x}_{0})\neq h_{*}\) has natural measure zero with respect to points on the invariant manifold. That is, if we consider the set of all initial conditions that are on the invariant manifold, that go to \(A\), and that yield \(h({\bf x}_{0})\neq h_{*}\), then the Lebesgue measure of this set (restricted to the invariant manifold) is zero. Thus, if one were to 'close one's eyes and put one's finger down randomly at a point on the invariant manifold', then the maximal transverse Lyapunov exponent generated by following the orbit from this point would be \(h_{*}\), with probability one. (See the previous discussions of natural measure in Sections 3.3 and 4.4.) On the other hand, there is typically a zero natural measure set of points on \(A\) for which \(h({\bf x}_{0})\neq h_{*}\); for example, this is typically the case for \({\bf x}_{0}\) on an unstable periodic orbit embedded in \(A\) (see Section 4.4). Thus, it is typical that \(\hat{h}>h_{*}\). Suppose now that the system depends on a parameter \(p\), and that the transverse exponents increase with \(p\), with \(\hat{h}\) passing from negative to positive at \(p=\hat{p}\). The regime in which \(\hat{h}>0\) while \(h_{*}<0\) is called the _bubbling_ regime (Ashwin _et al._, 1994a,b). As \(p\) is further increased, \(h_{*}\) passes from negative to positive at \(p=p_{*}\). We call the dynamical phenomena that accompany the passage of \(p\) through \(\hat{p}\) the _bubbling transition_, and the phenomena that accompany the passage of \(p\) through \(p_{*}\) the _blow out bifurcation_ (Ott and Sommerer, 1994). We now consider the dynamics associated with the bubbling regime, \(\hat{p}<p<p_{*}\).
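The distinction between \(h_{*}\) and \(\hat{h}\) can be made concrete with a toy skew map (an invented example, not from the text): take \(x_{n+1}=4x_{n}(1-x_{n})\) on the invariant manifold \(y=0\) and \(y_{n+1}=p\,e^{\cos 2\pi x_{n}}\,y_{n}\) transverse to it, so that \(h({\bf x}_{0})=\ln p+\langle\cos 2\pi x_{n}\rangle\) along the orbit from \(x_{0}\). For a typical orbit, \(\langle\cos 2\pi x\rangle\approx 0.30\) (computable from the invariant density \(1/(\pi\sqrt{x(1-x)})\)), while the unstable fixed point \(x=0\) embedded in the attractor gives \(\cos 0=1\). For \(p=0.5\), then, \(h_{*}\approx-0.39<0\) but \(h(0)=\ln p+1\approx+0.31>0\), so \(\hat{h}>0>h_{*}\): the bubbling situation.

```python
import math

P = 0.5   # transverse parameter p (illustrative choice in the bubbling range)

def h_transverse(x0, n=200000):
    # Finite-time transverse exponent h(x0) = <ln(P e^{cos 2 pi x})> along the
    # orbit of the logistic map x' = 4x(1 - x) started at x0 (toy skew system
    # with invariant manifold y = 0; an invented illustration).
    x, acc = x0, 0.0
    for _ in range(n):
        acc += math.log(P) + math.cos(2.0 * math.pi * x)
        x = 4.0 * x * (1.0 - x)
    return acc / n

h_typ = h_transverse(0.1234)   # typical with respect to the natural measure
h_fp = math.log(P) + 1.0       # exact value on the fixed point x0 = 0
```

A typical orbit is transversely attracted to the manifold, yet orbits shadowing the embedded fixed point are transversely repelled, which is exactly the coexistence that drives bubbling.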
Since \(h_{*}<0\), a neighborhood of the chaotic set \(A\) will contain a nonzero volume of the full state space such that orbits initialized in this volume limit on \(A\) as \(t\to\infty\). Furthermore, since \(h_{*}\) refers to a set on the attractor that has natural measure one, it follows that, as the neighborhood is taken to be closer and closer to \(A\), the fraction of the neighborhood's volume yielding orbits that move toward \(A\) approaches one. On the other hand, since \(\hat{h}>0\), there are initial points in any neighborhood of \(A\) that yield orbits which are repelled from \(A\), moving far from \(A\) and from the invariant manifold. When such an orbit moves far from \(A\) it may be captured by another attractor, if the system indeed has another attractor (not necessarily always the case). We have already dealt with the case where there is another attractor in Section 5.7, where we showed that, in this case, the basin of attraction for \(A\) is a _riddled basin_. What happens if the global system dynamics away from the invariant manifold does not admit another attractor? In that case, as the orbit wanders, it eventually will return to the vicinity of \(A\). Perhaps, when it does this, it will again be repelled and move away, but, in the end, it will come near \(A\) and be attracted to it, settling down to sustained chaotic motion on \(A\). The set \(A\) is thus the single global attractor for the full system. The discussion above applies to an ideal case: there is no external noise in the system, and the invariant manifold is assumed to exist exactly. In practical situations there is inevitably small noise present, and the invariant manifold may only exist in an approximate sense.
As an example of the latter, consider Figure 10.8, but now assume that the uncoupled systems \(a\) and \(b\) obey slightly different equations, \[{\rm d}{\bf x}_{a}/{\rm d}t={\bf F}_{a}({\bf x}_{a}),\ \ \ \ {\rm d}{\bf x}_{b}/{\rm d}t={\bf F}_{b}({\bf x}_{b}),\] where \({\bf F}_{a}({\bf x})-{\bf F}_{b}({\bf x})=\epsilon{\bf g}({\bf x})\). We refer to \(\epsilon{\bf g}\) as the synchronization _mismatch_, and the parameter \(0<\epsilon\ll 1\) is used to signify our presumption that the mismatch is small. When dealing with real physical systems, we anticipate that it may be impossible to make two systems that are intended to synchronize exactly the same. Thus, like noise, mismatch is always present. The effect of noise of maximum absolute value \(\epsilon\) on a riddled basin attractor (Section 5.7) on the invariant manifold can be understood as follows. Say the orbit is on the attractor \(A\) and the noise kicks it off the invariant manifold to some position whose distance is of order \(\epsilon\) from the invariant manifold. Depending on where the orbit was on \(A\) and on the noise kick, the orbit may be perturbed so as to fall into the basin of the attractor not on the invariant manifold. In such a case, if the noise after this first kick is set to zero, then the orbit will move toward the other attractor, never returning to \(A\). Note, however, that this is rather unlikely if \(\epsilon\ll 1\). This is because the volume within a small distance \(\epsilon\ll 1\) of \(A\) is overwhelmingly occupied by points in the basin of \(A\), and only a very small fraction of this volume is occupied by the basin of the attractor not on the invariant manifold. This can be seen from Eq. (5.22). Thus, there is only a small probability that the orbit will be repelled from \(A\).
But, if we iterate the orbit many times, with noise applied on each iterate, eventually it will leave the vicinity of \(A\), going to the other attractor and never returning to the vicinity of \(A\). Thus, all orbits eventually go to the other attractor, and \(A\) may be said to cease being an attractor when _any_ noise is present. Let \(\tau_{A}\) be the average time that an orbit initialized on \(A\) stays within the vicinity of \(A\), where the average is over initial conditions distributed according to the natural measure on \(A\). For small \(\epsilon\) this time can be very long (Ott _et al._, 1994), and \(\tau_{A}\) approaches infinity as \(\epsilon\to 0\). However, no matter how small \(\epsilon>0\) is, if we wait long enough, the orbit will always leave the vicinity of \(A\). On the other hand, an experiment of limited duration might be short enough that noise induced expulsion of the orbit from the vicinity of \(A\) does not occur. In such a case \(A\) would appear to be an attractor for the system. The above discussion has been for the case of noise, assuming no mismatch. It can be shown that the effect of mismatch is similar to that of noise (Venkataramani _et al._, 1996a). That is, it gives \(A\) a finite lifetime \(\tau_{A}\) which is longer for smaller mismatch, approaching infinity as the mismatch size tends to zero. Again consider the set \(A\) on the invariant manifold with \(p\) in the bubbling regime, \(\hat{p}<p<p_{*}\). Now, however, we wish to examine the situation where the basin of \(A\) is not riddled but occupies the entire volume of the state space. What is the effect of the addition of noise in this situation? (Mismatch has a similar qualitative effect.) The dynamics in the vicinity of \(A\) is essentially the same as for our previous discussion of the case where the basin of \(A\) is riddled.
The difference is that now, when the orbit has been repelled from \(A\) and moves far from the vicinity of \(A\), there is no other attractor waiting there to capture it. In this case, the orbit will eventually return to the vicinity of \(A\), staying there for a possibly long time before again being repelled, again returning, and so on. Thus, there is an intermittent occurrence of _burst_ events, where by a burst we mean that the orbit goes far from \(A\). If, in a case with small noise, we plot the distance of the orbit from the invariant manifold versus time, it is very small for long stretches of time punctuated by relatively short epochs (bursts) where it is large. The mean time between bursts, which we denote by \(\tau_{A}^{\prime}\), is essentially determined by the same considerations that yield the mean lifetime \(\tau_{A}\) of the riddled basin attractor (Venkataramani _et al._, 1996a,b). An experimental study of noise induced bursting in the bubbling regime appears in the paper of Gauthier and Bienfang (1996). Figure 10.13 shows a bursting time series taken from their paper. The behavior in the bubbling regime is to be contrasted with the behavior in the regime \(p<\hat{p}\). For \(p<\hat{p}\) both \(\hat{h}\) and \(h_{*}\) are negative. Thus _all_ sufficiently small perturbations of an orbit from the invariant manifold eventually relax back toward it. The effect of noise or mismatch of maximum magnitude \(\epsilon\) is fairly benign for small \(\epsilon\). In particular, if \(\epsilon>0\) is sufficiently small, an orbit initially on \(A\) will be confined to a small neighborhood of \(A\) _for all time_. Thus, for \(\epsilon\) small there is only a small effect on the orbit. For example, in the case of the synchronizing system given by Eqs. (10.12) and Figure 10.8, the two orbits \({\bf x}_{a}(t)\) and \({\bf x}_{b}(t)\) will stay very close for all time. The fact that they are not _exactly_ equal may have no practical importance.
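A minimal caricature of such bursting (an invented toy model, not from the text) is the multiplicative map \(y_{n+1}=\min(a\,z_{n}\,y_{n},\,1)\) with \(z_{n}\) random in \((0,1)\): the typical transverse exponent is \(\langle\ln(az_{n})\rangle=\ln a-1\), which passes through zero at \(a={\rm e}\). Slightly above this value, with a small 'noise floor' playing the role of noise or mismatch and keeping \(y\) from collapsing to zero, the time series shows long quiescent stretches near the manifold punctuated by bursts:

```python
import random

random.seed(4)
A = 2.8                 # slightly above the blow-out value e ~ 2.718 of this
                        # toy model: <ln(A z)> = ln A - 1 is slightly positive
FLOOR, THRESH = 1e-10, 0.5

y = 1e-6
above = False
bursts = 0
low_run, longest_low = 0, 0
for _ in range(200000):
    z = random.random()              # surrogate for a chaotic driving signal
    y = min(A * z * y, 1.0)          # saturation stands in for the nonlinearity
    y = max(y, FLOOR)                # tiny floor plays the role of noise/mismatch
    if y > THRESH and not above:
        bursts += 1                  # upward crossing of the threshold = burst
    above = y > THRESH
    if above:
        low_run = 0
    else:
        low_run += 1
        longest_low = max(longest_low, low_run)
```

The counters show repeated bursts separated by long sub-threshold stretches, qualitatively matching the experimental record of Gauthier and Bienfang described above.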
This is to be contrasted with the case of bubbling where (in the absence of an unsynchronized attractor off the \({\bf x}_{a}={\bf x}_{b}\) manifold) for long periods of time \({\bf x}_{a}(t)\) and \({\bf x}_{b}(t)\) may be well synchronized (i.e., close to each other) but there are also short occasional bursts when \({\bf x}_{a}\) and \({\bf x}_{b}\) move far apart (see Figure 10.13). As \(p\) increases through \(\hat{p}\) (see Figure 10.12), we have a transition between the case of stability (\(\hat{h}\) and \(h_{*}\) both negative) and the bubbling regime (\(\hat{h}>0\)). The dynamical behavior accompanying this bubbling transition is discussed by Venkataramani _et al._ (1996a,b). In particular, the scaling of \(\tau_{A}\) and of \(\tau_{A}^{\prime}\) with \((p-\hat{p})\) and \(\epsilon\) (the size of the noise or mismatch) is discussed, as is the dependence of the maximum burst amplitude on these quantities. We now ask, what are the characteristic phenomena accompanying the increase of \(p\) through \(p_{*}\)? This is the blow out bifurcation discussed by Ott and Sommerer (1994).

Figure 10.13: Experimental loss of synchronism in a system of two coupled oscillator circuits (Gauthier and Bienfang, 1996). The quantity \(|x_{\perp}(t)|\) is the difference between two voltages in the circuits and measures the distance of the system state from the synchronization manifold. Long intervals of approximate synchronization are interrupted by brief large scale desynchronization events.

The effect of a blow out bifurcation on a riddled basin of attraction can be seen from the results of Section 5.7: as \(p\) approaches \(p_{*}\) from below, the state space volume occupied by the riddled basin of attraction approaches zero. In particular, in Eq. (5.22) the exponent is given by \(\eta=[\ln(\beta/\alpha)]/\ln 2\), which approaches zero as the blow out bifurcation is approached, \(\beta/\alpha\to 1\). This is a special case of the general result (Ott _et al_.
1994) for \(p\) near \(p_{*}\), \[\eta\cong-h_{*}/D, \tag{10.26}\] where \(D\) is a quantity characterizing the spread of finite time Lyapunov exponents for infinitesimal perturbations in the invariant manifold. As \(p\) approaches \(p_{*}\) from below, the exponent \(\eta\) becomes smaller, approaching zero at \(p=p_{*}\). This signals the death as an attractor of the chaotic set in the invariant manifold. The basin of the attractor off the invariant manifold then occupies all of the volume of state space. It remains to discuss the effect of a blow out bifurcation for the situation where for \(p>p_{*}\) the basin of the attractor on the invariant manifold is not riddled. For this case, it is found that, for \(p\) slightly larger than \(p_{*}\), there is intermittent bursting from the invariant manifold. An observed time series is much like that in Figure 10.13 for \(p<p_{*}\). The difference is that now bursting occurs in the ideal case where noise and mismatch are absent. Figure 10.14 from Ott and Sommerer (1994) shows such a time series obtained from a system of the form (5.20). (In Section 5.7 it was stated that this system can support a riddled basin attractor. For the situation in Figure 10.14 the potential \(V(x,\,y)\) in Eq. (5.20) is modified from that case so that there is no attractor off the invariant manifold.)

Figure 10.14: On off intermittent time series for a system of the form of Eq. (5.20).

The type of bursting seen for \(p\) slightly greater than \(p_{*}\), as in Figure 10.14, has been studied in a number of papers (Yamada and Fujisaka, 1983; Yu _et al._, 1991; Heagy _et al._, 1994; Platt _et al._, 1994; Venkataramani _et al._, 1995; Cenys _et al._, 1996), and is called on off intermittency. The main properties of an on off intermittent time series \(y(t)\) can be summarized as follows.

(i) Sampling \(|y(t)|\) at many times, one can obtain a histogram approximation to the probability distribution function \(P(|y|)\).
This probability distribution function is predicted to have the following form, \[P(|y|)\sim|y|^{\gamma},\] (10.27) with \(\gamma=(h_{\perp}^{*}/D)-1\) for \(|y|\ll|y|_{\max}\), where \(|y|_{\max}\) is the maximum burst amplitude and \(D\) is the previously mentioned quantity characterizing the spread of finite time Lyapunov exponents for infinitesimal perturbations in the invariant manifold. (Referring to Eq. (9.36), \(D=[G^{\prime\prime}(\bar{h})]^{-1}\), where \(\bar{h}\) is the Lyapunov exponent for perturbations in the invariant manifold.) (ii) The time average \(\langle|y(t)|\rangle\) scales linearly with the parameter for \(p>p_{*}\) and \((p-p_{*})\) small, \[\langle|y(t)|\rangle\sim(p-p_{*}).\] (10.28) Since the maximum burst amplitude \(|y|_{\max}\) is essentially constant for small \((p-p_{*})\), Eq. (10.28) implies that the bursts become more frequent as \((p-p_{*})\) increases. (iii) We can define a set of burst times as those times at which \(|y(t)|\) crosses some threshold value \(y_{\rm th}\) in the upward direction. The threshold can be chosen to be some \(O(1)\) fraction \(\beta\) of \(|y|_{\max}\), i.e., \(y_{\rm th}=\beta|y|_{\max}\), \(\beta<1\) (e.g., for Figure 10.14 one might choose \(y_{\rm th}=5\)), and the scalings based on this thresholding (Eqs. (10.29) and (10.30) below) are independent of \(\beta\). Let \(t_{j}\) and \(t_{j+1}\) be two successive burst times. The \(j\)th interburst interval duration is defined as \(\delta_{j}=t_{j+1}-t_{j}\). Given a long on-off intermittent time series we can obtain a large collection of interburst times \(\{\delta_{j}\}\). Using these we can construct a histogram approximation to the interburst time probability distribution function, \(\hat{P}(\delta)\).
Then \(\hat{P}(\delta)\) is predicted to obey the scaling \[\hat{P}(\delta)\sim\delta^{-3/2}\] (10.29a) in the range \[D^{-1}\ll\delta\ll D/h_{\perp}^{*2}.\] (10.29b) Note that the range (10.29b) increases as the transition is approached from above, since \(h_{\perp}^{*}\to 0\) as \(p\to p_{*}\). (iv) Imagine that we plot the burst times \(t_{j}\) along the \(t\) axis between \(t=0\) and some very long time \(t=T\), and then we rescale this long time interval to the unit interval by normalizing \(t\) to \(T\). In this case, in the double limit \(T\to\infty\) followed by \((p-p_{*})\to 0\), the set of normalized burst times on the unit interval is predicted to approach a fractal set with box counting dimension \[d=1/2.\] (10.30a) Without normalizing, and considering \(p-p_{*}\) to be small but nonzero, this corresponds to \[N(\delta t)\sim(\delta t)^{-1/2},\] (10.30b) where the \(t\) axis has been divided into segments of length \(\delta t\), and \(N(\delta t)\) is the number of these segments needed to cover the set of burst times. This scaling is predicted to be valid in the range \[D^{-1}\ll\delta t\ll D/h_{\perp}^{*2},\] (10.30c) similar to Eq. (10.29b).

### 10.6 Generalized synchronization of coupled chaotic systems

In the previous sections we have been considering a form of synchronism whereby chaotic systems that are coupled produce orbits that are identical, or nearly identical. Inherent in this is the assumption that the coupled systems are matched: in the terminology of the previous section the mismatch parameter \(\epsilon\) is zero or small. What happens if the coupled systems are not, even approximately, matched? Is there some notion of synchronization that might apply to such a situation? With respect to this question, Rulkov _et al_.
(1995) consider a one way coupled system (Figure 10.8(_a_)) of the general form \[\mathrm{d}\mathbf{x}/\mathrm{d}t =\mathbf{F}(\mathbf{x}), \tag{10.31a}\] \[\mathrm{d}\mathbf{y}/\mathrm{d}t =\mathbf{G}(\mathbf{y},\,\mathbf{h}(\mathbf{x})), \tag{10.31b}\] where \(\mathbf{x}\) is \(n\) dimensional and \(\mathbf{y}\) is \(m\) dimensional. Equation (10.31a) is the _drive_ and Eq. (10.31b) is the _response_. The quantity \(\mathbf{h}(\mathbf{x})\) is introduced to explicitly take into account the possibility that a function of \(\mathbf{x}\) is used to drive the response system. (Note that the case of a matched system is included, e.g., \(m=n\) and \(\mathbf{G}(\mathbf{y},\,\mathbf{h}(\mathbf{x}))=\mathbf{F}(\mathbf{y})+\mathbf{K}(\mathbf{x}-\mathbf{y})\), where \(\mathbf{K}\) is an \(m\times m\) constant matrix, but we are interested in situations far from this case.) Rulkov _et al_. stated that there is _generalized synchronization_ for the system (10.31) if the response state \(\mathbf{y}\) is determined uniquely by the drive state \(\mathbf{x}\) once the system has settled onto its chaotic attractor, \[\mathbf{y}=\Phi(\mathbf{x}).\] This is shown schematically in Figure 10.15. Rulkov _et al_. consider how evidence for generalized synchronism can be extracted from data taken from the drive and response systems. Kocarev and Parlitz (1996) present examples and give a rigorous result for the occurrence of generalized synchronization: generalized synchronization occurs if, for all initial \(\mathbf{x}_{0}\) in a neighborhood of the attractor of the drive system, the response system is asymptotically stable.
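This stability criterion can be probed numerically by running two copies of the response from different initial conditions under the same drive signal (essentially the 'auxiliary system' idea of Abarbanel, Rulkov and Sushchik (1996)): if the copies converge, the response has forgotten its initial condition and is asymptotically a function of the drive alone. A minimal discrete-time sketch, with maps and parameters chosen purely for illustration (they are not the systems of the text):

```python
import numpy as np

# Illustrative stand-in for Eqs. (10.31): a chaotic logistic-map drive and a
# uniformly contracting response.  These maps are NOT from the text.

def drive(x):
    return 4.0 * x * (1.0 - x)             # chaotic drive x_{n+1} = 4 x_n (1 - x_n)

def response(y, x, c=0.5):
    return c * y + np.sin(2.0 * np.pi * x)  # |c| < 1 => asymptotically stable

x = 0.137
y1, y2 = 0.9, -3.2                          # two different response initial conditions
for _ in range(200):
    y1, y2 = response(y1, x), response(y2, x)
    x = drive(x)

# The difference y1 - y2 contracts by the factor c at every step, so the two
# driven copies converge: the response state is asymptotically determined by
# the drive alone, y = Phi(x), i.e., generalized synchronization.
print(abs(y1 - y2))                         # ~ c**200 times the initial difference
```

Here convergence is guaranteed because the response is linear in \(y\) with \(|c|<1\); for a general nonlinear response one would test convergence numerically over many drive initial conditions.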
Recall that the response system is said to be asymptotically stable if there is a region \(B\) of \(\mathbf{y}\) space such that for any two initial points \(\mathbf{y}_{0}^{(1)}\) and \(\mathbf{y}_{0}^{(2)}\) in that region we have \[\lim_{t\to\infty}\|\mathbf{y}(t,\,\mathbf{x}_{0},\,\mathbf{y}_{0}^{(1)})-\mathbf{y}(t,\,\mathbf{x}_{0},\,\mathbf{y}_{0}^{(2)})\|=0,\] where \(\mathbf{y}(t,\,\mathbf{x}_{0},\,\mathbf{y}_{0}^{(1)})\) and \(\mathbf{y}(t,\,\mathbf{x}_{0},\,\mathbf{y}_{0}^{(2)})\) are the orbits from \((\mathbf{x}_{0},\,\mathbf{y}_{0}^{(1)})\) and \((\mathbf{x}_{0},\,\mathbf{y}_{0}^{(2)})\), and these orbits are in the interior of \(B\) for \(t\) sufficiently large. Hunt, Ott and Yorke (1997) consider the smoothness of the function \(\Phi\). In particular, it is possible that the function \(\Phi\) can be very wild, so that its graph (e.g., Figure 10.15) can be a fractal surface. Hunt _et al_. use the Hölder exponent to quantify the roughness of the surface when it is fractal, and they also obtain a formula for the Hölder exponent in terms of Lyapunov exponents. In addition, they derive a condition for the surface to be smooth (differentiable), i.e., nonfractal, where this condition is also in terms of Lyapunov exponents.

Figure 10.15: Schematic of \(\mathbf{y}=\Phi(\mathbf{x})\). Here \(\mathbf{x}\) is \(n\) dimensional, \(\mathbf{y}\) is \(m\) dimensional, and, if the function \(\Phi\) is smooth, then \(\mathbf{y}=\Phi(\mathbf{x})\) is an \(n\) dimensional surface in the \((m+n)\) dimensional system state space \((\mathbf{x},\,\mathbf{y})\).

### 10.7 Phase synchronization of chaos

In this section we consider a form of synchronization that may be regarded as being weaker than either identical synchronization or generalized synchronization. In particular, there may be no relation of the form \({\bf y}=\Phi({\bf x})\), as in generalized synchronization.
The type of synchronization we consider here applies to systems that, although chaotic, are oscillatory in such a way that a temporal phase angle can be defined. As an illustration, consider a three dimensional dynamical system, \[{\rm d}{\bf x}/{\rm d}t={\bf R}({\bf x}),\] where \({\bf x}(t)=[x(t),\,y(t),\,z(t)]^{\dagger}\), and the dynamics of \({\bf x}(t)\) is both oscillatory and chaotic. Figure 10.16(\(a\)) illustrates what we mean by this utilizing a particular choice for \({\bf R}({\bf x})\). The figure shows a plot of \(x(t)\) which oscillates between maxima and minima where the amplitudes and the times between them vary in an irregular manner. This type of behavior is common to many chaotic systems. Figure 10.16(\(b\)) shows the \(xy\) projection of the orbit on the chaotic attractor of this system. The orbit continually circles around the origin in the counter clockwise direction. Defining a phase angle, \[\phi(t)=\arctan[(y(t)-\bar{y})/(x(t)-\bar{x})], \tag{10.32}\] where \(\bar{x}\) and \(\bar{y}\) are the time averages of \(x(t)\) and \(y(t)\), the phase \(\phi(t)\) increases continually with \(t\). Here we take \(\phi(t)\) to be continuous in time. Thus, by this convention, as \(\phi(t)\) increases through \(2\pi\) it is not reset to zero, and the quantity \([\phi(t)-\phi(0)]/2\pi\) indicates the total number of times between time 0 and time \(t\) that the orbit has encircled the point \((\bar{x},\bar{y})\). Now imagine that we couple a signal \({\bf P}(\omega t)\) from a periodic oscillator to an oscillatory chaotic system as in Figure 10.16, \[{\rm d}{\bf x}/{\rm d}t={\bf R}({\bf x})+A{\bf P}(\omega t), \tag{10.33}\] where \({\bf P}\) is \(2\pi\) periodic in its argument, \({\bf P}(\omega t+2\pi)={\bf P}(\omega t)\), and \(A\) is a parameter signifying the strength of the coupling to the periodic oscillator. We ask whether the phase \(\omega t\) of the periodic drive and the phase \(\phi\) of the chaotic oscillator are synchronized.
In particular, we say that there is phase synchronization if \[K\leq\phi(t)-\omega t\leq K+2\pi \tag{10.34}\] for some constant \(K\) and all time \(t>t_{\ell}\), where \(t_{\ell}\) is the time at which the phases become locked. According to (10.34), as the phase \(\phi(t)\) increases it can also fluctuate chaotically, provided that the chaotic fluctuations of \(\phi(t)\) are limited to within a range of \(2\pi\). Note that, while the phases are locked in this manner, the amplitude of the phase synchronized oscillation (e.g., for Figure 10.16, \(a(t)=[(x(t))^{2}+(y(t))^{2}]^{1/2}\)) can vary chaotically, just as it does without the periodic pacing signal (i.e., with \(A=0\) in Eq. (10.33)). In a situation such as in Figure 10.16 we can define an average frequency for the undriven (\(A=0\)) system, \[\overline{\omega}=\lim_{t\rightarrow\infty}[\phi(t)-\phi(0)]/t, \tag{10.35}\] where \(\phi(t)\) is calculated using (10.32) and a typical chaotic orbit. It is important to note that this does _not_ imply that the undriven system satisfies the phase synchronization condition (10.34) when \(\omega\) is replaced by \(\overline{\omega}\). In fact, the chaotic nature of the orbit implies that \(\phi(t)-\overline{\omega}t\) is similar to an unbiased random walk. Taking an average (denoted \(\langle\cdots\rangle\)) over initial conditions on the attractor, one thus expects that at long time \[\langle[\phi(t)-\overline{\omega}t]^{2}\rangle\sim 2D_{\phi\phi}t, \tag{10.36}\] where \(D_{\phi\phi}\) is a phase diffusion coefficient. This is appropriate to the fact that the mean squared distance covered by the random walk is governed by diffusion. Thus, for a typical orbit, \(|\phi(t)-\overline{\omega}t|\sim\sqrt{t}\), and Eq. (10.34) with \(\omega=\overline{\omega}\) is not satisfied. (The diffusion does not affect the average in (10.35) because \(\sqrt{t}/t\to 0\) as \(t\rightarrow\infty\).)
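The continuous (unwrapped) phase of Eq. (10.32) and the mean frequency of Eq. (10.35) are straightforward to compute numerically. The sketch below uses a synthetic noisy rotation as a stand-in for a chaotic oscillatory orbit; the signal and all parameters are illustrative assumptions, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic oscillatory signal: rotation at mean rate w_bar with jittered
# increments, standing in for the (x(t), y(t)) projection of a chaotic orbit.
w_bar, dt, n = 1.0, 0.01, 200_000
theta = np.cumsum(w_bar * dt + 0.05 * np.sqrt(dt) * rng.standard_normal(n))
x, y = np.cos(theta), np.sin(theta)

# Eq. (10.32): phase from the centered coordinates; np.unwrap keeps phi(t)
# continuous in time, so it is not reset to zero as it passes through 2*pi.
phi = np.unwrap(np.arctan2(y - y.mean(), x - x.mean()))

T = n * dt
w_est = (phi[-1] - phi[0]) / T               # Eq. (10.35), mean frequency
n_loops = (phi[-1] - phi[0]) / (2 * np.pi)   # encirclements of (x_bar, y_bar)

print(w_est)     # close to w_bar = 1.0
print(n_loops)   # about w_bar * T / (2*pi), i.e., roughly 318 rotations
```

The residual \(\phi(t)-\overline{\omega}t\) of such a signal is exactly the random walk described above; averaging its square over an ensemble of realizations would give an estimate of \(D_{\phi\phi}\) via Eq. (10.36).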
The phase synchronization condition (10.34) implies not only that \(|\phi(t)-\omega t|\) is bounded, but that it is bounded by \(2\pi\). Thus, starting at the locking time \(t_{\ell}\), the number of phase rotations of the driver, \(\omega(t-t_{\ell})/2\pi\), and the number of phase rotations of the chaotic system, \([\phi(t)-\phi(t_{\ell})]/2\pi\), are the same to within a fluctuation that is always less than one rotation. Whether phase synchronization holds for a system like Eq. (10.33) with given **R** and **P** depends on \(\omega\) and the amplitude \(A\). A typical situation is as shown in Figure 10.17 (Pikovsky, 1985; Stone, 1992; Rosenblum _et al._, 1996; Rosa _et al._, 1998). For example, Figure 10.17 applies for the \({\bf R}({\bf x})\) used for Figure 10.16 with \({\bf P}(\omega t)=[0,\,\sin\omega t,\,0]^{\dagger}\).

Figure 10.17: Phase synchronization holds in a tongue like region in \((A,\,\omega)\) parameter space.

As shown in Figure 10.17, there is a tongue like region in (\(A\), \(\omega\)) space where phase synchronization applies. This tongue is similar to the Arnold tongues of frequency locking for two coupled _periodic_ oscillators (see Figure 6.7). An important difference from the Arnold tongues in Figure 6.7 is that in Figure 10.17 there is a gap between the minimum of the tongue and the \(\omega\) axis; thus, if \(A<A_{\rm min}\), there is no phase synchronization for any \(\omega\). We note that \(m\!:\!n\) phase synchronization (where \(m\) and \(n\) are integers) of chaotic oscillators is also possible (see Tass _et al._, 1998), in which case (10.34) is replaced by \(K\leq m\phi(t)-n\omega t\leq K+2\pi\). In this situation the tongue minima on a plot like that in Figure 10.17 would occur at \(\omega\cong(m/n)\overline{\omega}\). Returning now to the case \(m=n=1\), inside the tongue shown in Figure 10.17 locking as defined by (10.34) holds. Just outside of the tongue, e.g., at the points labeled \(\beta\) and \(\gamma\) in Figure 10.17, locked-like
behavior applies for finite intervals of time interrupted by \(2\pi\) phase jumps. As one approaches the tongue boundary from outside the tongue, the mean time between these jumps becomes longer, approaching infinity as the boundary is approached. See Figure 10.18 for schematic illustrations corresponding to the parameter values labeled \(\alpha\), \(\beta\) and \(\gamma\) in Figure 10.17. As shown in Figure 10.18, for the parameters corresponding to \(\alpha\), the fluctuations in \(\phi(t)-\omega t\) remain bounded, while for the parameters corresponding to \(\beta\) (to \(\gamma\)) there are long periods of apparent phase synchronization intermittently punctuated by jumps of \(+2\pi\) (of \(-2\pi\)). It is found that the time duration between two consecutive jumps varies in a random like manner and has an exponential probability distribution for long interjump times. That is, the interjump time probability distribution function is described by Eq. (8.12), the equation for the transient lengths following a crisis. Indeed, as we shall soon show, as the tongue boundary is crossed, the transition from phase synchronized behavior to phase unsynchronized behavior (e.g., curve \(\beta\)) is a form of crisis. In the discussion above we have focused on a periodic signal coupled to a chaotic oscillatory system, Eq. (10.33). It was pointed out by Pikovsky _et al._ (1997) that one can also consider the coupling of two oscillatory chaotic systems. 
For example, \[{\rm d}{\bf x}_{a}/{\rm d}t = {\bf R}_{a}({\bf x}_{a})+A{\bf K}_{a}\cdot({\bf x}_{b}-{\bf x}_{a}), \tag{10.37a}\] \[{\rm d}{\bf x}_{b}/{\rm d}t = {\bf R}_{b}({\bf x}_{b})+A{\bf K}_{b}\cdot({\bf x}_{a}-{\bf x}_{b}), \tag{10.37b}\] where \({\bf K}_{a}\) and \({\bf K}_{b}\) are coupling matrices and we suppose that the uncoupled systems, \({\rm d}{\bf x}_{a}/{\rm d}t={\bf R}_{a}({\bf x}_{a})\) and \({\rm d}{\bf x}_{b}/{\rm d}t={\bf R}_{b}({\bf x}_{b})\), are oscillatory and chaotic, with \({\bf R}_{a}\) and \({\bf R}_{b}\) being in general nonidentical.

Figure 10.18: The curve labeled \(\beta\) (labeled \(\gamma\)) intermittently executes phase jumps of \(+2\pi\) (\(-2\pi\)).

Defining phases, \(\phi_{a}(t)\) and \(\phi_{b}(t)\), for the \(a\) and \(b\) variables (i.e., for \(\mathbf{x}_{a}(t)\) and \(\mathbf{x}_{b}(t)\)), we call the system phase synchronized if \[K\leq|\phi_{a}(t)-\phi_{b}(t)|\leq K+2\pi, \tag{10.38}\] which is analogous to (10.34) (or, in the case of \(m:n\) commensurate synchronization, \(K\leq|m\phi_{a}(t)-n\phi_{b}(t)|\leq K+2\pi\)). We now discuss the phase synchronization transition (Pikovsky _et al._, 1997; Rosa _et al._, 1998). That is, we address the question of how the behavior of the system changes as we continuously vary the system parameters from inside a tongue (as at point \(\alpha\) in Figure 10.17) to outside a tongue (as at point \(\beta\)). For this purpose it is useful to introduce \[\psi(t)=\phi(t)-\omega t\] as one of the variables defining the state of the system; the state of the system is then given by \([\psi(t),\,\mathbf{w}(t)]=[\psi(t),\,w_{1}(t),\,w_{2}(t),\,\ldots,\,w_{\ell}(t)]\), where the \(w_{i}\) are other phase space coordinates, in addition to \(\psi\), that are needed to uniquely specify the system state. Assuming phase synchronization, the situation is then as indicated in Figure 10.19.
Since the evolutions of a state from (\(\psi\), \(\mathbf{w}\)) and (\(\psi+2\pi\), \(\mathbf{w}\)) are physically the same, there are identical attractors displaced by \(2\pi\) from each other, and the picture in Figure 10.19 is periodic in \(\psi\) with period \(2\pi\). Since the figure is for the phase synchronized case, the variation of \(\psi\) over an attractor is limited (see Eq. (10.34)), and the attractors are bounded in \(\psi\) by the basin boundaries shown in Figure 10.19, where these boundaries are separated by \(2\pi\). The consideration of an angle variable, like \(\psi\), on the infinite real line, \(-\infty<\psi<+\infty\), rather than restricted to the interval \(0\) to \(2\pi\), is called a _lift_. Suppose that the orbit is on one of the attractors in Figure 10.19 and that the system parameters are located at the point \(\alpha\) (Figure 10.17) in the middle of the tongue. Now imagine that we continuously change the system parameters so that the location of the system in parameter space moves toward the left tongue boundary (e.g., by decreasing \(\omega\) in Eq. (10.33)). As this is done, the attractors in Figure 10.19 move towards the basin boundary on their right (and away from the basin boundary on their left). When the parameters become such that they lie on the left tongue boundary in Figure 10.17, the attractors in Figure 10.19 just touch the basin boundary on their right. This situation corresponds to a boundary crisis (see Section 8.3). Further changes of the parameters put the system parameters at a point like \(\beta\) (Figure 10.17) that is just outside the tongue. In this case the sets that formerly constituted the attractors may be roughly thought of as poking just slightly over the former basin boundaries on their right.
Since the former attractors poke only slightly over the boundary, typically an orbit initialized in one of the regions between the boundaries will initially go to the vicinity of the former attractor of that region and bounce around chaotically on it for a long time, mimicking an apparently phase synchronized state. Eventually, however, the orbit will find itself in the region where the formerly attracting set has poked over to the right side of the former basin boundary. After this occurs, the orbit moves to the vicinity of the former attractor located \(2\pi\) to the right of the orbit's previous motion. This corresponds to a \(+2\pi\) phase jump, as in Figure 10.18 for the curve labelled \(\beta\). The process then repeats, chaotically generating \(2\pi\) phase jumps. Thus, by use of our lift construction, we are able to identify the transition to phase synchronization as a crisis (Rosa _et al._, 1998), and results previously obtained for crises can be applied. We now return to the issue of how one can define the phase of a chaotic oscillatory process. The point is that phase is not a uniquely defined quantity in this context. In particular, there are several potentially useful ways of specifying a phase, and the one that is chosen may be dictated by operational considerations, such as ease of computation, effectiveness in evidencing phase synchronization, etc. For example, the definition given by (10.32) requires knowledge of two phase space variables, \(x(t)\) and \(y(t)\). However, in an experiment it may only be possible to measure a single variable, say \(x(t)\), and we might desire to specify a phase based solely on the time series \(x(t)\). One possible method is to introduce a delay coordinate \(x(t-T)\), where, if \(x(t)\) is oscillatory as in Figure 10.16(a), \(T\) is chosen as roughly one quarter of a typical oscillation period.
In this case we define a phase \[\phi_{1}(t)=\arctan[(x(t-T)-\bar{x})/(x(t)-\bar{x})],\] (10.39a) where \(\bar{x}\) denotes the time average of \(x(t)\). A second method is to introduce the time derivative \(\dot{x}(t)=\mathrm{d}x(t)/\mathrm{d}t\) as the second coordinate, \[\phi_{2}(t)=\arctan[\alpha\dot{x}(t)/(x(t)-\bar{x})]. \tag{10.39b}\] The scale factor \(\alpha\) is chosen as \(\alpha=[\langle(x(t)-\bar{x})^{2}\rangle/\langle\dot{x}^{2}\rangle]^{1/2}\), where the angle brackets indicate a time average. A third method, advocated by Rosenblum _et al._ (1996), is to introduce the Hilbert transform of \(x(t)-\bar{x}\), \[x_{\rm H}(t)=\frac{1}{\pi}{\rm P}\!\int_{-\infty}^{+\infty}\frac{x(t^{\prime})-\bar{x}}{t-t^{\prime}}{\rm d}t^{\prime}, \tag{10.40}\] where the symbol \({\rm P}\) signifies that the singularity of the integral at \(t^{\prime}=t\) is to be resolved by taking the integral in the sense of the Cauchy principal value. Using this, a third definition of phase, which we call the 'Hilbert phase,' can be introduced, \[\phi_{3}(t)=\arctan[x_{\rm H}(t)/(x(t)-\bar{x})]. \tag{10.41}\] One way to understand why this definition is reasonable is by introducing the convolution operation \(\circ\), \[c(t)=f(t)\circ g(t)\equiv\int_{-\infty}^{+\infty}\!f(t-t^{\prime})g(t^{\prime}){\rm d}t^{\prime}=\int_{-\infty}^{+\infty}\!f(t^{\prime})g(t-t^{\prime}){\rm d}t^{\prime}. \tag{10.42}\] In terms of the convolution, we can write \[X(t)\equiv[x(t)-\bar{x}]+{\rm i}x_{\rm H}(t)=[x(t)-\bar{x}]\circ\Big[\delta(t)+\frac{{\rm i}}{\pi}{\rm P}\frac{1}{t}\Big], \tag{10.43}\] so that \(\phi_{3}(t)\) is the polar angle of the complex number \(X\). In (10.43), \(\delta(t)\) denotes the delta function. A property of the convolution is that its Fourier transform, \[\hat{c}(\nu)=\int_{-\infty}^{+\infty}\exp(-{\rm i}\nu t)c(t){\rm d}t, \tag{10.44}\] is simply the product of the Fourier transforms of the two convolved quantities, \[\hat{c}(\nu)=\hat{f}(\nu)\hat{g}(\nu).
\tag{10.45}\] Applying (10.45) to \(X(t)\) in (10.43) we have \[\hat{X}(\nu)=2\theta(\nu)\hat{x}(\nu), \tag{10.46}\] where \(\theta(\nu)\) is the unit step function, \(\theta(\nu)=1\) for \(\nu>0\) and \(\theta(\nu)=0\) for \(\nu<0\). Equation (10.46) follows from the fact that the Fourier transform of \(\delta(t)\) is 1, while the Fourier transform of \(({\rm i}/\pi){\rm P}(1/t)\) is \({\rm sgn}(\nu)\) (\({\rm sgn}(\nu)=+1\) for \(\nu>0\) and \({\rm sgn}(\nu)=-1\) for \(\nu<0\)). Adding these two contributions we have \(1+{\rm sgn}(\nu)=2\theta(\nu)\), yielding (10.46). Thus, \(X(t)\) can be written as \[X(t)=\frac{1}{2\pi}\!\int_{-\infty}^{+\infty}\!\hat{X}(\nu){\rm e}^{{\rm i}\nu t}{\rm d}\nu=\frac{1}{\pi}\!\int_{0}^{\infty}\!\hat{x}(\nu){\rm e}^{{\rm i}\nu t}{\rm d}\nu. \tag{10.47}\] Note that the second integral in (10.47) is only over positive \(\nu\). Thus, we can view \(X(t)\) as a superposition of temporally rotating complex numbers \(\hat{x}(\nu){\rm e}^{{\rm i}\nu t}\), all of which have \(\nu>0\), corresponding to phase angles that increase continuously in time. Thus we may reasonably expect that the complex number \(X(t)=[x(t)-\bar{x}]+{\rm i}x_{\rm H}(t)\) will be temporally rotating and that its phase angle, which is \(\phi_{3}(t)\) (Eq. (10.41)), will continually increase with time. Motivated by the above reasoning, a fourth definition of phase has been introduced (e.g., DeShazer _et al._, 2001), \[\phi_{4}(t)=\arctan[{\rm Im}(X_{\rm G}(t))/{\rm Re}(X_{\rm G}(t))], \tag{10.39d}\] where \[X_{\rm G}(t)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}\!\!\hat{x}(\nu)\exp[{\rm i}\nu t-(\nu-\nu_{0})^{2}/(2\sigma^{2})]{\rm d}\nu, \tag{10.47a}\] or \[X_{\rm G}(t)=\frac{\sigma}{\sqrt{2\pi}}\,[x(t)-\bar{x}]\,\circ\,\exp\Big[-\frac{1}{2}\sigma^{2}t^{2}+{\rm i}\nu_{0}t\Big]. \tag{10.47b}\] Here the center frequency \(\nu_{0}\) and the frequency spread \(\sigma\) are parameters that can be adjusted to optimize results.
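Equations (10.46) and (10.47) translate directly into a numerical recipe for the Hilbert phase (and, with the step function replaced by a Gaussian window, for the Gaussian phase): Fourier transform \(x(t)-\bar{x}\), double the positive-frequency components, zero the negative ones, invert, and take the polar angle. A minimal sketch on an illustrative test signal (the signal and parameters are assumptions, not from the text):

```python
import numpy as np

def hilbert_phase(x):
    """Unwrapped Hilbert phase phi_3(t) of a real signal x, via Eq. (10.46):
    keep X_hat(nu) = 2*theta(nu)*x_hat(nu), i.e., drop negative frequencies."""
    xc = x - x.mean()
    n = len(xc)
    xf = np.fft.fft(xc)
    h = np.zeros(n)
    h[0] = 1.0                      # zero-frequency term kept once
    h[1:(n + 1) // 2] = 2.0         # positive frequencies doubled
    if n % 2 == 0:
        h[n // 2] = 1.0             # Nyquist term kept once
    X = np.fft.ifft(xf * h)         # analytic signal X(t) = x - x_bar + i*x_H
    return np.unwrap(np.angle(X))

# Test signal (illustrative): an amplitude-modulated cosine whose carrier and
# envelope are both periodic on the record, so the FFT treatment is exact.
n, T = 4096, 100.0
t = np.arange(n) * (T / n)
w0 = 2.0 * np.pi * 32 / T           # carrier frequency (32 cycles per record)
wm = 2.0 * np.pi * 2 / T            # envelope frequency (2 cycles per record)
x = (1.0 + 0.3 * np.cos(wm * t)) * np.cos(w0 * t)

phi3 = hilbert_phase(x)
rate = (phi3[-1] - phi3[0]) / (t[-1] - t[0])
print(rate)     # ~ w0: the Hilbert phase advances at the carrier frequency
```

The Gaussian phase \(\phi_{4}\) is obtained by replacing the weight `h` above with the Gaussian window \(\exp[-(\nu-\nu_{0})^{2}/(2\sigma^{2})]\) evaluated on the FFT frequency grid.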
Basically, this definition (the 'Gaussian phase') is similar to that for the Hilbert phase if one replaces \(2\theta(\nu)\) by the Gaussian \(\exp[-(\nu-\nu_{0})^{2}/(2\sigma^{2})]\). Note that, as in the case of the Hilbert phase (see Eq. (10.46)), the Gaussian phase also emphasizes positive frequencies (assuming \(\nu_{0}>0\)). Because the Gaussian is smoother and allows one to focus attention on a particular band of the frequency spectrum, \(-\sigma\lesssim(\nu-\nu_{0})\lesssim+\sigma\), it has been found (e.g., DeShazer _et al._, 2001) that this definition can be advantageous for revealing phase synchronization, particularly for noisy nonstationary experimental time series of limited duration. Finally, we mention a fifth phase definition that has been especially useful in recent experimental studies of biological systems. The data from such systems are often best thought of as a _point process_: a sequence of times (\(t_{1}\), \(t_{2}\), \(t_{3}\), \(\ldots\)) at which some repeatedly occurring event, such as an electrocardiogram spike, happens. For data of this type several studies have defined an instantaneous phase given by \[\phi_{5}(t)=2\pi\frac{t-t_{k}}{t_{k+1}-t_{k}}+2\pi k,\qquad{\rm for}\ t_{k}\leq t\leq t_{k+1}. \tag{10.39e}\] Cases where the definition (10.39e) has been used include studies of synchronization between the cardiovascular and respiratory systems (Schäfer _et al._, 1999; Stefanovska _et al._, 2000) and synchronization of electrosensitive cells of the paddlefish (Neiman _et al._, 1999). In the case of two oscillators acted upon by a very noisy environment, exact phase synchronization, such as curve \(\alpha\) in Figure 10.18, is not expected. Rather, one may be presented with data representing measurements of two oscillatory signals and be asked whether there is any evidence of interdependence between the two (this is the issue for the previously mentioned biological studies and for the laser study of DeShazer _et al_. (2001)).
In this case one possibility is to look at the phase difference between the two signals restricted to the interval \([0,\,2\pi]\), \[\phi(t)=[m\phi_{a}(t)-n\phi_{b}(t)]\ \text{modulo}\ 2\pi,\] where \(\phi_{a}\) and \(\phi_{b}\) are the phases for the two data sets determined by one of the above definitions. Sampling \(\phi(t)\) at many times, one can collect a large number of \(\phi\) values. If the two signals are completely independent, \(\phi\) is equally likely to have any value between 0 and \(2\pi\), and a histogram of the collection of \(\phi\) values will be flat. On the other hand, a statistically significant peak in a histogram approximation to the probability density of \(\phi\) indicates that the oscillators producing the two signals are in some way significantly connected to each other.

## Problems

1. Consider the cat map, Eq. (4.29), with a control \(c_{n}\) in the \(x\)-component of the map. Thus, the \(x\)-component of the map reads \(x_{n+1}=(x_{n}+y_{n}+c_{n})\) modulo 1, and the \(y\)-component is unperturbed, \(y_{n+1}=(x_{n}+2y_{n})\) modulo 1. It is desired to convert the chaotic orbit of the uncontrolled cat map to a stable orbit at the period one fixed point \((x,\,y)=(0,\,0)\). (Note that, since \(x\) and \(y\) are angle-like variables, the values 0 and 1 are equivalent for them.) 1. Find \(\mathbf{e}_{u}\), \(\mathbf{f}_{u}\), \(\mathbf{e}_{s}\), \(\mathbf{f}_{s}\), \(\lambda_{u}\) and \(\lambda_{s}\) appearing in Eq. (10.1). 2. Specify \(c_{n}\) in terms of \((x_{n},\,y_{n})\) using Eq. (10.2). 3. Assume that inadvertently, when the control is implemented, the value \(c_{n}\) for the applied control perturbation is reduced from its value in part (b) by a factor \(0<\gamma<1\). How small can the reduction factor \(\gamma\) be before the control fails to work? 2. Consider the cat map, Eq. (4.29), with the control \(c_{n}\) added to its \(x\)-component, as in Problem 1.
Assume that one observes only the time series of the \(x\)-variable and not of the \(y\)-variable. Devise a control (i.e., specify \(c_{n}\) in terms of the observed \(x\)-orbit) to stabilize \((x,\,y)=(0,\,0)\) using the idea related to Eqs. (10.6)–(10.8). 3. Consider the cat map, Eq. (4.29), with the control \(c_{n}\) added to the \(x\)-component of the map, as in Problem 1. Using the procedure illustrated in Figure 10.2, devise a control strategy to stabilize the period 2 orbit of the cat map. 4. What are the explicit expressions for the quantities \(\mathbf{g}\) and \(\mathbf{f}_{u}\) in Eq. (10.2) for the Henon attractor of Figure 11 and the fixed point on the attractor (see Problem 6 of Chapter 4)? Identify \(p\) with \(A\) and take \(\overline{p}=1.3\), so that \(A=1.3+q_{n}\). 5. Consider the \(3x\,\text{mod}\,1\) map with a control \(c_{n}\), \(x_{n+1}=(3x_{n}+c_{n})\) modulo 1. It is desired to target the point \(x=1/2\). Devise a strategy for picking \(c_{n}\), subject to the constraint \(|c_{n}|<(\frac{1}{3})^{5}\), such that for any initial condition \(x_{0}\) the orbit falls on \(x=\frac{1}{2}\) in \(m\) iterates or less. What is the smallest value of \(m\) such that the target can always be hit no matter what the initial condition \(x_{0}\) is? 6. Consider the following two-dimensional area preserving map: \[x_{n+1}=(x_{n}+y_{n})\,\text{mod}\,1\equiv G(x_{n},\,y_{n}),\quad y_{n+1}=[y_{n}+s(x_{n}+y_{n})]\equiv H(x_{n},\,y_{n}),\] where \(s(x)\) is the sawtooth function defined by \(s(x)\equiv(x\,\text{modulo}\,1)-\frac{1}{2}\). Show that the eigenvalues of the linearized map imply that this system is chaotic. Show that an attempt to synchronize \(\hat{y}_{n}\) to \(y_{n}\) using the replacement synchronization scheme \(\hat{y}_{n+1}=H(x_{n},\,\hat{y}_{n})\) will fail.
Discuss the possibility of obtaining synchronization if instead we use as our synchronizing system \[\hat{x}_{n+1}=G(x_{n},\,\hat{y}_{n})-2(\hat{x}_{n}-x_{n}),\quad\hat{y}_{n+1}=H(x_{n},\,\hat{y}_{n})+\beta(\hat{x}_{n}-x_{n}).\] In particular, for what values of \(\beta\) will the synchronized state be stable? 7. Consider a chaotic system \(\text{d}\textbf{x}/\text{d}t=\textbf{F}(\textbf{x})\) one-way coupled to an identical system via coupling of the form \(\text{d}\hat{\textbf{x}}/\text{d}t=\textbf{F}(\hat{\textbf{x}})+k(\textbf{x}-\hat{\textbf{x}})\), where \(k\) is a scalar coupling constant. Discuss the conditions on \(k\) such that stable synchronization is achieved with reference to the Lyapunov exponents of the original chaotic system, \(\text{d}\textbf{x}/\text{d}t=\textbf{F}(\textbf{x})\); in particular, discuss the occurrence of the blow-out bifurcation and the bubbling transition.

## Notes

1. For the particular replacement synchronization system, Eq. (10.21), \(h_{\perp}^{*}=\hat{h}_{\perp}\). This situation is nongeneric. The \(\textbf{y}\)-equation has the special form \(\text{d}\textbf{y}/\text{d}t=\textbf{H}(x)+\textbf{A}\textbf{y}\), where **A** is a constant matrix. Thus, \(\hat{h}_{\perp}=h_{\perp}^{*}=\text{Re}\,\lambda\), where \(\lambda\) is the eigenvalue of the matrix **A** with the largest real part. This situation is nongeneric because, as soon as we alter the right-hand side of the **y**-equation by the addition of a typical nonlinear term \(\epsilon\textbf{g}(x,\,\textbf{y})\), \(\epsilon\neq 0\), then \(\hat{h}_{\perp}>h_{\perp}^{*}\), and this will be so no matter how small \(\epsilon\) is. 2. As an example of a normal parameter consider Eqs. (10.12). Say we fix five of the six coupling constants (\(k_{ai},\,k_{bi}\)) and regard the sixth as a variable parameter.
Then this parameter is a normal parameter: the motion on the invariant synchronization manifold is governed by \(\text{d}\textbf{x}/\text{d}t=\textbf{F}(\textbf{x})\), which is independent of the coupling constants, but the transverse stability is governed by Eq. (10.13), which does depend on the coupling constants.

## Chapter 11 Quantum chaos

The description of physical systems via classical mechanics as embodied by Hamilton's equations (Chapter 7) may be viewed as an approximation to the more exact description of quantum mechanics. Depending on the relevant time, length and energy scales appropriate to a given situation, one or the other of these descriptions may be the one that is most efficacious. In particular, if the typical wavelength in the quantum problem is very small compared to all length scales of the system, then one suspects that the classical description should be good. There is a region of crossover from the quantum regime to the classical regime where the wavelengths are 'small' but not extremely small. This crossover region is called the 'semiclassical' regime. In the semiclassical regime, we may expect quantum effects to be important, and we may also expect that the classical description is relevant as well. According to the 'correspondence principle,' quantum mechanics must go over into classical mechanics in the 'classical limit,' which is defined by letting the quantum wavelength approach zero. In a formal mathematical sense we can equivalently take the 'classical limit' by letting Planck's constant approach zero, \(\hbar\to 0\), with other parameters of the system held fixed. This limit is quite singular, and its properties are revealed by an investigation of the semiclassical regime. Particular interest attaches to the case where the classical description yields chaotic dynamics. In particular, one can ask, what implications does chaotic classical dynamics have for the quantum description of a system in the semiclassical regime?
The field of study which addresses problems related to this question has been called quantum chaos. Quite apart from the fundamental problem of the correspondence principle, quantum chaos questions are of great practical importance because of the many physical systems that exist in the semiclassical regime. The considerations we have been discussing above actually apply more generally than just to quantum wave equations. In Section 7.3.4 we have seen how the short wavelength limit of a general wave equation yields a system of ray equations which are Hamiltonian. The wave equation may describe acoustic waves, elastic waves, electromagnetic waves, etc., or it may be Schrodinger's equation for the quantum wavefunction. In the latter case the rays are just the orbits of the corresponding classical mechanics for the given Hamiltonian. Thus, the general question of the short wavelength behavior of solutions of wave equations in relation to solutions of the ray equations is essentially what is to be addressed in this chapter.1 As an example, consider the case of a free point particle of mass \(m\) bouncing in a closed hard walled two dimensional region with spatial coordinates \(x\) and \(y\) (i.e., a 'billiard' system, as shown in Figure 7.24). In this case the wavefunction \(\psi_{i}(x,\,y)\) corresponding to a given energy level \(E_{i}\) satisfies the Helmholtz equation, \[\nabla^{2}\psi_{i}+k_{\,i}^{\,2}\psi_{i}=0, \tag{11.1}\] where \(\nabla^{2}=\partial^{2}/\partial x^{2}+\partial^{2}/\partial y^{2}\) with \(\psi_{i}(x,\,y)=0\) on the walls. The solutions of this problem give a discrete set of eigenfunctions \(\psi_{i}\) and corresponding eigenvalues \(k_{i}\) in terms of which the energy levels are given by \(E_{i}=\hbar^{2}k_{\,i}^{\,2}/2m\).
Taking a typical wavelength to be \(2\pi/k_{i}\), we see that the semiclassical regime corresponds to large eigenvalues such that \(k_{i}L\gg 1\), where \(L\) is a typical scale size of the billiard (e.g., the radius of the end caps for Figure 7.24(\(f\))). Now say we consider the classical problem of an electromagnetic wave in a two dimensional cavity with perfectly conducting walls and shapes as shown in Figure 7.24. Taking the electric field to be polarized in the \(z\) direction, we again obtain Eq. (11.1) with \(\psi_{i}\) now being the electric field and \(k_{\,i}^{\,2}=\omega_{i}^{\,2}/c^{2}\), where \(c\) is the speed of light and \(\omega_{i}\) is the resonant frequency of the \(i\)th cavity mode. Since the quantum problem and the classical electromagnetic problem are mathematically equivalent, knowledge of the short wavelength regime of one implies knowledge of the short wavelength regime of the other. From now on our discussion will be in the context of quantum mechanics. We shall discuss three general classes of problems in this chapter:

1. Time independent Hamiltonians for systems with bounded orbits (i.e., the orbits are confined to a finite region of phase space). An example of such a problem is Eq. (11.1) applied to closed billiard domains.
2. Periodically driven systems in which the Hamiltonian is periodic in time. An example of such a problem is the ionization of a hydrogen atom by a microwave field.
3. Scattering problems in which the particle motions are unbounded (see Section 5.5 for the classical case).

We conclude the introduction to this chapter by mentioning that more material on quantum chaos can be found in the books by Ozorio de Almeida (1988), Tabor (1989), Gutzwiller (1990), and Haake (1991), and in the review articles by Berry (1983), Eckhardt (1988b) and Jensen _et al._ (1991).
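Although the chaotic billiard shapes of Figure 7.24 admit no closed-form spectrum, the integrable rectangular billiard does, and it makes the objects just introduced concrete. The following sketch (in hypothetical units with \(\hbar=m=1\), and with `a`, `b` the side lengths) tabulates the exact eigenvalues \(k_{mn}=\pi(m^{2}/a^{2}+n^{2}/b^{2})^{1/2}\) of Eq. (11.1) for a unit square and the corresponding energy levels \(E_{i}=\hbar^{2}k_{\,i}^{\,2}/2m\):

```python
import numpy as np

# Rectangle billiard (side lengths a, b): separation of variables gives
# exact Helmholtz eigenvalues k_mn^2 = pi^2 (m^2/a^2 + n^2/b^2) for
# mode indices m, n = 1, 2, ...  (The chaotic shapes of Figure 7.24
# have no such closed form; this integrable case is for illustration.)
a, b = 1.0, 1.0
m, n = np.meshgrid(np.arange(1, 41), np.arange(1, 41))
k = np.pi * np.sqrt((m / a)**2 + (n / b)**2)
k = np.sort(k.ravel())

# Energy levels E_i = hbar^2 k_i^2 / (2 m_particle), here in assumed
# units with hbar = m_particle = 1; the ground state has k = pi*sqrt(2).
E = 0.5 * k**2
print(k[0], E[0])
```

High-lying modes, with \(k_{i}L\gg 1\) for the side length \(L=1\), are the ones in the semiclassical regime discussed above.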
### 11.1 The energy level spectra of chaotic, bounded, time-independent systems

We consider systems that have a time independent Hamiltonian so that the energy is a constant of the motion. We also assume that the classical orbits are bounded. That is, for any given energy of the system, all orbits remain within some sufficiently large sphere in the phase space (the radius of the sphere may increase with energy). To be specific, we consider the Hamiltonian \[H({\bf p},\,{\bf q})=\frac{1}{2\,m}\,p^{2}+V({\bf q}), \tag{11.2}\] where \({\bf p}\) and \({\bf q}\) are \(N\) vectors and \(N\) is the number of degrees of freedom. The corresponding Schrodinger equation is \[{\rm i}\hbar\,\frac{\partial\psi}{\partial t}=-\,\frac{\hbar^{2}}{2m}\nabla^{2}\psi+V({\bf q})\psi, \tag{11.3}\] where \(\nabla^{2}\) now denotes the \(N\) dimensional Laplacian, \[\nabla^{2}=\partial^{2}/\partial q_{1}^{2}+\partial^{2}/\partial q_{2}^{2}+\cdots+\partial^{2}/\partial q_{N}^{2}.\] Assuming an energy eigenstate with energy level \(E_{i}\), we have \(\psi=\exp(-{\rm i}E_{i}t/\hbar)\psi_{i}({\bf q})\), yielding the eigenvalue problem \[\nabla^{2}\psi_{i}({\bf q})+\frac{2\,m}{\hbar^{2}}\bigl{[}E_{i}-V({\bf q})\bigr{]}\psi_{i}({\bf q})=0. \tag{11.4}\] In accordance with our restriction to systems with bounded orbits of the classical Hamiltonian, we assume that (11.4) has a complete, _denumerable_ set of eigenfunctions with corresponding energy levels. In this section we shall be interested in the behavior of the set of energy levels \(\{E_{i}\}\) (i.e., the 'spectrum') in the semiclassical regime. In the next section (Section 11.2) we consider properties of the eigenfunctions. In both cases, particular interest will focus on the case where the classical dynamics is chaotic. In our discussion of the spectra a key quantity will be the density of states \(d(E)\) defined so that \[\int_{E_{a}}^{E_{b}}d(E)\mathrm{d}E\] is the number of states with energy levels between \(E_{a}\) and \(E_{b}\).
Thus, \[d(E)=\sum_{i}\delta(E-E_{i}), \tag{11.5}\] where we henceforth take the subscripts \(i\) such that \[E_{i}\leq E_{i+1}.\] That is, we label the levels in ascending order with respect to the numerical values of the \(E_{i}\). Another quantity of interest will be the number of states with energies less than some value \(E\), \[N(E)=\int_{-\infty}^{E}d(E)\mathrm{d}E. \tag{11.6}\] We call this quantity the cumulative density. Thus, \(d(E)\) is a string of delta functions and \(N(E)\) is a function which increases in steps of size one as \(E\) passes through each energy level, Figure 11.1.

Figure 11.1: The exact density function \(d(E)\) and the exact cumulative density \(N(E)\).

In the semiclassical limit one can also introduce a smoothed density of states, \[\bar{d}(E)=\frac{1}{2\Delta}\int_{E-\Delta}^{E+\Delta}\!d(E)\mathrm{d}E, \tag{11.7}\] and a corresponding smoothed cumulative density, \[\bar{N}(E)=\int_{-\infty}^{E}\bar{d}(E)\mathrm{d}E. \tag{11.8}\] The smoothing scale \(\Delta\) will be taken to be much less than any typical energy of the _classical_ system but much larger than \(\hbar/T_{\mathrm{min}}\), where \(T_{\mathrm{min}}\) is the shortest characteristic time for orbits of the classical problem. The reason for the restriction \[\Delta\gg\hbar/T_{\mathrm{min}} \tag{11.9}\] will become evident subsequently. An expression for the average density of states \(\bar{d}(E)\) is provided by Weyl's formula (see, for example, Gutzwiller (1990)) which is given as follows.
The volume of classical phase space corresponding to system energies less than or equal to some value \(E\) is \[\nu(E)=\int U(E-H(\mathbf{p},\,\mathbf{q}))\mathrm{d}^{N}\mathbf{p}\,\mathrm{d}^{N}\mathbf{q}, \tag{11.10}\] where \(U(x)\) denotes the unit step function: \[U(x)\equiv 1\,\,\mathrm{if}\,\,x>0\,\,\mathrm{and}\,\,U(x)\equiv 0\,\,\mathrm{if}\,\,x\leq 0.\] Weyl's formula is equivalent to the statement that the average phase space volume occupied by a state is \((2\pi\hbar)^{N}\). Thus, the smoothed number of states with energies less than \(E\) is \[\bar{N}(E)=\nu(E)/(2\pi\hbar)^{N}, \tag{11.11}\] and, since the smoothed density of states is \(\bar{d}(E)=\mathrm{d}\bar{N}(E)/\mathrm{d}E\), we have \[\bar{d}(E)=\Omega(E)/(2\pi\hbar)^{N}, \tag{11.12a}\] \[\Omega(E)=\mathrm{d}\nu/\mathrm{d}E=\int\!\delta(E-H(\mathbf{p},\,\mathbf{q}))\mathrm{d}^{N}\mathbf{p}\,\mathrm{d}^{N}\mathbf{q}, \tag{11.12b}\] where we have made use of the identity \(\delta(x)=\mathrm{d}U(x)/\mathrm{d}x\). For example, for the special case of two dimensional billiards, \(H(\mathbf{p},\,\mathbf{q})=p^{2}/2m\) for \(\mathbf{q}\) in the billiard region, and we thus obtain \[\nu(E)=\pi p^{2}A=2\pi mEA, \tag{11.13a}\] \[\Omega(E)=2\pi mA, \tag{11.13b}\] \[\bar{d}(E)=mA/2\pi\hbar^{2}, \tag{11.13c}\] where \(A\) is the area of the billiard. More generally for a smooth potential we obtain from (11.12) \[\Omega(E)=m\sigma_{N}\int_{V({\bf q})\leq E}\{2m[E-V({\bf q})]\}^{(N/2)-1}\,{\rm d}^{N}{\bf q} \tag{11.13d}\] for any number of degrees of freedom \(N\). Here \(\sigma_{N}\) is the area of a sphere of radius 1 in \(N\) dimensional space (e.g., \(\sigma_{3}=4\pi\) for \(N=3\)). (The Weyl result (11.13) is correct to leading order in \(1/\hbar\), and, in the case of billiards, for example, higher order corrections that depend on the perimeter of the billiard and its shape have also been calculated. These semiclassically small corrections will be ignored.)
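The Weyl estimate (11.11) is easy to test numerically. A minimal sketch for a unit-square billiard (an assumed shape, chosen only because its exact eigenvalues are known in closed form): count the exact modes with \(k_{mn}\leq K\) and compare with the billiard form of (11.11), \(\bar{N}\simeq Ak^{2}/4\pi\) with \(k^{2}=2mE/\hbar^{2}\):

```python
import numpy as np

# Unit-square billiard (a = b = 1): exact eigenvalues of the Helmholtz
# problem (11.1) with psi = 0 on the walls are k_mn^2 = pi^2 (m^2 + n^2).
K = 200.0                          # count all modes with k_mn <= K
mmax = int(K / np.pi) + 1          # indices beyond this give k > K
m = np.arange(1, mmax + 1)
k2 = np.pi**2 * (m[:, None]**2 + m[None, :]**2)
exact_count = np.count_nonzero(k2 <= K**2)

# Weyl's leading-order estimate: one state per (2*pi*hbar)^N of phase
# space volume, which for a 2D billiard of area A gives N(k) ~ A k^2/(4 pi).
A = 1.0
weyl_count = A * K**2 / (4.0 * np.pi)
print(exact_count, round(weyl_count))
```

The exact count comes out a few per cent below the leading-order estimate, consistent with the (ignored) negative perimeter correction mentioned above.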
We note from (11.12a) that the spacing between two adjacent states is typically \(O(\hbar^{N})\), and thus the restriction (11.9) ensures that in the semiclassical regime for \(N\geq 2\) there are many states in our smoothing interval. Also, we see from (11.12) that \(\Omega(E)\), and hence \(\bar{d}(E)\), is nearly constant over the energy range \(\Delta\), since we have taken \(\Delta\) to be classically small. We emphasize that the Weyl result applies if we examine \(d(E)\) on the coarse scale \(\Delta\). If we were to examine \(d(E)\) on a finer scale (i.e., smooth \(d(E)\) over an energy range less than \(\Delta\)), then the resulting smoothed \(d(E)\) would fluctuate about \(\bar{d}(E)\). (These fluctuations will be the object of interest in Section 11.1.2.)

#### The distribution of energy level spacings

Consider two adjacent energy levels \(E_{i}\) and \(E_{i+1}\). Their difference, the 'energy level spacing,' \(S_{i}=E_{i+1}-E_{i}\), averaged over \(i\) in a band about a central energy \(E\) is just the inverse of the smoothed density of states, \(1/\bar{d}(E)\). The spacings \(S_{i}\) fluctuate about this average. If we look at many \(S_{i}\) in such a band, we can compute a distribution function for the energy level spacings. Rather than do this directly, we 'normalize out' the density of states so that the resulting distribution function does not depend on \(\bar{d}(E)\). Since \(\bar{d}(E)\) is system dependent, we hope that by such a normalization the resulting distribution function will be system independent. That is, the distribution function will be universal for a broad class of systems. To this end we replace the spectrum \(\{E_{i}\}\) by a new set of numbers \(\{e_{i}\}\) defined by \[e_{i}=\bar{N}(E_{i}). \tag{11.14}\] Here \(\bar{N}(E)\) is given by (11.11). The numbers \(e_{i}\) by definition have an average spacing of 1. Thus, we can think of the set \(\{e_{i}\}\) as a set of normalized energies for which the smoothed density is 1.
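The unfolding (11.14) is simple to carry out in practice. In the sketch below the spectrum is a hypothetical one (levels drawn uniformly at random, so its spacing statistics are Poisson-like), and \(\bar{N}(E)\) is approximated by a smooth polynomial fit to the exact staircase, a common practical substitute when (11.11) is not available in closed form; the unfolded levels \(e_{i}=\bar{N}(E_{i})\) then have mean spacing 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spectrum: 2000 levels drawn uniformly at random.
E = np.sort(rng.uniform(0.0, 1000.0, 2000))

# Approximate the smoothed cumulative density N-bar(E) by a low-order
# polynomial fit to the exact staircase N(E); the argument is rescaled
# to [0, 1] for numerical conditioning of the fit.
staircase = np.arange(1, len(E) + 1)
coeffs = np.polyfit(E / E[-1], staircase, deg=5)

# Unfolded levels e_i = N-bar(E_i), Eq. (11.14), and their spacings;
# by construction the mean spacing is close to 1.
e = np.polyval(coeffs, E / E[-1])
s = np.diff(e)
print(s.mean())
```

For this random spectrum the unfolded spacings follow, to good accuracy, the Poisson law \(P(s)=\exp(-s)\) discussed next.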
Letting \(s_{i}=e_{i+1}-e_{i}\), we seek the universal distribution \(P(s)\) such that for a randomly chosen \(i\), the probability that \(s\leq s_{i}\leq s+{\rm d}s\) is \(P(s){\rm d}s\). For the case of a semiclassical regime quantum system whose Hamiltonian yields a classical integrable system, Berry and Tabor (1977) show that \(P(s)\) is universally the same and is a Poisson distribution, \[P(s)=\exp(-s). \tag{11.15}\] The next natural question to ask is, is there a universal distribution in the case where the classical mechanics is chaotic? Actually, this question has to be refined somewhat. In particular, in the absence of special symmetries, there is a difference in the distribution \(P(s)\) between time reversible classical dynamics (such as for the Hamiltonian \(H=p^{2}/2m+V({\bf q})\) corresponding to motion in a potential) and non time reversible classical dynamics (such as the case where one considers charged particle motion in the presence of a static magnetic field). A further complication arises in the 'mixed case' where both chaotic and KAM orbits are present in the phase space (e.g., Figure 7.13). In fact the problem of characterizing the universal properties of \(P(s)\) in the mixed case is rather unsettled (for examples treating this case see Seligman _et al._ (1984) and Bohigas _et al._ (1990)). Thus, we shall restrict our discussion to the case of completely chaotic systems (Section 7.5) such as the billiards shown in Figure 7.24(_d_)–(_g_). First we consider the time reversible case as in the Schrodinger equation, Eq. (11.4).
Introducing a set of real orthonormal basis functions \(\{u_{j}({\bf q})\}\), we can write \(\psi_{i}({\bf q})=\sum_{j}c_{j}^{(i)}u_{j}({\bf q})\), in terms of which we obtain the infinite matrix eigenvalue problem \[{\bf H}{\bf c}_{i}=\lambda_{i}{\bf c}_{i}, \tag{11.16}\] where \({\bf c}_{i}=(c_{1}^{(i)},\,c_{2}^{(i)},\,\ldots)\), the matrix \({\bf H}\) has elements \(H_{lm}=-\int u_{l}({\bf q})[\nabla^{2}-(2\,m/\hbar^{2})V({\bf q})]u_{m}({\bf q}){\rm d}^{N}{\bf q}\), and \(\lambda_{i}=2mE_{i}/\hbar^{2}\). The matrix \(H_{lm}\) is clearly real and symmetric, \(H_{lm}=H_{ml}\). As an example of a non time reversible case, consider the Hamiltonian for a charged particle of charge \(Q\) in a potential \(V({\bf q})\) but with a static magnetic field added. The Hamiltonian is now \(({\bf p}-Q{\bf A})^{2}/2m+V({\bf q})\), where \({\bf A}({\bf q})\) is the magnetic vector potential. Correspondingly, the Schrodinger equation (11.4) is replaced by \(\nabla^{2}\psi-({\rm i}Q/\hbar)[{\bf A}\cdot\nabla\psi+\nabla\cdot({\bf A}\psi)]+(2m/\hbar^{2})[E-\bar{V}({\bf q})]\psi=0\), where \(\bar{V}({\bf q})\equiv V({\bf q})+(Q^{2}/2m){\bf A}^{2}({\bf q})\). The big difference is that without the magnetic field the wave equation is real, whereas with the magnetic field the wave equation becomes complex. Introducing a basis, we again obtain (11.16), but now the matrix \({\bf H}\) is Hermitian with complex off diagonal elements, \(H_{lm}=H_{ml}^{*}\), where \(H_{ml}^{*}\) denotes the conjugate of \(H_{ml}\). In nuclear physics one typically has to deal with very complicated interacting systems. In 1951 Wigner introduced the idea that the energy level spectra of such complicated nuclear systems could be treated statistically, and he introduced a conjecture as to how this might be done. In particular, he proposed that the spectra of complicated nuclear systems have similar statistical properties to those of the spectra of ensembles of random matrices. That is, we take the matrix \({\bf H}\) of Eq.
(11.16) to be drawn at random from some collection (the 'ensemble'). In order to specify the ensemble, it has been proposed that the following two statistical conditions on the probability distribution of the ensemble of matrices should be satisfied.

1. _Invariance_. Physical results should be independent of the set of basis functions \(\{u_{i}({\bf q})\}\) used to derive (11.16). Thus, the probability distribution for the elements of **H** should be invariant to orthogonal transformations of **H** for the case of a time reversible system and should be invariant to unitary transformations for the non time reversible case. That is, the probability distribution of matrix elements, \(\tilde{P}({\bf H})\), should be unchanged if **H** is replaced by \({\bf O}^{\dagger}{\bf H}{\bf O}\) with **O** an orthogonal matrix, or by \({\bf U}^{\dagger}{\bf H}{\bf U}\) with **U** a unitary matrix, in the reversible and non reversible cases, respectively. Here the symbol \(\dagger\) stands for the transpose (in the unitary case, the complex conjugate transpose). (An orthogonal matrix is defined by \({\bf O}^{\dagger}={\bf O}^{-1}\) with **O** real, while a unitary matrix satisfies \({\bf U}^{\dagger}={\bf U}^{-1}\).)
2. _Independence_. The matrix elements are independent random variables. The distribution function \(\tilde{P}({\bf H})\) for the matrix **H** is then the product of distributions for the individual elements \(H_{lm}\) for \(l\leq m\) (the elements for \(l>m\) are implied by the symmetry of **H** about its diagonal).

These two hypotheses can be shown to imply Gaussian distributions for each of the individual \(H_{lm}\) in the time reversible case and for \({\rm Re}(H_{lm})\) and \({\rm Im}(H_{lm})\) in the non time reversible case. The widths for these Gaussians are the same for all the diagonal elements, while all the widths for the Gaussian distribution of the off diagonal (\(l\neq m\)) elements are half of the widths for the diagonal elements.
The ensemble for the time reversible case is called the Gaussian orthogonal ensemble (GOE), while the ensemble in the non time reversible case is called the Gaussian unitary ensemble (GUE). (There is a third relevant type of ensemble, the Gaussian symplectic ensemble, which we shall not discuss here.) A detailed theory for the GOE and GUE matrix ensembles exists (Mehta (1967); see also Bohr and Mottelson (1969)) and yields the following results for the level spacing distributions, \[P(s)\simeq(\pi/2)s\exp(-\pi s^{2}/4),\ \mbox{for the GOE case,} \tag{11.17a}\] \[P(s)\simeq(32/\pi^{2})s^{2}\exp[-(4/\pi)s^{2}],\ \mbox{for the GUE case.} \tag{11.17b}\] The results on the right hand sides of (11.17a) and (11.17b) are exact for two by two matrices, and are also very good approximations to the results for the situation of interest here, namely, \(M\) by \(M\) matrices in the \(M\rightarrow\infty\) limit. Another result of the GOE and GUE random matrix theory is expressions for the 'spectral rigidity', \(\Delta_{\rm sr}(L)\). The spectral rigidity is defined as the mean square deviation of the best local fit straight line to the staircase cumulative spectral density over a normalized energy scale \(L\), \[\Delta_{\rm sr}(L)=\min_{A,B}\left\{\frac{\bar{d}(E)}{L}\int_{-L/2\bar{d}(E)}^{+L/2\bar{d}(E)}[N(E+\varepsilon)-(A+B\varepsilon)]^{2}{\rm d}\varepsilon\right\}.\] The random matrix theory results are \[\Delta_{\rm sr}(L)=\frac{1}{\pi^{2}}\ln L+K_{1},\,\mbox{for the GOE case}, \tag{11.18a}\] \[\Delta_{\rm sr}(L)=\frac{1}{2\pi^{2}}\ln L+K_{2},\,\mbox{for the GUE case}, \tag{11.18b}\] where \(K_{1}\) and \(K_{2}\) are constants. In the case of a Poisson process (appropriate to integrable systems), \[\Delta_{\rm sr}(L)=L/15. \tag{11.18c}\] As we have already said, random matrix theory (e.g., Eqs. (11.17) and (11.18)) was originally motivated by the study of nuclei.
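The two-by-two GOE case, for which (11.17a) is exact, makes a quick numerical check. The sketch below samples \(2\times 2\) real symmetric matrices with Gaussian entries (taking the off-diagonal variance to be half the diagonal variance, one common normalization consistent with the discussion above) and examines the normalized spacings; level repulsion shows up as a scarcity of small \(s\):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Sample 2x2 GOE matrices [[a, b], [b, d]] with independent Gaussian
# entries; off-diagonal variance is half the diagonal variance.
a = rng.normal(0.0, 1.0, n)           # H_11
d = rng.normal(0.0, 1.0, n)           # H_22
b = rng.normal(0.0, np.sqrt(0.5), n)  # H_12 = H_21

# The eigenvalue spacing of [[a, b], [b, d]] is sqrt((a - d)^2 + 4 b^2).
S = np.sqrt((a - d)**2 + 4.0 * b**2)
s = S / S.mean()                      # normalize to unit mean spacing

# Level repulsion: P(s) ~ (pi/2) s exp(-pi s^2/4) vanishes at s = 0,
# so very small normalized spacings should be rare.
print(np.mean(s < 0.1))
```

A histogram of `s` reproduces the Wigner surmise (11.17a), including the linear rise \(P(s)\propto s\) near \(s=0\).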
More recently it has also been proposed that random matrix theory applies to the semiclassical spectra of quantum problems that are classically completely chaotic. This is, in a sense, a departure from the nuclear situation, since now the system can be quite simple (e.g., billiards). While there are some suggestive theoretical results supporting the random matrix conjecture for quantum chaos (e.g., see the review by Yukawa and Ishikawa (1989) and the paper on spectral rigidity by Berry (1985)), the validity of this conjecture and its range of applicability, if valid, remain unsettled. The main support for it comes from numerical experiments where some striking agreement with the conjecture is obtained (McDonald and Kaufman, 1979; Bohigas _et al._, 1984; Seligman and Verbaarschot, 1985). Figure 11.2 shows a histogram approximation to \(P(s)\) obtained by Bohigas _et al._ (1984) by numerically solving the Helmholtz equation in the billiard shape shown in the upper right inset. The classical motion in this shape is chaotic, and the resulting histogram appears to agree well with the GOE result (11.17a) shown as the solid curve in the figure, but is very different from the Poisson distribution shown in the figure as the dashed curve. Bohigas _et al._ also obtain excellent agreement of their numerically calculated spectral rigidity with the random matrix prediction Eq. (11.18a). In applying the GOE or the GUE statistics to a chaotic situation it is important that any discrete symmetries of the problem be taken into account. For example, the geometry of the billiard shown in Figure 7.24(\(d\)) has two diagonal symmetry lines, as well as a vertical and a horizontal symmetry line. Solutions of the Helmholtz equation in this billiard can be broken into classes according to whether they are even or odd about the symmetry lines.
The problem shown in the inset of Figure 11.2 with the wavefunction set to zero on the boundaries corresponds to the symmetry class of solutions of the Helmholtz equation in the full billiard of Figure 7.24(_d_) that are odd about the symmetry lines. The random matrix ensemble is constructed assuming no symmetries. Thus, the GOE statistics conjecture is not taken to apply to the full spectrum for the billiard of Figure 7.24(_d_) but does apply to the spectrum restricted to any given symmetry class of the problem. For example, the odd odd symmetry class of Figure 11.2 corresponds to the billiard shown in the inset, and this reduced billiard (which is \(\frac{1}{8}\) of the original billiard area) has no symmetries. As seen in Figure 11.2, the principal gross qualitative difference between the level spacing distributions for integrable and chaotic systems is that \(P(s)\) goes to zero as \(s\to 0\) for the chaotic cases, but has its maximum at \(s=0\) for the integrable case. This is a manifestation of the phenomenon of 'level repulsion' for nonintegrable systems.

Figure 11.2: Numerically obtained histogram of \(P(s)\) for the Helmholtz equation solved in the region shown in the upper right inset (\(R=0.2\)) compared with the GOE result Eq. (11.17a) (solid curve) and the Poisson distribution, Eq. (11.15) (dashed curve) (Bohigas _et al._, 1984).

In particular, say we consider the variation of energy levels as a function of a parameter of the system. Then the situation is as shown in Figure 11.3. In the integrable case (Figure 11.3(_a_)), levels cross, creating degeneracies at the crossings. In the nonintegrable case, degeneracies are typically avoided (Figure 11.3(_b_)); the levels 'repel'. Thus, there is a tendency against having small \(s\) values in the nonintegrable case, and \(P(0)=0\). For a chaotic problem with discrete symmetries, each symmetry class yields an _independent_ problem of the form (11.16).
Thus, energy levels from one class do not 'know' about energy levels of a different class, and two such levels will, therefore, not repel each other. Hence, even though the problem is chaotic, crossings of levels as in Figure 11.3(_a_) typically occur if the spectra of different symmetry classes are not separated.

#### Trace formula

An important and fundamental connection between the classical mechanics of a system and its semiclassical quantum wave properties is provided by trace formulae originally derived by Gutzwiller (1967, 1969, 1980) and by Balian and Bloch (1970, 1971, 1972). We consider the Green function for the quantum wave equation corresponding to a classical Hamiltonian \(H(\mathbf{p},\,\mathbf{q})\), \[H(-\mathrm{i}\hbar\nabla,\,\mathbf{q})G(\mathbf{q},\,\mathbf{q}^{\prime};\,E)-EG(\mathbf{q},\,\mathbf{q}^{\prime};\,E)=-\delta(\mathbf{q}-\mathbf{q}^{\prime}). \tag{11.19}\] The eigenfunctions and eigenvalues (energy levels) satisfy \[H(-\mathrm{i}\hbar\nabla,\,\mathbf{q})\psi_{i}(\mathbf{q})=E_{i}\psi_{i}(\mathbf{q}), \tag{11.20}\] and the orthonormality and completeness of the discrete set \(\{\psi_{i}\}\) are respectively expressed by \[\int\psi_{i}({\bf q})\psi_{j}({\bf q}){\rm d}^{N}{\bf q}=\delta_{ij} \tag{11.21}\] and \[\sum_{j}\psi_{j}({\bf q})\psi_{j}({\bf q}^{\prime})=\delta({\bf q}-{\bf q}^{\prime}). \tag{11.22}\]

Figure 11.3: Behavior of two adjacent energy levels with variation of a system parameter (horizontal axis) for (_a_) an integrable case and (_b_) a nonintegrable case.

Expressing \(G\) in terms of the complete basis \(\{\psi_{j}\}\), \[G({\bf q},\,{\bf q}^{\prime};\,E)=\sum_{j}c_{j}({\bf q}^{\prime},\,E)\psi_{j}({\bf q}),\] and using (11.21), Eq.
(11.19) yields \((E_{j}-E)c_{j}({\bf q}^{\prime},\,E)=-\psi_{j}({\bf q}^{\prime})\), or \[G({\bf q},\,{\bf q}^{\prime};\,E)=\sum_{j}\frac{\psi_{j}({\bf q}^{\prime})\psi_{j}({\bf q})}{E-E_{j}}. \tag{11.23}\] The above is singular as \(E\) passes through \(E_{j}\) for each \(j\). To define this singularity we make use of causality. This leads to replacing \(E\) by \(E+{\rm i}\varepsilon\), where \(\varepsilon\) goes to zero through positive values (see Figure 11.4). That is, \[\frac{1}{E-E_{j}}\to\lim_{\varepsilon\to 0^{+}}\frac{1}{(E+{\rm i}\varepsilon)-E_{j}}=P\frac{1}{E-E_{j}}-{\rm i}\pi\delta(E-E_{j}), \tag{11.24}\] where \(P(1/x)\) signifies that, when the function \(1/x\) is integrated with respect to \(x\), the integral is to be taken as a principal part integral at the singularity \(x=0\). (See standard texts on quantum mechanics for a discussion.)

Figure 11.4: Illustration of Eq. (11.24) for integration over the real axis. (\(a\)) The term \(1/[(E+{\rm i}\varepsilon)-E_{j}]\) has a pole in \(E\) at \(E_{j}-{\rm i}\varepsilon\) labeled by the cross. (\(b\)) Letting \(\varepsilon\to 0\) the integration path is deformed as shown.
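The \(E\to E+{\rm i}\varepsilon\) prescription of (11.24) has a transparent finite-dimensional analogue: for a matrix Hamiltonian, \(-(1/\pi)\,{\rm Im}\sum_{j}1/(E+{\rm i}\varepsilon-E_{j})\) is a sum of Lorentzians of width \(\varepsilon\) that tend to the delta functions of (11.5) as \(\varepsilon\to 0^{+}\). A minimal sketch with an arbitrary \(10\times 10\) real symmetric matrix standing in for **H**:

```python
import numpy as np

rng = np.random.default_rng(2)

# An arbitrary real symmetric "Hamiltonian" and its energy levels E_j.
H = rng.normal(size=(10, 10))
H = (H + H.T) / 2.0
Ej = np.linalg.eigvalsh(H)

# -(1/pi) Im 1/(E + i*eps - E_j) is a Lorentzian of width eps centered
# at E_j; summing over j gives an eps-smoothed density of states d(E).
eps = 1e-3
E = np.linspace(Ej.min() - 1.0, Ej.max() + 1.0, 100001)
d = -(1.0 / np.pi) * np.imag(
    np.sum(1.0 / (E[:, None] + 1j * eps - Ej[None, :]), axis=1))

# Integrating the smoothed d(E) over the full range recovers the number
# of levels, as the delta-function representation requires.
count = np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(E))
print(count)
```

As \(\varepsilon\) is reduced the peaks sharpen toward the exact string of delta functions while the integrated count stays fixed at the number of levels.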
The term \(P(E-E_{j})^{-1}\) in (11.24) corresponds to the part of the path along the real axis, while the term \(-{\rm i}\pi\delta(E-E_{j})\) results from integration around the pole. Thus, \[{\rm Im}\,G({\bf q},\,{\bf q}^{\prime};\,E)=-\pi\sum_{j}\psi_{j}({\bf q}^{\prime})\psi_{j}({\bf q})\delta(E-E_{j}),\] which upon setting \({\bf q}={\bf q}^{\prime}\) and integrating yields \[{\rm Im}\int G({\bf q},\,{\bf q};\,E){\rm d}^{N}{\bf q}=-\pi\sum_{j}\delta(E-E_{j}).\] Thus, from (11.5) \[d(E)=-\frac{1}{\pi}{\rm Im}[{\rm Trace}\,(G)], \tag{11.25}\] where \[{\rm Trace}\,(G)\equiv\int G({\bf q},\,{\bf q};\,E){\rm d}^{N}{\bf q}.\] Hence we obtain an exact formula for the density of states \(d(E)\) in terms of the trace of the Green function. We now wish to obtain a semiclassical approximation to \(G\) for use in (11.25). From now on we specialize to Hamiltonians of the form \(H({\bf p},\,{\bf q})=(p^{2}/2\,m)+V({\bf q})\), for which the Green function satisfies the equation \[\left\{\nabla^{2}+\frac{2\,m}{\hbar^{2}}[E-V({\bf q})]\right\}G({\bf q},\,{\bf q}^{\prime};\,E)=\frac{2\,m}{\hbar^{2}}\,\delta({\bf q}-{\bf q}^{\prime}). \tag{11.26}\] Let \(k^{\prime}=\{(2\,m/\hbar^{2})[E-V({\bf q}^{\prime})]\}^{1/2}\). We interpret \(2\pi/k^{\prime}\) as the wavelength of plane waves in the local region near the delta function. If \(k^{\prime}L\gg 1\), where \(L\) is a typical scale for the variation of \(V({\bf q})\) with \({\bf q}\), then one can choose a ball \({\cal R}\) in \({\bf q}\) space about the point \({\bf q}={\bf q}^{\prime}\) whose radius is small compared to \(L\) but is still many wavelengths across. In the region \({\cal R}\), the function \(V({\bf q})\) is nearly constant at the value \(V({\bf q}^{\prime})\).
Thus, to gain insight, consider the Green function \(G_{0}\) for the case where \(V({\bf q})\) is constant everywhere at the value \(V({\bf q}^{\prime})\): \[[\nabla^{2}+(k^{\prime})^{2}]G_{0}({\bf q},\,{\bf q}^{\prime};\,E)=(2\,m/\hbar^{2})\delta({\bf q}-{\bf q}^{\prime}).\] The solution of this problem is known for any number of degrees of freedom. In particular, for \(N=2\) and \(N=3\) we have \[G_{0}({\bf q},\,{\bf q}^{\prime};\,E)=-\frac{{\rm i}}{4}\frac{2\,m}{\hbar^{2}}\,H_{0}^{(1)}(k^{\prime}|{\bf q}-{\bf q}^{\prime}|)\,\,{\rm for}\,\,N=2, \tag{11.27}\] \[G_{0}({\bf q},\,{\bf q}^{\prime};\,E)=-\frac{2\,m}{\hbar^{2}}\frac{\exp({\rm i}k^{\prime}|{\bf q}-{\bf q}^{\prime}|)}{4\pi|{\bf q}-{\bf q}^{\prime}|}\,\,\,{\rm for}\,\,N=3, \tag{11.28}\] where \(H_{0}^{(1)}\) is the zero order Hankel function of the first kind. To interpret the \(N=2\) case we expand \(G_{0}\) for \(k^{\prime}|{\bf q}-{\bf q}^{\prime}|\gg 1\) (i.e., for observation points many wavelengths distant from the delta function source). The large argument approximation of \(H_{0}^{(1)}\) is \[H_{0}^{(1)}(k^{\prime}|{\bf q}-{\bf q}^{\prime}|)\sim\left(\frac{2}{\pi}\right)^{1/2}\exp\left(-\frac{{\rm i}\pi}{4}\right)\frac{\exp[{\rm i}k^{\prime}|{\bf q}-{\bf q}^{\prime}|]}{\sqrt{k^{\prime}|{\bf q}-{\bf q}^{\prime}|}}.\] Thus, in both the cases \(N=2\) and \(N=3\), the Green function is of the form \[G_{0}\propto\frac{\exp[{\rm i}k^{\prime}|{\bf q}-{\bf q}^{\prime}|]}{|{\bf q}-{\bf q}^{\prime}|^{(N-1)/2}}.\] That is, \(G_{0}\) is an outward propagating cylindrical (\(N=2\)) or spherical (\(N=3\)) wave originating from the point \({\bf q}={\bf q}^{\prime}\). Thus, we have the following picture for the Green function. Since \(k^{\prime}|{\bf q}-{\bf q}^{\prime}|\gg 1\) on the boundary of \({\cal R}\), waves leaving the region \({\cal R}\) may be thought of as local plane waves (the wavelength is much shorter than the radius of curvature of the wavefronts).
Thus, the geometrical optics ray approximation (also called the eikonal approximation) is applicable for \({\bf q}\) outside \({\cal R}\). In \({\cal R}\) the Green function \(G\) consists of two parts. One is the local homogeneous potential contribution \(G_{0}\) (which for points \({\bf q}\) too near \({\bf q}^{\prime}\) cannot be approximated using geometrical optics). The other part consists of geometric optics contributions from ray paths that have left \({\cal R}\) but then return to it after bouncing around in the potential. For points \({\bf q}\) outside \({\cal R}\), the Green function \(G\) consists of a sum of geometrical optics contributions from each of the ray paths connecting \({\bf q}^{\prime}\) to \({\bf q}\), \[G({\bf q},\,{\bf q}^{\prime};\,E)\simeq-\,\frac{1}{\hbar^{(N+1)/2}}\sum_{j=1}^{\infty}a_{j}({\bf q},\,{\bf q}^{\prime};\,E)\exp\left[\frac{{\rm i}}{\hbar}\,S_{j}({\bf q},\,{\bf q}^{\prime};\,E)+{\rm i}\phi_{j}\right], \tag{11.29}\] where \(j\) labels a ray path (Figure 11.5). In (11.29), the quantity \(S_{j}\) is the action along path \(j\), \[S_{j}({\bf q},\,{\bf q}^{\prime};\,E)=\int_{{\bf q}^{\prime}}^{{\bf q}}{\bf p}({\bf q})\,\mathbf{\cdot}\,{\rm d}{\bf q}\bigg{|}_{\rm along\ path\ \it j}, \tag{11.30}\] \(\phi_{j}\) is a phase factor (e.g., see Littlejohn (1986) and Maslov and Fedoriuk (1981)), which will not figure in an essential way in our subsequent considerations, and \(a_{j}\) is the wave amplitude whose determination takes into account the spreading or convergence of nearby rays. We emphasize that \(S_{j}\), \(a_{j}\) and \(\phi_{j}\) are independent of \(\hbar\) and are determined purely by consideration of the classical orbits. To calculate \(a_{j}\) in two spatial dimensions (\(N=2\)), consider Figure 11.6.
In this figure we show two infinitesimally separated rays originating from the source point \({\bf q}^{\prime}\); \(l\) denotes distance along the ray; the radius \(r_{0}\) is chosen so that the circle lies in \({\cal R}\) and satisfies \(k^{\prime}r_{0}\gg 1\); \({\rm d}s(l)\) denotes the differential arclength along the wavefront (perpendicular to the rays). By conservation of probability flux we have \[|a(r_{0})|^{2}v(r_{0})\,{\rm d}s(r_{0})=|a(l)|^{2}v(l)\,{\rm d}s(l),\] where \(v=|\partial H/\partial{\bf p}|\) is the particle speed. Thus, we have \[|a(l)|=|a(r_{0})|\left[\frac{v(r_{0})}{v(l)}\frac{{\rm d}s(r_{0})}{{\rm d}s(l)}\right]^{1/2}. \tag{11.31}\] Since \(r_{0}\) lies within \({\cal R}\) and satisfies \(k^{\prime}r_{0}\gg 1\), we can find \(a(r_{0})\) by using the large argument approximation of \(H_{0}^{(1)}(k^{\prime}r_{1})\), which gives the following result for \(|{\bf q}-{\bf q}^{\prime}|=r_{1}\), \[G_{0}\sim-\,\frac{m}{\hbar^{2}}\frac{\exp({\rm i}\pi/4)}{(2\pi)^{1/2}}\frac{\exp({\rm i}k^{\prime}r_{1})}{(k^{\prime}r_{1})^{1/2}}.\] Comparing this with (11.29) we obtain \[a(r_{0})=\frac{m\exp({\rm i}\pi/4)}{(2\pi r_{0})^{1/2}}\left\{2m[E-V({\bf q}^{\prime})]\right\}^{-1/4}. \tag{11.32}\] Knowing \(a(r_{0})\) we can use (11.31) to calculate \(a_{j}({\bf q},\,{\bf q}^{\prime};\,E)\) at any point \({\bf q}\) along a classical (ray) trajectory. Special consideration of wave effects, not included in the geometrical optics ray picture, is necessary at points where \({\rm d}s(l)=0\) (i.e., caustics and foci; e.g., see Ozorio de Almeida (1988)). Note that, for chaotic trajectories, nearby orbits separate exponentially, with the consequence that \({\rm d}s(r_{0})/{\rm d}s(l)\), and hence also \(a(l)\), on average decrease exponentially with the distance \(l\) along the orbit (ray).
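As a consistency check on the spherical-wave form in (11.28), one can verify numerically that \(\exp({\rm i}k^{\prime}r)/r\) solves the constant-potential Helmholtz equation away from the source, using the fact that the Laplacian of a spherically symmetric function is \(f^{\prime\prime}+(2/r)f^{\prime}\). A minimal finite-difference sketch (the values \(k^{\prime}=3\), \(r=2\) are illustrative, not from the text):

```python
import cmath

# The N = 3 free-space form G0 is proportional to f(r) = exp(i*k*r)/r.
# Away from r = 0 it must satisfy f'' + (2/r) f' + k^2 f = 0, the radial
# Helmholtz equation; we check this with central finite differences.
k = 3.0   # wavenumber k' (illustrative value)
r = 2.0   # evaluation radius, away from the source
h = 1e-4  # finite-difference step

def f(r):
    return cmath.exp(1j * k * r) / r

lap = ((f(r + h) - 2.0 * f(r) + f(r - h)) / h**2
       + (2.0 / r) * (f(r + h) - f(r - h)) / (2.0 * h))
residual = lap + k**2 * f(r)
print(abs(residual) / abs(k**2 * f(r)))
```

The relative residual is at the level of the finite-difference truncation error, confirming that the only failure of (11.28) is at the delta-function source itself.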
The quantity of interest appearing in (11.25) should more properly be written as \[\lim_{{\bf q}\rightarrow{\bf q}^{\prime}}{\rm Im}[G({\bf q},\,{\bf q}^{\prime};\,E)].\] For \({\bf q}\) very close to \({\bf q}^{\prime}\), there is a short path directly from \({\bf q}\) to \({\bf q}^{\prime}\), plus many long indirect paths from \({\bf q}\) to \({\bf q}^{\prime}\) (cf. Figure 11.7). Each gives a contribution to \(\lim_{{\bf q}\rightarrow{\bf q}^{\prime}}{\rm Im}[G]\). For the short direct path, the geometrical optics approximation is not valid, but we may use \(G_{0}\) to obtain this contribution. For the indirect paths the geometrical optics approximation is valid. Thus we write \[d(E)=d_{0}(E)+\tilde{d}(E), \tag{11.33}\] where \(d_{0}(E)\) and \(\tilde{d}(E)\) represent the direct and indirect contributions, \[d_{0}(E)=-\,\frac{1}{\pi}\int\lim_{{\bf q}\rightarrow{\bf q}^{\prime}}{\rm Im}[G_{0}({\bf q},\,{\bf q}^{\prime};\,E)]\,{\rm d}^{N}{\bf q}, \tag{11.34}\] \[\tilde{d}(E)=-\,\frac{1}{\pi}\int\lim_{{\bf q}\rightarrow{\bf q}^{\prime}}{\rm Im}[\tilde{G}({\bf q},\,{\bf q}^{\prime};\,E)]\,{\rm d}^{N}{\bf q}, \tag{11.35}\] where \(\tilde{G}\) denotes the sum (11.29) of the geometrical optics contributions from the indirect paths. We now verify that \(d_{0}(E)=\bar{d}(E)\) for the case \(N=3\). Using (11.28) we have for \(k^{\prime}|{\bf q}-{\bf q}^{\prime}|\ll 1\) \[G_{0}({\bf q},\,{\bf q}^{\prime};\,E)\approx-\,\frac{2m}{\hbar^{2}}\frac{1+{\rm i}k^{\prime}|{\bf q}-{\bf q}^{\prime}|}{4\pi|{\bf q}-{\bf q}^{\prime}|},\] so that \[\lim_{{\bf q}\rightarrow{\bf q}^{\prime}}{\rm Im}[G_{0}({\bf q},\,{\bf q}^{\prime};\,E)]=-\,\frac{mk^{\prime}}{2\pi\hbar^{2}}=-\left(\frac{2m}{\hbar^{2}}\right)^{3/2}\frac{[E-V({\bf q})]^{1/2}}{4\pi}.\] From (11.34) this yields \[d_{0}(E)=\left(\frac{2m}{\hbar^{2}}\right)^{3/2}\frac{1}{4\pi^{2}}\int_{E>V({\bf q})}[E-V({\bf q})]^{1/2}\,{\rm d}^{3}{\bf q}.\] Comparing this result with (11.12a) and (11.13d) with \(\sigma_{3}=4\pi\), we indeed verify that \(d_{0}(E)\) is the Weyl result \(\bar{d}(E)\). We now focus our attention on obtaining the semiclassical expression for the fluctuation about \(\bar{d}(E)\), namely \(\tilde{d}(E)\).
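The Weyl result can also be checked by direct level counting. The sketch below is a minimal numerical example, assuming a particle in a unit cube with \(V=0\) and units chosen so that \(\hbar=2m=1\) (so the levels are \(E=\pi^{2}(n_{1}^{2}+n_{2}^{2}+n_{3}^{2})\)); it compares the exact counting function \(N(E)\) with the integral of the Weyl density, which for this geometry is \(N(E)=E^{3/2}/(6\pi^{2})\). The two agree to within the expected surface correction.

```python
import numpy as np

# Particle in a unit cube with V = 0, units hbar = 2m = 1 (assumed for
# illustration): the levels are E = pi^2 (n1^2 + n2^2 + n3^2), n_i = 1, 2, ...
nmax = 60
n = np.arange(1, nmax + 1)
n1, n2, n3 = np.meshgrid(n, n, n, indexing="ij")
levels = np.sort((np.pi**2 * (n1**2 + n2**2 + n3**2)).ravel())

# Pick E low enough that no level below E is lost to the n <= nmax truncation.
E = np.pi**2 * nmax**2 / 4
N_exact = np.count_nonzero(levels <= E)      # exact counting function N(E)

# Integrating the Weyl density d0(E) over the unit cube gives E^{3/2}/(6 pi^2).
N_weyl = E**1.5 / (6 * np.pi**2)
```

The leading discrepancy is the (negative) surface correction to the Weyl term, so `N_exact` falls a few percent below `N_weyl` at this energy.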
Since the semiclassical regime corresponds to very small \(\hbar\), the factor \(\exp({\rm i}S_{j}/\hbar)\) in the integrand of (11.35) varies very rapidly with \({\bf q}\). Thus one may use the stationary phase approximation to evaluate the integral. The stationary phase condition is \(\nabla_{{\bf q}}S_{j}({\bf q},\,{\bf q};\,E)=0\) or \[[\nabla_{{\bf q}}S_{j}({\bf q},\,{\bf q}^{\prime};\,E)+\nabla_{{\bf q}^{\prime}}S_{j}({\bf q},\,{\bf q}^{\prime};\,E)]_{{\bf q}={\bf q}^{\prime}}=0.\] From (11.30) this yields \({\bf p}({\bf q})-{\bf p}^{\prime}({\bf q})=0\), where \({\bf p}^{\prime}({\bf q})\equiv{\bf p}({\bf q}^{\prime})|_{{\bf q}^{\prime}={\bf q}}\). Pictures of ray paths for which the stationary phase condition \({\bf p}({\bf q})={\bf p}^{\prime}({\bf q})\) is (\(a\)) not satisfied and (\(b\)) satisfied are shown in Figure 11.8. We see that the stationary phase condition selects out classical periodic orbits. Thus, we have the important result that (11.35) reduces to a sum over all periodic orbits of the classical problem. Carrying out the full integration in (11.35) is technically involved (see Gutzwiller (1990) for an exposition), and so we shall merely quote the result of the further analysis. There are three cases: (1) unstable periodic orbits; (2) isolated stable periodic orbits; (3) nonisolated stable periodic orbits. Case (3) is the situation for integrable systems\({}^{2}\) (Berry and Tabor, 1976). For cases (1) and (2), the following result is obtained, \[\tilde{d}(E)=\frac{1}{\pi\hbar}\sum_{k}\frac{T_{k}}{|\det({\bf M}_{k}-{\bf I})|^{1/2}}\cos\left[\frac{\tilde{S}_{k}(E)}{\hbar}+\tilde{\phi}_{k}\right]. \tag{11.36}\] We refer to a single traversal of a closed ray path as a 'primitive' periodic orbit. In (11.36) the index \(k\) labels the periodic orbits, and the summation includes both primitive and nonprimitive periodic orbits (i.e., multiple traversals are assigned a \(k\) label). The quantities appearing in (11.36) are as follows.
\(\tilde{S}_{k}(E)=\oint{\bf p}\cdot{\rm d}{\bf q}\), where the integral is taken around periodic orbit \(k\) and represents the action for this orbit. \(T_{k}\) is the primitive period of orbit \(k\) and \({\bf M}_{k}\) is the linearized stability matrix for the Poincare map of periodic orbit \(k\). If orbit \(k\) is the \(r\)th round trip of some shorter periodic orbit \(k^{\prime}\), then \(T_{k}=T_{k^{\prime}}\), \(\tilde{S}_{k}=r\tilde{S}_{k^{\prime}}\), \({\bf M}_{k}={\bf M}_{k^{\prime}}^{r}\), and \(\tilde{\phi}_{k}=r\tilde{\phi}_{k^{\prime}}\). Perhaps the most interesting aspect of the semiclassical result for \(\tilde{d}(E)\) is that it implies that the periodic orbits lead to oscillations of \(d(E)\) with energy. Expanding a single term in the sum about some energy \(E_{0}\), we have \[\exp\left[\frac{{\rm i}\tilde{S}_{k}(E)}{\hbar}\right]\simeq\exp\left[\frac{{\rm i}\tilde{S}_{k}(E_{0})}{\hbar}+\frac{{\rm i}(\partial\tilde{S}_{k}/\partial E)(E-E_{0})}{\hbar}\right]=\exp\left[\frac{{\rm i}\tilde{S}_{k}(E_{0})}{\hbar}+\frac{{\rm i}\tilde{T}_{k}(E-E_{0})}{\hbar}\right],\] where we have made use of the classical expression for the period of a periodic orbit in terms of its action, \(\partial\tilde{S}_{k}/\partial E=\tilde{T}_{k}=rT_{k}\). Thus, we see that the periodic orbit \(k\) contributes a term to \(\tilde{d}(E)\) that oscillates with energy period \[\delta E_{k}=2\pi\hbar/\tilde{T}_{k}. \tag{11.37}\] Recall from (11.12a) that \(\bar{d}(E)\sim\hbar^{-N}\). Hence, the number of levels per oscillation, \((\delta E_{k})\bar{d}(E)\), is \(O(1/\hbar^{N-1})\), which becomes very large for small \(\hbar\) (provided \(N>1\)). Thus we see that there is long range clustering (i.e., long compared to the average level spacing) of the energy levels.

Figure 11.8: (\(a\)) The stationary phase condition is not satisfied. (\(b\)) The stationary phase condition is satisfied.
Also, as we include longer and longer period orbits in the sum, we see from (11.37) that the scale of the energy variations of \(\tilde{d}(E)\) that are resolved becomes shorter. Hence, if we only desire a representation of \(\tilde{d}(E)\) smoothed over some scale \(\overline{\delta E}\) (where \(\overline{\delta E}\) is smaller than the scale of the smoothing used to obtain \(\bar{d}(E)\)), then we do not need to include all the periodic orbits in the summation (11.36); we only need include the _finite_ number of periodic orbits whose period is not too large, \[\tilde{T}_{k}<2\pi\hbar/\overline{\delta E}.\] (This justifies the restriction on the orbit periods given by Eq. (11.9).) In general, it is usually not possible to use the semiclassical trace formula for \(\tilde{d}(E)\) to resolve a large number of individual levels of the spectrum (delta functions of \(d(E)\)). Indeed, even if all the amplitude, action, and phase quantities in the sum in (11.36) could be found for the infinite number of periodic orbits, it is still not clear that the exact delta function density (Figure 11.1(_a_)) would be recovered, because the convergence of the sum in (11.36) is unlikely. Nevertheless, some work, which uses the semiclassical trace formula as a starting point, attempts to eliminate the problem of divergences (Berry and Keating, 1990; Tanner _et al._, 1991; Sieber and Steiner, 1991). This apparently results in a systematic method for obtaining semiclassical approximations to individual energy levels of a classically chaotic system purely in terms of the properties of unstable classical periodic orbits. Such results for classically chaotic systems may be viewed as analogous to the well known Bohr-Sommerfeld procedure for quantizing stable periodic orbits of classically integrable systems. An important result which lends some theoretical support to the random matrix hypothesis (Section 11.1.1) was obtained by Berry\({}^{3}\) (1985).
He used the trace formula (11.36) and the periodic orbits sum rule for chaotic systems given by Eq. (9.42) to show that the spectral rigidity of classically chaotic systems indeed satisfies the random matrix predictions (11.18a) and (11.18b) for \(L\) out to some maximum scale \[L\ll L_{\rm max}\sim(\hbar/T_{\rm min})\bar{d}(E)\] (cf. Eqs. (11.9) and (11.37)).

### 11.2 Wavefunctions for classically chaotic, bounded, time-independent systems

Say we consider a classical system which is ergodic on the energy surface, \(E=H({\bf p},\,{\bf q})\), with Hamiltonian \((p^{2}/2m)+V({\bf q})\). Let \(f({\bf p},\,{\bf q})\) be the distribution function of the system such that the fraction of time that a typical orbit spends in some differential volume of phase space located at the phase space point (**p**, **q**) is \(f({\bf p},\,{\bf q})\,{\rm d}^{N}{\bf p}\,{\rm d}^{N}{\bf q}\). Since the orbit is on the \(E=H({\bf p},\,{\bf q})\) energy surface, \(f({\bf p},\,{\bf q})\) must be of the form \(C({\bf p},\,{\bf q})\,\delta(E-H({\bf p},\,{\bf q}))\). Since \(f({\bf p},\,{\bf q})\) does not depend on time, it must be solely a function of isolating constants of the motion. Since we assume the orbit is ergodic on the \(E=H({\bf p},\,{\bf q})\) energy surface, the only isolating constant is \(H({\bf p},\,{\bf q})\) itself. Thus \(C({\bf p},\,{\bf q})\) can be taken to be independent of **p** and **q** (i.e., it is just a constant), and, noting the normalization \(\int f\,{\rm d}^{N}{\bf p}\,{\rm d}^{N}{\bf q}\equiv 1\), we have \[f({\bf p},\,{\bf q})=\frac{\delta[E-H({\bf p},\,{\bf q})]}{\int\delta[E-H({\bf p},\,{\bf q})]\,{\rm d}^{N}{\bf p}\,{\rm d}^{N}{\bf q}}. \tag{11.38}\] This classical result leads to several natural conjectures (Berry, 1977) concerning the form of the eigenfunctions of the Schrodinger equation for such a system.
In particular, integrating (11.38) over **p** we obtain \(\int f\,{\rm d}^{N}{\bf p}\propto[E-V({\bf q})]^{(N/2)-1}U(E-V({\bf q}))\), where \(U\) denotes the unit step function. Thus, motivated by the correspondence principle, Berry conjectures that in the semiclassical limit the eigenfunctions satisfy \[\overline{|\psi({\bf q})|^{2}}\propto[E-V({\bf q})]^{(N/2)-1}U(E-V({\bf q})),\] where the overbar denotes an average over a region of \({\bf q}\)-space that is small compared to the scale of variation of \(V({\bf q})\) but large compared to a wavelength. Furthermore, he conjectures that locally the eigenfunction looks like a superposition of plane waves, \[\psi({\bf q})\simeq\sum_{{\bf k}}a_{{\bf k}}\exp[{\rm i}({\bf k}\cdot{\bf q}+\theta_{{\bf k}})], \tag{11.39}\] where \(|{\bf k}|=\{2m[E-V({\bf q})]\}^{1/2}/\hbar\), the directions of the wavevectors \({\bf k}\) are isotropically distributed, and the amplitudes \(|a_{{\bf k}}|\) and phases \(\theta_{{\bf k}}\) are random. There is a difference, however, between the GOE and the GUE cases. For GOE, the wavefunction \(\psi({\bf q})\) is real. This imposes the constraints \(|a_{\bf k}|=|a_{-{\bf k}}|\) and \(\theta_{\bf k}+\theta_{-{\bf k}}=0\). For GUE there is no such constraint, and we can regard \(a_{\bf k}\) and \(a_{-{\bf k}}\) as completely uncorrelated. Thus for GOE we can view \(\psi\) for randomly chosen \({\bf q}\) as a real variable obtained from the sum of many random terms. By the central limit theorem, \(\psi\) has a Gaussian probability distribution\({}^{4}\) (Berry, 1977), \[P(\psi)=[2\pi\overline{\psi^{2}}]^{-1/2}\exp(-{\textstyle\frac{1}{2}}\psi^{2}/\overline{\psi^{2}}). \tag{11.40a}\] Numerical experiments checking this result have been performed by McDonald and Kaufman (1988) using the stadium billiard of Figure 7.24(_f_). In the case of GUE, \(\psi({\bf q})=\psi_{r}({\bf q})+{\rm i}\psi_{i}({\bf q})\), with \(\psi_{r}\) and \(\psi_{i}\) acting like uncorrelated Gaussian random variables and \(\overline{\psi_{r}^{2}}=\overline{\psi_{i}^{2}}=\frac{1}{2}\overline{|\psi|^{2}}\). The probability distribution is thus predicted to have the form \[P(\psi_{r},\,\psi_{i})=(\pi\overline{|\psi|^{2}})^{-1}\exp[-(\psi_{r}^{2}+\psi_{i}^{2})/\overline{|\psi|^{2}}]. \tag{11.40b}\] Considering \(X=|\psi|^{2}\), Eqs. (11.40a) and (11.40b) yield the probability distributions \[P_{X}(X)=(2\pi X\overline{\psi^{2}})^{-1/2}\exp(-{\textstyle\frac{1}{2}}X/\overline{\psi^{2}})\ {\rm for\ GOE}, \tag{11.40c}\] \[P_{X}(X)=(\overline{|\psi|^{2}})^{-1}\exp(-X/\overline{|\psi|^{2}})\ {\rm for\ GUE}. \tag{11.40d}\] Note that the probability of small \(X\) values is much larger for GOE, for which \(P_{X}(X)\sim X^{-1/2}\) as \(X\to 0\).
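The contrast between (11.40c) and (11.40d) is easy to check by direct sampling. The sketch below is a minimal Monte Carlo test, assuming the normalization \(\overline{|\psi|^{2}}=1\): it draws \(\psi\) as a real Gaussian (GOE) or a complex Gaussian (GUE), as the central limit theorem argument predicts, and compares moments and the small-\(X\) weight of \(X=|\psi|^{2}\).

```python
import numpy as np

# Monte Carlo check of the amplitude statistics (11.40c) and (11.40d).
# psi is drawn directly as a Gaussian variable (the central-limit result),
# normalized so that mean(|psi|^2) = 1 in both ensembles.
rng = np.random.default_rng(0)
nsamp = 200_000

X_goe = rng.normal(0.0, 1.0, nsamp) ** 2            # GOE: psi real Gaussian
psi_r = rng.normal(0.0, np.sqrt(0.5), nsamp)        # GUE: complex psi with
psi_i = rng.normal(0.0, np.sqrt(0.5), nsamp)        # variance 1/2 per component
X_gue = psi_r**2 + psi_i**2

# Moment ratio <X^2>/<X>^2 distinguishes the two: 3 for GOE, 2 for GUE.
m_goe = (X_goe**2).mean() / X_goe.mean() ** 2
m_gue = (X_gue**2).mean() / X_gue.mean() ** 2

# Small amplitudes are far more probable for GOE, since P_X ~ X^{-1/2}.
f_goe = (X_goe < 0.01).mean()
f_gue = (X_gue < 0.01).mean()
```

The small-\(X\) fractions differ by nearly an order of magnitude, which is the practical signature of the \(X^{-1/2}\) divergence in the GOE case.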
Another interesting prediction from the random plane wave hypothesis (11.39) concerns the correlation function \[C(r)=\langle|\psi({\bf q})|^{2}|\psi({\bf q}+{\bf r})|^{2}\rangle/\langle|\psi({\bf q})|^{2}\rangle^{2},\] where the angle brackets denote a spatial average. Using (11.39) in the above and averaging over the angle of \({\bf r}\), we obtain \[C(r)=1+\xi J_{0}^{2}(kr), \tag{11.41}\] where \(\xi=2\) for GOE and \(\xi=1\) for GUE, and we have used \(J_{0}(kr)=(2\pi)^{-1}\int_{0}^{2\pi}\exp({\rm i}kr\cos\theta)\,{\rm d}\theta\). Heller (1984) examined short wavelength numerical solutions of the Helmholtz equation (also in the stadium billiard) and found rather striking deviations from the random eigenfunction conjecture. In particular, he observed that wavefunctions often tend to have pronounced enhanced amplitudes along the paths of unstable periodic orbits. Different eigenfunctions exhibit enhancements along different periodic orbits, and can also have enhancements along more than one such orbit. Figure 11.9 from Heller's paper illustrates this phenomenon, which Heller calls 'scars.' Bogomolny (1988) and Berry (1989) have utilized the semiclassical Green function (11.29) to show theoretically that an energy band average of many eigenfunctions displays the effect of scarring, while Antonsen _et al._ (1995) consider the statistics of scarring associated with individual eigenfunctions and periodic orbits.

### 11.3 Temporally periodic systems

In our previous discussion we have assumed that the Hamiltonian has no explicit time dependence. In this section we consider the case where the Hamiltonian varies periodically with time.
Physical examples of this type, where the interplay of classical chaos and quantum mechanics is potentially important, occur when sinusoidally varying electromagnetic fields act on atoms (Bayfield and Koch, 1974; Casati _et al._, 1986; Jensen _et al._, 1991; Meerson _et al._, 1979) or molecules (Blumel _et al._, 1986a,b) or on electrons on the surface of a conductor (Blumel and Smilansky, 1984; Jensen, 1984). Another potential experimental system involves a Josephson junction (Graham _et al._, 1991). Detailed experimental results on such temporally periodic forced systems have been obtained for the case of a hydrogen atom initialized in a highly excited state and subjected to a microwave field (Bayfield and Koch (1974); for reviews see Bayfield (1987) and Jensen _et al._ (1991)). The important issue here is the possible ionization of the atom by the field. The experiments reveal a regime where the results are well described by the classical chaotic motion of the electron in the electric field of the wave plus the Coulomb electric field of the proton. In addition, another regime is also observed (Bayfield and Sokol, 1988; Galvez _et al._, 1988; Blumel _et al._, 1991; Arndt _et al._, 1991) where the quantum effects appear to suppress the probability of ionization relative to the classical prediction. This suppression of classical chaotic transport by quantum wave effects appears to be a fundamental consideration in time dependent quantum chaos problems. The effect was first seen, and is most easily understood, within the context of the kicked rotor problem (Figure 7.3) which, in the classical case, leads to the standard map, Eq. (7.15), whose behavior we have discussed in Section 7.3.1. In the remainder of this section we shall restrict our discussion to this one instructive example. The Hamiltonian for the kicked rotor is \(H(p_{\theta},\,\theta,\,t)=p_{\theta}^{2}/2\tilde{I}+K\cos\theta\,\delta_{\tau}(t)\), where \(\delta_{\tau}(t)\equiv\Sigma_{n}\delta(t-n\tau)\) (cf.
Eq. (7.14)). Replacing \(p_{\theta}\) by the angular momentum operator \(-{\rm i}\hbar\partial/\partial\theta\) yields the time dependent Schrodinger equation, \[{\rm i}\hbar\,\frac{\partial\psi(\theta,\,t)}{\partial t}=-\frac{\hbar^{2}}{2\tilde{I}}\frac{\partial^{2}\psi(\theta,\,t)}{\partial\theta^{2}}+K\cos\theta\,\delta_{\tau}(t)\,\psi(\theta,\,t). \tag{11.42}\] Normalizing time to the period of the kicking \(\tau\) via \(\hat{t}\equiv t/\tau\), and normalizing \(\hbar\) to \(\tilde{I}/\tau\) via \(\hat{\hbar}\equiv\hbar\tau/\tilde{I}\), we obtain \[{\rm i}\hat{\hbar}\,\frac{\partial\psi}{\partial\hat{t}}=-\frac{\hat{\hbar}^{2}}{2}\frac{\partial^{2}\psi}{\partial\theta^{2}}+\hat{K}\cos\theta\,\delta_{1}(\hat{t})\,\psi, \tag{11.43}\] where \(\hat{K}\) is the normalized kicking strength, \(\hat{K}\equiv K\tau/\tilde{I}\), and \(\delta_{1}(\hat{t})\equiv\Sigma_{n}\delta(\hat{t}-n)\). From (11.43) we see that the problem depends on two dimensionless parameters, \(\hat{K}\) and \(\hat{\hbar}\). In contrast, the classical problem (7.15) involves only the single dimensionless parameter \(\hat{K}\) which characterizes the kicking strength.\({}^{5}\) Hence, the dimensionless parameter \(\hat{\hbar}\) may be regarded as characterizing the strength of the quantum effects, and the semiclassical limit corresponds to \(\hat{\hbar}\ll 1\) (assuming \(\hat{K}\sim O(1)\)). (Alternatively, (11.43) follows from (11.42) by setting \(\tilde{I}=1\) and \(\tau=1\).) In what follows we shall drop the circumflexes, and henceforth when we write \(t\), \(\hbar\) and \(K\) we shall mean the normalized quantities formerly denoted \(\hat{t}\), \(\hat{\hbar}\) and \(\hat{K}\). Equation (11.43) can be dealt with as follows. Let \[\psi_{n\pm}(\theta)=\lim_{\varepsilon\to 0^{+}}\psi(\theta,\,n\pm\varepsilon),\] where \(\varepsilon\to 0^{+}\) signifies that the limit to zero is taken with \(\varepsilon\) positive.
Thus, \(\psi_{n^{+}}\) and \(\psi_{n^{-}}\) denote the wavefunction just after and just before the application of the kick at \(t=n\). Considering the small range of times near a kick, \(n-0^{+}<t<n+0^{+}\), we may neglect the term \(\partial^{2}\psi/\partial\theta^{2}\), so that we have \[{\rm i}\hbar\,\partial\psi/\partial t\simeq K\cos\theta\,\delta(t-n)\,\psi.\] Integrating this from \(t=n-0^{+}\) to \(t=n+0^{+}\), we obtain \[\psi_{n+}(\theta)=\psi_{n-}(\theta)\exp[-{\rm i}(K/\hbar)\cos\theta]. \tag{11.44}\] In the time interval \(n+0^{+}<t<(n+1)-0^{+}\) the delta function term is zero, \(\delta_{1}(t)\equiv 0\), and \(\psi\) obeys the equation, \[{\rm i}\,\frac{\partial\psi}{\partial t}=-\,\frac{\hbar}{2}\frac{\partial^{2}\psi}{\partial\theta^{2}}. \tag{11.45}\] Since \(\psi(\theta)\) is periodic in \(\theta\), \(\psi(\theta)=\psi(\theta+2\pi)\), we can solve (11.45) by introducing a Fourier series in \(\theta\), \[\psi(\theta,\,t)=\frac{1}{(2\pi)^{1/2}}\sum_{l=-\infty}^{+\infty}\phi(l,\,t)\exp({\rm i}l\theta), \tag{11.46a}\] \[\phi(l,\,t)=\frac{1}{(2\pi)^{1/2}}\int_{0}^{2\pi}\exp(-{\rm i}l\theta)\,\psi(\theta,\,t)\,{\rm d}\theta. \tag{11.46b}\] Thus, (11.45) becomes \({\rm i}\,\partial\phi(l,\,t)/\partial t=(\hbar l^{2}/2)\phi(l,\,t)\), and we obtain the result, \[\phi_{(n+1)^{-}}(l)=\phi_{n^{+}}(l)\exp(-{\rm i}\hbar l^{2}/2). \tag{11.47}\] The representation (11.46a) is particularly nice in this context, because the Fourier basis functions \(\exp({\rm i}l\theta)\) are eigenfunctions of the angular momentum operator \(-{\rm i}\hbar\partial/\partial\theta\). Thus, the momenta are quantized at values \(l\hbar\), and the expected value of \(p_{\theta}^{2}\) is \[\overline{p_{\theta}^{2}}\equiv\int_{0}^{2\pi}\psi^{*}(-{\rm i}\hbar\partial/\partial\theta)^{2}\psi\,{\rm d}\theta=\hbar^{2}\sum_{l=-\infty}^{+\infty}l^{2}|\phi(l,\,t)|^{2}. \tag{11.48}\] For large enough \(K\), we saw in Chapter 7 that the classical kicked rotor yields diffusion in momentum (Eqs.
(7.42)-(7.44)), \[\overline{p_{\theta}^{2}}/2\simeq Dn, \tag{11.49}\] where \(D\simeq K^{2}/4\). What happens quantum mechanically in the case of small \(\hbar\)? To try to answer this question one can integrate the Schrodinger equation numerically. A good way to do this is to utilize (11.44), (11.46) and (11.47). That is, advance \(\psi_{n^{-}}\) to \(\psi_{n^{+}}\) by multiplying by \(\exp[-{\rm i}(K/\hbar)\cos\theta]\); then take the Fourier transform, Eq. (11.46b) (using a fast Fourier transform algorithm), to obtain \(\phi_{n^{+}}(l)\); then obtain \(\phi_{(n+1)^{-}}(l)\) by multiplying by \(\exp(-{\rm i}\hbar l^{2}/2)\); then inverse transform back to get \(\psi_{(n+1)^{-}}(\theta)\), Eq. (11.46a); and successively repeat this string of operations. The first numerical solution of this problem was done by Casati _et al._ (1977) (see also Hogg and Huberman (1982)). The result they obtained was rather surprising and is schematically illustrated in Figure 11.10. Using (11.48) they plotted \(\overline{p_{\theta}^{2}}\) versus \(n\) starting from an initial zero momentum (\(l=0\)) state, \(\psi(\theta,\,0^{+})=(2\pi)^{-1/2}\). They found that for typical small values of \(\hbar\) and typical large values of \(K\), the quantum calculated momentum indeed diffused,\({}^{6}\) just as in the classical case (11.49), but only for a finite time, denoted \(n^{*}\) in the figure. When \(t\) becomes of order \(n^{*}\), quantum effects evidently arrest the diffusion and \(\overline{p_{\theta}^{2}}\) remains bounded as \(t\rightarrow\infty\). As \(\hbar\) is made smaller, \(n^{*}\) becomes larger and so does the maximum attained value of \(\overline{p_{\theta}^{2}}\). Thus the evolution looks classical for a longer time when \(\hbar\) is decreased. Nevertheless, if we wait long enough, the quantum limitation of the classical chaotic diffusion eventually manifests itself.
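The string of operations described above amounts to a few lines of code with an FFT. The sketch below, assuming the illustrative nonresonant values \(\hbar=1\) and \(K=5\) (which are not from the original experiments), evolves the initial \(l=0\) state and records \(\overline{p_{\theta}^{2}}\) after each kick; the growth is initially diffusive and then saturates, as in Figure 11.10.

```python
import numpy as np

# Split-step evolution of the quantum kicked rotor via (11.44), (11.46), (11.47).
# hbar = 1 and K = 5 are assumed illustrative (nonresonant) values.
hbar, K, L, nkicks = 1.0, 5.0, 2048, 300
l = np.fft.fftfreq(L, d=1.0 / L)          # integer momentum indices l
theta = 2 * np.pi * np.arange(L) / L

psi = np.full(L, (2 * np.pi) ** -0.5, dtype=complex)   # initial l = 0 state
kick = np.exp(-1j * (K / hbar) * np.cos(theta))        # Eq. (11.44)
free = np.exp(-1j * hbar * l**2 / 2)                   # Eq. (11.47)

p2 = []
for n in range(nkicks):
    phi = np.fft.fft(psi * kick)     # kick, then transform to momentum space
    phi *= free                      # free rotation between kicks
    psi = np.fft.ifft(phi)           # back to the theta representation
    prob = np.abs(phi) ** 2
    prob /= prob.sum()
    p2.append(hbar**2 * np.sum(l**2 * prob))   # Eq. (11.48)
```

For these parameters the classical estimate \(2Dn\approx(K^{2}/2)n\) would give \(\overline{p_{\theta}^{2}}\approx 3750\) at \(n=300\); the quantum result instead levels off far below this.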
This 'localization' phenomenon has been claimed to be the explanation of the observed reduction in the microwave field ionization rate of hydrogen atoms mentioned at the beginning of this section. A nice explanation of the quantum suppression of classical momentum diffusion in the rotor problem has been suggested in the paper of Fishman _et al._ (1982) and is discussed below.

Figure 11.10: Schematic illustration of the results of Casati _et al._

Since (11.43) is periodic in time, then, according to Floquet's theorem, its solution can be represented as a superposition of solutions of the form \[\exp(-{\rm i}\omega t)w_{\omega}(\theta,\,t)=\exp(-{\rm i}\omega t)\sum_{l}u_{\omega}(l,\,t)\exp({\rm i}l\theta), \tag{11.50}\] where \(w_{\omega}\) and \(u_{\omega}\) are periodic in \(t\) with the period of the driving force. Since in our normalization the period is 1, we have \(w_{\omega}(\theta,\,t)=w_{\omega}(\theta,\,t+1)\) and \(u_{\omega}(l,\,t)=u_{\omega}(l,\,t+1)\). We can regard a solution of the form (11.50) as being exactly analogous to a Bloch wave for the time independent Schrodinger equation in a spatially periodic potential. Here, however, the potential is time periodic rather than space periodic. Substitution of (11.50) into the Schrodinger equation of the quantum rotor (11.43) produces an eigenvalue problem. In particular, using Eqs. (11.44), (11.46) and (11.47), we can write the evolution equation for \(\psi_{n+}(\theta)\) as \[\psi_{(n+1)+}(\theta)={\cal L}[\psi_{n+}(\theta)], \tag{11.51}\] where the operator \({\cal L}\) is unitary and is given by \[{\cal L}[f(\theta)]=\frac{1}{2\pi}\sum_{l}\int_{0}^{2\pi}{\rm d}\theta^{\prime}\exp[-{\rm i}R(\theta,\,\theta^{\prime},\,l)]f(\theta^{\prime}),\] \[R(\theta,\,\theta^{\prime},\,l)=(K/\hbar)\cos\theta-l(\theta-\theta^{\prime})+(\hbar l^{2}/2).\] (\({\cal L}\) is unitary because the operator on the right hand side of (11.42) is Hermitian.)
Now since (11.50) is also a solution of our Schrodinger equation, we obtain from (11.51) \[\exp(-{\rm i}\omega)w_{\omega+}(\theta)={\cal L}[w_{\omega+}(\theta)], \tag{11.52}\] where \(w_{\omega+}(\theta)\equiv\lim_{t\to 0^{+}}w_{\omega}(\theta,\,t)\). That is, \(w_{\omega+}(\theta)\) is an eigenfunction of the unitary operator \({\cal L}\), and \(\exp(-{\rm i}\omega)\) is the associated eigenvalue. Since the magnitudes of the eigenvalues of a unitary operator are 1, the quantities \(\omega\) are real. In general, the eigenvalue spectrum for a problem such as that specified by (11.52) can be either discrete or continuous or a combination of the two types. Part of the result of Grempel _et al._ is that the spectrum in the case of the kicked rotor is essentially discrete. That is, we may label the eigenvalues and the associated eigenfunctions using an integer valued index \(j\) such that a solution of the Schrodinger equation may be written as a discrete sum, \[\psi(\theta,\,t)=\sum_{j}A_{j}\exp(-{\rm i}\omega_{j}t)w_{j}(\theta,\,t), \tag{11.53a}\] or in the Fourier transform (momentum) representation \[\phi(l,\,t)=\sum_{j}a_{j}\exp(-{\rm i}\omega_{j}t)u_{j}(l,\,t), \tag{11.53b}\] where in (11.53a) and (11.53b) \(w_{j}=w_{\omega_{j}}\) and \(u_{j}=u_{\omega_{j}}\), and we assume that the functions \(w_{j}(\theta,\,t)\) and \(u_{j}(l,\,t)\) form a complete orthonormal basis in \(\theta\) and \(l\) (at any fixed time \(t\)). Grempel _et al._ present a strong argument that the eigenfunctions are localized in momentum space (i.e., in \(l\)). In particular, they claim that, for a given \(j\), the \(u_{j}(l,\,t)\) will on average decay with \(l\) roughly exponentially away from a central region \(l\sim\bar{l}_{j}\), \[u_{j}(l,\,t)\sim\exp\left(-\frac{\hbar|l-\bar{l}_{j}|}{\xi_{\rm L}}\right), \tag{11.54}\] where \(\xi_{\rm L}\) is the 'localization length' of the eigenfunction and \(\xi_{\rm L}/\hbar\) is large when \(\hbar\ll 1\).
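Both claims (unit-modulus eigenvalues and momentum localization) can be probed directly by diagonalizing \({\cal L}\) on a truncated momentum basis. The sketch below assumes the illustrative values \(\hbar=1\), \(K=5\) and 256 retained momentum levels; it builds the one-period operator in the \(l\) representation and uses the inverse participation ratio \(\sum_{l}|u_{j}(l)|^{4}\) as a rough localization diagnostic (of order the inverse localization length for localized states, versus \(1/N\) for extended ones).

```python
import numpy as np

# Floquet operator of the kicked rotor on a truncated l basis; hbar = 1, K = 5
# and Nl = 256 are assumed illustrative values, not from the original papers.
hbar, K, Nl = 1.0, 5.0, 256
l = np.fft.fftfreq(Nl, d=1.0 / Nl)          # integer momentum indices
theta = 2 * np.pi * np.arange(Nl) / Nl

F = np.fft.fft(np.eye(Nl)) / np.sqrt(Nl)    # unitary DFT: theta -> l basis
kick = np.diag(np.exp(-1j * (K / hbar) * np.cos(theta)))   # diagonal in theta
free = np.diag(np.exp(-1j * hbar * l**2 / 2))              # diagonal in l

# One-period evolution operator (free rotation, then kick) in the l basis
Lop = F @ kick @ F.conj().T @ free

eigvals, eigvecs = np.linalg.eig(Lop)
ipr = np.sum(np.abs(eigvecs) ** 4, axis=0)  # inverse participation ratio per column
```

For these parameters the mean inverse participation ratio comes out much larger than the extended-state value \(1/N_{l}\), consistent with localized Floquet eigenfunctions.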
In order to see the consequences of (11.54), say we take an initial condition at a particular momentum \(\hbar l_{0}\), \[\psi(\theta,\,0^{+})=\frac{1}{(2\pi)^{1/2}}\exp({\rm i}l_{0}\theta).\] Then the coefficient \(a_{j}\) in (11.53b) is \[a_{j}=u_{j}(l_{0},\,0^{+}). \tag{11.55}\] Thus, by (11.54) only eigenfunctions whose centers \(\bar{l}_{j}\) are within a momentum range \(\xi_{\rm L}\) of the initial level \(\hbar l_{0}\) are appreciably excited. Hence, the expected value of \((p_{\theta}-\hbar l_{0})^{2}\) cannot become much larger than \(\xi_{\rm L}^{2}\). We therefore interpret the quantum limitation to the growth of \(\overline{p_{\theta}^{2}}\) in Figure 11.10 as occurring when \(\overline{p_{\theta}^{2}}\sim\xi_{\rm L}^{2}\). In order to argue the momentum localization of the eigenfunctions, Grempel _et al._ recast the eigenvalue equation (11.52) in the following form: \[T_{l}\overline{u}(l)+\sum_{r\neq 0}U_{r}\overline{u}(l+r)=\varepsilon\overline{u}(l), \tag{11.56}\] where \(\overline{u}(l)\equiv\frac{1}{2}[u_{\omega}(l,\,0^{+})+u_{\omega}(l,\,0^{-})]\) (i.e., \(\overline{u}(l)\) is the average of \(u_{\omega}\) just before and just after the kick), \(T_{l}=\tan(E_{l}/2)\), \(E_{l}=\omega-(\hbar l^{2}/2)\), \(U_{r}\) is the \(r\)th Fourier coefficient in the Fourier expansion of \(U(\theta)=-\tan(\frac{1}{2}K\cos\theta)\), and \(\varepsilon=-U_{0}\). Next they note that (11.56) is of the same form as the equation for the quantum wavefunction on a time independent, one dimensional discrete (spatial) lattice. In this analogy \(l\) is the spatial location of a lattice site, \(\varepsilon\) is the energy level (eigenvalue), \(\overline{u}(l)\) is the wavefunction value at lattice site \(l\) corresponding to energy level \(\varepsilon\), \(U_{r}\) is the 'hopping element' to the \(r\)th neighbor, and \(T_{l}\) is the potential energy at site \(l\). In the case where \(T_{l}\) is independent of \(l\), we have a spatially homogeneous lattice.
The solutions of the eigenvalue problem (11.56) then give propagating waves which travel from \(l=-\infty\) to \(l=+\infty\). That is, the eigenstates are of the form \(\overline{u}(l)\sim\exp[{\rm i}k(\varepsilon)l]\) for \(\varepsilon\) in appropriate energy bands. Such solutions are called 'extended', as opposed to localized solutions which decay to zero both as \(l\rightarrow+\infty\) and as \(l\rightarrow-\infty\). As a model of what happens when random impurities are introduced into the lattice, one can consider the case where \(T_{l}\) is an externally given random number. In this case, it has been shown that, essentially for any degree of randomness of \(T_{l}\), the \(\overline{u}(l)\) are exponentially localized as in Eq. (11.54). In the condensed matter context, this has the consequence that an electron in the lattice is quantum mechanically spatially localized, and the medium becomes an insulator. This phenomenon is called Anderson localization (Anderson, 1958). For large enough \(l\), the quantity \(T_{l}=\tan\{[\omega-(\hbar l^{2}/2)]/2\}\) varies rapidly with \(l\) in an erratic way. Grempel _et al._ contend that \(T_{l}\) may then be thought of as essentially random, even though it is a known deterministic function of \(l\). Thus, by the analogy with Anderson localization, they conclude that the quantum kicked rotor should have localized momentum wavefunctions. The localization length \(\xi_{\rm L}\) of (11.54) can be roughly estimated as follows (Chirikov _et al._, 1981). By (11.54) and (11.55) those modes most strongly excited are localized around momenta within \(\xi_{\rm L}\) of \(p_{\theta}=0\) (assuming an initial condition \(\psi(\theta,\,0^{+})=(2\pi)^{-1/2}\)). Hence, the effective number of eigenfunctions excited by the initial condition is \(\xi_{\rm L}/\hbar\). Each eigenfunction has an associated eigenvalue \(\exp(-{\rm i}\omega_{j})\). Thus, the \(\omega_{j}\) may be taken to lie in \([0,\,2\pi]\).
Since there are of order \(\xi_{\rm L}/\hbar\) excited eigenfunctions, the typical frequency spacing between adjacent \(\omega_{j}\) values is \(\delta\omega\sim 2\pi/(\xi_{\rm L}/\hbar)\). For \(n\ll 1/\delta\omega\), the system does not yet 'know' about the individual eigenfunctions, and the quantum nature of the problem is therefore not felt. Thus, for \(n\ll 1/\delta\omega\) we expect that \(\overline{p_{\theta}^{2}}\) increases linearly with time as in the classical problem. At \(n\sim 1/\delta\omega\) the existence of discrete localized eigenfunctions becomes felt, and the quantum diffusion is arrested. Thus, \[n^{*}\sim 1/\delta\omega\sim\xi_{\rm L}/\hbar,\] where \(n^{*}\) is the turnover time shown in Figure 11.10. In addition, at the turnover, the characteristic spread of the momentum will be the localization length \(\xi_{\rm L}\). Since classical diffusion with diffusion coefficient \(D\) applies for times up to \(n^{*}\), we have \(Dn^{*}\sim\xi_{\rm L}^{2}\). Using \(n^{*}\sim\xi_{\rm L}/\hbar\) and \(D\sim K^{2}/4\), the above yields the desired estimate of \(\xi_{\rm L}\), \[\xi_{\rm L}\sim K^{2}/\hbar\quad{\rm and}\quad n^{*}\sim K^{2}/\hbar^{2}. \tag{11.57}\] Thus, as previously noted, \(\xi_{\rm L}\) and \(n^{*}\) increase as \(\hbar\) is reduced (the time interval of classical behavior lengthens). The above argument giving the estimate (11.57) implies that very good coherence of the quantum waves must be maintained for the relatively long time \(n^{*}\). Thus, one might expect that the delicate interference effects leading to localization could be strongly affected by noise. Ott _et al._ (1984a) consider the effect of noise by adding a small fluctuating component of root mean square size \(\sigma\) to the strength of the delta function kicks. They find that localization is destroyed when \[\sigma\gtrsim\hbar^{2}/K. \tag{11.58}\] For small \(\hbar\), this level of noise produces only a tiny increase in the classical diffusion, but completely removes the quantum limitation at time \(n^{*}\).
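The chain of estimates above is compactly summarized as a pair of scaling relations. The helper below is an order-of-magnitude sketch only; the \(O(1)\) prefactors carry no significance, and the specific numerical inputs are purely illustrative.

```python
def localization_estimates(K, hbar):
    """Order-of-magnitude scalings behind (11.57): eliminating the turnover
    time between D n* ~ xi_L**2 and n* ~ xi_L/hbar gives xi_L ~ D/hbar."""
    D = K**2 / 4.0          # classical momentum diffusion coefficient, Eq. (11.49)
    xi_L = D / hbar         # localization length estimate
    n_star = xi_L / hbar    # turnover (break) time in units of the kick period
    return D, xi_L, n_star
```

The scalings make the trend explicit: halving \(\hbar\) doubles the localization length and quadruples the turnover time, so the evolution looks classical for longer as the semiclassical limit is approached.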
That is, with noise of magnitude (11.58), the quantum evolution is such that \(\overline{p_{\theta}^{2}}\simeq 2nD\) for all time, just as in the classical case. Experimental observations on ionization of atoms by microwave fields also apparently show evidence of the destruction of localization by noise (Blumel _et al._, 1991; Arndt _et al._, 1991).

### 11.4 Quantum chaotic scattering

For scattering problems one is interested in unbounded orbits in phase space and a time independent Hamiltonian. We shall limit the discussion to one particular result (Blumel and Smilansky, 1988) which seems particularly interesting. (Some further works and references on quantum chaotic scattering are Blumel (1991), Smilansky (1992), Jalabert _et al._ (1990), Cvitanovic and Eckhardt (1989), Gaspard and Rice (1989b,c), and Gutzwiller (1983).) Blumel and Smilansky consider the scattering matrix for an incoming state \(I\) scattering to an outgoing state \(I^{\prime}\). By using the semiclassical approximation for the scattering matrix element \(S_{II^{\prime}}(E)\), they find that classical chaos in the scattering problem results in enhanced fluctuations of \(S_{II^{\prime}}(E)\) with variation of \(E\). Specifically they consider the correlation function, \[C_{II^{\prime}}(\varepsilon)=\langle S_{II^{\prime}}^{*}(E)S_{II^{\prime}}(E+\varepsilon)\rangle,\] where the angle brackets denote an energy average over a range that is classically small, but quantum mechanically large in that it includes many wiggles of \(S_{II^{\prime}}(E)\). They find the important result that, in the semiclassical approximation, \(C_{II^{\prime}}(\varepsilon)\) is related to the classically obtained probability distribution \(P(E,\,T)\), where \(P(E,\,T)\,{\rm d}T\) is the classical probability that a randomly chosen orbit experiences a delay time (defined in Chapter 5) between \(T\) and \(T+{\rm d}T\).
Their result is \[C_{II^{\prime}}(\varepsilon)\propto\int{\rm d}T\,P(E,\ T)\exp({\rm i}\varepsilon\,T/\hbar). \tag{11.59}\] This provides an interesting direct connection between a classical quantity \(P(E,\,T)\) and an inherently quantum quantity \(C_{II^{\prime}}(\varepsilon)\). In particular, (11.59) implies that, if \(P(E,\,T)\) decays exponentially with \(T\) (cf. Eq. (5.1)), then \(C_{II^{\prime}}(\varepsilon)\) has a Lorentzian shape. ## Problems 1. Obtain (11.13d) and (11.12b). 2. Consider a rectangular billiard of dimension \(a\) by \(b\). Obtain the energy levels and plot some of them versus \(b\) with \(a\) held fixed, thus verifying that, for this integrable billiard, energy levels do not repel (Figure 11.3(\(a\))). 3. Verify for \(N=2\) that \(d_{0}(E)\) given by (11.34) yields the Weyl result. Hint: for small \(z\) the expansion of the Hankel function is \(H_{0}^{(1)}(z)=1+(2{\rm i}/\pi)[\ln(z/2)+\gamma]+O(z^{2}\ln z)\), where \(\gamma\) is a constant. 4. For \(a=1\) and \(b=\surd 2\) in Problem 2 obtain the first 1000 energy levels, and make a histogram approximation to \(P(s)\) using bins of size \(\Delta s=0.2\). Explain your steps in obtaining \(P(s)\). Plot \(\mathrm{e}^{-s}\) on the same graph, and comment. 5. For the case of a chaotic billiard GOE applies in the absence of an applied magnetic field. As a magnetic field is turned on and increased there will be a transition from GOE to GUE. Use the random plane wave hypothesis, Eq. (11.39), to address this issue. Do this by assuming that there is some correlation between \(\theta_{\mathbf{k}}\) and \(\theta_{-\mathbf{k}}\), but that the phases are otherwise uncorrelated and random in \([0,\,2\pi]\). For simplicity take \(|a_{\mathbf{k}}|=1\). Show that Eq. (11.41) applies with the possibility of the relevant parameter varying continuously6 from 2 (GOE) to 1 (GUE).
In particular show that this parameter equals \(1+\langle\cos(\theta_{\mathbf{k}}+\theta_{-\mathbf{k}})\rangle\), where the angle brackets represent an average over \(\mathbf{k}\). 6. Obtain the evolution of \(\overline{p_{\theta}^{2}}\) for the quantum kicked rotor for the case where \(\hbar=4\pi\). In particular, show that \(\overline{p_{\theta}^{2}}\) is proportional7 to \(n^{2}\) (for large \(n\)). 7. Derive Eq. (11.56) for the quantum kicked rotor. ## Notes 1. One might call the subject of this chapter 'wave chaos' rather than 'quantum chaos' to emphasize this generality, but we shall adhere to the more traditional terminology. 2. Another, somewhat exceptional, case giving nonisolated periodic orbits is the chaotic 'stadium' billiard of Figure 7.24(\(f\)). In that case there is a continuous one parameter family of neutrally stable periodic orbits bouncing directly back and forth between the two straight parallel segments of the perimeter of the stadium. 3. Berry's considerations are built on work by Hannay and Ozorio de Almeida (1984). 4. That is, for randomly chosen \(\mathbf{q}\), the probability that an eigenfunction value \(\psi\) lies between \(\psi\) and \(\psi+{\rm d}\psi\) is \(P(\psi)\,{\rm d}\psi\). Although the distributions (11.40a, b) are Gaussian, a random superposition of plane waves does lead to significant spatial correlations (O'Conner _et al._, 1987; Blumel _et al._, 1992). 5. In (7.15) we set \(\tau/\tilde{I}=1\) so that \(K\) and \(\hat{K}\) are identical. 6. \(P_{X}(X)\) in the case intermediate between GOE and GUE is discussed by Zyczkowski and Lenz (1991) and by Chung _et al_. (2000). 7. For special values of \(\hbar\), called quantum resonances, Izraelev and Shepelyansky (1979) show that \(\overline{p_{\theta}^{2}}\) increases quadratically (rather than linearly) with time. Thus, the behavior is like the acceleration of a free particle under a constant force (rather than diffusion in momentum).
The quantum resonances occur when \(\hbar\) is a rational number times \(4\pi\) (see Problem 6). The typical behavior, which occurs when \(\hbar\) is not on a resonance, is diffusive (Figure 11.10).
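Both regimes discussed above, the arrest of diffusion off resonance and the quadratic growth of Note 7 on resonance, are easy to see numerically. The sketch below is a minimal split-operator quantization of the kicked rotor (kick applied in the angle basis, free rotation in the angular-momentum basis); the parameter choices \(K=5\), \(\hbar=1\) and \(\hbar=4\pi\) are illustrative, not taken from the text.

```python
import numpy as np

def kicked_rotor_m2(K, hbar, nkicks, N=2048):
    """<m^2> (momentum spread in units of hbar) of the quantum kicked rotor
    after each kick, via the standard split-operator map: kick in the angle
    basis, free rotation in the angular-momentum basis."""
    theta = 2.0 * np.pi * np.arange(N) / N
    m = np.fft.fftfreq(N, d=1.0 / N)          # integer momentum indices
    kick = np.exp(-1j * K * np.cos(theta) / hbar)
    free = np.exp(-1j * hbar * m**2 / 2.0)
    psi_m = np.zeros(N, dtype=complex)
    psi_m[0] = 1.0                            # start concentrated at p_theta = 0
    out = []
    for _ in range(nkicks):
        psi_m = free * np.fft.fft(kick * np.fft.ifft(psi_m))
        prob = np.abs(psi_m) ** 2
        out.append(np.sum(m**2 * prob) / np.sum(prob))
    return out

# Generic hbar: the diffusion is arrested (dynamical localization).
loc = kicked_rotor_m2(K=5.0, hbar=1.0, nkicks=500)
classical = (5.0**2 / 2.0) * 500   # unbounded classical estimate, <m^2> ~ (K^2/2) n
print(loc[-1], classical)          # quantum value saturates far below this

# On the quantum resonance hbar = 4*pi the free-rotation phases are all unity,
# so the growth is ballistic: <m^2> grows as n^2.
res = kicked_rotor_m2(K=5.0, hbar=4.0 * np.pi, nkicks=40)
print(res[39] / res[19])           # close to (40/20)^2 = 4
```

The comparison of \(\overline{m^{2}}\) at \(n=500\) with the linear classical estimate makes the saturation evident; doubling the kick count on resonance quadruples \(\overline{m^{2}}\), as the problem above asks the reader to show.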
## Introduction: A NEW AGE OF DYNAMICS _In the beginning, how the heav'ns and earth rose out of chaos_. J. Milton _Paradise Lost_, 1665 ### What is chaotic dynamics? For some, the study of dynamics began and ended with Newton's law \(F=ma\). We were told that if the forces between particles and their initial positions and velocities were given, one could predict the motion or history of a system forever into the future, given a big enough computer. However, the arrival of large and fast computers has not fulfilled the promise of infinite predictability in dynamics. We now know that the motion of very simple dynamical systems cannot always be predicted far into the future. Such motions have been labeled _chaotic_, and their study has promoted a discussion of some exciting new mathematical ideas in dynamics. Three centuries after the publication of Newton's _Principia_ (1687), it is appropriate that new phenomena have been discovered in dynamics and that new mathematical concepts from topology and geometry have entered this venerable science. The nonscientific concept of chaos1 is very old and is often associated with a physical state or human behavior without pattern and out of control. The term _chaos_ often stirs fear in humankind because it implies that governing laws or traditions no longer have control over events such as prison riots, civil wars, or a world war. Yet there is always the hope that some underlying force or reason is behind the chaos or can explain why seemingly random events appear unpredictable. Footnote 1: The origin of the word _chaos_ is a Greek verb which means _to gape open_ and which was often used to refer to the primeval emptiness of the universe before things came into being (_Encyclopaedia Britannica_, Vol. 5, p. 276). To the stoics, chaos was identified with water and the watery state which follows the periodic destruction of the earth by fire.
In _Metamorphoses_, Ovid used the term to denote the raw and formless mass in which all is disordered and from which the ordered universe is created. A modern dictionary definition of chaos (Funk and Wagnalls) provides two meanings: (i) utter disorder and confusion and (ii) the unformed original state of the universe. In the physical sciences, the paragon of chaotic phenomena is turbulence. Thus, a rising column of smoke or the eddies behind a boat or aircraft wing2 provide graphic examples of chaotic motion. For example, the flow pattern behind a cylinder (Figure 1-1) and the mixing of drops of color in paint (Color Plate 10) illustrate the basic nature of chaotic dynamics. The fluid mechanician, however, believes that these events are not random because the governing equations of physics for each fluid element can be written down. Also, at low velocities, the fluid patterns are quite regular and predictable from these equations. Beyond a critical velocity, however, the flow becomes turbulent. A great deal of the excitement in nonlinear dynamics today is centered around the hope that this transition from ordered to disordered flow may be explained or modeled with relatively simple mathematical equations. What we hope to show in this book is that these new ideas about turbulence extend to other problems in physics as well. It is the recognition that chaotic dynamics are inherent in all nonlinear physical phenomena that has created a sense of revolution in physics today. Footnote 2: The reader should look at the beautiful collection of photos of fluid turbulent phenomena compiled by Van Dyke (1982). We must distinguish here between so-called random and chaotic motions. The former is reserved for problems in which we truly do not know the input forces or we only know some statistical measures of the parameters. The term _chaotic_ is reserved for those _deterministic_ problems for which there are no random or unpredictable inputs or parameters.
The existence of chaotic or unpredictable motions from the classical equations of physics was known by Poincare.3 Consider the following excerpt from his essay on _Science and Method_: Footnote 3: Henri Poincaré (1854–1912) was a French mathematician, physicist, and philosopher whose career spanned the grand age of classical mechanics and the revolutionary ideas of relativity and quantum mechanics. His work on problems of celestial mechanics led him to questions of dynamic stability and the problem of finding precise mathematical formulas for the dynamic history of a complex system. In the course of this research he invented the "method of sections," now known as the _Poincaré section_ or _Poincaré map_. See Holmes (1990b) for a modern discussion of Poincaré's work. It may happen that small differences in the initial conditions produce very great ones in the final phenomena. A small error in the former will produce an enormous error in the latter. Prediction becomes impossible.

Figure 1-1: Turbulent eddies in the flow of fluid behind a cylinder. (Courtesy of C. Williamson, Cornell University.)

In the current literature, _chaotic_ is a term assigned to that class of motions in deterministic physical and mathematical systems whose time history has a _sensitive dependence on initial conditions_. Two examples of mechanical systems that exhibit chaotic dynamics are shown in Figure 1-2. The first is a thought experiment of an idealized billiard ball (rigid body rotation is neglected) which bounces off the sides of an elliptically shaped billiard table. When elastic impact is assumed, the energy remains conserved, but the ball may wander around the table without exactly repeating a previous motion for certain elliptically shaped tables. Another example, which the reader with access to a laboratory can see firsthand (see Appendix C), is the ball in a two-well potential shown in Figure 1-2\(b\).
Here the ball has two equilibrium states when the table or base does not vibrate. However, when the table vibrates with periodic motion of large enough amplitude, the ball will jump from one well to the other in an apparently random manner; that is, periodic input of one frequency leads to a randomlike output with a broad spectrum of frequencies. The generation of a continuous spectrum of frequencies below the single input frequency is one of the characteristics of chaotic vibrations (Figure 1-3).

Figure 1-2: (_a_) The motion of a ball after several impacts with an elliptically shaped billiard table. The motion can be described by a set of discrete numbers (\(s_{i}\), \(\phi_{i}\)) called a _map_. (_b_) The motion of a particle in a two-well potential under periodic excitation. Under certain conditions, the particle jumps back and forth in a periodic way—that is, LRLR..., or LLRLRL..., and so on. For other conditions the jumping is chaotic—that is, it shows no pattern in the sequence of symbols L and R.

Loss of information about initial conditions is another property of a chaotic system. Suppose one has the ability to measure a position with accuracy \(\Delta x\) and a velocity with accuracy \(\Delta v\). Then in the position-velocity plane (known as the _phase plane_) we can divide up the space into areas of size \(\Delta x\,\Delta v\) as shown in Figure 1-4. If we are given initial conditions to the stated accuracy, we know the system is somewhere in the shaded box in the phase plane. But if the system is chaotic, this uncertainty grows in time to \(N(t)\) boxes as shown in Figure 1-4\(b\). The growth in uncertainty given by \[N\approx N_{0}e^{ht}\] (1-1.1) is another property of chaotic systems.
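The exponent \(h\) in (1-1.1) can be estimated with a few lines of code for a simple chaotic system. The sketch below uses the logistic map \(x_{n+1}=4x_{n}(1-x_{n})\) as an illustrative example (it is our choice here, not one of the mechanical systems above): \(h\) is the orbit average of the local stretching \(\ln|f'(x_{n})|\), since each iterate multiplies a small uncertainty by \(|f'(x_{n})|\).

```python
import math

def stretch_rate(x0, n=50000):
    """Estimate the exponent h in N ~ N0*exp(h*n) for the logistic map
    f(x) = 4x(1-x) by averaging the local stretching ln|f'(x)| along an orbit."""
    x, total = x0, 0.0
    for _ in range(n):
        total += math.log(abs(4.0 - 8.0 * x) + 1e-300)  # ln|f'(x)|, guarded at x = 1/2
        x = 4.0 * x * (1.0 - x)
    return total / n

h = stretch_rate(0.2)
print(h)  # close to ln 2 ~ 0.693: the uncertainty roughly doubles each iterate
```

This average is exactly the Lyapunov exponent introduced in the next paragraph, computed here for a map rather than a flow.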
The constant \(h\) is related to the concept of _entropy_ in information theory (e.g., see Shaw, 1981, 1984) and will also be related to another concept called the _Lyapunov exponent_ (see Chapter 6), which measures the rate at which nearby trajectories of a system in phase space diverge. A _positive_ value of this Lyapunov exponent for a particular dynamical system is a quantitative measure of chaos.

Figure 1-3: The power spectral density (Fourier transform) of chaotic motion in a two-well potential. (After Y. Ueda, Kyoto University.)

### Why Fractal Dynamics? The reader may ask: With predictability lost in chaotic systems, is there any order left in the system? For dissipative systems the answer is yes; there is an underlying structure to chaotic dynamics. This structure is not apparent by looking at the dynamics in the conventional way, that is, the output versus time or from frequency spectra. One must search for this order in phase space (position versus velocity). There one will find that chaotic motion exhibits a new geometric property called _fractal_ structure. Examples of fractal patterns are illustrated in the color plates. Fractals are geometric structures that appear at many scales. One of the goals of this book is to teach how to discover the fractal structure in chaotic vibrations as well as to measure the loss of information in these randomlike motions. ### Why Study Chaotic Dynamics? The subject of chaos has certainly become newsworthy over the past few years--the study of mathematical chaos, that is. Many popular magazines have carried articles on the new studies into the mathematics of chaotic dynamics.

Figure 1-4: An illustration of the growth of uncertainty or loss of information in a dynamical system. The black box at time \(t=t_{0}\) represents the uncertainty in initial conditions.

But engineers have always known about
chaos--it was called _noise_ or _turbulence_, and "fudge" factors or factors of safety were used to design around these apparent random unknowns that seem to crop up in every technical device. So what is new about chaos? First, the recognition that chaotic vibrations can arise in low-order, nonlinear deterministic systems raises the hope of understanding the source of randomlike noise and doing something about it. Second, the new discoveries in nonlinear dynamics bring with them new concepts and tools for detecting chaotic vibrations in physical systems and for quantifying this "deterministic noise" with new measures such as fractal dimensions and Lyapunov exponents. Since the turn of the century, mathematicians have also known that certain dynamical systems possessed irregular solutions. Poincare, as noted in the above quote, was aware of chaotic solutions, as was Birkhoff in the early part of this century. Van der Pol and Van der Mark (1927) reported "irregular noise" in experiments with an electronic oscillator in the magazine _Nature_. So what is new about chaos? What is new about chaotic dynamics is the discovery of a seemingly underlying order which holds out the promise of being able to predict certain properties of noisy behavior. Perhaps the greatest hope lies in the possibility of understanding turbulence in fluid, thermofluid, and thermochemical systems. Turbulence is one of the few remaining unsolved problems of classical physics, and the recent discovery of deterministic systems which exhibit chaotic oscillations has created much optimism about solving the mysteries of turbulence. But already this optimism has been tempered by the complexities of chaotic dynamics in thermofluid systems, especially the spatial aspects of fluid flow as illustrated in Figure 1-1. However, there may be more immediate payoffs in the study of chaotic phenomena in systems with fewer degrees of freedom, such as low-order nonlinear mechanical devices and nonlinear circuits.
## Sources of Chaos

Chaotic vibrations occur when some strong nonlinearity exists. Examples of nonlinearities in mechanical and electromagnetic systems include the following:

* Gravitational forces in the solar system
* Nonlinear elastic or spring elements
* Nonlinear damping such as friction
* Backlash, play, or limiters or bilinear springs
* Fluid-related forces
* Nonlinear boundary conditions
* Nonlinear feedback control forces in servosystems
* Nonlinear resistive, inductive, or capacitative circuit elements
* Diodes, transistors, and other active devices
* Electric and magnetic forces
* Nonlinear optical properties, lasers

In mechanical continua, nonlinear effects arise from a number of different sources which include the following:

1. Kinematics--for example, convective acceleration, Coriolis and centripetal accelerations
2. Constitutive relations--for example, stress versus strain
3. Boundary conditions--for example, free surfaces in fluids, deformation-dependent constraints
4. Nonlinear body forces--for example, magnetic or electric forces
5. Geometric nonlinearities associated with large deformations in structural solids such as beams, plates, and shells

How such nonlinearities enter the laws of mechanics can be seen by looking at the equation of momentum balance in continuum mechanics, \[\nabla\cdot\boldsymbol{\sigma}\,+\,\mathbf{f}=\rho\left(\frac{\partial\mathbf{v}}{\partial t}+\mathbf{v}\cdot\nabla\mathbf{v}\right)\] (1-1.2) where \(\boldsymbol{\sigma}\) is the stress tensor, \(\rho\) is the density, and the right-hand side represents the acceleration. Nonlinearities can enter this equation through the stress-strain or stress-strain rate relations in the first left-hand term. Nonlinear body forces such as occur in magnetohydrodynamics or plasma physics can enter the body force term \(\mathbf{f}\). Finally, on the right-hand side of Eq. (1-1.2), we see an explicit nonlinear term in the convective acceleration.
This term appears in many fluid flow problems and is one of the sources of turbulence in fluids. In the classic Navier-Stokes equations of fluid mechanics, derived from the momentum balance Eq. (1-1.2), one can see that the nonlinearity resides in the convective acceleration or kinematic term: \[\nu\nabla^{2}\mathbf{v}\,-\,\frac{1}{\rho}\,\nabla P\,=\,\frac{\partial\mathbf{v}}{\partial t}\,+\,\mathbf{v}\cdot\nabla\mathbf{v}\] (1-1.3) where \(\nu\) is the kinematic viscosity, \(P\) is the pressure, and \(\mathbf{v}\) is the velocity field. The viscous term on the left-hand side is linear and is based on the assumption of a Newtonian fluid. One can imagine that if one goes beyond the study of the Navier-Stokes equation to include nonlinear viscous fluids (non-Newtonian fluids) or elastoplastic materials, there is a vast array of nonlinear and chaotic phenomena to be discovered in mechanics, electromagnetics, and acoustics. ### Where Have Chaotic Vibrations Been Observed? From the previous discussion, one can see that chaotic phenomena can be observed in many physical systems. Since the writing of the first edition of this book, many new phenomena have been reported in the scientific and engineering literature.
A partial list of the physical systems known to exhibit chaotic vibrations includes the following:

* Selected closed- and open-flow fluid problems
* Selected chemical reactors
* Vibrations of buckled elastic structures
* Mechanical systems with play or backlash such as gears
* Flow-induced or aeroelastic problems
* Magnetomechanical actuators
* Large, three-dimensional vibrations of structures such as beams and shells
* Systems with sliding friction
* Rotating or gyroscopic systems
* Nonlinear acoustic systems
* Simple forced circuits with diodes or \(p\)-\(n\) transistor elements
* Harmonically forced circuits with nonlinear capacitance and inductance elements
* Feedback control devices
* Laser and nonlinear optical systems
* Video feedback
* A few objects in the solar system (e.g., Hyperion, Halley's comet)
* Cardiac oscillations
* Earthquake dynamics
* Extreme maneuver aircraft and ship dynamics
* Iterative optimal design algorithms
* Econometric models

These are but a few of the many phenomena in which chaos has been uncovered. Descriptions of specific examples are given in Chapter 4. A question asked by most novices to the field of chaotic dynamics is: If chaos is so pervasive, why was it not seen earlier in experiments? Two responses to this question come to mind. First, if one goes back and reads earlier papers on experiments in nonlinear vibrations, one often finds a brief mention of nonperiodic phenomena buried in a discussion of more classical nonlinear vibrations (see Chapter 4 for examples). Second, Joseph Keller, an applied mathematician at Stanford University, in responding to this question in a lecture, speculated that earlier scientists and engineers were trained almost exclusively in linear mathematical ideas, including linear algebra and differential equations. Hence, it was natural, Keller concluded, that when approaching dynamic experiments in the laboratory, they looked only for phenomena that fit the linear mathematical models.
As to why theorists had not come upon those ideas earlier, there is evidence that some did, like Poincare and Birkhoff. And those dynamicists working in energy-conserving systems (Hamiltonian dynamics), especially theorists in the former Soviet Union, knew about stochastic behavior in certain theoretical models (see, e.g., Sagdeev et al., 1988). However, specific manifestations of chaotic solutions had to wait for the arrival of powerful computers with which to calculate the long time histories necessary to observe and measure chaotic behavior. Some day in the future an interesting history will be written on the interdependence between computer technology and the mathematics of fractal and nonlinear processes in the late 20th century. ### Classical nonlinear vibration theory: a brief review In this section, we present a short review of classical vibration theory, both linear and nonlinear. This is meant simply to define and review a few ideas in nonlinear dynamics concerning periodic vibration so we may later be able to contrast these with chaotic vibration. Readers desiring more detailed discussion of classical nonlinear vibration should consult books such as Stoker (1950), Minorsky (1962), Nayfeh and Mook (1979), or Hagedorn (1988). We begin with a brief review of linear vibration concepts. ### Linear Vibration Theory The classic paradigm of linear vibrations is the spring-mass system shown in Figure 1-5 along with its electric circuit analog. When there is no disturbing force, the undamped system vibrates with a frequency that is independent of the amplitude of vibration: \[\omega_{0}=\left(\frac{k}{m}\right)^{\!\!1/2}=\left(\frac{1}{LC}\right)^{\!\!1/2}\] (1-2.1) In this state, energy flows alternately between elastic energy in the spring (electric energy in the capacitor \(C\)) and kinetic energy in the mass (magnetic energy in the inductor \(L\)).
The addition of damping (\(c\neq 0\), \(R\neq 0\)) introduces decay in the free vibrations so that the amplitude of the mass (or charge in the circuit) exhibits the following time dependence:

Figure 1-5: (_a_) The classic, mechanical spring–mass–dashpot oscillator. (_b_) The electrical circuit analog of a damped oscillator.

\[x(t)\,=\,A_{0}e^{-\gamma t}\,\cos[(\omega_{0}^{2}-\gamma^{2})^{1/2}\,t\,+\,\varphi_{0}]\] (1-2.2) where \[\gamma\,=\,\frac{c}{2m}\quad\text{or}\quad\gamma\,=\,\frac{R}{2L}\] The system is said to be _underdamped_ when \(\gamma^{2}\,<\,\omega_{0}^{2}\), _critically damped_ when \(\gamma^{2}\,=\,\omega_{0}^{2}\), and _overdamped_ when \(\gamma^{2}\,>\,\omega_{0}^{2}\). One of the classic phenomena of linear vibratory systems is that of _resonance_ under harmonic excitation. For this problem, the differential equation that models the system is of the form (e.g., see Thompson, 1965) \[\ddot{x}\,+\,2\gamma\dot{x}\,+\,\omega_{0}^{2}x\,=\,f_{0}\,\cos\,\Omega t\] (1-2.3) If one fixes \(f_{0}\) and varies the driving frequency \(\Omega\), the absolute magnitude of the steady-state displacement of the mass (after transients have damped out) reaches a maximum close to the natural frequency \(\omega_{0}\), or more precisely at \(\Omega\,=\,(\omega_{0}^{2}\,-\,2\gamma^{2})^{1/2}\). This phenomenon is sketched in Figure 1-6. The effect is more pronounced when the damping is small. This is indeed the case in structural systems, and engineers are familiar with the problem of fatigue failures in structures and machines owing to large, resonance-excited vibrations.

Figure 1-6: Classical resonance curves (response amplitude versus frequency) for the forced motion of a damped _linear_ oscillator for different values of the damping \(\gamma\).

When a linear mechanical system has many degrees of freedom, one often models it as a coupled set of spring-mass oscillators, leading to the phenomenon of multiple resonant frequencies when the system is harmonically forced.
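The location of the resonance peak follows from the steady-state amplitude of Eq. (1-2.3), \(|X(\Omega)|=f_{0}/[(\omega_{0}^{2}-\Omega^{2})^{2}+(2\gamma\Omega)^{2}]^{1/2}\), which a few lines of code can confirm numerically (the parameter values below are illustrative):

```python
import numpy as np

# Steady-state response amplitude of x'' + 2*gamma*x' + w0^2 x = f0 cos(Omega*t)
w0, gamma, f0 = 2.0, 0.3, 1.0  # illustrative values

Omega = np.linspace(0.01, 2.0 * w0, 200001)
amp = f0 / np.sqrt((w0**2 - Omega**2) ** 2 + (2.0 * gamma * Omega) ** 2)

peak = Omega[np.argmax(amp)]                 # numerically located maximum
predicted = np.sqrt(w0**2 - 2.0 * gamma**2)  # analytic location of the maximum
print(peak, predicted)
```

The numerically located maximum sits slightly below \(\omega_{0}\), and the gap shrinks as \(\gamma\to 0\), consistent with the resonance curves of Figure 1-6.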
This behavior has often led vibration analysts to assume that every peak in a vibration frequency spectrum is associated with at least one mode or degree of freedom. In nonlinear vibrations, this is not the case. A one-degree-of-freedom nonlinear system can generate a frequency spectrum with many peaks, in contrast to its linear counterpart, as was shown in Figure 1-3. In any event, the mathematical theory of linear systems is well understood and has been codified in sophisticated computer software packages. Nonlinear problems are another story. ### Nonlinear Vibration Theory _Nonlinear_ effects can enter the problem in many ways. A classic example is a nonlinear spring where the restoring force is not linearly proportional to the displacement. For the case of a symmetric nonlinearity (equal effects for compression or tension), the equation of motion takes the following form: \[\ddot{x}\ +\ 2\gamma\dot{x}\ +\ \alpha x\ +\ \beta x^{3}\ =\ f(t)\] (1-2.4) When the system is undamped and \(f(t)=0\), periodic solutions exist where the natural frequency increases with amplitude for \(\beta>0\). This model is often called a _Duffing equation_, after the mathematician who studied it. If the system is acted on by a periodic force, in the classical theory one assumes that the output will also be periodic. When the output has the same frequency as the force, the resonance phenomenon for the nonlinear spring is as shown in Figure 1-7. If the amplitude of the forcing function is held constant, there exists a range of forcing frequencies for which three output amplitudes are possible, as shown in Figure 1-7. One can show that the dashed curve in Figure 1-7 is unstable, so that a _hysteretic effect_ occurs for increasing and decreasing frequencies. This is called a _jump phenomenon_ and can be observed experimentally in many mechanical and electrical systems. Other periodic solutions can also be found such as _subharmonic_ and _superharmonic_ vibrations.
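The amplitude dependence of the free Duffing frequency can be checked by direct integration of Eq. (1-2.4) with \(\gamma=0\) and \(f(t)=0\); the values \(\alpha=\beta=1\) below are illustrative, and the quarter-period trick relies only on the symmetry of the restoring force:

```python
ALPHA, BETA = 1.0, 1.0  # hard spring (beta > 0); values are illustrative

def accel(x):
    """Restoring acceleration of the undamped, unforced Duffing oscillator."""
    return -(ALPHA * x + BETA * x**3)

def period(amplitude, dt=1e-4):
    """Release from rest at x = amplitude; the time to first reach x = 0
    is a quarter period of the conservative oscillation."""
    x, v, t = amplitude, 0.0, 0.0
    while x > 0.0:
        # one classical RK4 step for the system x' = v, v' = accel(x)
        k1x, k1v = v, accel(x)
        k2x, k2v = v + 0.5 * dt * k1v, accel(x + 0.5 * dt * k1x)
        k3x, k3v = v + 0.5 * dt * k2v, accel(x + 0.5 * dt * k2x)
        k4x, k4v = v + dt * k3v, accel(x + dt * k3x)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        t += dt
    return 4.0 * t

small, large = period(0.1), period(1.0)
print(small, large)  # the hard spring oscillates faster at larger amplitude
```

For small release amplitudes the period approaches the linear value \(2\pi/\surd\alpha\); at larger amplitudes the cubic term stiffens the spring and the period shortens, which is the hardening behavior behind the bent resonance curve of Figure 1-7.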
If the driving force has the form \(f_{0}\) cos \(\omega t\), then a subharmonic oscillation may take the form \(x_{0}\cos(\omega t/n\ +\ \varphi)\) plus higher harmonics (\(n\) is an integer). Subharmonics play an important role in prechaotic vibrations, as we shall see later. Nonlinear resonance theory depends on the assumption that periodic input yields periodic output. However, it is this postulate that has been challenged in the new theory of chaotic vibrations. _Self-excited oscillations_ are another important class of nonlinear phenomena. These are oscillatory motions which occur in systems that have no periodic inputs or periodic forces. Several examples are shown in Figure 1-8. In the first, the friction created by relative motion between a mass and moving belt leads to oscillations. In the second example there exists the whole array of aeroelastic vibrations in which the steady flow of fluid past a solid object on elastic restraints produces steady-state oscillations. A classic electrical example is the vacuum tube circuit studied by Van der Pol and shown in Figure 1-9. In each case, there is a steady source of energy, a source of dissipation, and a nonlinear restraining mechanism. In the case of the Van der Pol oscillator, the source of energy is a dc voltage. It manifests itself in the mathematical model of the circuit as a negative damping: \[\ddot{x}\,-\,\gamma\,\dot{x}(1\,-\,\beta x^{2})\,+\,\omega_{0}^{2}x\,=\,0\] (1-2.5) For low amplitudes, energy can flow into the system, but at higher amplitudes the nonlinear damping limits the amplitude. In the case of the Froude pendulum (e.g., see Minorsky, 1962, Chap. 28), the constant rotation of the motor provides an energy input. 
For small vibrations the nonlinear friction is modeled as negative damping, whereas for large vibrations the amplitude of the vibration is limited by the nonlinear term \(\beta\dot{\theta}^{3}\): \[\ddot{\theta}\,+\,\alpha\,\sin\,\theta\,=\,T_{0}\,+\,\gamma\dot{\theta}(1\,-\,\beta\dot{\theta}^{2})\] (1-2.6)

Figure 1-7: Classical resonance curve for a _nonlinear_ oscillator with a hard spring when the response is periodic with the same period as the driving force. [\(\alpha\) and \(\beta\) refer to Eq. (1-2.4).]

The oscillatory motions of such systems are often called _limit cycles_. The phase plane trajectories for the Van der Pol equation are shown in Figure 1-10. Small motions spiral out to the closed asymptotic trajectory, whereas large motions spiral inward onto the limit cycle. (In Figures 1-10 and 1-11, \(y\,=\,\dot{x}\).) Two questions are often asked when studying problems of this kind: (1) What is the amplitude and frequency of the limit cycle vibrations? (2) For what parameters will stable limit cycles exist? In the case of the Van der Pol equation, it is convenient to normalize the dependent variable by \(\sqrt{\beta}\) and the time by \(\omega_{0}^{-1}\) so the equation assumes the form

Figure 1-8: Examples of self-excited oscillations: (_a_) dry friction between a mass and moving belt; (_b_) aeroelastic forces on a vibrating airfoil; and (_c_) negative resistance in an active circuit element.

\[\ddot{x}\ -\ \varepsilon\dot{x}(1\ -\ x^{2})\ +\ x\ =\ 0\] (1-2.7) where \(\varepsilon=\gamma/\omega_{0}\). For small \(\varepsilon\), the limit cycle solution is a circle of radius 2 in the phase plane; that is, \[x\ =\ 2\ \cos\ t\ +\ \cdots\] (1-2.8) where the \(+\ \cdots\) indicates third-order harmonics and higher. When \(\varepsilon\) is larger, the motion takes the form of _relaxation oscillations_ shown

Figure 1-10: Limit cycle solution in the phase plane for the Van der Pol oscillator.
Figure 1-9: Sketch of a vacuum tube circuit with limit cycle oscillation of the type studied by Van der Pol.

in Figure 1-11, with a nondimensional period of around \(1.61\varepsilon\) when \(\varepsilon>10\).

### Quasiperiodic Oscillators

A more complicated problem is the case when a periodic force is added to the Van der Pol system: \[\ddot{x}\ -\ \gamma\dot{x}(1\ -\ \beta x^{2})\ +\ \omega_{0}^{2}x\ =f_{0}\ \cos\ \omega_{1}t\] (1-2.9) Because the system is nonlinear, _superposition of free and forced oscillations is not valid_. Instead, if the driving frequency is close to the limit cycle frequency, the resulting periodic motion will become _entrained_ at the driving frequency. Frequency locking is a well-known classical phenomenon in nonlinear oscillations. When the difference between driving and free oscillation frequencies is large, a new phenomenon is possible in the Van der Pol system--_combination oscillations_--sometimes called _almost periodic_ or _quasiperiodic_ solutions. Combination oscillation solutions take the form \[x\ =\ b_{1}\ \cos\ \omega_{1}t\ +\ b_{2}\ \cos\ \omega_{2}t\] (1-2.10) When \(\omega_{1}\) and \(\omega_{2}\) are incommensurate, that is, \(\omega_{1}/\omega_{2}\) is an irrational number, the solution is said to be _quasiperiodic_.

Figure 1-11: Relaxation oscillation for the Van der Pol oscillator.

In the case of the Van der Pol equation [Eq. (1-2.9)], \(\omega_{2}\equiv\omega_{0}\); this is the free oscillation limit cycle frequency (e.g., see Stoker, 1950, p. 166). More will be said about quasiperiodic vibrations later, but because they are not periodic, they may be mistaken for chaotic solutions, which they are not. [For one, the Fourier spectrum of Eq. (1-2.10) is just two spikes at \(\omega=\omega_{1}\), \(\omega_{2}\), whereas for chaotic solutions the spectrum is broad and continuous.]
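That two-spike signature is easy to verify numerically. The sketch below samples a synthetic combination oscillation of the form (1-2.10) and inspects its spectrum; the values \(\omega_{1}=1\), \(\omega_{2}=\sqrt{2}\), \(b_{1}=1\), \(b_{2}=0.5\) are illustrative choices, not from the text:

```python
import numpy as np

# Sample the combination oscillation x(t) = b1 cos(w1 t) + b2 cos(w2 t)
w1, w2 = 1.0, np.sqrt(2.0)   # incommensurate frequencies (w1/w2 irrational)
b1, b2 = 1.0, 0.5
T, n = 4000.0, 2**16         # long record, fine sampling
t = np.linspace(0.0, T, n, endpoint=False)
x = b1 * np.cos(w1 * t) + b2 * np.cos(w2 * t)

# Windowed FFT magnitude; the frequency axis is converted to angular frequency.
spec = np.abs(np.fft.rfft(x * np.hanning(n)))
freqs = 2.0 * np.pi * np.fft.rfftfreq(n, d=T / n)

floor = np.median(spec)                       # background level of the spectrum
peak1 = spec[np.argmin(np.abs(freqs - w1))]   # spectral line at w1
peak2 = spec[np.argmin(np.abs(freqs - w2))]   # spectral line at w2
print(peak1 / floor, peak2 / floor)
```

The energy is concentrated in two narrow lines standing far above the background, whereas a chaotic signal in the same test would spread its energy over a broad band of bins.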
The phase plane portrait of (1-2.10) is not closed when \(\omega_{1}\) and \(\omega_{2}\) are incommensurate, so another method is used to portray the quasiperiodic function graphically. To do this we stroboscopically sample \(x(t)\) with a period equal to \(2\pi/\omega_{1}\); that is, let \[t_{n}=\frac{n2\pi}{\omega_{1}}\] (1-2.11) and denote \(x(t_{n})=x_{n}\), \(\dot{x}(t_{n})=v_{n}\). Then Eq. (1-2.10) becomes \[x_{n}=b_{1}+b_{2}{\rm cos}\,\frac{2\pi\,n\omega_{2}}{\omega_{1}},\qquad v_{n}=-\omega_{0}b_{2}{\rm sin}\,\frac{2\pi\,n\omega_{2}}{\omega_{1}}\] (1-2.12) As \(n\) increases, the points (\(x_{n}\), \(v_{n}\)) move around an ellipse in the stroboscopic phase plane (called a _Poincare map_), as shown in Figure 1-12. When \(\omega_{2}/\omega_{1}\) is incommensurate, the set of points \(\{x_{n},\,v_{n}\}\) for \(n\rightarrow\infty\) fill in a closed curve given by \[(x_{n}-b_{1})^{2}+\left(\frac{v_{n}}{\omega_{0}}\right)^{2}=b_{2}^{2}\] (1-2.13)

Figure 1-12: Stroboscopic plot of quasiperiodic solutions of the Van der Pol equation [Eq. (1-2.9)] in the Poincaré plane \(\Sigma\).

Quasiperiodic oscillations also occur in systems with more than one degree of freedom.

### Dynamics of Lossless or Conservative Systems

In some areas of physics, one can assume that there are no energy dissipation mechanisms. Furthermore, in many mechanical and electromagnetic systems the forces and voltages can be derived from a potential energy function. Sometimes these systems are called _conservative_ or _Hamiltonian_ after the great Irish dynamicist, William R. Hamilton (1805-1865), whose mathematical formulation helped clarify the analysis of such systems. These systems exhibit a set of dynamic phenomena that are discussed in many of the classical and modern books on dynamics such as Goldstein (1980), Arnold (1978), and Sagdeev, Usikov, and Zaslavsky (1988).
Among the interesting phenomena that differ from linear dynamics are nonlinear resonance, stochastic chaos, and diffusion in phase space. What further distinguishes conservative systems from dissipative oscillators is that there are no transient motions that decay onto limiting motions. In other words, there are no attractors in conservative dynamics. Each initial condition results in a unique orbit, which may be periodic, quasiperiodic, or chaotic. However, the chaotic motion does not have the kind of fractal structure that we find in dissipative systems. Conservative or Hamiltonian dynamics is often used in applications to orbital dynamics in astronomy or the motions of charged particles in plasma devices or high-energy accelerators. Also, the mathematics of such lossless systems is sometimes used as the starting point in the analysis of systems with small dissipation.

### Nonlinear Resonance in Conservative Systems

This is a phenomenon that is central to the study of conservative dynamics yet is not easily accessible to the novice. This is because some of the discoveries were made only in the second half of the 20th century, and also because the phenomenon is still known by the names of the theoreticians who made these discoveries, such as Kolmogorov, Arnold, and Moser (KAM theory). However, we will attempt a brief description without the mathematical rigor. Resonance is a phenomenon that occurs between two or more coupled oscillating systems.
Two models are the following: \[\begin{split}\ddot{x}\,+\,\frac{\partial V(x,y)}{\partial x}=0\\ \ddot{y}\,+\,\frac{\partial V(x,y)}{\partial y}=0\end{split}\] (1-2.14) and \[\ddot{x}\,+\,\frac{\partial V(x)}{\partial x}=A\,\cos\Omega t\] (1-2.15) By rewriting (1-2.15) we obtain a form similar to (1-2.14): \[\begin{split}\ddot{x}\,+\,\frac{\partial V(x)}{\partial x}=y\\ \ddot{y}\,+\,\Omega^{2}y\,=\,0\end{split}\] (1-2.16) For example, for the rotation of a pendulum under gravity, we obtain \(V(x)=\omega_{0}^{2}(1\,-\,\cos\,x)\), using nondimensional variables. The second model (1-2.15) is a special case of the first (1-2.14) in which the second oscillator is uncoupled from the first. An example of a conservative system with periodic, quasiperiodic, and chaotic or stochastic motions can be found in the periodically forced pendulum. Different initial conditions can lead to all three types of motion, as illustrated in Figure 1-13 for a fixed forcing amplitude and frequency. In this figure the continuous motion is replaced by a set of points (\(\theta_{n}\), \(\dot{\theta}_{n}\)) which represent the angular position and velocity at times synchronous with the phase of the driving force. Using this so-called Poincare section, periodic orbits show up as a finite set of points, quasiperiodic orbits show up as closed curves, and stochastic orbits show up as the diffuse set of points shown in Figure 1-13. These chaotic or stochastic orbits seem to be close to the saddle points of the unforced motions of the pendulum shown in Figure 1-13. As we shall see in Chapters 3 and 6, the existence of saddle points gives one a clue to the possibilities for chaos.

### Frequency Spectra of Nonlinear Oscillators

In the case of (1-2.15) when \(A=0\), the first oscillator can exhibit periodic oscillations as in a linear system, but the frequency depends on the initial conditions.
Thus, for example, the Duffing oscillator \[\ddot{x}\,+\,\omega_{0}^{2}x\,+\,\beta x^{3}=0\] (1-2.17) (discussed above) has a continuous frequency spectrum, shown in Figure 1-14\(a\). The spectrum shown in Figure 1-14\(b\) is for a particle bouncing between two stationary walls (sometimes called a _Fermi oscillator_), where the frequency depends on the initial velocity. If we now couple this oscillator to a second oscillator, we see that it is very easy to get a resonance by choosing the right initial conditions. In a classical undamped linear oscillator with natural frequency \(\omega_{0}\) and driving frequency \(\Omega\), resonance occurs only when \(\Omega=\omega_{0}\). (In many mechanical and civil engineering applications, resonance means that a small oscillator can easily drive a large structure into unwanted large-amplitude oscillations.) However, in a nonlinear oscillator, such as the Duffing example (1-2.17), one can often achieve resonance at either multiples or integer fractions (i.e., harmonics or subharmonics) of the driving frequency \(\Omega\) by simply choosing the right amplitude \(A\) that produces a frequency of oscillation \(\omega(A)\) such that \[\omega(A)\;=\;p\Omega/q,\quad\mbox{or}\quad q\omega\;=\;p\Omega\] (1-2.18) Thus theoretically, the consequence of a continuous frequency spectrum for the free oscillations is an _infinite number of possible resonances_ in the driven oscillator problem (1-2.15). The same can be said for two coupled oscillators (1-2.14). Figure 1-13: Stroboscopic pictures [\(t_{n}=n(2\pi/\Omega)\)] of the dynamics of a periodically forced pendulum with no damping for different initial conditions (\(\ddot{x}\,+\,\sin\,x=A\,\sin\,\omega t\)). Isolated dots indicate periodic motion, continuous lines represent quasiperiodic motion, and a diffuse set of dots represents chaotic or stochastic orbits.
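The amplitude dependence \(\omega(A)\) just invoked for the Duffing oscillator (1-2.17) can be measured directly by integrating the free vibration and timing a quarter period. The sketch below is a minimal illustration (our own RK4 integrator; \(\omega_{0}=1\) and \(\beta=1\) are illustrative hard-spring values):

```python
import numpy as np

W0, BETA = 1.0, 1.0   # illustrative hard-spring parameters in Eq. (1-2.17)

def rhs(state):
    # Free Duffing oscillator: x'' + W0^2 x + BETA x^3 = 0, as a system.
    x, v = state
    return np.array([v, -(W0**2) * x - BETA * x**3])

def rk4_step(y, h):
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * h * k1)
    k3 = rhs(y + 0.5 * h * k2)
    k4 = rhs(y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def free_frequency(A, h=1e-4):
    """Frequency of the free oscillation released from rest at amplitude A.

    By symmetry, the time for x to fall from A to 0 is a quarter period."""
    y, t = np.array([A, 0.0]), 0.0
    while y[0] > 0.0:
        y_prev, t_prev = y, t
        y, t = rk4_step(y, h), t + h
    frac = y_prev[0] / (y_prev[0] - y[0])     # interpolate the zero crossing
    return np.pi / (2.0 * (t_prev + frac * h))

freqs = {A: free_frequency(A) for A in (0.2, 0.5, 1.0)}
```

The frequency rises with amplitude, in agreement with the small-amplitude perturbation estimate \(\omega\approx\omega_{0}(1+3\beta A^{2}/8\omega_{0}^{2})\); it is this continuous range of \(\omega(A)\) that makes the infinity of resonances (1-2.18) possible.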
For two coupled oscillators (e.g., two pendulums in Figure 1-15), we identify a measure of the amplitude of each oscillator \(J_{1}\), \(J_{2}\) and the phase or relative time in the cycle of oscillation \(\theta_{1}\), \(\theta_{2}\), such that \(\omega_{1}\;=\;\dot{\theta}_{1}\), \(\omega_{2}\;=\;\dot{\theta}_{2}\) are the frequencies. Then nonlinear resonance can occur when \[n\dot{\theta}_{1}\;=\;m\dot{\theta}_{2}\] (1-2.19) which can be satisfied by choosing the proper initial conditions \(J_{10}\), \(J_{20}\). Thus, the resonance condition can be rewritten as \[n\omega_{1}(J_{10})\;-\;m\omega_{2}(J_{20})\;=\;0\] (1-2.20) As in the linear case, resonance means that energy can be easily exchanged between two systems, which can lead to interesting and perhaps even chaotic dynamics. A classic experiment on this phenomenon is a compound pendulum with a \(2:1\) frequency ratio. Experimental models are described by Rott (1970) and in Appendix C. Figure 1-14: (\(a\)) Frequency spectrum for the free vibrations of the Duffing oscillator [Eq. (1-2.17)] without damping. \(X_{\rm max}\) represents the initial amplitude. (\(b\)) Frequency spectrum of a mass oscillating between two walls. \(V_{0}\) represents the initial velocity. In general, when there are three or more coupled nonlinear oscillators, each of which has some identifiable phase or angle variable \(\theta_{i}\) such that \(\dot{\theta}_{i}=\omega_{i}\) is the frequency, then nonlinear resonance can occur when the following relation holds: \[n\omega_{1}\ +\ m\omega_{2}\ +\ \cdots\ +\ p\omega_{N}\ =\ 0\] (1-2.21) where \(n\), \(m\), \(p\), and so on, are positive or negative integers. The consequences of nonlinear resonance in coupled oscillators are most profound and form the basis for understanding chaos in conservative or nondissipative dynamic systems.

### Torus Map

The motion of two coupled oscillators can sometimes be visualized as a particle moving on a toroidal surface as shown in Figure 1-12.
Figure 1-15: Schematic picture of the dynamics of two coupled oscillators. Suppose the particle motion on the torus is described by a position vector \(\mathbf{r}(t)\): \[\mathbf{r}\,=\,\mathbf{R}\,+\,\boldsymbol{\rho}\] Here the motion around the major axis, given by \(\mathbf{R}(t)\), occurs at frequency \(\omega_{1}\), whereas the motion around the minor axis, described by \(\boldsymbol{\rho}(t)\), has a frequency \(\omega_{2}\). The motion \(x(t)\) in Eq. (1-2.10) can then be thought of as a sum of two projections of this particle motion; that is, \(x(t)\,=\,X(t)\,+\,\xi(t)\). \(X(t)\) is the scalar projection of \(\mathbf{R}\) onto the fixed plane \(\Sigma\) shown in Figure 1-12, and \(\xi(t)\) is the projection of \(\boldsymbol{\rho}\) onto the horizontal plane. Another useful description of the dynamics is to look only at the points of penetration of the toroidal orbit through the fixed plane \(\Sigma\) as it slices through one side of the torus. This is known as a _synchronous point mapping_ (e.g., see Minorsky, 1962) or, in modern terms, a _Poincare map_, as shown in Figure 1-12. To obtain an analytical expression for this map, we define another state variable \(V(t)\) as simply the time derivative of the multifrequency motion \(x(t)\,=\,A_{1}\,\cos\,\omega_{1}t\,+\,A_{2}\cos\,\omega_{2}t\). Then one chooses a phase angle of the first oscillator (e.g., \(\omega_{1}t_{n}\,=\,2\pi n\)).
The phase plane dynamics are then described by two discrete time expressions (\(v_{n}\,=\,\dot{x}(t_{n})\)) \[\begin{split} x_{n\,+\,1}&=f(x_{n},\,v_{n})\\ v_{n\,+\,1}&=\,g(x_{n},\,v_{n})\end{split}\] (1-2.22) Finally, by rescaling the vertical axis and using polar coordinates, one obtains a first-order difference equation--that is, define \[\tan\,\varphi_{n}=\frac{v_{n}/\omega_{1}}{x_{n}}\] (1-2.23) and \[\varphi_{n\,+\,1}\,=\,\varphi_{n}\,+\,2\pi\,\frac{\omega_{2}}{\omega_{1}}\] (1-2.24) More generally, one often finds a map of the form \[\varphi_{n\,+\,1}\,=\,\varphi_{n}\,+\,F(\varphi_{n})\] (1-2.25) with \[F(\varphi_{n}\;+\;2\pi)\;=\;F(\varphi_{n})\] (1-2.26) This map is known as a _circle map_ or sometimes as a _twist map_. One can see that in the simplest case, when \(F=2\pi\omega_{2}/\omega_{1}\) and \(\omega_{2}/\omega_{1}\) is irrational, the succession of points on the orbit will trace out a circle in the \((x_{n}\,,\,v_{n}/\omega_{1})\) plane. On the other hand, if \(\omega_{1}/\omega_{2}=p/q\) (\(p\) and \(q\) integers with no common factor), then \(\varphi_{n+p}=\varphi_{n}+2\pi q\), so the orbit repeats after \(p\) iterations and visits precisely \(p\) points around the circle. This discussion may seem like a complicated way to view the motion of two oscillators, but this model of toroidal motion and the resulting circle map has become an important conceptual as well as practical analytical tool to analyze complex dynamics of coupled systems, as will be shown in a later chapter.

### Local Geometric Theory of Dynamics

Modern ideas about nonlinear dynamics are often presented in geometric terms or pictures. For example, the motion of an undamped oscillator, \(\ddot{x}\;+\;\omega_{0}^{2}x\,=\,0\), can be represented in the phase plane \((x,\,\dot{x})\) by an ellipse. In this picture, time is implicit and the time history runs clockwise around the ellipse. The size of the ellipse depends on the given initial conditions for \((x,\,\dot{x})\).
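This phase-plane picture is easy to verify numerically. The sketch below (our own RK4 integration of \(\ddot{x}+\omega_{0}^{2}x=0\), with the illustrative values \(\omega_{0}=2\) and initial conditions \((1,0)\)) confirms that the orbit stays on the ellipse \(\omega_{0}^{2}x^{2}+\dot{x}^{2}=\mathrm{const}\) fixed by the initial conditions, and that it is traversed clockwise:

```python
import numpy as np

W0 = 2.0   # illustrative natural frequency

def rk4_step(y, h):
    def f(s):
        x, v = s
        return np.array([v, -W0**2 * x])   # x'' + W0^2 x = 0 as a system
    k1 = f(y); k2 = f(y + 0.5*h*k1); k3 = f(y + 0.5*h*k2); k4 = f(y + h*k3)
    return y + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)

h, steps = 1e-3, 20000
traj = np.empty((steps, 2))
y = np.array([1.0, 0.0])        # the initial conditions set the ellipse size
for i in range(steps):
    traj[i] = y
    y = rk4_step(y, h)

# W0^2 x^2 + v^2 (twice the energy) is constant along the orbit, so the
# trajectory lies on the ellipse determined by the initial conditions.
E = W0**2 * traj[:, 0]**2 + traj[:, 1]**2
```

Starting from \((x,\dot{x})=(1,0)\), the velocity immediately goes negative, so the orbit does indeed run clockwise around the ellipse.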
More generally for nonlinear problems, one first finds the equilibrium points of the system and examines the motion around each equilibrium point. The local motion is characterized by the nature of the eigenvalues of the linearized system. Thus, if the dynamical model can be represented by a set of first-order differential equations \[\dot{\mathbf{x}}\;=\;\mathbf{f}(\mathbf{x})\] (1-2.27) where \(\mathbf{x}\) represents a vector whose components are the state variables, then the _equilibrium points_ are defined by \(\dot{\mathbf{x}}\;=\;0\), or \[\mathbf{f}(\mathbf{x}_{e})\;=\;0\] (1-2.28) For example, in the case of the harmonic oscillator, there is just one equilibrium point at the origin: \(\mathbf{x}\;=\;(x,\,v\equiv\dot{x})\), \(x_{e}\;=\;0\), \(v_{e}\;=\;0\). To determine the nature of the dynamics about \(\mathbf{x}\;=\;\mathbf{x}_{e}\), one expands the function \(\mathbf{f}(\mathbf{x})\) in a Taylor series about each equilibrium point \(\mathbf{x}_{e}\) and examines the dynamics of the linearized problem. To illustrate the method, consider the set of two first-order equations: \[\begin{split}\dot{x}&=f(x,y)\\ \dot{y}&=g(x,y)\end{split}\] (1-2.29) When time does not appear explicitly in the functions \(f(\;)\) and \(g(\;)\), the problem is called _autonomous_. The equilibrium points must satisfy two equations: \(f(x_{e},\,y_{e})=0\) and \(g(x_{e},\,y_{e})=0\). Introducing small perturbation variables about each equilibrium point, that is, \[x\,=\,x_{e}\,+\,\eta\quad\text{and}\quad y\,=\,y_{e}\,+\,\xi\] the linearized system can be written in the form \[\frac{d}{dt}\begin{Bmatrix}\eta\\ \xi\end{Bmatrix}=\begin{bmatrix}\dfrac{\partial f}{\partial x}&\dfrac{\partial f}{\partial y}\\ \dfrac{\partial g}{\partial x}&\dfrac{\partial g}{\partial y}\end{bmatrix}\begin{Bmatrix}\eta\\ \xi\end{Bmatrix}\] (1-2.30) where the derivatives are evaluated at the point \((x_{e},\,y_{e})\).
Some authors use the notation \(\nabla\mathbf{F}\) or \(D\mathbf{F}\), where \(\mathbf{F}=(f,\,g)\), to represent the matrix of partial derivatives in Eq. (1-2.30). The nature of the motion about each equilibrium point is determined by looking for eigensolutions \[\begin{Bmatrix}\eta\\ \xi\end{Bmatrix}=\begin{Bmatrix}\alpha\\ \beta\end{Bmatrix}e^{st}\] (1-2.31) where \(\alpha\) and \(\beta\) are constants. The motion is classified according to the nature of the two eigenvalues of \(D\mathbf{F}\) [i.e., whether \(s\) is real or complex and whether Real(\(s\)) \(>0\) or \(<0\)]. Sketches of trajectories in the phase plane for different eigenvalues are shown in Figure 1-16. For example, the _saddle point_ is obtained when both eigenvalues \(s\) are real, but \(s_{1}<0\) and \(s_{2}>0\). A _spiral_ occurs when \(s_{1}\) and \(s_{2}\) are complex conjugates. The _stability_ of the linearized system (1-2.30) depends on the sign of Real(\(s\)). When one of the real parts of \(s_{1}\) and \(s_{2}\) is positive, the motion about the equilibrium point is _unstable_. If the roots are not pure imaginary numbers, then theorems exist to show that the local motion of the linearized system is qualitatively similar to that of the original nonlinear system (1-2.29). Pure oscillatory motion in the linearized system (\(s\,=\,\pm i\omega\)) requires further analysis to establish the stability of the nonlinear system. These ideas for a second-order system can be generalized to higher-dimensional phase spaces (e.g., see Arnold, 1978 or Guckenheimer and Holmes, 1983).

##### Bifurcations

As parameters are changed in a dynamical system, the stability of the equilibrium points can change, as can the number of equilibrium points. The study of these changes in nonlinear problems as system parameters are varied is the subject of _bifurcation theory_. Values of these parameters at which the qualitative or topological nature of the motion changes are known as _critical_ or _bifurcation_ values.
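The eigenvalue classification of Figure 1-16 can be automated. The sketch below is a minimal illustration using the pendulum \(\dot{x}=y\), \(\dot{y}=-\sin x-cy\) (the damping value \(c=0.5\) is our own choice); it computes the eigenvalues \(s\) of \(D\mathbf{F}\) at an equilibrium and names the local portrait:

```python
import numpy as np

def classify(DF, tol=1e-9):
    """Name the local phase portrait from the eigenvalues s of DF (Fig. 1-16)."""
    s = np.linalg.eigvals(DF)
    if np.all(np.abs(s.imag) < tol):              # both eigenvalues real
        if s.real.min() < 0.0 < s.real.max():
            return "saddle"
        return "stable node" if s.real.max() < 0.0 else "unstable node"
    if np.all(np.abs(s.real) < tol):              # pure imaginary: Re(s) = 0
        return "center"
    return "stable spiral" if s.real.max() < 0.0 else "unstable spiral"

def pendulum_DF(xe, c=0.0):
    # Jacobian of x' = y, y' = -sin(x) - c*y at the equilibrium (xe, 0).
    return np.array([[0.0, 1.0],
                     [-np.cos(xe), -c]])

kinds = {
    "hanging, undamped":  classify(pendulum_DF(0.0)),          # s = +/- i
    "inverted, undamped": classify(pendulum_DF(np.pi)),        # s = +/- 1
    "hanging, damped":    classify(pendulum_DF(0.0, c=0.5)),   # Re(s) < 0
}
```

The undamped hanging position is a center (pure imaginary \(s\), so, as noted above, the nonlinear stability question remains open), the inverted position is a saddle, and a little damping turns the center into a stable spiral.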
As an example, consider the solutions to the undamped Duffing oscillator \[\ddot{x}\;+\;\alpha x\;+\;\beta x^{3}\;=\;0\] (1-2.32) One can first plot the equilibrium points as a function of \(\alpha\). As \(\alpha\) changes from positive to negative, one equilibrium point splits into three points. Dynamically, one center is transformed into a saddle point at the origin and two centers (Figure 1-17). This kind of bifurcation is known as a _pitchfork_. Physically, the force \(-(\alpha x\ +\ \beta x^{3})\) can be derived from a potential energy function. When \(\alpha\) becomes negative, a one-well potential changes into a double-well potential. This represents a qualitative change in the dynamics, and thus \(\alpha=0\) is a critical bifurcation value. Figure 1-16: Classical phase plane portraits near four different types of equilibrium points for a system of two time-independent differential equations. Another example of a bifurcation is the emergence of limit cycles in physical systems. In this case, as some control parameter is varied, a pair of complex conjugate eigenvalues \(s_{1,2}\,=\,\gamma\,\pm\,i\omega\) cross from the left half-plane (\(\gamma<0\), a stable spiral) into the right half-plane (\(\gamma>0\), an unstable spiral) and a periodic motion emerges, known as a _limit cycle_. This type of qualitative change in the dynamics of a system is known as a _Hopf bifurcation_ and is illustrated in Figure 1-18. Figure 1-17: Phase plane trajectories for an oscillator with a nonlinear restoring force [Duffing’s equation, Eq. (1-2.32)]: \((a)\) Hard spring problem; \(\alpha,\beta>0\). \((b)\) Soft spring problem; \(\alpha>0\), \(\beta<0\). \((c)\) Two-well potential; \(\alpha<0\), \(\beta>0\). The theory we have just described is called a _local_ analysis because it only tells what happens dynamically in the vicinity of each equilibrium point.
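The pitchfork of Eq. (1-2.32) can be traced with a few lines of code (a sketch; \(\beta=1\) is an illustrative value). The equilibria solve \(\alpha x+\beta x^{3}=0\), and the curvature \(V''(x)=\alpha+3\beta x^{2}\) of the potential distinguishes a center (\(V''>0\)) from a saddle (\(V''<0\)):

```python
import numpy as np

def equilibria(alpha, beta=1.0):
    """Equilibrium points of x'' + alpha*x + beta*x^3 = 0 with their type,
    judged from the potential curvature V''(x) = alpha + 3*beta*x^2."""
    pts = [0.0]
    if alpha / beta < 0.0:                 # pitchfork: two new equilibria
        xe = float(np.sqrt(-alpha / beta))
        pts += [xe, -xe]
    return {x: ("center" if alpha + 3.0 * beta * x**2 > 0.0 else "saddle")
            for x in pts}

before = equilibria(alpha=+1.0)   # one-well potential: a single center
after  = equilibria(alpha=-1.0)   # two-well potential: saddle + two centers
```

As \(\alpha\) passes through zero, the single center at the origin becomes a saddle and two new centers appear at \(x=\pm\sqrt{-\alpha/\beta}\); the critical bifurcation value \(\alpha=0\) shows up as the branching point of this computed diagram.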
The pièce de résistance in classical dynamical analysis is to piece together all the local pictures and describe a _global_ picture of how trajectories move between and among equilibrium points. Such analysis is tractable when bundles of different trajectories corresponding to different initial conditions move more or less together, as in a laminar fluid flow. Such is the case when the phase space has only _two_ dimensions. However, when there are three or more first-order equations, the bundles of trajectories can split apart and get tangled up into what we now call _chaotic motions_.

#### Strange Attractors

From this brief review, one can see that there are three classic types of dynamical motion:

1. Equilibrium
2. Periodic motion or a limit cycle
3. Quasiperiodic motion

These states are called _attractors_, because if some form of damping is present the transients decay and the system is "attracted" to one of the above three states. Figure 1-18: Bifurcation diagrams: (_a_) Pitchfork bifurcation for Duffing’s equation [Eq. (1-2.32)]—transition from one to two stable equilibrium positions. (_b_) Hopf bifurcation—transition from stable spiral to limit cycle oscillation. The purpose of this book is to describe another class of motions in nonlinear vibrations that is not one of the above classic attractors. This new class of motions is chaotic in the sense of not being predictable when there is a small uncertainty in the initial condition, and it is often associated with a state of motion called a _strange attractor_. The classic attractors are all associated with classic geometric objects in phase space: the equilibrium state with a point, the periodic motion or limit cycle with a closed curve, and the quasiperiodic motion with a surface in a three-dimensional phase space.
The "strange attractor," as we shall see in later chapters, is associated with a new geometric object (new relative to what is now taught in classical geometry) called a _fractal set_. In a three-dimensional phase space, the fractal set of a strange attractor looks like a collection of an infinite set of sheets or parallel surfaces, some of which are separated by distances that approach the infinitesimal. This new attractor in nonlinear dynamics requires new mathematical concepts and a language to describe it, as well as new experimental tools to record it and give it some quantitative measure. The relationship between bifurcations and chaos is discussed in a recent book by Thompson and Stewart (1986).

### 1.3 Maps and Flows

Mathematical models in dynamics generally take one of three forms: differential equations (or _flows_), difference equations (called _maps_), and _symbol dynamic equations_. The term _flow_ refers to a bundle of trajectories in phase space originating from many contiguous initial conditions. The continuous time history of a particle is the most familiar example of a flow to those in engineering vibrations. However, certain qualitative and quantitative information may be obtained about a system by studying the evolution of state variables at discrete times. In particular, in this book we shall discuss how to obtain difference evolution equations from continuous time systems through the use of the Poincare section. These Poincare maps can sometimes be used to distinguish between various qualitative states of motion such as periodic, quasiperiodic, or chaotic. In some problems not only is time restricted to discrete values, but knowledge of the state variables may be limited to a finite set of values or categories, such as red or blue, or zero or one. For example, in the double-well potential of Figure 1-2\(b\), one may be interested only in whether the particle is in the left or right well.
Thus, an orbit in time may consist of a sequence of symbols LRRLRLLLR.... A periodic orbit might be LRLR... or LLRLLR.... In the new era of nonlinear dynamics, all three types of models are used to describe the evolution of physical systems. [See Crutchfield and Packard (1982) or Wolfram (1986) for a discussion of symbol dynamics.] In a periodically forced vibratory system, a Poincare map may be obtained by stroboscopically measuring the dynamic variables at some particular phase of the forcing motion. In an \(n\)-state variable problem, one can obtain a Poincare section by measuring the \(n-1\) variables when the \(n\)th variable reaches some particular value or when the phase space trajectory crosses some arbitrary plane in phase space, as shown in Figure 1-19 (see also Chapters 2 and 5). If one has knowledge of the time history between two penetrations of this plane, one can relate the position at \(t_{n+1}\) to that at \(t_{n}\) through given functions. For example, for the case shown in Figure 1-19, \[\xi_{n\,+\,1}\,=\,f(\xi_{n},\,\eta_{n})\quad\mbox{and}\quad\eta_{n\,+\,1}\,=\,g(\xi_{n},\,\eta_{n})\] (1-3.1) Figure 1-19: Poincaré section: construction of a difference equation model (map) from a continuous time dynamical model. The mathematical study of such maps is similar to that for differential equations. One can find equilibrium or fixed points of the map, and one can classify these fixed points by the study of linearized maps about the fixed point. If \({\bf x}_{n+1}={\bf f}({\bf x}_{n})\) is a general map of, say, \(n\) variables represented by the vector \({\bf x}\), then a fixed point satisfies \[{\bf x}_{e}\,=\,{\bf f}({\bf x}_{e})\] (1-3.2) The iteration of a map is often written \({\bf f}({\bf f}({\bf x}))={\bf f}^{(2)}({\bf x})\).
Using this notation, an "\(m\)-cycle" or \(m\)-periodic orbit is a fixed point that repeats after \(m\) iterations of the map, that is, \[{\bf x}_{0}\,=\,{\bf f}^{(m)}({\bf x}_{0})\] (1-3.3) Implied in these ideas is the notion that periodic motions in continuous time show up as fixed points in the difference equations obtained from Poincare sections. Thus, the most generally accepted paradigm for the study of the transition from periodic to chaotic motions is the study of simple one-dimensional and two-dimensional maps. (See Chapter 3 for a discussion of maps.)

### Three Paradigms for Chaos

Perhaps the simplest example of a dynamic model that exhibits chaotic dynamics is the logistic equation or population growth model (e.g., see May, 1976): \[x_{n+1}\,=\,ax_{n}\,-\,bx_{n}^{2}\] (1-3.4) The first term on the right-hand side represents a growth or birth effect, whereas the nonlinear term accounts for the limits to growth, such as availability of energy or food. If the nonlinear term is neglected (\(b\,=\,0\)), the linear equation has an explicit solution: \[x_{n+1}\,=\,ax_{n};\qquad x_{n}\,=\,x_{0}a^{n}\] (1-3.5) This solution is stable for \(|a|<1\) and unstable for \(|a|>1\). In the latter case, the linear model predicts unbounded growth, which is unrealistic. The nonlinear model (1-3.4) is usually cast in a nondimensional form: \[x_{n+1}\,=\,\lambda x_{n}(1\,-\,x_{n})\] (1-3.6) This equation has at least one equilibrium point, \(x=0\). For \(\lambda>1\), two equilibrium points exist [i.e., solutions of the equation \(x=\lambda x(1-x)\)]. To determine the stability of a map \(x_{n+1}=f(x_{n})\), one looks at the value of the slope \(|f^{\prime}(x)|\) evaluated at the fixed point. The fixed point is unstable if \(|f^{\prime}|>1\). In the case of the logistic equation [Eq. (1-3.6)] when \(1<\lambda<3\), there are two fixed points, namely, \(x=0\) and \(x=(\lambda-1)/\lambda\); the origin is unstable and the other point is stable.
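These stability statements are easily checked. The following sketch (plain Python; \(\lambda=2.5\) is an illustrative value in the range \(1<\lambda<3\)) evaluates the slope criterion and iterates the map from a generic starting point:

```python
# Logistic map, Eq. (1-3.6): x_{n+1} = lam * x_n * (1 - x_n)
def f(x, lam):
    return lam * x * (1.0 - x)

def fprime(x, lam):
    return lam * (1.0 - 2.0 * x)

lam = 2.5
x_fix = (lam - 1.0) / lam            # nontrivial fixed point, here 0.6

# Slope test: |f'| at the fixed point is |2 - lam| = 0.5 < 1, so it is
# stable; at the origin |f'(0)| = lam = 2.5 > 1, so the origin is unstable.
slope_fix, slope_origin = abs(fprime(x_fix, lam)), abs(fprime(0.0, lam))

# Iteration from a generic start is attracted to the stable fixed point.
x = 0.1
for _ in range(200):
    x = f(x, lam)
```

Note that \(f'((\lambda-1)/\lambda)=2-\lambda\), so this fixed point loses stability exactly at \(\lambda=3\).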
For \(\lambda>3\), however, the magnitude of the slope at \(x=(\lambda-1)/\lambda\) becomes greater than 1 (\(f^{\prime}=2-\lambda\)) and both equilibrium points are unstable. For parameter values of \(\lambda\) between 3 and 4, this simple difference equation exhibits many multiple-period and chaotic motions. At \(\lambda=3\), the steady solution becomes unstable, but a two-cycle or double-period orbit becomes stable. This orbit is shown in Figure 1-20. The value of \(x_{n}\) repeats every two iterations. Figure 1-20: Possible solutions to the quadratic map [logistic equation (1-3.6)]. _Top_: Steady period-1 motion. _Middle_: Period-2 and period-4 motions. _Bottom_: Chaotic motions. For further increases of \(\lambda\), the period-2 orbit becomes unstable and a period-4 cycle emerges, only to bifurcate to a period-8 cycle for a higher value of \(\lambda\). This period-doubling process continues until \(\lambda\) approaches the value \(\lambda_{\infty}=3.56994\ldots\). Near this value, the sequence of period-doubling parameter values scales according to a precise law: \[\frac{\lambda_{n+1}-\lambda_{n}}{\lambda_{n}-\lambda_{n-1}}\!\to\!\frac{1}{\delta},\quad\delta=4.66920\,\ldots\] (1-3.7) The limit ratio \(\delta\) is called the _Feigenbaum number_, named after the physicist who discovered the properties of this map in 1978. [See Gleick (1987) for the story of this discovery.] Beyond \(\lambda_{\infty}\), chaotic iterations can occur; that is, the long-term behavior does not settle down to any simple periodic motion. There are also certain narrow windows \(\Delta\lambda\) for \(\lambda_{\infty}<\lambda<4\) for which periodic orbits exist. Periodic and chaotic orbits of the logistic map are shown in Figure 1-21 by plotting \(x_{n+1}\) versus \(x_{n}\). This map is not only useful as a paradigm for chaos; it has been shown that other maps \(x_{n+1}=f(x_{n})\), in which \(f(x)\) has a single smooth maximum, behave in a similar manner with the same scaling law (1-3.7).
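The period-doubling cascade can be observed directly by iterating the map and measuring the apparent period of the long-term orbit. The sketch below (plain Python; the parameter samples, transient length, and tolerance are our own choices) finds period 1 at \(\lambda=2.8\), period 2 at \(\lambda=3.2\), period 4 at \(\lambda=3.5\), and no short period in the chaotic regime at \(\lambda=3.9\):

```python
def logistic_period(lam, transients=2000, probe=256, tol=1e-9):
    """Apparent period of the long-term logistic orbit at parameter lam,
    or None if no short period is found (chaotic or long-period window)."""
    x = 0.5
    for _ in range(transients):          # discard the transient motion
        x = lam * x * (1.0 - x)
    x0 = x
    for n in range(1, probe + 1):
        x = lam * x * (1.0 - x)
        if abs(x - x0) < tol:            # orbit has returned to its start
            return n
    return None

periods = {lam: logistic_period(lam) for lam in (2.8, 3.2, 3.5, 3.9)}
```

Refining such parameter sweeps near the successive doublings yields the \(\lambda_{n}\) values that enter the Feigenbaum ratio (1-3.7).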
Thus, the phenomenon of period doubling or bifurcation parameter scaling has been called a _universal_ property for certain classes of one-dimensional difference equation models of dynamical processes. Figure 1-21: Graphical solution to a first-order difference equation. The example shown is the quadratic map (1-3.6). Period doubling and Feigenbaum scaling (1-3.7) have been observed in many physical experiments (see Chapter 4). This suggests that for many continuous time processes, the reduction to a difference equation through the use of the Poincare section has the properties of the quadratic map (1-3.4)--hence the importance of maps to the study of differential equations. [See Chapter 3 for a further discussion of the logistic equation (1-3.4).]

#### Henon and Horseshoe Maps

Of course, most physical systems require more than one state variable, and it is necessary to study higher-dimensional maps. One extension of the Feigenbaum problem (1-3.6) is a two-dimensional map proposed by Henon (1976), a French astronomer: \[x_{n+1} = 1\,-\,\alpha\,x_{n}^{2}\,+\,y_{n}\] \[y_{n+1} = \beta x_{n}\] (1-3.8) Note that if \(\beta=0\), we recover the quadratic map. When \(|\beta|<1\), the map contracts areas in the \(xy\) plane. It also stretches and bends areas in the phase plane, as illustrated in Figure 1-22. This stretching, contraction, and bending or folding of areas in phase space is analogous to the making of a horseshoe. Multiple iterations of such _horseshoe maps_ lead to complex orbits in phase space, loss of information about initial conditions, and chaotic behavior. Figure 1-22: Transformation of a rectangular collection of initial conditions under an iteration of the second-order set of difference equations called a _Henon map_ (1-3.8), showing the stretching, contraction, and folding which leads to chaotic behavior (\(\alpha=1.4\), \(\beta=0.3\)).
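A few lines of code reproduce the Henon attractor. The sketch below (our own iteration scheme, using the same parameter values \(\alpha=1.4\), \(\beta=0.3\)) discards a transient and collects points that settle onto the bounded attractor. Note that the Jacobian determinant of the map (1-3.8) is \(-\beta\) everywhere, so every iteration shrinks areas by the constant factor \(|\beta|=0.3\):

```python
import numpy as np

ALPHA, BETA = 1.4, 0.3

def henon(x, y):
    # One iteration of the Henon map, Eq. (1-3.8).
    return 1.0 - ALPHA * x * x + y, BETA * x

x, y = 0.0, 0.0
pts = []
for n in range(10000):
    x, y = henon(x, y)
    if n >= 100:          # discard the transient approach to the attractor
        pts.append((x, y))
pts = np.array(pts)

# The long-term orbit never escapes a small region of the (x, y) plane,
# yet it never settles onto a point or a closed curve.
x_extent = np.abs(pts[:, 0]).max()
y_extent = np.abs(pts[:, 1]).max()
```

Plotting `pts` reproduces the familiar banded attractor; magnifying any strip of it reveals finer and finer layered structure.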
An illustration of the ability of a simple map to produce complex motions is provided in Figure 1-23. In one iteration of the map, a rectangular area is stretched in the vertical direction, contracted in the horizontal direction, and folded or bent into a horseshoe and placed over the original area. Thus, points originally in the area get mapped back onto the area, except for some points near the bend in the horseshoe. If one follows a group of nearby points after many iterations of this map, the original neighboring cluster of points gets dispersed to all sectors of the rectangular area. This is tantamount to a loss of information as to where a point originally started. Also, the original area gets mapped into a finer and finer set of points, as shown in Figure 1-23. This structure has a fractal property that is characteristic of a chaotic attractor that has been labeled "strange." This fractal property of a strange attractor is illustrated in the Henon map, Figure 1-24. Blowups of small regions of the Henon attractor reveal finer and finer structure. This self-similar structure of chaotic attractors can often be revealed by taking Poincare maps of experimental chaotic oscillators (see Chapters 2 and 5). Figure 1-23: The horseshoe map showing how stretching, contraction, and folding leads to fractal-like properties after many iterations of the map. Figure 1-24: (_a_) The locus of points for a chaotic trajectory of the Henon map (\(\alpha=1.4,\,\beta=0.3\)). (_b_) Enlargement of the strange attractor showing finer fractal-like structure. The fractal property of self-similarity can be measured using the concept of fractal dimension, which is discussed in Chapter 7. It is believed by some mathematicians that horseshoe maps are fundamental to most chaotic differential and difference equation models of dynamic systems (e.g., see Guckenheimer and Holmes, 1983).
This idea is the centerpiece of a method developed to find a criterion for when chaotic vibrations are possible in a dynamical system and when predictability of the future time history becomes sensitive to initial conditions. This Melnikov method has been used successfully to develop criteria for chaos for certain problems in one-degree-of-freedom nonlinear oscillations (e.g., see Chapter 6).

### The Lorenz Attractor and Fluid Chaos

For many readers, the preceding discussion on maps and chaos may not be convincing as regards unpredictability in real physical systems. And were it not for the following example from fluid mechanics, the connection between maps, chaos, and differential equations of physical systems might still be buried in mathematics journals. In 1963, an atmospheric scientist named E. N. Lorenz of M.I.T. proposed a simple model for thermally induced fluid convection in the atmosphere.4 Fluid heated from below becomes lighter and rises, whereas heavier fluid falls under gravity. Such motions often produce convection rolls similar to the motion of fluid in a circular torus as shown in Figure 1-25. In Lorenz's mathematical model of convection, three state variables are used (\(x\), \(y\), \(z\)). The variable \(x\) is proportional to the amplitude of the fluid velocity circulation in the fluid ring, while \(y\) and \(z\) measure the distribution of temperature around the ring. The so-called Lorenz equations may be derived formally from the Navier-Stokes partial differential equations of fluid mechanics (1-1.3) (e.g., see Chapter 4). The nondimensional forms of Lorenz's equations are Footnote 4: Lorenz credits Saltzman (1962) with actually discovering nonperiodic solutions to the convection problem, in which he used a system of five first-order equations. Mathematicians, however, chose instead to study Lorenz’s simpler third-order set of equations (1-3.9). Thus flows the course of scientific destinies.
\[\begin{array}{l}\dot{x}\,=\,\sigma(y\,-\,x)\\ \dot{y}\,=\,\rho x\,-\,y\,-\,xz\\ \dot{z}\,=\,xy\,-\,\beta z\end{array}\] (1-3.9) The parameters \(\sigma\) and \(\rho\) are related to the Prandtl number and Rayleigh number, respectively, and the third parameter \(\beta\) is a geometric factor. Note that the only nonlinear terms are \(xz\) and \(xy\) in the second and third equations. For \(\sigma=10\) and \(\beta=8/3\) (a favorite set of parameters for experts in the field), there are three equilibria for \(\rho>1\), of which the one at the origin is an unstable saddle (Figure 1-26). When \(\rho>25\), the other two equilibria become unstable spirals, and a complex chaotic trajectory moves between all three equilibria, as shown in Figure 1-27. It was Lorenz's insistence in the years following 1963 that such motions were not artifacts of computer simulation but were inherent in the equations themselves that led mathematicians to study these equations further (e.g., see Sparrow, 1982). Since 1963, hundreds of papers have been written about these equations, and this example has become a classic model for chaotic dynamics. These equations are also similar to those that model the chaotic behavior of laser devices (e.g., see Haken, 1985). Systems of other third-order equations have since been found to exhibit chaotic behavior. For example, the forced motion of a nonlinear oscillator can be written in a form similar to that of (1-3.9). Figure 1-25: _Top_: Sketch of fluid streamlines in a convection cell for steady motions. _Bottom_: One-dimensional convection in a circular tube under gravity and thermal gradients. Newton's law for a particle under a force \(F(x,\,t)\) is written \[m\ddot{x}\,=\,F(x,\,t)\] (1-3.10) To put (1-3.10) into a form for phase space study, we write \(y=\dot{x}\). Furthermore, if the mass is periodically forced, one can reduce the second-order nonautonomous system (1-3.10) to an autonomous system of third-order equations.
Thus, we assume \[F(x,\,t) = m(f(x,y)\,+\,g(t))\] \[g(t\,+\,\tau) = g(t)\] By defining \(z=\omega t\) and \(\omega=2\pi/\tau\), the resulting equations become \[\dot{x} = y\] \[\dot{y} = f(x,y)\,+\,g(z)\] (1-3.11) \[\dot{z} = \omega\] A specific case that has strong chaotic behavior is the Duffing oscillator \(F=\,-(ax\,+\,bx^{3}\,+\,cy)\) (see Chapters 2 and 4).

Figure 1-26: Sketch of local motion near the three equilibria for the Lorenz equations [Eqs. (1-3.9)].

It is worth noting that for a two-dimensional phase space, solutions to autonomous systems cannot exhibit chaos because the solution curves of the "flow" cannot cross one another. However, in the forced oscillator or the three-dimensional phase space, these curves can become "tangled" and chaotic motions are possible.

#### Quantum Chaos

The focus of this book is unpredictability in classical Newtonian physics. But, what about the possibility of quantum chaos? We have all learned in elementary physics that as one approaches the microscopic scale, the motion of a particle must be described by a wave packet whose amplitude gives the probability of locating the particle. Quantum mechanics tells one about the motion of these wave or probability packets, but not about the precise motion of the particle. There is a transition region where both classical and quantum descriptions should give approximately the same answer. So, when a classical system exhibits chaotic dynamics near the quantum limit, the question naturally arises as to what would be the quantum description of this classical chaos. This should be a fundamental question in physics today. There are those who think it is (e.g., see Ford, 1988).

Figure 1-27: Trajectory of a chaotic solution to the Lorenz equations for thermofluid convection [Eqs. (1-3.9)] (numerical integration). 
But for others there is a belief that we still do not know if we have posed the question properly; that is, perhaps quantum chaos is a concept full of redundancy or is an oxymoron. After all, what does unpredictable unpredictability mean? Still, some experiments, both physical and computational, have been performed (e.g., see Pool, 1989 and Koch, 1990) in an attempt to settle the question of quantum chaos. At the time of this writing (1991), however, the subject is still in debate. The existence or nonexistence of quantum chaos will not be resolved in this book. The reader is referred to the publications of some of the notable participants in the debate (e.g., Jensen, 1989 and Ford, 1988). One gets the feeling, however, that in the spirit of T. Kuhn's theory of scientific revolutions, physicists are still looking for the right 'paradigm' for quantum chaos--that is, a kind of Lorenz quantum model. ## Closing Comments Dynamics is the oldest branch of physics. Yet 300 years after publication of Newton's _Principia_, new discoveries are still emerging. The ideas of Euler, Lagrange, Hamilton, and Poincare that followed, once conceived in the context of planetary mechanics, have now transcended all areas of physics. As the new science of dynamics gave birth to the calculus in the 17th century, so today modern nonlinear dynamics has ushered in new ideas of geometry and topology, such as fractals, which the 21st-century scientist must master to grasp the subject fully. The ideas of chaos go back in Western thought to the Greeks. But these ideas centered on the order in the world that emerged from a formless, chaotic, fluid world in prehistory. G. Mayer-Kress (1985) of Los Alamos National Laboratory has pointed out that the idea of chaos in Eastern thought, such as Taoism, was associated with patterns within patterns, eddies within eddies as occur in the flow of fluids (e.g., see the Japanese kimono design in Figure 1-28). 
The view that order emerged from an underlying formless chaos and that this order is recognized only by predictable periodic patterns was the predominant view of 20th-century dynamics until the last two decades. What is replacing this view is the concept of chaotic events resulting from orderly laws--not a formless chaos, but one in which there are underlying patterns, fractal structures, governed by a new mathematical view of our "orderly" world. Since the first edition of this book was published, new discoveries continue to appear, such as multifractals, spatial complexity, and hyperchaos. The range of applications continues to grow. And, while

Figure 1-28: Fractal-like pattern in a Japanese kimono design. (Courtesy of Mitsubishi Motor Corp.)
## Chapter 4 How to Identify Chaotic Vibrations

"_What will prove altogether remarkable is that some very simple schemes to produce erratic numbers behave identically to some of the erratic aspects of natural phenomena._"

Mitchell Feigenbaum, 1980

Theorists and experimentalists approach a dynamical problem from different sides: The former is given the equations and looks for solutions, whereas the latter is given the solution and is looking for the equations or mathematical model. In this chapter we present a set of diagnostic tests that can help identify chaotic oscillations and the models that describe them in physical systems. Although this chapter is written primarily for those not trained in the mathematical theory of dynamics, theoreticians may find it of interest to see how theoretical ideas about chaos are realized in the laboratory. In a later chapter (Chapter 6), we present some predictive criteria as well as more sophisticated diagnostic tests for chaos. This, however, requires some mathematical background, such as the theory of fractal sets (Chapter 7) and Lyapunov exponents (Chapter 6).

Engineers often have to diagnose the source of unwanted oscillations in physical systems. The ability to classify the nature of oscillations can provide a clue as to how to control them. For example, if the system is thought to be _linear_, large periodic oscillations may be traced to a _resonance_ effect. However, if the system is _nonlinear_, a _limit cycle_ may be the source of periodic vibration, which in turn may be traced to some _dynamic instability_ in the system. In order to identify nonperiodic or chaotic motions, the following checklist is provided:

1. Identify nonlinear elements in the system.
2. Check for sources of random input.
3. Observe the time history of the measured signal.
4. Look at the phase plane history.
5. Examine the Fourier spectrum of the signal.
6. Take a Poincare map or return map of the signal.
7. Vary system parameters (look for bifurcations and routes to chaos). 
In later chapters we discuss more advanced techniques. These include measuring two properties of the motion: fractal dimension and Lyapunov exponents. Also, probability density functions can be measured. In the following, we go through the above-cited checklist and describe the characteristics of chaotic vibrations. To focus the discussion, the vibration of the buckled beam (double-well potential problem) is used as an example to illustrate the characteristics of chaotic dynamics. A diagnosis of chaotic vibrations implies that one has a clear definition of such motions. However, as research uncovers more complexities in nonlinear dynamics, a rigorous definition seems to be limited to certain classes of mathematical problems. For the experimentalist, this presents a difficulty because his or her goal is to discover what mathematical model best fits the data. Thus at this stage of the subject, we will use a collection of diagnostic criteria as well as a variety of classes of chaotic motions (see Table 2-1). The experimentalist is encouraged to use two or more tests to obtain a consistent picture of the chaos. To help sort out the growing definitions and classes of chaotic motions, we list the most common attributes without mathematical formulas, but with the most successful diagnostic tools in parentheses. 
_Characteristics of Chaotic Vibrations_

- Sensitivity to changes in initial conditions [often measured by the Lyapunov exponent (Chapter 6) and fractal basin boundaries (Chapter 7)]
- Broad spectrum of the Fourier transform when motion is generated by a single frequency [measured by the fast Fourier transform (FFT) using modern electronic spectrum analyzers]
- Fractal properties of the motion in phase space which denote a strange attractor [measured by Poincare maps, fractal dimensions (Chapter 7)]
- Increasing complexity of regular motions as some experimental parameter is changed--for example, period doubling [often the Feigenbaum number can be measured (Chapters 1, 3, and 6)]
- Transient or intermittent chaotic motions; nonperiodic bursts of irregular motion (intermittency) or initially randomlike motion that eventually settles down into a regular motion [measurement techniques are few but include the average lifetime of the chaotic burst or transient as some parameter is varied; the scaling behavior might suggest the correct mathematical model (see Chapter 6)]

### Nonlinear System Elements

A chaotic system must have nonlinear elements or properties. A _linear system cannot exhibit chaotic vibrations_. 
In a linear system, periodic inputs produce periodic outputs of the same period after the transients have decayed (Figure 2-1). (Parametric linear systems are an exception.)

\begin{table} \begin{tabular}{p{341.4pt}} _Regular Motion—Predictable:_ Periodic oscillations, quasiperiodic motion; not sensitive to changes in parameters or initial conditions \\ _Regular Motion—Unpredictable:_ Multiple regular attractors (e.g., more than one periodic motion possible); long-time motion sensitive to initial conditions \\ _Transient Chaos:_ Motions that look chaotic and appear to have characteristics of a strange attractor (as evidenced by Poincaré maps) but that eventually settle into a regular motion \\ _Intermittent Chaos:_ Periods of regular motion with transient bursts of chaotic motion; duration of regular motion interval unpredictable \\ _Limited or Narrow-Band Chaos:_ Chaotic motions whose phase space orbits remain close to some periodic or regular motion orbit; spectra often show narrow or limited broadening of certain frequency spikes \\ _Large-Scale or Broad-Band Chaos—Weak:_ Dynamics can be described by orbits in a low-dimensional phase space \(3\leq n<7\) (1–3 modes in mechanical systems), and usually one can measure fractal dimensions \(<7\); chaotic orbits traverse a broad region of phase space; spectra show broad range of frequencies especially below the driving frequency (if one is present) \\ _Large-Scale Chaos—Strong:_ Dynamics must be described in a high-dimensional phase space; large number of essential degrees of freedom present, spatial as well as temporal complexity; difficult to measure reliable fractal dimension; dynamical theories currently unavailable \\ \end{tabular} \end{table} Table 2-1: Classes of Motion in Nonlinear Deterministic Systems

In mechanical systems, nonlinear effects include the following:

1. Nonlinear elastic or spring elements
2. Nonlinear damping, such as stick-slip friction
3. Backlash, play, or bilinear springs
4. Most systems with fluids
5. Nonlinear boundary conditions

Nonlinear elastic effects can reside in either material properties or geometric effects. For example, the relation between stress and strain in rubber is nonlinear. However, while the stress-strain law for steel is usually linear below the yield stress, large displacement bending of a beam, plate, or shell may exhibit nonlinear relations between the applied forces or moments and displacements. Such effects in mechanics due to large displacements or rotations are usually called _geometric nonlinearities_.

In electromagnetic systems, nonlinear properties arise from the following:

1. Nonlinear resistive, inductive, or capacitive elements
2. Hysteretic properties of ferromagnetic materials
3. Nonlinear active elements such as vacuum tubes, transistors, and lasers
4. Moving media problems: for example, \(\mathbf{v}\times\mathbf{B}\) voltages, where \(\mathbf{v}\) is a velocity and \(\mathbf{B}\) is the magnetic field
5. Electromagnetic forces: for example, \(\mathbf{F}=\mathbf{J}\times\mathbf{B}\), where \(\mathbf{J}\) is current and \(\mathbf{F}=\mathbf{M}\cdot\nabla\mathbf{B}\), where \(\mathbf{M}\) is the magnetic dipole strength

Figure 2-1: Sketch of the input–output possibilities for linear and nonlinear systems.

Common electric circuit elements such as diodes and transistors are examples of nonlinear devices. Magnetic materials such as iron, nickel, or ferrites exhibit nonlinear constitutive relations between the magnetizing field and the magnetic flux density. Some investigators have created negative resistors with bilinear current-voltage relations by using operational amplifiers and diodes (see Chapter 5).

The task of identifying nonlinearities in the system may not be easy: first, because we are often trained to think in terms of linear systems; and second, because the major components of the system could be linear while the nonlinearity arises in a subtle way. 
For example, the individual elements of a truss structure could be linearly elastic, but the way they are fastened together could have play and nonlinear friction present; that is, the nonlinearities could reside in the boundary conditions. In the example of the buckled beam, identification of the nonlinear element is easy (Figure 2-2). Any mechanical device that has more than one static equilibrium position either has play, backlash, or nonlinear stiffness. In the case of the beam buckled by end loads (Figure 2-2\(a\)), the geometric nonlinear stiffness is the culprit. If the beam is buckled by magnetic forces (Figure 2-2\(b\)), the nonlinear magnetic forces are the sources of chaos in the system. ### Random Inputs In classical linear random vibration theory, one usually treats a model of a system with random variations in the applied forces or model parameters of the form \[[m_{0}+m_{1}(t)]\ddot{x}+[c_{0}+c_{1}(t)]\dot{x}+[k_{0}+k_{1}(t)]x=f_{0}(t)+f_ {1}(t)\] where \(m_{1}(t)\), \(c_{1}(t)\), \(k_{1}(t)\), and \(f_{1}(t)\) are assumed to be random time functions with given statistical measures such as the mean or standard deviation. One then attempts to calculate the statistical properties of \(x(t)\) in terms of the given statistical measures of the random inputs. In chaotic vibrations there are no _assumed_ random inputs; that is, the applied forces or excitation are assumed to be deterministic. By definition, chaotic vibrations arise from deterministic physical systems or nonrandom differential or difference equations. Although noise is always present in experiments, even in numerical simulations, it is presumed that large nonperiodic signals do not arise from very small input noise. Thus, a large output-signal-to-input-noise ratio is required if one is to attribute nonperiodic response to a deterministic system behavior. 
### Observation of Time History

Usually, the first clue that the experiment has chaotic vibrations is the observation of the signal amplitude with time on a chart recorder or oscilloscope (Figure 2-3). The motion is observed to exhibit no visible pattern or periodicity. This test is _not foolproof_, however, because a motion could have a long-period behavior that is not easily detected. Also, some nonlinear systems exhibit quasiperiodic vibrations where two or more incommensurate periodic signals are present.

Figure 2-2: Nonlinear, multiple equilibrium state problems: (_a_) buckling of a thin elastic beam column due to axial end loads and (_b_) buckling of an elastic beam due to nonlinear magnetic body forces.

### Phase Plane

Consider a one-degree-of-freedom mass with displacement \(x(t)\) and velocity \(v(t)\). Its equation of motion, from Newton's law, can be written in the form

\[\begin{split}\dot{x}&=v\\ \dot{v}&=\frac{1}{m}f(x,v,t)\end{split}\] (2-1)

where \(m\) is the mass and \(f\) is the applied force. The phase plane is defined as the set of points (\(x\), \(v\)). (Some authors use the momentum \(mv\) instead of \(v\).) When the motion is periodic (Figure 2-4\(a\)), the phase plane orbit traces out a closed curve which is best observed on an analog or digital oscilloscope. For example, the forced oscillations of a linear spring-mass-dashpot system exhibit an elliptically shaped orbit. However, a forced nonlinear system with a cubic spring element may show an orbit which crosses itself but is still closed. This can represent a subharmonic oscillation as shown in Figure 2-4\(a\). Systems for which the force does not depend explicitly on time--for example, \(f=f(x,v)\) in Eq. (2-1)--are called _autonomous_. For autonomous nonlinear systems (no harmonic inputs), periodic motions are referred to as _limit cycles_ and also show up as closed orbits in the phase plane (see Chapter 1). Chaotic motions, on the other hand, have orbits which never close or repeat. 
Thus, the trajectory of the orbits in the phase plane will tend to fill up a section of the phase space as in Figure 2-4\(b\). Although this wandering of orbits is a clue to chaos, continuous phase plane plots provide very little information, and one must use a modified phase plane technique called _Poincare maps_ (see below). Often, one has only a single measured variable \(v(t)\). If \(v(t)\) is a velocity variable, then one can integrate \(v(t)\) to get \(x(t)\) so that the phase plane consists of points [\(\int_{0}^{t}v\ dt\), \(v(t)\)].

Figure 2-3: Time history of chaotic motions of a buckled elastic beam showing jumps between the two stable equilibrium states.

Figure 2-4: (_a_) Period-2 motion for forced motion of a buckled beam in the phase plane (bending strain versus strain rate). (_b_) Chaotic trajectory for forced motion of a buckled beam.

### Pseudo-Phase-Space Method

Another technique that has been used when only one variable is measured is the time-delayed pseudo-phase-plane method (also called the _embedding space method_). For a one-degree-of-freedom system with measurement \(x(t)\), one plots the signal versus itself but delayed or advanced by a fixed time constant: [\(x(t)\), \(x(t\ +\ T)\)]. The idea here is that the signal \(x(t\ +\ T)\) is related to \(\dot{x}(t)\) and should have properties similar to those in the classic phase plane [\(x(t)\), \(\dot{x}(t)\)]. In Figure 2-5 we show a pseudo-phase-plane orbit for a harmonic oscillator for different time delays. If the motion is chaotic, the trajectories do not close (Figure 2-6). The choice of \(T\) is not crucial, except to avoid a natural period of the system. When the state variables number more than two (e.g., position, velocity, time, or forcing phase), higher-dimensional pseudo-phase-space trajectories can be constructed using multiple delays. For example, a three-dimensional space can be constructed using a vector with components (\(x(t)\), \(x(t\ +\ T)\), \(x(t\ +\ 2T)\)). 
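The delay construction just described is easy to sketch in code. In this minimal version (the signal and the delay are illustrative, not from the text), the two- and three-dimensional delay vectors are built from a single sampled record:

```python
import math

def delay_embed(signal, lag, dim=2):
    # build vectors (x(t), x(t + T), ..., x(t + (dim-1) T)) from a
    # single sampled record, where T = lag sampling intervals
    n = len(signal) - (dim - 1) * lag
    return [tuple(signal[i + k * lag] for k in range(dim)) for i in range(n)]

# illustrative record: a sampled harmonic signal
record = [math.sin(0.1 * i) for i in range(400)]
plane = delay_embed(record, lag=15)          # points (x(t), x(t + T))
space = delay_embed(record, lag=15, dim=3)   # points (x(t), x(t + T), x(t + 2T))
```

For this periodic record the points of `plane` trace a closed loop, the delay-space analog of the elliptical phase-plane orbit; a chaotic record would instead fill out a region, as in Figure 2-6.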
More will be said about this technique in Chapter 5.

### Fourier Spectrum and Autocorrelation

One of the clues to detecting chaotic vibration is the appearance of a broad spectrum of frequencies in the output when the input is a single-frequency harmonic motion or is dc (Figure 2-7). This characteristic of chaos becomes more important if the system is of low dimension (e.g., one to three degrees of freedom). Often, if there is an initial dominant frequency component \(\omega_{0}\), a precursor to chaos is the appearance of subharmonics \(\omega_{0}/n\) in the frequency spectrum (see below). In addition to \(\omega_{0}/n\), harmonics of this frequency will also be present of the form \(m\omega_{0}/n\) (\(m\), \(n\ =\ 1\), \(2\), \(3\), \(...\)).

An illustration of this test is shown in Figure 2-7. Figure 2-7\(a\) shows a single spike in both the driving force and the response of a buckled beam. Figure 2-7\(b\) shows a broad spectrum, indicating possible chaotic motions. One must be cautioned against concluding that multiharmonic outputs imply chaotic vibrations, because the system in question might have many hidden degrees of freedom of which the observer is unaware. In large-degree-of-freedom systems, the use of the Fourier spectrum may not be of much help in detecting chaotic vibrations unless one can observe changes in the spectrum as one varies some parameter such as driving amplitude or frequency.

Figure 2-5: (\(a\)) Phase-plane trajectory of the Duffing oscillator (1-2.4); \(\alpha=-1\), \(\beta=1\). (\(b\)) Pseudo-phase-plane trajectory for the periodic oscillator in (\(a\)) for two delay times.

Figure 2-6: (_a_) Phase-plane trajectory for chaotic motion of a particle in a two-well potential (buckled beam) under periodic forcing (1-2.4); \(\alpha=-1\), \(\beta=1\). (_b_) Pseudo-phase-plane trajectory of chaotic motion in (_a_). 
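The broad-spectrum test can be sketched numerically. This minimal version computes a power spectrum directly from the definition of the discrete Fourier transform (in the laboratory an FFT analyzer does this in real time; the signal and record length here are illustrative):

```python
import cmath, math

def power_spectrum(x):
    # direct discrete Fourier transform; |X_k|^2 / N for each frequency
    # bin up to the Nyquist frequency (an FFT would be used in practice)
    n = len(x)
    return [abs(sum(x[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n))) ** 2 / n
            for k in range(n // 2)]

# a single-frequency input: exactly 16 cycles in a 256-point record
sine = [math.sin(2.0 * math.pi * 16.0 * j / 256.0) for j in range(256)]
spec = power_spectrum(sine)
```

The periodic record concentrates its power in the single bin at its driving frequency; a chaotic record run through the same routine spreads power over a broad band of bins, which is the signature sought in Figure 2-7\(b\).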
Another useful measure of the predictability of the motion is the autocorrelation function

\[A(\tau)=\frac{1}{2T}\int_{-T}^{T}\!\!f(t)f(t+\tau)\,dt\]

where \(T\) is very large compared to the dominant periods in the motion. The autocorrelation of a periodic signal produces a periodic function \(A(\tau)\) as shown in Figure 2-8\(a\). But \(A(\tau)\) for a chaotic or random signal shows \(A(\tau)\to 0\) for \(\tau>\tau_{c}\), where \(\tau_{c}\) is some characteristic time (Figure 2-8\(b\)). Modern signal processing electronics can calculate \(A(\tau)\) as well as the Fourier transform in real time as the data are gathered. Hence, both tools are useful for experimental bifurcation studies because one can look for qualitative changes in \(A(\tau)\) as some parameter is varied. The characteristic time \(\tau_{c}\) is a measure of the time over which the motion can be predicted in the future, and is believed by some researchers to be related to the Lyapunov exponent (see Chapter 6).

Figure 2-7: (\(a\)) Frequency spectrum of a buckled elastic beam for low-amplitude excitation—linear periodic response. (\(b\)) Frequency spectrum of a buckled elastic beam for larger excitation—broad-band response of the beam due to chaotic vibration.

Figure 2-8: (\(a\)) Autocorrelation function for periodic motions. (\(b\)) Autocorrelation function for a chaotic signal.

### Poincare Maps and Return Maps

In the mathematical study of dynamical systems, a map refers to a time-sampled sequence of data \(\{x(t_{1}),\,x(t_{2}),\,\ldots,\,x(t_{n}),\,\ldots,\,x(t_{N})\}\) with the notation \(x_{n}=x(t_{n})\). A simple deterministic map is one in which the value of \(x_{n+1}\) can be determined from the value of \(x_{n}\). This is often written in the form (see Chapter 3)

\[x_{n+1}\,=\,f(x_{n})\] (2-2)

This can be recognized as a _difference equation_. The idea of a map can be generalized to more than one variable. 
Thus, \(x_{n}\) could represent a vector with \(M\) components \(x_{n}=(Y1_{n},\,Y2_{n},\,\ldots,\,YM_{n})\), and Eq. (2-2) could represent a system of \(M\) equations. For example, suppose we consider the motion of a particle as displayed in the phase plane (\(x(t),\dot{x}(t)\)). However, if instead of looking at the motion continuously, we look only at the dynamics at discrete times, then the motion will appear as a sequence of dots in the phase plane (Figures 1-19 and 2-9). If \(x_{n}\equiv x(t_{n})\) and \(y_{n}\equiv\dot{x}(t_{n})\), this sequence of points in the phase plane represents a _two-dimensional_ map:

\[\begin{split} x_{n\,+\,1}&=f(x_{n},y_{n})\\ y_{n\,+\,1}&=\,g(x_{n},y_{n})\end{split}\] (2-3)

When the sampling times \(t_{n}\) are chosen according to certain rules, to be discussed below, this map is called a _Poincare map_.

Figure 2-9: (_a_) Phase-plane Poincaré map showing a period-3 subharmonic motion of a periodically forced buckled beam. (_b_) Chaotic motion near a period-3 subharmonic.

#### Poincare Maps for Forced Vibration Systems

When there is a driving motion of period \(T\), a natural sampling rule for a Poincare map is to choose \(t_{n}=nT\,+\,\tau_{0}\). This allows one to distinguish between periodic motions and nonperiodic motions. For example, if the sampled harmonic motion shown in Figure 2-4\(a\) is synchronized with its period, its "map" in the phase plane will be two points. If the output, however, were a subharmonic of period 3, the Poincare map would consist of a set of three points as shown in Figure 2-9\(a\). Another nonchaotic Poincare map is shown in Figure 2-10, where the motion consists of two _incommensurate_ frequencies

\[x(t)\,=\,C_{1}\sin(\omega_{1}t\,+\,d_{1})\,+\,C_{2}\sin(\omega_{2}t\,+\,d_{2})\] (2-4)

where \(\omega_{1}/\omega_{2}\) is an irrational number.

Figure 2-10: Phase-plane Poincaré map showing a quasiperiodic motion of a periodically forced two-degree-of-freedom beam in a two-well magnetic potential. 
If one samples at a period corresponding to either frequency, the map in the phase plane will become a continuous closed figure or orbit. This motion is sometimes called _almost-periodic_ or _quasiperiodic_ motion or "motion on a torus" and is not considered to be chaotic (see also Figure 1-12).

Finally, if the Poincare map does not consist of either a finite set of points (Figure 2-9\(a\)) or a closed orbit (Figure 2-10), the motion may be chaotic (Figure 2-11). Here we must distinguish between damped and undamped systems. In undamped or lightly damped systems, the Poincare map of a chaotic motion often appears as a cloud of unorganized points in the phase plane (Figure 2-11\(a\)). Such motions are sometimes called _stochastic_ (e.g., see Lichtenberg and Lieberman, 1983). In damped systems the Poincare map will sometimes appear as an infinite set of highly organized points arranged in what appear to be parallel lines as shown in Figure 2-11\(b,c\). In numerical simulations, one can enlarge a portion of the Poincare map (see Figure 2-12) and observe further structure. If this structured set of points continues to exist after several enlargements, then the map is said to be _fractal-like_, and one says the motion behaves as a _strange attractor_. This embedding of structure within structure is often referred to as a _Cantor set_ (see Chapter 7). The appearance of fractal-like or Cantor-set-like patterns in the Poincare map of a vibration history is a strong indicator of chaotic motions. The classes of patterns of Poincare maps are listed in Table 2-2.

Figure 2-11: (\(a\)) Poincaré map of chaotic motion of a buckled beam with low damping. (\(b\), \(c\)) Poincaré map of chaotic motion of a buckled beam for higher damping showing fractal-like structure of a strange attractor. [From Moon (1980a) with permission of ASME, copyright 1980.] 
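The stroboscopic sampling rule \(t_{n}=nT\,+\,\tau_{0}\) can be sketched numerically. This illustrative example (the oscillator, parameter values, and step sizes are assumptions, not from the text) integrates a damped linear oscillator, whose steady-state response is periodic, so its Poincare points should collapse onto a single fixed point once the transient decays:

```python
import math

OMEGA = 1.3                       # forcing frequency (illustrative)
PERIOD = 2.0 * math.pi / OMEGA    # forcing period T

def deriv(t, x, v):
    # damped linear oscillator x'' + 0.2 x' + x = cos(omega t)
    return v, math.cos(OMEGA * t) - 0.2 * v - x

def rk4_step(t, x, v, h):
    # classical fourth-order Runge-Kutta step for the pair (x, v)
    k1x, k1v = deriv(t, x, v)
    k2x, k2v = deriv(t + 0.5 * h, x + 0.5 * h * k1x, v + 0.5 * h * k1v)
    k3x, k3v = deriv(t + 0.5 * h, x + 0.5 * h * k2x, v + 0.5 * h * k2v)
    k4x, k4v = deriv(t + h, x + h * k3x, v + h * k3v)
    return (x + h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0,
            v + h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0)

def poincare_map(n_periods=60, steps=200):
    # sample (x, v) once per forcing period: t_n = n T (tau_0 = 0)
    t, x, v = 0.0, 0.5, 0.0
    h = PERIOD / steps
    points = []
    for _ in range(n_periods):
        for _ in range(steps):
            x, v = rk4_step(t, x, v, h)
            t += h
        points.append((x, v))
    return points
```

Replacing `deriv` with a nonlinear force law such as the two-well Duffing oscillator can instead yield the clouds or fractal-like point sets discussed above; the sampling rule itself is unchanged.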
### Poincare Maps in Autonomous Systems

Steady-state vibrations can also be generated without periodic or random inputs if the motion originates from a dynamic instability such as wind-induced flutter in an elastic structure (Figure 2-13) or a temperature-gradient-induced convective motion in a fluid or gas (e.g., Benard convection, Figure 1-25). One is then led to ask how to choose the sampling times in a Poincare map. Here the discussion gets a little abstract. Consider the lowest-order chaotic system governed by three first-order differential equations (e.g., the Lorenz equations of Chapter 1). In an electromechanical system the variables \(x(t)\), \(y(t)\), and \(z(t)\) could represent displacement, velocity, and control force as in a feedback-controlled system. We then imagine the motion as a trajectory in a three-dimensional phase space (Figure 2-14). A Poincare map can be defined by constructing a two-dimensional oriented surface in this space and looking at the points \((x_{n},y_{n},z_{n})\) where the trajectory pierces this surface. For example, we can choose a plane \(n_{1}x\,+\,n_{2}y\,+\,n_{3}z=c\) with normal vector \(\mathbf{n}\equiv(n_{1},\,n_{2},\,n_{3})\). As a special case, choose points where \(x=0\). Then the Poincare map consists of points which pierce this plane with the same sense; that is, if \(\mathbf{s}(t)\) represents a unit vector along the trajectory, \(\mathbf{s}(t_{n})\cdot\mathbf{n}\) must always have the same sign.

Figure 2-12: Poincaré map of chaotic vibration of a forced nonlinear oscillation showing self-similar structure at finer and finer scales.

Figure 2-13: Examples of self-excited vibrations: (\(a\)) fluid flow over an elastic plate and (\(b\)) gas flow over a liquid interface.

This definition of the Poincare map actually includes the case when the system is periodically forced. Consider, for example, a forced nonlinear oscillator with equations of motion 
\[\dot{x} = y\] (2-5) \[\dot{y} = F(x,y)\,+f_{0}\cos(\omega t\,+\,\phi_{0})\] (2-6)

Then this system can be made to look like an autonomous one by defining

\[z\,=\,\omega t\,+\,\phi_{0}\] (2-7)

and

\[\dot{x} = y\] (2-8) \[\dot{y} = F(x,y)\,+f_{0}\cos z\] (2-9) \[\dot{z} = \omega\] (2-10)

Thus, a natural sampling time is chosen when \(z=\text{constant}\). This system can be thought of as a cylindrical phase space where the values of \(z\) are restricted: \(0\leq z\leq 2\pi\). A picture of several Poincare maps is then given as in Figure 2-15 for different values of \(z\) (see also Chapter 5, Figure 5-7).

Figure 2-14: Sketch of time evolution trajectories of a third-order system of equations and a typical Poincaré plane.

#### Reduction of Dynamics to One-Dimensional Maps

In Chapter 1 we saw that simple one-dimensional maps or difference equations of the form \(x_{n+1}=f(x_{n})\) can exhibit period-doubling bifurcations and chaos when the function \(f(x)\) has at least one maximum (or minimum), as in Figure 1-21. Period-doubling phenomena have been observed in so many different complex physical systems (fluids, lasers, \(p\)-\(n\) electronic junctions) that in many cases the dynamics may sometimes be modeled as a one-dimensional map. This is especially possible in systems with significant dissipation. To check this possibility, one samples some dynamic variable using a Poincare section as discussed above; that is, \(x_{n}=x(t=t_{n})\). Then one plots each \(x_{n}\) against its successor value \(x_{n+1}\). This is sometimes called a _return map_. Two criteria must be met to declare the system chaotic. First, the points \(x_{n+1}\) versus \(x_{n}\) must appear to be clustered in some apparent functional relation; and second, this function \(f(x)\) must be noninvertible--such as when it has a maximum or a minimum. 
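As an illustration of these two criteria, the sampling-and-pairing step can be sketched in code. Here, for illustration only, the "measured" samples are generated by the quadratic map itself (parameter values assumed), so the pairs \((x_{n},\,x_{n+1})\) fall exactly on a single-humped curve:

```python
def quadratic_map(lam, x):
    # the quadratic (logistic) map, standing in here for measured data
    return lam * x * (1.0 - x)

def return_pairs(x0, lam, n):
    # pair each sampled value with its successor: (x_n, x_{n+1})
    pairs, x = [], x0
    for _ in range(n):
        x_next = quadratic_map(lam, x)
        pairs.append((x, x_next))
        x = x_next
    return pairs

# a chaotic parameter value; the pairs trace out a parabola with a
# maximum at x = 1/2, i.e. a noninvertible return map
pairs = return_pairs(0.31, 3.9, 500)
```

With real laboratory data, the pairs would scatter about such a curve rather than lie on it exactly; the test is whether a single-humped functional relation emerges at all.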
If this is the case, one then attempts to fit a polynomial function to the data and uses this mapping to do numerical experiments or analysis along the lines of the quadratic map (Chapters 1 and 3). Examples of this technique may be found in Shaw (1984) in the problem of a dripping faucet and in Rollins and Hunt (1982) in an experiment with a varactor diode in an electrical circuit (see also Chapter 3 for a discussion of these problems). This technique is discussed further in Chapter 5. An example using experimental data from the vibration of a levitated magnet over a high-temperature superconductor is shown in Figure 2-16 (see Moon, 1988).

Figure 2-15: Sketch of a strange attractor for a forced nonlinear oscillator: product space of the Poincaré plane and the phase of the forcing excitation.

### Bifurcations: Routes to Chaos

#### Periodic to Chaotic Motions Through Parameter Changes

In conducting any of these tests for chaotic vibrations, one should try to vary one or more of the control parameters in the system. For example, in the case of the buckled structure (Figure 2-2), one can vary either the forcing amplitude or the forcing frequency, or in the case of the nonlinear circuit, one can vary the resistance. The reason for this procedure is to see if the system has steady or periodic behavior for some range of the parameter space. In this way, one can have confidence that the system is in fact deterministic and that there are no hidden inputs or sources of truly random noise.

In changing a parameter, one looks for a pattern of periodic responses. One characteristic precursor to chaotic motion is the appearance of subharmonic periodic vibrations. There may in fact be many patterns of prechaos behavior. Several models of prechaotic behavior have been observed in both numerical and physical experiments (see Gollub and Benson, 1980 or Kadanoff, 1983). 
#### Period-Doubling Route to Chaos

Period doubling in physical systems has been observed experimentally in all branches of classical physics, chemistry, and biology as well as in many technical devices. Although this route to chaos is ubiquitous in science, it is by no means the only path to unpredictable dynamics. In the period-doubling phenomenon, one starts with a system with a fundamental periodic motion. Then, as some experimental parameter, say \(\lambda\), is varied, the motion undergoes a bifurcation or change to a periodic motion with twice the period of the original oscillation. As \(\lambda\) is changed further, the system bifurcates to periodic motions with twice the period of the previous oscillation. One outstanding feature of this scenario is that the critical values of \(\lambda\) at which successive period doublings occur obey the following scaling rule (see also Chapters 1 and 3):

\[\frac{\lambda_{n}-\lambda_{n-1}}{\lambda_{n+1}-\lambda_{n}}\to\delta=4.6692016\ldots\] (2-11)

as \(n\to\infty\). (\(\delta\) is called the _Feigenbaum number_, named after the physicist who discovered this scaling behavior.) In practice, this ratio approaches \(\delta\) by the third or fourth bifurcation. The bifurcations accumulate at a critical value of the parameter, after which the motion becomes chaotic. This phenomenon has been observed in a number of physical systems as well as in numerical simulations. The most elementary mathematical equation that illustrates this behavior is a first-order difference equation (see Chapters 1 and 3):

\[x_{n+1}\ =\ \lambda x_{n}(1\ -\ x_{n})\] (2-12)

As the system parameter is changed beyond the critical value, chaotic motions exist in a band of parameter values.

Figure 2-16: (_a_) Sketch of vibrating magnet near a high-temperature superconducting material. (_b_) Return map based on the amplitude signal from the strain gage. [From Moon (1988) with permission of North-Holland Publishing Company.]
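The scaling rule (2-11) can be checked directly for the map (2-12). In the short sketch below, \(\lambda_{1}=3\) and \(\lambda_{2}=1+\sqrt{6}\) are exact bifurcation values, while the remaining thresholds are standard numerical values quoted in the literature (an assumption of this sketch, not derived here):

```python
import math

# Parameter values at which the n-th period doubling of (2-12) occurs:
# lam[0] and lam[1] are exact; the rest are standard literature values.
lam = [3.0, 1.0 + math.sqrt(6.0), 3.544090, 3.564407, 3.568759]

delta = 4.6692016
ratios = [(lam[n] - lam[n - 1]) / (lam[n + 1] - lam[n])
          for n in range(1, len(lam) - 1)]
# ratios -> approximately 4.75, 4.66, 4.67: already close to delta
# by the third or fourth bifurcation, as claimed above.
```

Each successive ratio lies closer to \(\delta\) than the one before it, illustrating the rapid convergence mentioned in the text.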
However, these chaotic bands may be of finite width; that is, as the parameter is varied further, periodic windows may develop. Periodic motions in this regime may again undergo period-doubling bifurcations, again leading to chaotic motions (see Section 6.3). The period-doubling model for the route to chaos is an elegant, aesthetic model and has been described in many popular articles. However, while many physical systems exhibit properties similar to those of (2-12), many other systems do not. Nevertheless, when chaotic vibrations are suspected in a system, it is worthwhile checking to see if period doubling is present.

#### Bifurcation Diagrams

A widely used technique for examining the prechaotic or postchaotic changes in a dynamical system under parameter variations is the bifurcation diagram (an example is shown in Figure 2-17). Here some measure of the motion (e.g., maximum amplitude) is plotted as a function of a system parameter such as forcing amplitude or damping constant. If the data are sampled using a Poincaré map, it is very easy to observe period doubling and subharmonic bifurcations, as shown in the experimental data for a nonlinear circuit from a paper by Bryant and Jeffries (1984_a_,_b_) at the University of California, Berkeley. However, when the bifurcation diagram loses continuity, it may mean either quasiperiodic motion or chaotic motion, and further tests are required to classify the dynamics.

#### Quasiperiodic Route to Chaos

Although period doubling is the most celebrated scenario for chaotic vibration, there are several other schemes that have been studied and observed. In one scenario, proposed by Newhouse et al. (1978), a system undergoes successive dynamic instabilities on the way to chaos. For example, suppose a system is initially in a steady state and becomes dynamically unstable after a change in some parameter (e.g., flutter). As the motion grows, nonlinearities come into effect and the motion becomes a limit cycle.
Such transitions are called _Hopf bifurcations_ in mathematics (e.g., see Abraham and Shaw, 1983). If after further parameter changes the system undergoes two more Hopf bifurcations so that three simultaneous coupled limit cycles are present, chaotic motions become possible. Thus, the precursor to such chaotic motion is the presence of two simultaneous periodic oscillations. When the frequencies of these oscillations, \(\omega_{1}\) and \(\omega_{2}\), are not commensurate, the observed motion itself is not periodic but is said to be _quasiperiodic_ [see Eq. (2-4)].

Figure 2-17: Experimental bifurcation diagram for a periodically forced nonlinear circuit with a \(p\)–\(n\) junction; periodically sampled current versus drive amplitude voltage. [From Van Buskirk and Jeffries (1985) with permission of The American Physical Society, copyright 1985.]

As discussed above, the Poincaré map of a quasiperiodic motion is a closed curve in the phase plane (Figure 2-10). Such motions are imagined to take place on the surface of a torus where the Poincaré map represents a plane which cuts the torus (see Figure 2-18). If \(\omega_{1}\) and \(\omega_{2}\) are incommensurate, the trajectories fill the surface of the torus. If \(\omega_{1}/\omega_{2}\) is a rational number, the trajectory on the torus will eventually close, although it might perform many orbits in both angular directions of the torus before closing. In this case the Poincaré map will become a finite set of points generally arranged in a circle. Chaotic motions in such systems are often characterized by the breakup of the quasiperiodic torus structure as the system parameter is varied (Figure 2-19). Evidence for the three-frequency transition to chaos has been observed in flow between rotating cylinders (Taylor-Couette flow) where vortices form with changes in the rotation speed. Three Fourier spectra from one such experiment are shown in Figure 2-20. In the top figure, one periodic motion appears to be present.
In the middle figure, two major motions are evident. In the bottom figure, we see the signature of an increase in broad-band noise, which is characteristic of chaotic behavior. It should be noted that for some physical systems, one may observe all three patterns of prechaotic oscillations and many more depending on the parameters of the problem. The benefit of identifying a particular prechaos pattern of motion with one of these now "classic" models is that a body of mathematical work on each exists which may offer better understanding of the chaotic physical phenomenon under study.

Figure 2-19: (_a_) Poincaré section of a quasiperiodic motion in Rayleigh–Benard thermal convection with a frequency ratio close to \(\omega_{1}/\omega_{2}=2.99\). (_b_) Breakup of the torus surface prior to the onset of chaos. [From Bergé (1982).]

Figure 2-20: Evidence for the three-frequency transition to chaos in the flow between rotating cylinders (Taylor–Couette flow); the rotation difference increases from top to bottom. [From Swinney and Gollub (1978).]

Figure 2-21: Sketch of the time history for intermittent-type chaos.

### Quasiperiodicity and Mode-Locking

One phenomenon that sometimes appears in searching for patterns of dynamic behavior in periodically driven systems is _mode-locking_. This behavior typically occurs in physical systems with natural limit cycle generators, such as negative resistance circuits, unstable control systems, aeroelastic oscillators, biochemical and chemical oscillators, and thermofluid convection, as in a Rayleigh–Benard cell. Suppose, for example, such a system exhibits a natural periodic limit cycle of frequency \(\omega_{1}\) and is then perturbed by an external periodic force of frequency \(\omega_{2}\). Then one can ask, at what frequency will the combined system oscillate? A very nice review of the phenomena of mode-locking and quasiperiodicity is given in Glazier and Libchaber (1988).
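The difference between commensurate and incommensurate frequency pairs can be seen in a small numerical sketch (the amplitudes, base frequency, and sample counts below are arbitrary illustrative choices): sampling a two-frequency signal once per period of the first frequency leaves only a finite set of Poincaré points when \(\omega_{2}/\omega_{1}\) is rational, but traces out a closed curve of points when the ratio is irrational.

```python
import math

def section_points(ratio, n=400, decimals=6):
    """Sample x(t) = cos(w1*t) + 0.5*cos(w2*t), with w2 = ratio*w1, and
    its derivative once per period of the first frequency,
    t_k = 2*pi*k/w1.  Returns the distinct (x, xdot) Poincare points."""
    w1 = 2.0
    w2 = ratio * w1
    pts = set()
    for k in range(n):
        t = 2.0 * math.pi * k / w1
        x = math.cos(w1 * t) + 0.5 * math.cos(w2 * t)
        xdot = -w1 * math.sin(w1 * t) - 0.5 * w2 * math.sin(w2 * t)
        pts.add((round(x, decimals), round(xdot, decimals)))
    return pts

locked = section_points(2.0 / 3.0)                    # rational ratio
quasi = section_points((math.sqrt(5.0) - 1.0) / 2.0)  # irrational ratio
print(len(locked), len(quasi))   # 3 distinct points versus 400
```

The irrational-ratio points all lie on one ellipse-shaped closed curve, the discrete analog of the torus cross section in Figure 2-18.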
Early observations of mode-locking or frequency-locking go back to the Dutch physicist Christiaan Huygens (1629-1695), who observed how two pendulum clocks attached to a common structure become synchronized. Glazier and Libchaber (1988) give many modern references of experimental observations of mode-locking in physical and biological systems including chemistry, solid-state physics, fluid mechanics, and biology. In certain problems, mode-locking can be observed fairly easily. One example is shown in Figure 2-22, which shows experimental data of a biological oscillator excited by a periodic electrical stimulus (Guevara et al., 1990). Without the stimulus, the oscillator, which is a collection of cells from a chick heart, will produce a periodic train of electrical pulses called _action potentials_. When the periodic external stimulus is applied in very short time durations, one can observe the two types of signals in the output as shown in Figure 2-22. In a mode-locked condition, for every \(N\) stimulus pulses there are \(M\) action potentials. Shown in Figure 2-22 are ratios \(N\!:\!M\) of \(1\!:\!1\), \(2\!:\!1\), and \(2\!:\!3\). If the frequency of the stimulus is changed in a small way, the ratio \(N\!:\!M\) is preserved--this is the mode-locking. If the change in the control frequency is sufficiently large, then the motion may become quasiperiodic or may become locked in another \(N\!:\!M\) ratio. Thus, there is a _finite width_ of control frequency, \(\Delta f_{2}\), over which the \(N\!:\!M\) ratio is fixed. This width, \(\Delta\omega_{N:M}\), then depends on the strength or amplitude of the control stimulus.

Figure 2-22: Phase-locked oscillations between a periodic electrical stimulus (upper spikes on traces) and self-oscillations of a group of chick heart cells. (_Top_) \(N\!:\!M=1\!:\!1\). (_Middle_) \(N\!:\!M=2\!:\!1\). (_Bottom_) \(N\!:\!M=2\!:\!3\). [From Guevara et al. (1990) with permission of Harcourt Brace Jovanovich, Inc.]
Plotted in the plane of control amplitude and frequency, each fixed \(N\!:\!M\) mode-locking regime looks like a wedge-shaped region as shown in Figure 2-23. These mode-locking regimes are called _Arnold tongues_ in honor of the Soviet dynamicist who provided the mathematical theory. The data in Figure 2-23 are from an experiment involving thermofluid convection in mercury in a small box with external excitation provided by a magnetic body force. One can see that the width of the tongues grows with the strength of the periodic stimulus. At a certain amplitude these tongues overlap, creating hysteresis and the possibility of chaotic dynamics. Within each tongue one can also have period doubling in which the mode-locked ratio goes to \(2N\!:\!2M\), \(4N\!:\!4M\), and so on.

Figure 2-23: Experimental Arnold tongues in the driving amplitude–frequency plane for thermal convection in mercury (Rayleigh–Benard motion) driven periodically by passing electric current through the mercury in the presence of a small magnetic field. The insets show the relative widths near the golden mean \(\sigma_{G}\) and the silver mean \(\sigma_{S}\). [From Glazier and Libchaber (1988) © 1988 IEEE.]

The phenomena of mode-locking and quasiperiodicity can often be modeled by a one-dimensional map called the _circle map_ (1-2.25) (see also Chapter 3):

\[\theta_{n+1}\;=\;\theta_{n}\;+\;\Omega\;+\;\kappa\;\sin\;\theta_{n}\pmod{2\pi}\]

where the angular variable \(\theta\) is periodic in \(2\pi\) radians. This model comes from a geometric view of the driven pendulum as the motion of a particle on a torus, as discussed in Chapter 1 and shown in Figure 2-18.

### Transient Chaos

Sometimes chaotic vibrations appear for some parameter changes but eventually settle into a periodic or quasiperiodic motion after a short time. According to Grebogi et al. (1983b), such transient chaos is a consequence of a _crisis_, or the sudden disappearance of sustained chaotic dynamics.
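A minimal numerical sketch of such a chaotic transient uses the quadratic map (2-12) just past the parameter value at which its chaotic attractor is destroyed (the value 4.05 and the grid of initial conditions are illustrative choices): orbits wander chaotically inside the unit interval for a while and then suddenly escape.

```python
def escape_time(lam, x0, max_iter=100000):
    """Iterations of x -> lam*x*(1 - x) before the orbit leaves [0, 1]
    (after which it runs off to -infinity); None if it never leaves."""
    x = x0
    for n in range(max_iter):
        if not 0.0 <= x <= 1.0:
            return n
        x = lam * x * (1.0 - x)
    return None

# Just past lam = 4 the chaotic set is no longer attracting: every
# orbit below eventually escapes, but the lifetime varies irregularly
# with the initial condition -- the hallmark of transient chaos.
times = [escape_time(4.05, (i + 0.5) / 200.0) for i in range(200)]
print(min(times), max(times))
```

Closer to the crisis value the average lifetime grows sharply, which is why a run that still "looks chaotic" may not yet have revealed its long-time behavior.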
Thus, experiments and numerical simulations should be allowed to run for a long time after one thinks the system is in chaos, even if the Poincaré map seems to be mapping out a fractal structure characteristic of strange attractors. Transient problems are important in technical dynamics. New methods of characterizing transient chaotic behavior and unpredictability have recently received attention; see, e.g., the work of Tél (1990).

### Conservative Chaos

Although much of the new excitement about nonlinear dynamics has focused on chaotic dynamics in dissipative systems, chaotic behavior in nondissipative or so-called conservative systems had been known for some time. In fact, the search for solutions to the equations of celestial mechanics in the late 19th century led mathematicians like Poincaré to speculate that many dynamic problems were sensitive to initial conditions and hence were unpredictable in the details of the motions of orbiting bodies. The study of chaotic dynamics in energy-conserving systems, while not the principal focus of this book, has received much attention in the literature and is sometimes found under the heading of "Hamiltonian dynamics," which refers to the methods of Hamilton (and also Jacobi) that are used to solve nonlinear problems in multi-degree-of-freedom nondissipative systems [e.g., see Chapter 1; also see the excellent monographs by Lichtenberg and Lieberman (1983) and by Rasband (1990)]. Examples of conservative systems in the physical world include orbital problems in celestial mechanics and the behavior of particles in electromagnetic fields. Hence, much of the work in this field has been done by those interested in plasma physics, astronomy, and astrophysics (e.g., see Sagdeev et al., 1988, and Zaslavsky et al., 1991).
Although most earth-bound dynamics problems have some energy loss, some, like structural systems or microwave cavities, have very little damping, and over a finite period of time they can behave like a conservative or Hamiltonian system. An example might be the vibration of an orbiting space structure. Also, conservative system dynamics provides a limiting case for small-damping dynamic analysis. Thus, while we do not attempt to present a rigorous or lengthy summary of Hamiltonian dynamics, it is useful to discuss the general features of these problems. Typically, energy-conserving systems can exhibit the same types of bounded vibratory motion as lossy systems, including periodic, subharmonic, quasiperiodic, and chaotic motions. One of the main differences, however, between vibrations in lossy and lossless problems is that chaotic orbits in lossy systems exhibit a fractal structure in the phase space whereas chaotic orbits in lossless systems do not. Chaotic orbits in conservative systems tend to visit all parts of a subspace of the phase space uniformly; that is, they exhibit a uniform probability density over restricted regions in the phase space. Thus, lossless systems exhibit Poincaré maps different from those of lossy problems. However, the use of Lyapunov exponents as a measure of nearby orbit divergence is still valid. An example of a system with no dissipation is the ball bouncing on an elastic table where the table is moving and the impact is assumed to be lossless or elastic. Details of this problem are discussed in Chapter 3.

### Lyapunov Exponents and Fractal Dimensions

The tests for chaotic vibrations described in this chapter are mainly qualitative and involve some judgment and experience on the part of the investigator. Quantitative tests for chaos are available and have been used with some success. Two of the most widely used criteria are the Lyapunov exponent (see Chapter 6) and the fractal dimension (see Chapter 7).
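As a preview, the Lyapunov exponent of the one-dimensional map (2-12) can be estimated by averaging \(\log_{2}|f'(x_{n})|\) along an orbit, a standard formula for 1-D maps (the parameter values, orbit lengths, and the underflow guard below are illustrative assumptions):

```python
import math

def lyapunov_bits(lam, x0=0.2, n=200000, n_skip=1000):
    """Average of log2|f'(x_n)| along an orbit of f(x) = lam*x*(1-x):
    the divergence rate of nearby orbits in bits per iteration."""
    x = x0
    for _ in range(n_skip):                      # discard transient
        x = lam * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        slope = abs(lam * (1.0 - 2.0 * x))       # |f'(x)|
        total += math.log2(max(slope, 1e-300))   # guard against log2(0)
        x = lam * x * (1.0 - x)
    return total / n

print(lyapunov_bits(4.0))   # about +1 bit/iteration: chaotic
print(lyapunov_bits(3.2))   # negative: periodic, not chaotic
```

A positive result means nearby orbits separate exponentially; a negative one means they converge onto a periodic attractor.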
In summary, these two indicators are currently interpreted as follows:

1. Positive Lyapunov exponents imply _chaotic dynamics_.
2. A fractal dimension of the orbit in phase space implies the existence of a _strange attractor_.

The Lyapunov exponent test can be used for both dissipative and nondissipative (conservative) systems, whereas the fractal dimension test only makes sense for dissipative systems. The Lyapunov exponent test measures the sensitivity of the system to changes in initial conditions. Conceptually, one imagines a small ball of initial conditions in phase space and looks at its deformation into an ellipsoid under the dynamics of the system. If \(d\) is the maximum length of the ellipsoid and \(d_{0}\) is the initial size of the initial condition sphere, the Lyapunov exponent \(\lambda\) is interpreted through the equation

\[d\,=\,d_{0}2^{\lambda(t-t_{0})}\]

One measurement, however, is not sufficient, and the calculation must be averaged over different regions of phase space. This average can be represented by

\[\lambda\,=\,\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}\frac{1}{(t_{i}\,-\,t_{0i})}\log_{2}\frac{d_{i}}{d_{0i}}\]

A more detailed discussion is given in Chapter 6 along with references. The fractal dimension is related to the discussion of the horseshoe map in Chapter 1. There we saw that in a chaotic dynamic system, regions of phase space are stretched, contracted, folded, and remapped onto the original space. This remapping for dissipative systems leaves gaps in the phase space. This means that orbits tend to fill up less than an integer subspace in phase space. The fractal dimension is a measure of the extent to which orbits fill a certain subspace, and a noninteger dimension is a hallmark of a _strange attractor_. There are many definitions of fractal dimension, but the most basic one is derived from the notion of counting the number of spheres \(N\) of size \(\varepsilon\) needed to cover the orbit in phase space.
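The counting can be done with boxes on a grid, a common stand-in for spheres. As a sanity check on a set whose dimension is known in advance, points densely covering a closed curve should give \(d\approx 1\) (the sample curve, point count, and box sizes below are illustrative choices):

```python
import math

def box_count(points, eps):
    """Number of eps-sized grid boxes containing at least one point."""
    return len({(math.floor(x / eps), math.floor(y / eps))
                for (x, y) in points})

# Test set: points densely covering a closed curve (a unit circle),
# which should have box-counting dimension close to 1.
pts = [(math.cos(2.0 * math.pi * k / 100000),
        math.sin(2.0 * math.pi * k / 100000)) for k in range(100000)]

# Slope of log N(eps) versus log(1/eps) between two box sizes:
eps1, eps2 = 1.0 / 64.0, 1.0 / 256.0
d = (math.log(box_count(pts, eps2) / box_count(pts, eps1))
     / math.log(eps1 / eps2))
print(round(d, 2))   # close to 1.0
```

For a strange attractor the same slope estimate comes out noninteger, typically evaluated over several box sizes and fitted by least squares.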
Basically, \(N(\varepsilon)\) depends on the subspace of the orbit. If it is a periodic or limit cycle orbit, then \(N(\varepsilon)\approx\varepsilon^{-1}\). When the motion lies on a strange attractor, \(N(\varepsilon)\approx\varepsilon^{-d}\) or \[d\,=\,\lim_{\begin{subarray}{c}N\to\pm\\ \varepsilon\to 0\end{subarray}}\frac{\log N}{\log(1/\varepsilon)}\] Further discussion is given in Chapter 7. Although both quantitative tests can be automated using computer control, experience and judgment are still required to provide a conclusive assessment as to whether the motion is _chaotic_ or a _strange attractor_. Finally, almost all physical examples of _strange attractors_ have been found to be chaotic; that is, noninteger \(d\) implies \(\lambda>0\). However, a few mathematical models and physical problems have been studied where one does not imply the other. ### Strange-Nonchaotic Motions As more research is done in the field of modern nonlinear dynamics, discoveries are made of new categories of motions. We have seen the list grow from periodic, subharmonic, quasiperiodic, intermittent, chaotic, and hyperchaotic to spatially and temporally chaotic dynamics. However, nearly a decade ago J. Yorke and coworkers at the University of Maryland suggested that a new type of motion was possible, namely, strange nonchaotic motions--that is, motions which were geometrically fractal in the phase space, but whose Lyapunov exponents were _not_ positive. At the time, almost all the dynamic models with physical relevance and dissipation exhibited at least one positive Lyapunov when the attractor looked fractal. And to some extent the terms _chaotic_ and _strange attractor_ were often used interchangeably. The original example of Grebogi, Ott et al., in 1984, of a strange-nonchaotic attractor, however, looked to many to be a singular case, not relevant to physical problems. 
However, through a series of papers this group has amassed convincing evidence for this type of dynamics, which is especially relevant to systems which exhibit multiple sources of oscillation (either forced or autonomous) and exhibit quasiperiodic motion [e.g., see Grebogi et al. (1984), Ding et al. (1989a,b), and Ditto et al. (1990)]. We will not discuss the theory of such motions in great depth, but we will summarize one of the experimental examples for which strange-nonchaotic motions are thought to occur. Strange-nonchaotic attractors are difficult to diagnose experimentally because reliable methods for calculating Lyapunov exponents are not readily available. However, another tool that has been used is the scaling properties of the Fourier spectrum of the time series (Ditto et al., 1990). Define \(|S(\omega)|\) as the Fourier transform of the signal sampled at one of the forcing frequencies. Then the spectral distribution function \(N(s)\) is defined as the number of peaks in \(|S(\omega)|\) with amplitude greater than \(s\). For two-frequency quasiperiodic attractors, \(N\sim\ln\,s\). For three-frequency quasiperiodic attractors, \(N\sim[\ln\,s]^{2}\), and for strange-nonchaotic motions, \(N\sim s^{\,-a}\), \(1<a<2\). The experiment described in Ditto et al. (1990) consists of a thin elastica clamped at the base and initially buckled under gravity. The material used was an amorphous magnetostrictive ribbon called MET-GLAS which exhibits large changes in the effective Young's modulus in a magnetic field. The ribbon was placed in a vertical magnetic field which had two frequency components

\[{\bf B}\ =\ B_{1}\cos\,\omega_{1}t\ +\ B_{2}\cos\,\omega_{2}t\]

where \(B_{1}\), \(B_{2}\sim 0.5\)-\(0.9\) Oe. The data recorded the change in curvature of the ribbon near the clamped end. Evidence for strangeness was obtained by taking a Poincaré section triggered on one of the driving frequencies.
The two-frequency quasiperiodic motion exhibited a closed circular figure. However, the surface of section of the so-called strange-nonchaotic motion showed a fractal-like pattern. The spectral distribution function (Figure 2-24) showed the theoretical scaling \(N\sim s^{\,-a}\) for strange-nonchaotic motions with \(a\ =\ 1.25\).

Figure 2-24: Spectral distribution function \(N(s)\) for a buckled magnetoelastic cantilever beam driven by a vertical magnetic field \(H=H_{1}\cos\,\omega_{1}t\ +\ H_{2}\cos\,\omega_{2}t\) with \(\omega_{2}=\gamma\omega_{1}\), \(\gamma=(\sqrt{5}-1)/2\). The upper curve is for strange-nonchaotic motion (\(H_{1}=0.71\), \(H_{2}=0.80\)), and the lower curve is for quasiperiodic motion (\(H_{1}=0.71\), \(H_{2}=0.53\)). [From Ditto et al. (1990).]

It is clear that further research will be needed to better define these peculiar motions. But there is now evidence that the new world of nonlinear dynamics has a growing list of new species of complex motion. For a numerical study of strange-nonchaotic motion in a Van der Pol oscillator, the reader should see the paper by Kapitaniak et al. (1990).

## Problems

**2-1**: Use the definition of a linear operator to show that the equation for the motion of a damped pendulum \(\ddot{q}\,+\,c\dot{q}\,+\,b\,\sin\,q=0\) is _not_ linear.

**2-2**: Consider a ball bouncing between two walls (neglect gravity) for which one wall has a small periodic motion. Show that the dynamics is _not_ governed by a linear operator.

**2-3**: Sketch the power spectral density of

**(a)**: \((\cos\,\omega t)^{3}\)

**(b)**: \(\left(\cos\frac{\omega t}{2}\right)^{4}\)

**2-4**: Consider the output of a nonlinear oscillator to have a primary frequency component plus a subharmonic:

\[x(t)\,=\,A\,\cos\frac{\omega}{2}\,t\,+\,B\,\cos(\omega t\,+\,\phi_{0})\]

If one defines a second state variable \(\dot{x}=y\), then when will the orbit as plotted in the phase plane (\(x\), \(y\)) show a double loop?
Will this be any different if the signal had primary and harmonic components (i.e., \(\omega\), \(2\omega\))?

**2-5**: Suppose that the output of a dynamical system has two frequency components, that is,

\[x(t)\,=\,A\,\cos\,\omega_{1}t\,\,+\,B\,\cos(\omega_{2}t\,\,+\,\phi)\]

**(a)**: If one takes a Poincaré map on the phase \(\omega_{1}t\), show that the map (\(x\), \(y\,=\,\dot{x}\)) is an ellipse when \(\omega_{1}/\omega_{2}\) is irrational.
## Chapter 3 Models for Chaos; Maps and Flows

_All the richness in the natural world is not a consequence of complex physical law, but arises from the repeated application of simple laws._ L. P. Kadanoff1

Footnote 1: L. P. Kadanoff is a physicist at the University of Chicago. Quote taken from _Physics Today,_ March 1991, page 9.

### 3.1 Introduction

To those of us educated in the physical sciences, the infinitesimal differential calculus was the first abstract mathematical tool that one struggled with on the road to understanding mathematical physics. Later, we learned to model the world with the calculus of differential equations. In this view, the physical world was reduced to sets of differential equations: the Navier-Stokes equation of fluid mechanics; the equations of elasticity for solids and structures such as beams, plates, and shells; Maxwell's equations for electromagnetics; and the heat equation for thermal problems. The solutions of these differential equations are drawn in phase space as continuous orbits. A bundle of such trajectories, corresponding to different initial conditions, generates orbits that look like the flow of a fluid--hence the modern term, _flows_, to describe the dynamics of continuous time systems. Thus it comes as some surprise to many in the physical sciences that some dynamical phenomena can be exactly represented by finite-time _difference_ equations, or _maps_ as they are now called. However, for students of the biological and social sciences, the dynamic laws are more often posed as relationships between events at discrete time intervals, and the use of difference equations is more natural (e.g., see May, 1976, 1987). After all, the events of birth and death or the publication of unemployment statistics occur at specific times. Today, however, the study of modern dynamics in the physical sciences requires a working knowledge of iterated maps, especially to understand the basic nature of chaotic behavior.
As illustrated in Chapter 2, the Poincaré section of the dynamics of a physical system described by three first-order differential equations naturally leads to a discrete time map on the plane, as illustrated in Figure 3-1. Thus, we begin with the study of two coupled nonlinear difference equations or second-order maps.

Figure 3-1: Sketch showing the relation between a continuous time orbit in a 3-D phase space and a 2-D discrete time point mapping.

#### The Geometry of Mappings: Maps on the Plane

The dynamics of maps can be described both geometrically and algebraically. We will first take a look at the qualitative properties of maps and then look at more specific examples for more quantitative information. By a mapping we mean a transformation of a contiguous set of points from one set of positions to another. The dynamic evolution of a particular initial point under repeated application of this transformation is called an _iterated map_ or simply a _map_. We begin with a discussion of maps on the plane, or two-dimensional (2-D) maps, which are written as a set of two coupled difference equations. For most of the maps discussed in this book we assume that they can be written in explicit form,2 that is,

\[\begin{array}{l}x_{n\,+\,1}=F(x_{n},y_{n})\\ y_{n\,+\,1}=G(x_{n},y_{n})\end{array}\] (3-1.1)

We also assume that each point \((x_{n},y_{n})\) has a unique iterate and that each iterate \((x_{n\,+\,1},y_{n\,+\,1})\) has a unique antecedent.
This implies that an inverse mapping can be found:

\[\begin{array}{l}x_{n}=F^{-1}(x_{n\,+\,1},y_{n\,+\,1})\\ y_{n}=G^{-1}(x_{n\,+\,1},y_{n\,+\,1})\end{array}\] (3-1.2)

For example, in the case of the most general form of a quadratic, polynomial, area-preserving mapping of the plane (see Henon, 1969), one can find \(F^{-1}\), \(G^{-1}\) quite easily; that is, suppose

\[\begin{array}{l}x_{n\,+\,1}=F(x_{n},y_{n})=x_{n}\cos\alpha-(y_{n}-x_{n}^{2})\sin\alpha\\ y_{n\,+\,1}=G(x_{n},y_{n})=x_{n}\sin\alpha+(y_{n}-x_{n}^{2})\cos\alpha\end{array}\] (3-1.3)

Then it can be shown that the inverse is given by

\[\begin{array}{l}x_{n}=x_{n\,+\,1}\cos\alpha+y_{n\,+\,1}\sin\alpha\\ y_{n}=-x_{n\,+\,1}\sin\alpha+y_{n\,+\,1}\cos\alpha+(x_{n\,+\,1}\cos\alpha+y_{n\,+\,1}\sin\alpha)^{2}\end{array}\]

Area-preserving mappings arise in the dynamics of conservative or so-called Hamiltonian systems. When the functions \(F\), \(G\) are continuous and differentiable, the change of a small differential area under one iteration of the mapping is measured by the Jacobian, defined by the determinant

\[J=\left|\begin{array}{cc}\frac{\partial F}{\partial x}&\frac{\partial F}{\partial y}\\ \frac{\partial G}{\partial x}&\frac{\partial G}{\partial y}\end{array}\right|\] (3-1.4)

Sometimes the following notation is used:

\[J=\frac{\partial(F,G)}{\partial(x,y)}\]

Area-preserving maps have \(|J|=1\), as one can check using the above example of Henon (1969). When dissipation is present in physically relevant maps, \(|J|<1\), or areas contract under the mapping.

##### Impact Oscillator Maps

There are many examples in physics and engineering where the dynamics of a particle are linear within finite time intervals and where energy is put in or taken out in very short time periods between these linear dynamic intervals.
Such examples arise in particle accelerators, or in the motion of two gears with play between the teeth. As an example of how a 2-D map can arise in dynamics, we consider the motion of an oscillator under periodic impacts (Figure 3-2). An analysis of a general impact oscillator has been given by Helleman (1980a) based on the motion of a proton in intersecting storage rings. A similar problem involving a particle in a plasma may be found in Sagdeev et al. (1988). The basic equation in these problems is a linear oscillator which is periodically impacted at time intervals of \(2\pi n/\Omega\) in which the momentum change depends on the motion:

\[\ddot{x}+\omega_{0}^{2}x=f(x)\sum_{n}\delta(t-2\pi n/\Omega)\] (3-1.5)

where \(\delta(t-\tau_{0})\) is the classical delta function whose integral is unity. Between impulses the solution is proportional to \(A\cos(\omega_{0}t+\varphi_{0})\). Piecing together these linear solutions using the above equation, one can show that the displacement after impact \(x_{n}\) and the velocity after each impact \(v_{n}\equiv\dot{x}(t_{n}^{+})\) satisfy the following difference equations:

\[\begin{array}{l}x_{n+1}=x_{n}\cos\beta\,+\,v_{n}(\sin\beta)/\omega_{0}\\ v_{n+1}=-x_{n}\omega_{0}\sin\beta\,+\,v_{n}\cos\beta\,+\,f(x_{n+1})\end{array}\] (3-1.6)

where \(\beta=2\pi\omega_{0}/\Omega\). It is easy to show that this is an area-preserving map, that is, \(J=1\).

Figure 3-2: A linear oscillator with periodic, amplitude-dependent impulse forces.

#### Classification of Map Dynamics

We describe here a few typical motions of discrete time dynamical systems. In particular we focus on 2-D maps in the plane. As discussed above, they arise quite naturally from a Poincaré map of three-state-variable continuous time dynamics.

##### Fixed Points

As the term implies, iteration of the map at a fixed point or _equilibrium_ point \(\mathbf{x}_{e}\) brings the system to the same point in one time
cycle, that is,

\[\mathbf{x}_{e}\,=\,F(\mathbf{x}_{e})\] (3-1.7)

For example, in the case of the _cubic map_

\[F\,=\,y,\ \ \ \ \ G\,=\,ax\,-\,bx^{3}\,+\,cy\] (3-1.8)

given by Holmes (1979) in the study of chaos in a buckled beam, the fixed points are found from the equations

\[x_{e}\,=\,y_{e}\,,\ \ \ \ \ \ y_{e}\,=\,ax_{e}\,-\,bx_{e}^{3}\,+\,cy_{e}\] (3-1.9)

_Cycle Points._ _Cycle points_ are similar to fixed points except that the dynamics undergo several iterations before returning to the same point, that is,

\[\mathbf{x}_{n\,+\,m}\,=\,\mathbf{x}_{n}\ \ \ \text{or}\ \ \ \ \mathbf{x}_{n}\,=\,F^{(m)}(\mathbf{x}_{n})\] (3-1.10)

Note that, as illustrated in Figure 3-3, each of the \(m\) points in the cycle is a cycle point.

Figure 3-3: An \(m\)-cycle orbit of a 2-D map.

The so-called Standard map appears in many applications in physics, including accelerator particle dynamics (Lichtenberg and Lieberman, 1983) as well as the dynamics of a bouncing particle on a vibrating surface (Holmes, 1982). In this map the state variables (\(\varphi_{n}\), \(v_{n}\)) represent the time or phase of impact (mod \(2\pi\)) and the velocity after impact. Thus, the map operates on a cylindrical phase plane:

\[\begin{array}{l}\varphi_{n+1}=\varphi_{n}+v_{n}\pmod{2\pi}\\ v_{n+1}=\alpha v_{n}-\gamma\cos(\varphi_{n}+v_{n})\end{array}\] (3-1.11)

where \(0<\alpha\leq 1\). When \(\alpha=1\) there is no energy dissipation and the map is area-preserving. If this transformation is denoted by \(T\), then the fixed points of the period-2 motion are the fixed points of \(T^{2}\), where the superscript indicates that the mapping is applied twice. The equations which determine these points for the Standard map are given by

\[\begin{array}{l}\varphi_{1}=\varphi_{0}+v_{0}\\ v_{1}=\alpha v_{0}-\gamma\cos(\varphi_{0}+v_{0})\\ \varphi_{2}=\varphi_{0}=\varphi_{1}+v_{1}\\ v_{2}=v_{0}=\alpha v_{1}-\gamma\cos(\varphi_{1}+v_{1})\end{array}\]

Thus we have four equations in four unknowns.
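These four conditions are easy to check numerically for the conservative case \(\alpha=1\). With the illustrative value \(\gamma=0.5\), one candidate that satisfies them is \((\varphi_{0},v_{0})=(\pi/2,\pi)\) (note that \(\cos(3\pi/2)=\cos(5\pi/2)=0\)), and the sketch below verifies that it is a genuine 2-cycle rather than a fixed point:

```python
import math

TWO_PI = 2.0 * math.pi

def standard_map(phi, v, alpha=1.0, gamma=0.5):
    """One iteration of the Standard map (3-1.11) on the cylinder."""
    v_next = alpha * v - gamma * math.cos(phi + v)
    return (phi + v) % TWO_PI, v_next

phi0, v0 = math.pi / 2.0, math.pi        # candidate period-2 point
phi1, v1 = standard_map(phi0, v0)
phi2, v2 = standard_map(phi1, v1)

moved = abs(phi1 - phi0) > 1.0           # one application moves it...
returned = (abs(phi2 - phi0) < 1e-9 and  # ...two applications return it
            abs(v2 - v0) < 1e-9)
```

The cycle visits \((\pi/2,\pi)\) and \((3\pi/2,\pi)\) in turn: a fixed point of \(T^{2}\) but not of \(T\).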
The details of the solution can be found in Holmes (1982). Two examples of cycle points are shown graphically in Figure 3-4\(a\),_b_. One is for the standard map and the other is for the quadratic map discussed by Henon (1969). In the case of the quadratic map there are two sets of cycle-3 points. These are the fixed points of the transformation \(T^{3}\). The set with the ellipses around each point are called _centers_ and are stable, whereas the set with what looks like two crossed curves going through them are _saddle points_ and are unstable.

Figure 3-4: (_a_) Standard map (no dissipation) showing cycle points. (_b_) Quadratic map of Henon (1969) showing three-cycle points.

##### Quasiperiodic Motions

As discussed in Chapter 1, when two oscillators have incommensurate frequencies, the combined motion describes an elliptic-type orbit in the map, as shown in Figure 1-13. In the case of multiple-period fixed points or cycles, each of the fixed points in the cycle is surrounded by ellipse-shaped curves. In a quasiperiodic motion close to these fixed points, the orbit will visit each of the ellipses in turn, slowly mapping out a closed curve after many iterations of the map.

##### Stochastic Orbits

In the case of conservative or area-preserving maps (Hamiltonian systems), a chaotic orbit often occurs near a saddle point, and the orbit is characterized by a uniformly dense collection of points with no apparent order. An example is shown in Figure 3-5. These chaotic orbits can have positive Lyapunov exponents, indicating a sensitive dependence on initial conditions, but the orbit does not exhibit fractal structure as in a dissipative map.
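The positive Lyapunov exponent of such a stochastic orbit can be estimated numerically. The sketch below uses the conservative case \(\alpha=1\) of the standard map (3-1.11) with an illustrative forcing \(\gamma=5\) and an initial condition assumed to lie in the chaotic sea (both are our assumptions, not values from the text), and applies the usual two-orbit renormalization scheme:

```python
import math

def smap(phi, v, gamma):
    # conservative standard map (3-1.11) with alpha = 1
    phi_new = (phi + v) % (2 * math.pi)
    return phi_new, v - gamma * math.cos(phi + v)

def lyapunov(gamma, phi0, v0, n_iter=20000, d0=1e-8):
    """Estimate the largest Lyapunov exponent from two nearby orbits,
    renormalizing their separation back to d0 after every step."""
    p1, v1 = phi0, v0
    p2, v2 = phi0 + d0, v0
    total = 0.0
    for _ in range(n_iter):
        p1, v1 = smap(p1, v1, gamma)
        p2, v2 = smap(p2, v2, gamma)
        # shortest angular distance on the cylinder
        dp = (p2 - p1 + math.pi) % (2 * math.pi) - math.pi
        dv = v2 - v1
        d = math.hypot(dp, dv)
        total += math.log(d / d0)
        p2 = p1 + dp * (d0 / d)    # pull the second orbit back to distance d0
        v2 = v1 + dv * (d0 / d)
    return total / n_iter

lam = lyapunov(gamma=5.0, phi0=1.0, v0=0.5)
print(lam)   # positive for an orbit in the chaotic sea
```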
##### Fractal Orbits

These motions are typical for dissipative maps; that is, \[\left|\frac{\partial(x_{n+1},y_{n+1})}{\partial(x_{n},y_{n})}\right|<1\] Examples include the Henon map [Eq. (1-3.8)] shown in Figures 1-24, 3-6, and the standard map [Eq. (3-1.11)] shown in Figure 3-11. Although the motion is unpredictable, the orbit traces out an infinite set of curves which may be viewed by looking at finer and finer scales as in Figure 1-24.

Figure 3-5: Poincaré map of a parametrically forced pendulum showing periodic orbits (_points_), quasiperiodic orbits (_closed curves_), and stochastic orbits (_diffuse set of points_). Relative change in length, \(\gamma=0.05\); (forcing frequency) \(\omega=2.0\).

### Local Stability of 2-D Maps

As in the analysis of dynamics governed by ordinary differential equations, the dynamic behavior of systems described by sets of difference equations can be analyzed by looking at linearized maps near the fixed points. Let us assume that the origin is a fixed point; then for a 2-D map [Eq. (3-1.1)] the linear map takes the form \[\begin{array}{l}x_{n\,+\,1}=\,ax_{n}\,+\,by_{n}\\ y_{n\,+\,1}=\,cx_{n}\,+\,dy_{n}\end{array}\] (3-2.1) where \[\begin{bmatrix}a&b\\ c&d\end{bmatrix}=\begin{bmatrix}\frac{\partial F}{\partial x}&\frac{\partial F }{\partial y}\\ \frac{\partial G}{\partial x}&\frac{\partial G}{\partial y}\end{bmatrix}\] and the derivatives of the nonlinear map functions are evaluated at (0, 0).

Figure 3-6: The Henon map (1-3.8) showing fractal structure. \(\alpha=1.4\); \(\beta=0.3\).

The analysis of linear maps is straightforward and can be found in several texts (e.g., see Bender and Orszag, 1978).
One begins by guessing a power-law type of solution, which for linear difference equations takes the form \[\begin{Bmatrix}x_{n}\\ y_{n}\end{Bmatrix}=\lambda^{n}\begin{Bmatrix}e_{1}\\ e_{2}\end{Bmatrix}\] (3-2.2) Substitution of this solution into the linear equations (3-2.1) leads to the following eigenvalue problem: \[\begin{bmatrix}(a\,-\,\lambda)&b\\ c&(d\,-\,\lambda)\end{bmatrix}\begin{Bmatrix}e_{1}\\ e_{2}\end{Bmatrix}=0\] or \[\lambda^{2}-(a\,+\,d)\lambda\,+\,ad\,-\,cb=0\] (3-2.3) These equations establish the stability criteria for a linear map. \begin{tabular}{l l} \(|\lambda|<1\): & Solution is stable \\ & Nonlinear system is stable \\ \(|\lambda|>1\): & Solution is unstable \\ & Nonlinear system is unstable \\ \(|\lambda|=1\): & Solution is neutrally stable \\ & Stability of nonlinear system depends on nonlinear terms in map. \\ \end{tabular} As in the analysis for differential equations, one must find the corresponding eigenvectors along with the two eigenvalues, \(\lambda\). When these vectors exist, they establish the directions along which the simple multiplicative dynamics \(x_{n\,+\,1}=\lambda x_{n}\) takes place. For an arbitrary initial condition, however, the total solution takes the form \[x_{n}=c_{1}\lambda_{1}^{n}e_{11}+c_{2}\lambda_{2}^{n}e_{21}\] (3-2.4) when \(\lambda_{1}\), \(\lambda_{2}\) are distinct. Here \((e_{11},\,e_{21})\) is the eigenvector corresponding to the eigenvalue \(\lambda_{1}\). Rules for the special case of identical eigenvalues can be found in Bender and Orszag (1978). A few classic examples of local fixed point dynamics of 2-D maps are worth mentioning and are illustrated in Figure 3-7.

Figure 3-7: Fixed points of 2-D maps: (_a_) stable node, (_b_) unstable node, (_c_) saddle point, (_d_) spiral points.

_I. Stable Node_, \(|\lambda_{1}|,\,|\lambda_{2}|<1\) (_Both Real_).
If \(0<\lambda_{1}<1\), then for initial conditions along the corresponding eigenvector direction the motion moves toward the origin without changing sign. However, if \(\lambda_{1}\) or \(\lambda_{2}<0\), then the motion can flip between positive and negative eigen-directions while still moving closer to the fixed point.

_II. Unstable Node_, \(|\lambda_{1}|,\,|\lambda_{2}|>1\) (_Both Real_). The same comments as in case I hold except that the motion moves away from the fixed point and the motion is called _unstable_.

_III. Saddle Point_, \(|\lambda_{1}|<1\), \(|\lambda_{2}|>1\) (_Both Real_). Initial conditions along the \(\lambda_{1}\) eigenvector result in inward motion; however, any small component along the \(\lambda_{2}\) eigenvector takes the total solution away from the origin and the solution is _unstable_.

_IV. \(\lambda_{1}\), \(\lambda_{2}=\alpha\pm i\beta=\rho e^{\pm i\varphi}\)._ These fixed points are known as stable or unstable _spirals_ depending on whether \(\rho<1\) or \(\rho>1\). Sometimes the term stable or unstable _focus_ is used. Note that although the spiral solutions are analogous to oscillatory motion in differential equations, one can have oscillating map solutions in cases I, II, and III as well.

When \(|\lambda|\neq 1\) for both eigenvalues, these points are known as _hyperbolic_ points, and the stability of the linearized map gives a good picture of the stability of the nonlinear map in the neighborhood of the fixed point. When \(|\lambda|=1\), however, these points are sometimes called _elliptic_ points, and the stability of the nonlinear map depends on terms of higher order than the linear ones. When one has multiple-period fixed points or an \(N\)-cycle point, the stability can be determined by looking at the linearized map of the \(T^{N}\) map at one of the cycle points. Finally, to get a complete picture of the dynamics of nonlinear maps, one must look at how sets of points transform under the map.
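The case analysis above is easy to mechanize. A minimal classifier for the linearized map (3-2.1), built on the characteristic equation (3-2.3), might look as follows (the category strings are our labels for the four cases in the text):

```python
import cmath

def classify(a, b, c, d):
    """Classify the fixed point of x' = ax + by, y' = cx + dy
    from the roots of lambda^2 - (a + d) lambda + (ad - cb) = 0."""
    tr, det = a + d, a * d - c * b
    disc = tr * tr - 4 * det
    l1 = (tr + cmath.sqrt(disc)) / 2
    l2 = (tr - cmath.sqrt(disc)) / 2
    m1, m2 = abs(l1), abs(l2)
    if disc < 0:                         # complex pair, rho = |lambda|
        if abs(m1 - 1) < 1e-12:
            return "elliptic (|lambda| = 1)"
        return "stable spiral" if m1 < 1 else "unstable spiral"
    if m1 < 1 and m2 < 1:
        return "stable node"
    if m1 > 1 and m2 > 1:
        return "unstable node"
    if (m1 - 1) * (m2 - 1) < 0:
        return "saddle point"
    return "nonhyperbolic (|lambda| = 1): nonlinear terms decide"

print(classify(0.5, 0.0, 0.0, 0.8))   # stable node
print(classify(2.0, 0.0, 0.0, 0.5))   # saddle point
print(classify(0.0, -0.9, 0.9, 0.0))  # stable spiral (rho = 0.9 rotation)
```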
This is the subject of the next section.

### Global Dynamics of 2-D Maps

In the previous section we looked at the dynamics of specific trajectories in the neighborhood of a fixed point. Here we examine how the map transforms a set of contiguous points under one or more iterations. In particular, it is a special type of 2-D map, called a _horseshoe_ or _baker's_ map, that produces stretching and folding and is thought to be the fundamental mechanism for the creation of chaotic dynamics. In the following we will look at simple maps that produce translation, dilatation, shearing, stretching, and folding. The first four effects can be accomplished with a linear map.

##### Linear Transformation

A general linear transformation takes the form \[\mathbf{x}_{n+1}\ =\ \mathbf{b}\ +\ \mathbf{A}\ \cdot\ \mathbf{x}_{n}\] (3-3.1) where the constant vector \(\mathbf{b}\) and the elements of the matrix \(\mathbf{A}\) are assumed to be real. The constant \(\mathbf{b}\) represents a uniform translation. To examine the effect of \(\mathbf{A}\), consider the case \(\mathbf{b}=0\), and consider the following special cases (see Figure 3-8).

(i) \[\mathbf{A}\ =\ \begin{bmatrix}r&0\\ 0&r\end{bmatrix}\qquad\text{(dilatation)}\] (3-3.2) In this case, an area contracts or expands uniformly such that circles remain circles. This is similar to the uniform thermal expansion of a material due to heating. Areas change by a factor \(\det\mathbf{A}=r^{2}\) under each iteration of the map.

(ii) \[\mathbf{A}\ =\ \begin{bmatrix}s&0\\ 0&c\end{bmatrix}\qquad\text{(stretching}\ s>1\text{, contraction}\ c<1)\] (3-3.3) Under this transformation, a small square is deformed into a rectangular shape, with the \(x\)-axis direction being stretched by a factor \(s>1\) while the \(y\)-axis direction is contracted by a factor \(c<1\). The change of area, \(\det\mathbf{A}=sc\), depends on the relative amounts of stretching and contracting. For conservative dynamics, \(sc=1\).
(iii) \[\mathbf{A}\ =\ \begin{bmatrix}\cos\alpha&-\sin\alpha\\ \sin\alpha&\cos\alpha\end{bmatrix}\qquad\text{(rotation)}\] (3-3.4) This operation rotates areas around the origin by an angle \(\alpha\). Areas are preserved--that is, \(\det\mathbf{A}=1\).

Figure 3-8: Geometric properties of four linear transformations.

(iv) \[\mathbf{A}=\begin{bmatrix}1&c\\ 0&1\end{bmatrix}\qquad\text{(pure shear transformation)}\] (3-3.5) This is a horizontal shearing operation which (for \(c>0\)) moves the points in the upper half-plane to the right and moves those in the lower half-plane to the left. A square area becomes a parallelogram, and areas remain preserved under this transformation--that is, \(\det\mathbf{A}=1\). This deformation is similar to case (ii)--that is, a combined stretching and contraction along \(45^{\circ}\) directions with \(sc=1\).

(v) \[\mathbf{A}=\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}\qquad\text{(reflection about the $x$-axis)}\] (3-3.6) Our final example is a simple reflection about the \(x\)-axis. This can be generalized by using a composition of two linear mappings. First define \(T_{1}\) as the reflection, then define \(T_{2}\) as a rotation by \(\alpha\) to obtain the mapping \[x_{n+1}=T_{2}\circ T_{1}(x_{n})\] or \[\mathbf{A}=\begin{bmatrix}\cos\alpha&-\sin\alpha\\ \sin\alpha&\cos\alpha\end{bmatrix}\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}=\begin{bmatrix}\cos\alpha&\sin\alpha\\ \sin\alpha&-\cos\alpha\end{bmatrix}\] Generalizations of the other mappings (i)-(iv) can be done in a similar way and are left as an exercise.

### Folding in 2-D Maps--Horseshoes

A simple quadratic map that produces folding as in Figure 1-22 is a map similar to but slightly different from Eq. (3-1.3): \[\begin{split} x_{n+1}&=x_{n}\\ y_{n+1}&=y_{n}-\beta x_{n}^{2}\end{split}\] (3-3.7) This map is area-preserving--that is, \(J=1\).
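A quick numerical check of the folding map (3-3.7): a horizontal line segment is bent into a parabola, while the finite-difference Jacobian determinant stays at unity, confirming that the fold preserves area. (The value of \(\beta\) below is illustrative.)

```python
def fold(x, y, beta):
    """One iteration of the folding map (3-3.7)."""
    return x, y - beta * x * x

beta = 0.4                              # illustrative value
# a horizontal segment at y = 1 maps onto the parabola y = 1 - beta*x^2
segment = [(-1 + 0.1 * k, 1.0) for k in range(21)]
image = [fold(x, y, beta) for x, y in segment]

# central-difference Jacobian determinant at an arbitrary point
h = 1e-6
x0, y0 = 0.7, 0.3
fx = [(fold(x0 + h, y0, beta)[i] - fold(x0 - h, y0, beta)[i]) / (2 * h)
      for i in (0, 1)]
fy = [(fold(x0, y0 + h, beta)[i] - fold(x0, y0 - h, beta)[i]) / (2 * h)
      for i in (0, 1)]
detJ = fx[0] * fy[1] - fx[1] * fy[0]
print(detJ)   # 1.0 up to roundoff: area-preserving, J = 1
```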
However, a thin rectangular area centered on the \(x\)-axis is deformed into the shape of a horseshoe under one iteration.

### Composition of Maps--Henon Map

When two transformations \(T_{1}\), \(T_{2}\) are performed on a set of points, we denote the operation by \(T_{2}\circ T_{1}\) and call this a composition. The now classic paradigm of a strange attractor generated by a simple 2-D quadratic difference equation is the Henon map (Henon, 1976) introduced in Chapter 1: \[\begin{array}{l}x_{n+1}=1\,+\,y_{n}\,-\,\alpha x_{n}^{2}\\ y_{n+1}=\beta x_{n}\end{array}\] (3-3.8) This map is area-contracting provided that \(0<\beta<1\). This map can be seen as a composition of five transformations \(T_{5}\circ T_{4}\circ T_{3}\circ T_{2}\circ T_{1}\) defined as follows (see Figure 3-9): \[\begin{array}{l}T_{1}\colon\qquad\mathbf{b}=(0,1)\qquad\text{(translation)}\\ T_{2}\colon\qquad x_{n+1}=x_{n},\quad y_{n+1}=y_{n}\,-\,\alpha x_{n}^{2}\qquad \text{(folding)}\\ T_{3}\colon\qquad\mathbf{A}=\left[\begin{array}{cc}0&1\\ -1&0\end{array}\right]\qquad\text{(rotation, $\alpha=-\pi/2$)}\\ T_{4}\colon\qquad\mathbf{A}=\left[\begin{array}{cc}1&0\\ 0&-1\end{array}\right]\qquad\text{(reflection)}\\ T_{5}\colon\qquad\mathbf{A}=\left[\begin{array}{cc}1&0\\ 0&\beta\end{array}\right]\qquad\text{(contraction)}\end{array}\] As discussed in Chapter 1, repeated application of this map leads to multiple stretching and folding and the fractal properties of a strange attractor.

### The Horseshoe Map

The previous example of a horseshoe mapping uses a continuous quadratic polynomial function. However, a conceptually simpler mapping was proposed by Smale in 1962 as a model for complex dynamics, as previously discussed in Chapter 1 (Figure 1-23). A similar transformation is the baker's map [see Chapter 6, Eq. (6-4.26)], which is analogous to the rolling, cutting, and folding of pastry or bread dough (Figure 6-33).
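The five-step decomposition can be verified numerically: composing \(T_{1}\) through \(T_{5}\) in the stated order reproduces the Henon map (3-3.8) point for point. A minimal check:

```python
def henon(x, y, a, b):
    """Direct form of the Henon map (3-3.8)."""
    return 1 + y - a * x * x, b * x

def henon_composed(x, y, a, b):
    """The same map built from the five elementary transformations."""
    x, y = x, y + 1                 # T1: translation by b = (0, 1)
    x, y = x, y - a * x * x         # T2: folding
    x, y = y, -x                    # T3: rotation by -pi/2
    x, y = x, -y                    # T4: reflection
    x, y = x, b * y                 # T5: contraction of y by beta
    return x, y

a, b = 1.4, 0.3                     # classic Henon parameters
for p in [(0.0, 0.0), (0.5, -0.2), (-1.0, 0.3)]:
    print(henon(*p, a, b), henon_composed(*p, a, b))   # identical pairs
```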
### 3.4 Saddle Manifolds, Tangles, and Chaos

As a prelude to Chapter 6 ("Criteria for Chaotic Vibrations"), it can be said that one of the keys to the kingdom of chaos is the search for the existence of horseshoe-like maps. In many systems, strange attractors are organized around saddle-type fixed points--that is, points at which orbits come into a small region of phase space along a _stable manifold_ and go out along paths close to an _unstable manifold_. This saddle-type fixed point creates the contraction and stretching mechanisms for a horseshoe map. The nonlinear aspect of the map is required to produce the folding or bending mechanism of the horseshoe map. Repeated iteration of the stretching, contracting, and folding of regions of phase space leads to unpredictability and, in the case of dissipative systems, to the fractal nature of the dynamics in the map. Thus calculating fixed points and determining the nature of their stability is no mere academic exercise, but can be used along with computational tools to give a clue to the nature of the chaotic attractor. This is illustrated for the case of the standard map [Eq. (3-1.11)], which has been used to study the dynamics of a ball bouncing on a vibrating surface (e.g., see Guckenheimer and Holmes, 1983). In this model, \(x_{n}\) represents the phase in the driving period at which the ball hits the table (mod \(2\pi\)), and \(y_{n}\) represents the velocity of the ball after impact. The Poincare map is taken at the point of contact, resulting in the following difference equations: \[\begin{array}{l}x_{n+1}=x_{n}+y_{n}\pmod{2\pi}\\ y_{n+1}=\alpha y_{n}-\gamma\cos(x_{n}+y_{n})\end{array}\] (3-4.1) This map is a variation of the impact oscillator maps [Eqs. (3-1.6)]. The Jacobian of this map is \(\alpha\), which represents the loss of energy on impact. Here \(\alpha<1\) and areas are contracted with each iteration of the map.
One can show that fixed points of this map are given by \[\begin{array}{l}y_{e}=2\pi n\\ x_{e}=\cos^{-1}\left[\dfrac{(\alpha-1)2\pi n}{\gamma}\right]\end{array}\] [See Holmes (1982) for a more complete description.] The stability of each fixed point can be ascertained by looking at the eigenvalues of the linearized Jacobian matrix of the map \[[\nabla F]\left\{\begin{matrix}\overline{x}\\ \overline{y}\end{matrix}\right\}=\lambda\left\{\begin{matrix}\overline{x}\\ \overline{y}\end{matrix}\right\}\] In this problem there are two eigenvalues \(\lambda_{1}\), \(\lambda_{2}\) at each fixed point, and there are two eigenvectors whose directions are given by \[\overline{y}\,=\,(\lambda\,-\,1)\overline{x}\] To take a specific example, consider the fixed point \((x_{e},\,y_{e})=(\pi/2,\,0)\) of the map (3-4.1), for which \[\lambda^{2}\,-\,(1\,+\,\alpha\,+\,\gamma)\lambda\,+\,\alpha\,=\,0\] One can show that both eigenvalues are real and that \(\lambda_{1}>1\), \(0<\lambda_{2}<1\), which implies a saddle-type fixed point. Figure 3-10 shows the direction of the stable and unstable manifolds at the point \((\pi/2,\,0)\). However, to determine the shape of these manifolds far from the fixed point, one must use numerical methods. To numerically calculate the unstable manifold, we note that a map does not generate a continuous orbit. To trace out the manifold, one must generate a large number of orbits, each originating near the fixed point with coordinates satisfying \(\overline{y}_{0}=(\lambda_{1}\,-\,1)\overline{x}_{0}\). To numerically determine the stable manifold, we solve the inverse map, choose initial conditions along the stable direction of the saddle, and iterate backward in time. The inverse map is given by \[\begin{split} y_{n-1}&=\frac{1}{\alpha}\,y_{n}\,+ \,\frac{\gamma}{\alpha}\cos\,x_{n}\\ x_{n-1}&=\,x_{n}\,-\,y_{n-1}\end{split}\] for \(n=0,\,-1,\,-2,\,\ldots\).
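Before iterating backward, it is worth checking that the inverse map really undoes the forward map (3-4.1). A short round-trip sketch (the parameter values are illustrative):

```python
import math

TWO_PI = 2 * math.pi

def forward(x, y, alpha, gamma):
    """Dissipative standard map (3-4.1)."""
    x_new = (x + y) % TWO_PI
    return x_new, alpha * y - gamma * math.cos(x + y)

def backward(x, y, alpha, gamma):
    """Inverse map: recover the previous (x, y)."""
    y_prev = (y + gamma * math.cos(x)) / alpha
    x_prev = (x - y_prev) % TWO_PI
    return x_prev, y_prev

alpha, gamma = 0.5, 2.0
x0, y0 = 1.0, 0.7
x1, y1 = forward(x0, y0, alpha, gamma)
print(backward(x1, y1, alpha, gamma))   # recovers (1.0, 0.7)
```

Seeding a cloud of points along \(\overline{y}=(\lambda_{1}-1)\overline{x}\) near \((\pi/2,0)\) and applying `forward` repeatedly, or along the stable direction and applying `backward`, traces out the unstable and stable manifolds as described in the text.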
A large number of initial conditions (say 100-200), each chosen close to the fixed point, with each orbit iterated (say 10-20 times), will generate a collection of points that lie approximately on each of the two manifolds of the saddle point. The results of such a calculation for the \((\pi/2,\,0)\) fixed point of the standard map are shown in Figure 3-10 for a dissipation parameter of \(\alpha=\frac{1}{2}\) and forcing amplitude values \(\gamma=2,\,3.3,\,6\).

Figure 3-10: Stable and unstable manifolds of the standard map (3-4.1) at (\(\pi/2\), 0) for (\(a\)) \(\gamma=2\), (\(b\)) \(\gamma=3.3\), and (\(c\)) \(\gamma=6\).

The first case shows these two manifolds to be nonintersecting. However, for \(\gamma=3.3\), we see that they are almost touching. This is a critical case because there is a theorem attributed to Poincare that says that if the manifolds intersect once, they will intersect an infinite number of times (see Chapter 6 for more discussion). Note that for flows such intersections are not allowed, except at fixed points, but for maps they are permitted. It is this tangling of manifolds that is believed to create the horseshoe maps which are necessary for certain kinds of chaos. For \(\gamma=6\) it is clear that these two manifolds have indeed intersected many times. In fact, in the region \(\gamma\sim 6\), the standard map yields a strange attractor, which can be verified by iterating the map (3-4.1) many times from one initial condition (Figure 3-11). What is remarkable about Figure 3-11 is that the shape of the strange attractor for \(\gamma=6\) is very close to the shape of the unstable manifold for the same value. This is believed by theorists to be more than coincidental--namely, that for many chaotic attractors the chaos is organized by the unstable manifold of a saddle-type fixed point. To illustrate these ideas further, we offer two examples, namely, the kicked rotor oscillator and the Henon map.
#### The Kicked Rotor

As we have seen in the examples of the horseshoe map (Figure 1-23) or the logistic equation [Eq. (1-3.6)] in Chapter 1, the nature of the chaotic dynamics is best uncovered by taking a Poincare section of a continuous time flow in phase space. However, for most differential equation models of physical systems, it is impossible to obtain analytical results. Again, we look at a variation of the impact map. In the example considered here, we imagine a rotor with rotary inertia \(J\) and damping \(c\) which is subject to both a steady torque \(c\omega_{0}\) and a periodic series of _pulsed torques_ as shown in Figure 3-12 (see also Schuster, 1984). The equation of motion representing the change in angular momentum of the rotor is given by \[J\dot{\omega}+c\omega=c\omega_{0}+T(\theta)\sum_{n=-\infty}^{+\infty}\delta(t-n\tau)\] (3-5.1) The term \(\delta(t-n\tau)\) represents the classical delta function which is zero everywhere except at \(t=n\tau\) and whose area is unity. Thus, for times \(n\tau-\varepsilon<t<n\tau+\varepsilon\), where \(\varepsilon\ll 1\), the angular momentum change is given by \[J(\omega^{+}-\omega^{-})=T(\theta(n\tau))\] (3-5.2) For example, if the torque is created by a vertical force as shown in Figure 3-12, then the pulsed torque is given by \(T(\theta)=F_{0}\sin\theta\).

Figure 3-12: Rotor with viscous damping and periodically excited torque studied by Zaslavsky (1978).

When \(T(\theta)=0\), Eq. (3-5.1) has a steady solution \(\omega=\omega_{0}\), \(\theta=\omega_{0}t\). To obtain a Poincare map, we take a section right before each pulsed torque. Thus, we define \(\theta_{n}=\theta(t=n\tau-\varepsilon)\), \(\varepsilon\to 0^{+}\). One can relate (\(\theta_{n}\), \(\omega_{n}\)) to (\(\theta_{n+1}\), \(\omega_{n+1}\)) by solving the linear differential equation between pulses and using the jump in angular momentum condition (3-5.2) across the pulse.
Between pulses, the rotation rate has the following behavior: \[\omega=\omega_{0}+ae^{-ct/J}\] Carrying out this procedure, one can derive the following exact Poincare map for the system (3-5.1): \[\begin{array}{l}\omega_{n+1}=\dfrac{c\tau}{J}\,\omega_{0}+\omega_{n}-\dfrac{c}{J}\,(\theta_{n+1}-\theta_{n})+\dfrac{1}{J}\,T(\theta_{n})\\[2mm] \theta_{n+1}=\omega_{0}\tau+\theta_{n}+\dfrac{J}{c}\,(1-e^{-c\tau/J})\left(\omega_{n}+\dfrac{1}{J}T(\theta_{n})-\omega_{0}\right)\end{array}\] (3-5.4) These equations were first derived by the Soviet physicist Zaslavsky (1978) to treat the nonlinear interaction between two oscillators in plasma physics. In the mechanical analog of this problem, \(\omega_{0}\) represents the frequency of one uncoupled oscillator [see also Ott (1981) for a derivation]. This two-dimensional map is often nondimensionalized using \[\begin{array}{l}x_{n}=\dfrac{\theta_{n}}{2\pi}\quad(\text{mod}\ 1)\\[2mm] y_{n}=\dfrac{\omega_{n}-\omega_{0}}{\omega_{0}}\end{array}\] For \(T(\theta)=F_{0}\sin\theta\) and \(\varepsilon=F_{0}/J\omega_{0}\), Eqs. (3-5.4) then become \[\begin{array}{l}y_{n+1}=e^{-\Gamma}\,(y_{n}+\varepsilon\sin 2\pi x_{n})\\[1mm] x_{n+1}=\left\{x_{n}+\dfrac{\Omega}{2\pi}+\dfrac{\Omega}{2\pi\Gamma}\,(1-e^{-\Gamma})y_{n}+\dfrac{K}{\Gamma}(1-e^{-\Gamma})\sin 2\pi x_{n}\right\}\end{array}\] (3-5.5) where the braces \(\{\,\}\) indicate that only the fractional part is used (i.e., mod 1). Also, \(K=\varepsilon\Omega/2\pi\), \(\Gamma=c\tau/J\), and \(\Omega=\omega_{0}\tau\). Here \(y_{n}\) measures the departure of the speed from the unperturbed equilibrium speed \(\omega=\omega_{0}\). Note that this map contracts areas for \(\Gamma>0\) and preserves areas for \(\Gamma=0\).
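The area-contraction claim can be checked directly: differentiating the nondimensional kicked-rotor map and using \(K=\varepsilon\Omega/2\pi\), the cross terms in the Jacobian cancel and the determinant works out to exactly \(e^{-\Gamma}\). A finite-difference confirmation, with the mod-1 wrap dropped since it does not affect local derivatives (parameter values are those quoted for Figure 3-13):

```python
import math

def zmap(x, y, Gamma, eps, Omega):
    """Nondimensional kicked-rotor (Zaslavsky) map, without the mod-1 wrap."""
    K = eps * Omega / (2 * math.pi)
    e = math.exp(-Gamma)
    y_new = e * (y + eps * math.sin(2 * math.pi * x))
    x_new = (x + Omega / (2 * math.pi)
             + Omega / (2 * math.pi * Gamma) * (1 - e) * y
             + K / Gamma * (1 - e) * math.sin(2 * math.pi * x))
    return x_new, y_new

def det_jacobian(f, x, y, h=1e-6):
    """Central-difference Jacobian determinant of a 2-D map f."""
    fxp, fxm = f(x + h, y), f(x - h, y)
    fyp, fym = f(x, y + h), f(x, y - h)
    dxdx = (fxp[0] - fxm[0]) / (2 * h)
    dydx = (fxp[1] - fxm[1]) / (2 * h)
    dxdy = (fyp[0] - fym[0]) / (2 * h)
    dydy = (fyp[1] - fym[1]) / (2 * h)
    return dxdx * dydy - dxdy * dydx

Gamma, eps, Omega = 5.0, 0.3, 100.0
f = lambda x, y: zmap(x, y, Gamma, eps, Omega)
print(det_jacobian(f, 0.3, 0.1), math.exp(-Gamma))   # both ~ e^{-5}
```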
This system of two difference equations has been found to exhibit chaotic solutions only if the following conditions are satisfied when \(\varepsilon\) is small: \[1<\frac{\Gamma}{1-e^{-\Gamma}}<K\] (3-5.6) A typical case is shown in Figure 3-13 for the parameters \(\Gamma=5\), \(\varepsilon=0.3\), \(\Omega=100\), and \(K=9\).

Figure 3-13: Strange attractor for the Zaslavsky map (3-5.4) for the kicked rotor in Figure 3-12: \(x\) represents angular rotation (mod 1), and \(y\) represents the angular velocity.

The problem of a kicked or pulsed double rotor with two degrees of freedom has been investigated by Kostelich et al. (1985, 1987).

### Circle Map

A simpler version of the Zaslavsky map for two coupled oscillators can be obtained by letting the damping become large, \(\Gamma\gg 1\). In this limit, one can ignore the changes in \(\omega\) or \(y\) (note that \(\Delta y\) is small in Figure 3-13). This leads to a one-dimensional map known as a _circle map:_ \[x_{n+1}=\left\{x_{n}+\frac{\Omega}{2\pi}-\frac{K}{\Gamma}\sin 2\pi x_{n}\right\}\] (3-5.7) This equation has received extensive study (e.g., see Rand et al., 1982) and is a model for the quasiperiodic oscillation between two oscillators with uncoupled (\(K=0\)) frequencies in the ratio \(\Omega\). In this example, one can see the steps that lead from the physics to an exact 2-D map and then to an approximate 1-D map (3-5.7).

### The Henon Map

Earlier we saw that the quadratic, dissipative map (3-3.8) contracts areas in proportion to the constant \(\beta\). This results in a strange attractor with fractal structure and a fractal dimension, \(d\), with \(1<d<2\). However, as \(\beta\to 0\), the coupling between the two variables \(x\), \(y\) becomes weaker. Also, one can see from the chaotic attractor that the points seem to be distributed along a one-dimensional curve in the plane and \(d\sim 1\) (see Figure 3-14).
In the limit of small \(\beta\), the Henon map becomes asymptotic to a curve in the \(x\)-\(y\) plane given by \[x\,=\,1\,-\,\frac{\alpha}{\beta^{2}}y^{\,2}\] For small \(\beta\), \(y\) becomes small and the Henon map approaches the one-dimensional quadratic map \[x_{n+1}\,=\,1\,-\,\alpha x_{n}^{2}\] which is similar to the logistic map (1-3.6).

Figure 3-14: (\(a\),\(b\)) Henon maps (1-3.8) for two different control parameters. \(\alpha=1.4\), \(\beta=0.05\).

### Experimental Evidence for Reduction of 2-D to 1-D Maps

There is much experimental evidence to date to support the proposition that many chaotic physical phenomena can be approximately modeled by one-dimensional maps of the form (see Figure 2-16) \[x_{n+1}\,=\,F(x_{n})\] This evidence comes in two forms: (i) calculation of \(F(x_{n})\) from measurement of a single state variable and (ii) reduction of 2-D Poincare map data to a first-order map. An example of the former is the experimental work of a group at the University of Texas (Roux et al., 1983) who have measured the dynamics of chemical reactors. One famous example is the Belousov-Zhabotinski reaction, which involves over a dozen reactions. In spite of the complexity of this system, a simple experimental return map was measured, as shown in Figure 3-15, and is similar in form to a quadratic map. The second example is taken from control theory, in which a mass is shuttled back and forth by a servomotor as shown in Figure 3-16\(a\) (Golnaraghi and Moon, 1991). The position is controlled by an error signal which is the difference between a desired periodic motion of the mass and the actual position. Nonlinearity arises from the motion constraints near the end of its travel. When the control gain exceeds some limit, the motion of the mass becomes chaotic. However, the Poincare map triggered off the periodic control signal appears to be one-dimensional, as shown in Figure 3-16\(b\). A return map reveals a noninvertible 1-D piecewise linear map, as shown in Figure 3-16\(c\).
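The collapse of the Henon dynamics onto a 1-D map is easy to see numerically: for small \(\beta\) the successive \(x_{n}\) of an orbit fall close to the parabola \(x_{n+1}=1-\alpha x_{n}^{2}\), with a deviation of order \(\beta\). A sketch, using the parameters quoted for Figure 3-14 (the initial condition is our assumption, chosen to lie in the attractor's basin):

```python
alpha, beta = 1.4, 0.05     # parameters quoted for Figure 3-14

x, y = 0.5, 0.0             # assumed to lie in the basin of attraction
xs = []
for n in range(2000):
    x, y = 1 + y - alpha * x * x, beta * x
    xs.append(x)

# after a transient, compare x_{n+1} with the 1-D prediction 1 - alpha*x_n^2;
# the residual is exactly y_n = beta * x_{n-1}, so it is O(beta)
devs = [abs(xs[n + 1] - (1 - alpha * xs[n] ** 2)) for n in range(500, 1999)]
print(max(devs))   # small, of order beta
```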
Another example, for a friction oscillator, is shown in Figure 5-17. The reduction of dynamical behavior from complex systems to a 1-D map is of more than just academic interest. The simple map, once determined, can serve as an information compression coding from which properties such as the probability density distribution or the Lyapunov exponent can be obtained by simply iterating the map. Obtaining dynamical information from a 1-D map for a physical system is many orders of magnitude faster than trying to integrate the partial or ordinary differential equations that describe the underlying physics.

Figure 3-15: Representation of the chemical dynamics of the Belousov–Zhabotinski reaction using a 1-D map. [From Roux et al. (1983).]

### Period-Doubling Route to Chaos

If Chaos Theory is the study of the pathways from simple to complex dynamics, then period doubling must be considered one of the principal routes to chaotic behavior in physical systems with nonlinearities. Period doubling is a phenomenon in which the period of repetition of a cyclic dynamic process doubles with the change of some control parameter, and continues to double at successively closer parameter values until the period is so long that the motion is practically aperiodic. This phenomenon has been reported in hundreds of published experimental papers and in dozens of different physical and even biological systems where strong nonlinearities are present. Much of the theory of period doubling is based on the study of first-order difference equations \[x_{n+1}\ =\ F(x_{n})\] (3-6.1) and in particular the study of the quadratic map or so-called "logistic" equation of ecological modeling: \[x_{n+1}\ =\ \lambda x_{n}(1\ -\ x_{n})\] (3-6.2) Here the growth or birth rate of a species in the next generation is proportional to \(\lambda\), while the quadratic decay or "death" in the species is governed by the nonlinear term.
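The doubling is easy to observe by direct iteration of (3-6.2): below \(\lambda=3\) the orbit settles onto a fixed point, while just above it settles onto a 2-cycle. A minimal sketch (the \(\lambda\) values are illustrative):

```python
def F(x, lam):
    return lam * x * (1 - x)

def settled_orbit(lam, period, x0=0.3, transient=2000):
    """Iterate past the transient, then return one candidate cycle."""
    x = x0
    for _ in range(transient):
        x = F(x, lam)
    cycle = []
    for _ in range(period):
        cycle.append(x)
        x = F(x, lam)
    return cycle

print(settled_orbit(2.8, 1))   # single fixed point x = (lambda-1)/lambda
print(settled_orbit(3.2, 2))   # two alternating values: a period-2 orbit
```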
The core of the mathematical theory is associated with the work of Mitchell Feigenbaum, formerly of Los Alamos National Laboratory and now at Rockefeller University in New York City. Feigenbaum (1980), however, attributes the qualitative observation of period doubling in equations of the form (3-6.2) to the earlier work of Metropolis, Stein, and Stein (1973). A later paper of Robert May (1976) of Princeton University described some of the qualitative results in equations of mathematical biology. However, it was Feigenbaum (1978) who derived _quantitative_ measures of the period-doubling phenomenon. Furthermore, it was his assertion that these measures could be observed in physical systems with more complex mathematical descriptions than (3-6.2) that led experimentalists on the quest for _universal_ measures of the steps on the period-doubling route to chaos. It will take later historians of science to discover who first observed this phenomenon, but the avalanche of observations began around the time of observations reported in fluids by Libchaber and Maurer (1978) and in electronic circuits by Linsay (1981). There are many excellent mathematical descriptions of this phenomenon, and we shall not attempt to describe all the details here. [The previously cited paper by Feigenbaum (1980) is an excellent readable source, however.] Because this book emphasizes the physical and experimental aspects of chaos, we shall focus mainly on those results of the theory that are relevant to the observation and understanding of period doubling in physical systems.

### Qualitative Features of Period Doubling

One of the distinguishing features of this book is the attempt to describe the distinctive patterns of different dynamic phenomena in a visual way that dynamicists can recognize from computer or oscilloscope images of dynamic data. In the case of period doubling associated with continuous time dynamics, there are six common displays of this phenomenon: 1. Time history 2.
Phase plane 3. Poincare map 4. Fourier transform 5. Autocorrelation 6. Probability density function These different data processing and display techniques are common for many modern computer and dynamic signal analyzers. An example is shown in Figure 3-17 for a nonlinear electronic circuit [see Chapter 4 and Matsumoto et al. (1985)]. In the continuous time domain, the change from period-1 to period-2 motion clearly shows the doubling of the fundamental period. The phase plane shows a qualitative change from one orbit to two overlapping orbits. Also, the Poincare map changes from one to two periodic points. The Fourier transform, usually performed experimentally with a discrete fast Fourier transform (FFT) chip, shows the halving of the fundamental frequency \(\omega_{0}=2\pi/\tau_{0}\). In further period doublings, say in period \(2^{n}\), not only will \(\omega_{0}/2^{n}\) be present, but equally likely one will see other harmonics \(m\omega_{0}/2^{n}\). The autocorrelation function is related to the FFT and is qualitatively similar to the time domain representation.

Figure 3-17: Experimental tools for observing period-doubling bifurcations. (\(a\)) Time history; (\(b\)) Phase plane; (\(c\)) Fourier transform; (\(d\)) Probability density function.

Finally, one can sometimes perform a probability density calculation with electronic spectrum analyzers or computers. In period doubling, the classic two-spike signature for a sine wave oscillation changes to a four-spike picture indicating four vertical tangents in the phase-space representation of the period-doubled waveform.

### Poincare Maps and Bifurcation Diagrams

A more informative representation of period doubling in a continuous time system can often be presented by plotting the discrete Poincare map measures of the signal as one slowly changes one of the control parameters.
The Poincare map of the base period signal is represented by one point on the \(x\) axis, whereas the period-doubled motion shows up as two points on the \(x\) axis. In a bifurcation diagram, one plots the Poincare map points \(x_{n}\) versus a control parameter \(\lambda\). This can be easily done in computer simulation of iterated maps, such as for the logistic map shown in Figure 3-18. This diagram shows the values of the control parameter at which the motion changes from a period-\(n\) to a period-\(2n\) oscillation as well as the regions where suspected chaos may be found.

Figure 3-18: Period-doubling bifurcations for the logistic map (3-6.2).

### Quantitative Measures of Period Doubling

In this section we illustrate the nature of the quantitative analysis of period-doubling bifurcations using the logistic map. The reader should consult more mathematical books on the subject for a more rigorous treatment (e.g., see Devaney, 1989). A general quadratic map takes the form \[x_{n+1}=a+bx_{n}+cx_{n}^{2}\] When \(b>0\), \(c<0\), one can rescale this equation into the standard form (3-6.2) \[x_{n+1}=\lambda x_{n}(1-x_{n})=F(x_{n})\] The basic analysis of Eq. (3-6.2) usually involves the following procedure.

1. Find fixed points of \(F(x_{n})\) and of the iterated maps \(F^{2^{n}}(x_{n})\), where \(F^{2}(x_{n})=F(F(x_{n}))\), and so on.
2. Establish the stability of these fixed points.
3. Examine the relationship between the critical bifurcation values of the control parameter (\(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\),..., \(\lambda_{n}\)).
4. Determine the limit point \(\lambda_{\infty}\) at which the first chaotic dynamics results.
5. Look at scaling properties of the successive period-\(2^{n}\) orbits.
6. Look at the scaling of the spectral properties of the period-\(2^{n}\) orbits.
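The bifurcation diagram of Figure 3-18 is easy to generate numerically. The sketch below (a minimal illustration; the transient length, sample count, and rounding tolerance are ad hoc choices, not from the text) settles the logistic map onto its attractor and counts the distinct attractor points as the control parameter is swept:

```python
def logistic_attractor(lam, n_transient=1000, n_sample=256):
    """Settle onto the attractor of x -> lam*x*(1-x), then sample it."""
    x = 0.5
    for _ in range(n_transient):
        x = lam * x * (1.0 - x)
    pts = []
    for _ in range(n_sample):
        x = lam * x * (1.0 - x)
        pts.append(x)
    return pts

def count_period(lam):
    """Number of distinct attractor points (the period, for periodic motion)."""
    return len({round(x, 6) for x in logistic_attractor(lam)})

# Sweeping lam traces out the period-doubling structure of the diagram:
for lam in (2.8, 3.2, 3.5):
    print(lam, count_period(lam))   # periods 1, 2, and 4, respectively
```

Plotting the sampled points against \(\lambda\) on a fine grid reproduces the familiar fig-tree picture, including the dark bands of suspected chaos beyond \(\lambda_{\infty}\).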
As described earlier, the fixed or equilibrium points of a discrete iterated map may be found by finding those values of the state variable where \(x_{n+1}=x_{n}\) for a fixed control parameter, that is, \[x_{e}=F(x_{e})\] In general, there is more than one equilibrium point. For the logistic map, these points are given by \[x_{e}=0,\qquad x_{e}=\frac{\lambda-1}{\lambda}\quad(\lambda>1)\] Thus, for \(\lambda<1\) the only fixed point of the map is at the origin. When the control parameter \(\lambda\) is increased to a value greater than 1, a period-1 or period-\(n\) orbit may be possible. In an \(n\)-orbit, the map will produce a sequence of points \[(x_{0},\,x_{1},\,x_{2},\,\ldots\,,\,x_{n-1},\,x_{n}\,=\,x_{0})\] In the case of a period-2 motion we have the orbit (\(x_{0},\,x_{1},\,x_{2}\,=\,x_{0}\)) where \[x_{1} = F(x_{0}) = F(F(x_{1}))=F^{2}(x_{1})\] \[x_{2} = F(x_{1}) = F(F(x_{2}))=F^{2}(x_{2})\] and \[F^{2}(x)\ =\ \lambda^{2}x(1-x)[1\,-\,\lambda x(1-x)]\] One of the fixed points of \(F^{2}(x)\) is \(x=0\). However, two of the other three fixed points of \(F^{2}(x)\) give us the orbit points of the period-2 map. This is clearly shown in Figure 3-19. From this figure, one can see that one of the fixed points of \(F^{2}(x)\) is unstable (\(|dF^{2}/dx|>1\)), while the other two points give the stable orbit points of the period-2 cycle of \(F(x)\). This pattern continues as \(\lambda\) is increased through higher period-doubling bifurcations. At each critical value of \(\lambda\), the stable fixed points of the previous period-\(2^{n}\) map become unstable, and the next higher-order period-\(2^{n+1}\) map throws off two new fixed points which become the new orbit points of the next period-doubled orbit.

Figure 3-19: Comparison of the period-1 and period-2 maps for the logistic equation (3-6.2).

Figure 3-20: Comparison of period-2 and period-4 maps for the logistic equation (3-6.2). [From Feigenbaum (1980).]
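The stability argument above can be checked directly. The sketch below (an illustration with \(\lambda=3.2\), a value past the first period doubling; the iteration counts are arbitrary) verifies that the nonzero fixed point is unstable while the 2-cycle of \(F^{2}\) is stable, using the chain-rule fact that \(|dF^{2}/dx|\) along the cycle equals \(|F'(x_{1})F'(x_{2})|\):

```python
lam = 3.2                      # a value past the first period doubling at lam = 3

F  = lambda x: lam * x * (1.0 - x)
dF = lambda x: lam * (1.0 - 2.0 * x)       # F'(x)

# Nonzero fixed point x_e = (lam - 1)/lam and its stability
x_e = (lam - 1.0) / lam
assert abs(F(x_e) - x_e) < 1e-12
print(abs(dF(x_e)))            # 1.2 > 1: the period-1 orbit is unstable here

# Find a period-2 point by iterating F twice until it settles
x = 0.3
for _ in range(2000):
    x = F(F(x))
x1, x2 = x, F(x)
print(x1, x2)                  # the two orbit points of the 2-cycle
assert abs(F(F(x1)) - x1) < 1e-9 and abs(x1 - x_e) > 1e-3

# Stability of the 2-cycle: |dF^2/dx| = |F'(x1) * F'(x2)| < 1
print(abs(dF(x1) * dF(x2)))    # < 1: the period-2 orbit is stable
```

The same check, applied to \(F^{4}\), \(F^{8}\), and so on at larger \(\lambda\), traces the cascade of stability exchanges described in the text.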
An example for the transition from period 2 to period 4 is shown in Figure 3-20.

### Scaling Properties of Period-Doubling Maps

_Feigenbaum Numbers._ Perhaps one of the most notable discoveries of the theory of chaotic dynamics was a set of scaling relationships for the period-doubling route to chaos that could be tested in computer or physical experiments. Using both computational evidence and analysis, Feigenbaum proposed that the bifurcation values of the control parameter of the logistic map, \(\lambda_{n}\), converge in an exponential way as one approaches the limiting value \(\lambda_{\infty}\) at which chaotic dynamics develop; that is, \[\lambda_{\infty}\,-\,\lambda_{n}\,=\,c\delta^{-n}\] (3-6.3) If one knows three consecutive values of the period-doubling bifurcation parameter, \(\lambda_{n-1}\), \(\lambda_{n}\), \(\lambda_{n+1}\), then the number \(\delta\) can be found: \[\delta=\lim_{n\to\infty}\frac{\lambda_{n}\,-\,\lambda_{n-1}}{\lambda_{n+1}\,-\,\lambda_{n}}=4.6692016\,\ldots\] (3-6.4) What is remarkable is that Feigenbaum showed that this value is _universal_ within a class of maps \(x_{n+1}=F(x_{n})\), where \(F(x)\) has a smooth maximum. [Another technical requirement involves an operator on \(F(x)\) called the _Schwarzian derivative_; see Feigenbaum (1980) or, e.g., Rasband (1990).] He also proposed that in physical systems that undergo such period-doubling phenomena, the above limit of \(\delta\) should also hold. This is a remarkable assertion. However, the experimental evidence does not always provide one with a large number of measurable period-doubling bifurcations (five or six seem to be the experimental limit). In spite of this, there is good reason to believe that this scaling number is correct in physical problems. _Amplitude Scaling._ Amplitude scaling has to do with a comparison of amplitude features of the period-\(2^{n+1}\) motion with the amplitude features of the period-\(2^{n}\) motion.
There are several ways to describe this scaling. To choose one, let's examine the bifurcation diagram for the logistic map (3-6.2) shown in Figure 3-21, which plots the amplitudes of the cycle points as a function of the control parameter \(\lambda\). One can see that within each period-doubling region there is a \(2^{n}\) cycle which has a cycle point at the midpoint \(x=\frac{1}{2}\). These particular cycles have been called _supercycles_. Then we define an amplitude measure \(a_{n}\) as shown in Figure 3-21, where \(a_{1}=x_{2}-x_{1}\), \(a_{2}=x_{2}-x_{4}\), and so on. Feigenbaum discovered that the ratio \(a_{n}/a_{n+1}\) has a limiting value \[\lim_{n\to\infty}\frac{a_{n}}{a_{n+1}}=2.50\ldots\] (3-6.5) Another way to describe this scaling is to notice that each minor period-doubling pitchfork can be mapped onto the preceding pitchfork by simply blowing up the minor bifurcation region by the factor 2.50. This idea is related to the mathematical concept of renormalization [e.g., see Chapter 6 or Feigenbaum (1980)]. A third way to see this scaling is when period doubling occurs in a continuous time system. Here the period doubling produces ripple modulation on the fundamental periodic signal. The ratio of one of the ripples in the period-\(2^{n+1}\) cycle to a ripple in the \(2^{n}\) cycle is given by the factor 2.5. Again, the significance of the ratio is that if period doubling occurs in a dissipative system with an underlying hump map, then this property will result regardless of the particular shape of the 1-D mapping function \(F(x)\).

Figure 3-21: Bifurcation diagram of the logistic map (3-6.2) showing relative amplitudes of supercycles.

### Subharmonic Spectra Scaling

The scaling of the amplitudes in successive period-doubling regimes has as a consequence a related scaling property in the Fourier spectra of the dynamic signals. This is illustrated in Figure 3-22.
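Both scaling constants introduced above can be estimated directly from the _supercycle_ parameter values \(\lambda_{n}\), at which \(x=\frac{1}{2}\) is itself a point of the period-\(2^{n}\) cycle. The sketch below (assuming the logistic parameterization (3-6.2); the Newton seeds and iteration counts are ad hoc) locates the supercycles by solving \(F_{\lambda}^{2^{n}}(\frac{1}{2})=\frac{1}{2}\), then forms the spacing ratio of (3-6.4) and the amplitude ratio of (3-6.5):

```python
import math

def F_iter(lam, p):
    """p-fold iterate of the logistic map, started at the midpoint x = 1/2."""
    x = 0.5
    for _ in range(p):
        x = lam * x * (1.0 - x)
    return x

def supercycle(lam_guess, p):
    """Newton's method on g(lam) = F_lam^p(1/2) - 1/2 = 0."""
    lam, h = lam_guess, 1e-8
    for _ in range(60):
        g = F_iter(lam, p) - 0.5
        dg = (F_iter(lam + h, p) - F_iter(lam - h, p)) / (2.0 * h)
        lam -= g / dg
    return lam

# Supercycle parameters lam_n for cycles of period 2^1 ... 2^6
lams, guess = [2.0], 3.2          # lam_0 = 2 is the period-1 supercycle
for n in range(1, 7):
    lam = supercycle(guess, 2 ** n)
    guess = lam + (lam - lams[-1]) / 4.7   # extrapolated seed for the next one
    lams.append(lam)

delta = (lams[5] - lams[4]) / (lams[6] - lams[5])
print(delta)                      # approaches 4.6692...

# Amplitude scaling: d_n = F^(2^(n-1))(1/2) - 1/2 at lam = lam_n
d = [F_iter(lams[n], 2 ** (n - 1)) - 0.5 for n in range(1, 7)]
alpha = d[3] / d[4]
print(alpha)                      # approaches -2.50... (signs alternate)
```

The supercycle spacings converge with the same ratio \(\delta\) as the bifurcation values themselves, which is why they are commonly used in numerical work.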
Again Feigenbaum (1980) has shown that the successive subharmonic spectral peaks in the Fourier transform will be 8.2 dB below the previous subharmonic peaks. This result has been confirmed in a number of experiments (e.g., see Libchaber et al., 1980) as shown in Figure 3-23.

Figure 3-22: Universal scaling of the Fourier transform amplitudes for the logistic map (3-6.2).

### Symbol Sequences in Period Doubling

In many experimental problems we have only qualitative information about the dynamics; for example, a rotor turns clockwise or counterclockwise or a particle moves to the left side or the right side in some mechanical system. In such problems one can use what is called _coarse-graining_; that is, we divide up the state space into a coarse grid. In the case of the logistic map (3-6.2) the obvious coarse-graining is into the two intervals \(I_{1}\): \(0\leq x<\frac{1}{2}\) and \(I_{2}\): \(\frac{1}{2}\leq x\leq 1\). Then one can code the dynamics by a sequence of symbols L or R indicating whether the motion is in the \(I_{1}\) or \(I_{2}\) interval. Thus, a period-2 orbit might be LRLRLR..., and a period-3 orbit might be LRRLRR.... In early work in the 1970s, Metropolis, Stein, and Stein (1973) showed that as the control parameter is varied for different hump maps, the sequence of symbol sequences is the same. A list of such symbol sequences is given in Table 3.1 for the logistic map and for the map derived from a tent map for a friction oscillator (Feeny and Moon, 1989). In the friction oscillator the coarse-graining or partitioning is associated with sticking (S) or slipping (N) motions. This is then another indication of the existence of the underlying order in 1-D map models of deterministic dynamical systems which exhibit chaotic behavior.
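Coarse-graining an orbit into an L-R symbol sequence takes only a few lines. The sketch below (an illustration; \(\lambda=3.832\) is assumed to lie inside the period-3 window of the logistic map, and the transient length is ad hoc) codes a settled orbit and checks that the symbol sequence repeats with period 3:

```python
lam = 3.832            # assumed to lie inside the period-3 window

x = 0.2
for _ in range(10000):             # discard the transient
    x = lam * x * (1.0 - x)

symbols = []
for _ in range(30):                # coarse-grain: L for I1, R for I2
    symbols.append('L' if x < 0.5 else 'R')
    x = lam * x * (1.0 - x)

word = ''.join(symbols)
print(word)
# The symbol sequence repeats with period 3 and uses both symbols
assert all(symbols[i] == symbols[i + 3] for i in range(len(symbols) - 3))
assert set(symbols) == {'L', 'R'}
```

Sweeping \(\lambda\) and recording the repeating word at each periodic window reproduces the universal ordering of symbol sequences found by Metropolis, Stein, and Stein.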
### Period Doubling in Conservative Systems

The scaling behavior of the control parameter values at which period doubling occurs and the resulting Feigenbaum number (3-6.4) were derived for a dynamical system with dissipation. When the system is nondissipative, however, one gets a different "Feigenbaum" number. This has been illustrated in a paper by Helleman (1980a) and elsewhere (e.g., see Benettin et al., 1980a) by an example for a two-dimensional conservative map: \[x_{n+1}\,=\,y_{n}\] \[y_{n+1}\,=\,-x_{n}\,+\,2y_{n}(c\,+\,y_{n})\] Helleman derived this equation from the motion of a proton in a storage ring with periodic impulses. He thus found an exact map for a nonlinear, second-order forced system by integrating the equation of motion between impulses. That the map is conservative can be proved by looking at the change of area of a small parallelepiped in one iteration of the map. The change in area is measured by the determinant of the Jacobian matrix (3-1.4), which can be shown to be unity for this map. It can be shown that iterations of this map yield a multiperiodic orbit for \(c\,\geq\,-\,1\). In fact, the transitions from one to two periodic and from two to four periodic, and so on, are given by \[c_{1}\,=\,-\,1,\qquad c_{2}\,=\,1\,-\,\sqrt{5},\qquad c_{3}\,=\,1\,-\,\sqrt{4\,+\,\sqrt{5/4}}\] as the reader can verify on a calculator or small personal computer. For large \(k\), this sequence of critical bifurcation parameters has the relationship \[c_{k}\,\sim\,c_{\infty}\,+\,A/\delta_{c}^{k}\] (3-6.7) where \(\delta_{c}\,=\,8.7210\) from numerical experiments. This value differs from the Feigenbaum value of 4.6692 found for the logistic map. It is believed that \(\delta_{c}\) is "universal" for all conservative maps.

Figure 3-23: Universal scaling of FFT spectra for Rayleigh–Benard thermal convection. [From Libchaber et al. (1980).]
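The value \(c_{2}=1-\sqrt{5}\) can indeed be verified on a small computer. A short calculation from the period-2 conditions (not spelled out in the text) shows that the 2-cycle points are the roots of \(t^{2}+(1+c)t+(c+1)=0\); the cycle period-doubles when the trace of the product of Jacobians along the orbit reaches \(-2\). The sketch below bisects for that condition (the bracketing interval is chosen by hand):

```python
import math

def step(x, y, c):
    """One iteration of the conservative map (Helleman's example)."""
    return y, -x + 2.0 * y * (c + y)

def trace_DF2(c):
    """Trace of the Jacobian of the twice-iterated map along the 2-cycle."""
    # The 2-cycle points are the roots of t^2 + (1+c)t + (c+1) = 0
    # (real for c < -1, where the 2-cycle is born).
    disc = (1.0 + c) ** 2 - 4.0 * (c + 1.0)
    a = (-(1.0 + c) + math.sqrt(disc)) / 2.0
    b = (-(1.0 + c) - math.sqrt(disc)) / 2.0
    x, y = step(*step(a, b, c), c)
    assert abs(x - a) < 1e-9 and abs(y - b) < 1e-9   # really period 2
    # The Jacobian [[0, 1], [-1, 2c + 4y]] has determinant 1 (area preserving);
    # multiplying the two Jacobians along the orbit gives trace -2 + s_a*s_b.
    sa, sb = 2.0 * c + 4.0 * a, 2.0 * c + 4.0 * b
    return -2.0 + sa * sb

# Period doubling of the 2-cycle occurs when the trace reaches -2.
lo, hi = -1.6, -1.05          # trace + 2 changes sign on this bracket
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if trace_DF2(mid) + 2.0 > 0.0:
        hi = mid
    else:
        lo = mid
print(lo)                     # ~ 1 - sqrt(5) = -1.2360...
```

The same trace criterion, applied to the 4-cycle, 8-cycle, and so on, generates the \(c_{k}\) sequence whose spacings give \(\delta_{c}\approx 8.72\).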
### 3.7 The Measure of Chaos; Lyapunov Exponents

Chaos in deterministic systems implies a sensitive dependence on initial conditions. This means that if two trajectories start close to one another in phase space, they will move exponentially away from each other as the map is iterated. If \(d_{0}\) is a measure of the initial distance between two starting points, at a later time the distance may be written in the form \[d_{n}\,=\,d_{0}2^{\Lambda n}\] (3-7.1) (The choice of base 2 is convenient, but arbitrary.) However, in most physical problems the motion is bounded, and \(d_{n}\) cannot go to infinity. Thus, the exponent \(\Lambda\) must be averaged over the trajectory, that is, \[\Lambda\,=\,\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1}\log_{2}\left|\frac{d_{n+1}}{d_{0n}}\right|\] In the case of a one-dimensional map, \[x_{n+1}\,=\,f(x_{n})\] an explicit rule can be derived. At the \(n\)th iteration choose \[d_{0n} = \varepsilon\] \[d_{n+1} = f(x_{n}\,+\,\varepsilon)\,-\,f(x_{n})=\frac{df}{dx}\bigg|_{n}\varepsilon\] Thus (3-7.1) becomes \[\Lambda\,=\,\lim_{N\to\infty}\frac{1}{N}\sum_{n\,=\,0}^{N-1}\,\log_{2}\,\left|\frac{df}{dx}(x_{n})\right|\] (3-7.2) An illustrative example is the Bernoulli map \[x_{n+1}\,=\,2x_{n}\,\,(\text{mod}\,\,1)\] (3-7.3) as shown in Figure 3-24. Here (mod 1) means \[x(\text{mod}\,\,1)\,=\,x\,\,-\,\,\text{Integer}(x)\] This map is noninvertible (its inverse is double-valued) and is chaotic. Except for the switching value \(x=\frac{1}{2}\), \(|f^{\prime}|=2\). Applying the definition (3-7.2) we find \(\Lambda=1\). Thus, on the average, the distance between points on nearby trajectories grows as \[d_{n}=d_{0}2^{n}\] The units of \(\Lambda\) are bits per iteration. One interpretation of \(\Lambda\) is that information about the initial state is lost at the rate of one bit per iteration.

Figure 3-24: Bernoulli map (3-7.3).
To see this, write \(x_{n}\) in binary notation; for example, \[x_{n}=\frac{1}{2}+\frac{1}{4}+\frac{1}{16}+\frac{1}{128}=0.1101001\ \text{(binary)}\] Then \(2x_{n}=1.101001\), and (mod 1) gives \(0.101001\). Thus, the map \(2x_{n}\) (mod 1) moves the binary point one place to the right and drops the integer value. So if we start out with \(m\) significant binary digits of information, we lose one for each iteration; that is, we lose one bit of information. After \(m\) iterations we have lost knowledge of the initial state of the system. Another example is the _tent map_: \[x_{n+1}=2rx_{n},\qquad x_{n}<\frac{1}{2}\] (3-7.4) \[x_{n+1}=2r(1-x_{n}),\qquad x_{n}\geq\frac{1}{2}\] As in the Bernoulli map (3-7.3), \(|f^{\prime}(x)|=2r\) is a constant, and the Lyapunov exponent is found to be (Lichtenberg and Lieberman, 1983, pp. 416-417) \[\Lambda=\log\,2r\] (3-7.5) When \(2r>1\), \(\Lambda>0\) and the motion is chaotic, but when \(2r<1\), \(\Lambda<0\) and the orbits are regular; in fact, all points in \(0<x<1\) are attracted to \(x=0\) (Schuster, 1984, p. 22). Our final example is the logistic equation (3-6.2) \[x_{n+1}=ax_{n}(1-x_{n})\] This map may become chaotic when \(a>3.57\). This can be verified by numerical calculation of the Lyapunov exponent as a function of \(a\), as shown in Figure 3-25. Beyond \(a=3.57\), the Lyapunov exponent is positive except within the periodic windows in \(3.57<a<4\). When \(a=4\), it has been shown that \(\Lambda=\ln 2\). It has been shown (see Schuster, 1984) that this value can be derived analytically by finding a transformation of the tent map for \(r=1\) into the logistic map.

##### Probability Density Function for Maps

Living and designing in the face of uncertainty is part of the everyday world. We often deal with natural unpredictability such as the weather with probability measures. Thus, it is natural to seek probabilistic descriptors when deterministic systems are in a state of dynamical chaos.
Such statistical tools are widely used in the random excitation of linear systems. However, probabilistic mathematics for chaotic dynamics of nonlinear systems are not readily available. One exception is the case of systems governed by a first-order difference equation or map. Our discussion of probability measures for one-dimensional maps can only be introductory. However, some mathematical language is necessary and useful if one wishes to read more advanced treatments of the subject. For a first-order map, this description involves a function \(P(x)\) called the _probability density function_ (PDF), where \(x\) is the state variable that governs the map \[x_{n+1}\,=\,F(x_{n})\] Because \(x\) is a continuous variable, \(P(x)\,dx\) is the probability that the dynamical orbit will occur between \((x,\ x+dx)\). The domain of the variable \(x\) over which \(P(x)\neq 0\) is sometimes called the _support_ of the probability measure. One complication in chaotic systems is that the support is sometimes fractal. In this case, \(P(x)\) is not a continuous function. However, in practical systems there is always a small amount of noise which tends to smooth out the fractal nature of \(P(x)\). Thus, when an integration of \(P(x)\) makes sense, the probability of an orbit occurring between \(x_{1}\leq x_{n}\leq x_{2}\) is given by \[P[x_{1},x_{2}]=\int_{x_{1}}^{x_{2}}P(x)\,dx\] (3-7.6) and \[\int_{-\infty}^{\infty}P(x)\,dx\,=\,1\] In statistical theory, one can have time-varying probabilistic measures. However, in this book we assume that some chaotic attractor exists and that \(P(x)\) is a so-called _invariant measure;_ that is, it is a property of the attractor that does not change with time. Several examples of invariant measures or PDFs for one-dimensional maps have been discovered analytically.
Two such cases are the tent map and the logistic map:

_Tent Map:_ \[x_{n+1}=rx_{n},\qquad x_{n}<\frac{1}{2}\] \[x_{n+1}=r-rx_{n},\qquad x_{n}\geq\frac{1}{2}\qquad(r=2)\] \[P(x)=1\] (3-7.7)

_Logistic Map:_ \[x_{n+1}=4x_{n}(1-x_{n})\] \[P(x)=\frac{1}{\pi\sqrt{x(1-x)}}\]

Both maps are sometimes called _one-hump maps,_ and the PDF can be derived from a functional equation. Consider a general one-hump map shown in Figure 3-26. Then orbits that arrive in the differential domain between \((x,\ x+dx)\) have two preimages \(x_{1}\) and \(x_{2}\) where \[x\,=\,F(x_{1})\,=\,F(x_{2})\] Assuming that \(F(x)\) is continuous and differentiable at \(x_{1}\), \(x_{2}\), one can easily show that the PDF must satisfy the following functional equation: \[P(x)=\frac{P(x_{1})}{|F^{\prime}(x_{1})|}+\frac{P(x_{2})}{|F^{\prime}(x_{2})|}\] (3-7.9) For the tent map with \(r=2\), \(|F^{\prime}(x)|=2\) and \[P(x)\;=\;\frac{1}{2}[P(x_{1})\;+\;P(x_{2})]\] (3-7.10) where the preimages of \(x\) are \(x_{1}=x/2\) and \(x_{2}=1-x/2\). It is easy to see that \(P(x)=\) constant is a solution. This idea can be extended to multihump maps using the idea that the probability of orbits arriving in the vicinity of \(x\) is the sum of the probabilities that they originated in the multiple preimages of the map \(F(x)\).

Figure 3-26: Hump map showing two preimage contributions to the probability density function.

##### Numerical Calculation of PDF

A fairly obvious way to calculate an approximation to \(P(x)\) is to divide up or partition the domain of \(x\) into \(M\) cells of size \(\Delta x\) and then run the map for several thousand iterations, counting the number of times \(N_{i}\) an orbit enters the \(i\)th cell. The set of numbers, sometimes called a _histogram_ \(\{P_{i};\,i=1,\,...,\,M\}\), is given by \[P_{i}\,=\,N_{i}/N\] where \(N\) is the total number of iterations. The histogram is then considered to be an approximation to \(P(x)\). This technique is sometimes called _coarse-graining_.
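The histogram calculation is a few lines of code. The sketch below (bin count, seed, and iteration count are ad hoc choices) coarse-grains an orbit of the logistic map at \(a=4\) and compares one cell against the analytic density \(P(x)=1/\pi\sqrt{x(1-x)}\) of (3-7.7):

```python
import math

a, M, N = 4.0, 50, 200000           # map parameter, number of cells, iterations
dx = 1.0 / M

x = 0.2345                          # a generic seed
for _ in range(100):                # discard the transient
    x = a * x * (1.0 - x)

counts = [0] * M
for _ in range(N):
    x = a * x * (1.0 - x)
    counts[min(int(x / dx), M - 1)] += 1

P = [c / (N * dx) for c in counts]  # histogram estimate of P(x)

P_exact = lambda u: 1.0 / (math.pi * math.sqrt(u * (1.0 - u)))
mid = 0.45                          # compare a cell near the middle
print(P[int(mid / dx)], P_exact(mid))
# The characteristic U shape: the density piles up at the two ends
print(P[0] > P[M // 2])
```

The agreement in the interior cells is good; the end cells, where \(P(x)\) has integrable singularities, converge more slowly as the grid is refined.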
This method runs into problems when the _support_ of \(P(x)\) is fractal. However, the set of numbers \(\{P_{i}\}\) may still be a good approximation to \(P(x)\) when the map is subject to a small amount of noise, that is, \[x_{n+1}\,=\,F(x_{n})\,+\,\eta(x_{n})\] where \(\eta(x_{n})\) is a small random variable. Such noise is present not only in physical systems but also in computing machines, which are always limited to a finite precision. The effect of small amounts of random noise on the dynamics of one-dimensional maps has been studied by several authors. Two graphs from a study by Crutchfield et al. (1982), shown in Figure 3-27, show the effect of noise on the PDF for the logistic map when the control parameter is slightly larger than the critical parameter for chaos. Notice the smoothing of the peaks due to noise. These data were generated by partitioning the line [0, 1] into \(10^{3}\) bins and using \(10^{6}\) iterations. The extension of the analytical method such as (3-7.9) to two- or higher-dimensional maps and flows requires other mathematical tools such as the Fokker-Planck equations (e.g., see Gardiner, 1985). However, the coarse-graining numerical technique is often straightforward and can be implemented computationally as well as in experimental measurements (e.g., see Chapters 2 and 5).

Figure 3-27: (_a,b_) The effect of a small amount of noise on the probability density function for the logistic map (3-6.2). [From Crutchfield et al. (1982).]

### PDF and Lyapunov Exponents

As discussed above, the Lyapunov exponent is often calculated as a time average of the slope of the map function \(F^{\prime}(x)=dF/dx\): \[\Lambda=\frac{1}{N}\sum\ln\lvert F^{\prime}(x_{n})\rvert\] (3-7.11) where \(\Lambda>0\) defines chaos.
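The time average (3-7.11) is easily evaluated for the logistic map at \(a=4\), where the exact exponent \(\Lambda=\ln 2\) is known. A minimal sketch (seed and iteration count are arbitrary):

```python
import math

a = 4.0
x = 0.2345
for _ in range(100):                # discard the transient
    x = a * x * (1.0 - x)

N, total = 100000, 0.0
for _ in range(N):
    total += math.log(abs(a * (1.0 - 2.0 * x)))   # ln|F'(x_n)|
    x = a * x * (1.0 - x)

lyap = total / N
print(lyap, math.log(2.0))          # the time average approaches ln 2
```

Replacing \(a=4\) with a sweep over \(3.5<a<4\) and plotting \(\Lambda(a)\) reproduces the positive-exponent bands and negative-exponent periodic windows of Figure 3-25.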
When one has a probability density function for the map, the Lyapunov exponent can sometimes be calculated using a space average, that is, \[\Lambda=\int_{0}^{1}P(x)\ln\lvert F^{\prime}(x)\rvert\,dx\] (3-7.12) In the case of the tent map for \(r=2\), \(P(x)=1\), \(\lvert F^{\prime}(x)\rvert=2\), and \(\Lambda=\ln 2\). When the base of the logarithm is 2, then \(\Lambda\) represents the loss of 1 bit of information per cycle of the map.

### 3.8 3-D Flows; Models and Maps

Dynamical models using differential equations occur naturally in the physical sciences because the equations of classical physics have been formulated in terms of partial differential equations. With suitable spatial assumptions, the equations of mass balance, energy, momentum, and electromagnetics can be reduced to a set of coupled ordinary differential equations. The simplest set which can exhibit chaotic solutions is a set of three: \[\dot{x} = F(x,y,z)\] \[\dot{y} = G(x,y,z)\] (3-8.1) \[\dot{z} = H(x,y,z)\] The dynamics can be visualized by constructing a "velocity" vector field at every point in the three-dimensional phase space (\(x\), \(y\), \(z\)); that is, \(V=(\dot{x}\), \(\dot{y}\), \(\dot{z}\)) = (\(F\), \(G\), \(H\)). The rate of change of volume in this phase space is given by the divergence of this vector field: \[\nabla\cdot V=\frac{\partial F}{\partial x}+\frac{\partial G}{\partial y}+\frac{\partial H}{\partial z}\] (3-8.2) For dissipative problems \(\nabla\cdot V<0\), whereas for conservative problems \(\nabla\cdot V=0\). In this section we examine a few examples of three-dimensional flows and show how the dynamics generate two- and one-dimensional maps. It is also a good exercise to interpret discrete time fixed points and map dynamics in terms of the original 3-D flow geometry. For example, a cycle-3 fixed point of a 2-D map may imply three loops of a period-3 subharmonic orbit in the 3-D flow.
Or, if a closed orbit exists in a 2-D map, then the underlying flow may exhibit a quasiperiodic motion in 3-D that moves on a toroidal surface.

### Lorenz Model for Fluid Convection

One of the first models shown to exhibit chaotic behavior in numerical simulation was a fluid convection model of E. Lorenz of M.I.T. (1963), as described briefly in Chapter 1. In this model, \(x(t)\) represents a measure of the fluid velocity, and \(y(t)\), \(z(t)\) represent measures of the spatial temperature distribution in the fluid layer under gravity. The equations were derived in a more complex form by Saltzman (1961) (see also Chapter 4) and were simplified by Lorenz as follows: \[\begin{array}{l}\dot{x}=\sigma(y-x)\\ \dot{y}=rx-y-xz\\ \dot{z}=-bz+xy\end{array}\] (3-8.3) These equations are derived from the energy and momentum balance relations for the fluid. Here \(\sigma\) represents the Prandtl number, which is a ratio of kinematic viscosity to thermal diffusivity; \(r\) is called a Rayleigh number and is proportional to the temperature difference between the upper and lower surfaces of the fluid; and \(b\) is a geometric factor. Note that the only nonlinear terms are two quadratic polynomials. The general form of these equations also serves as a simple model for complex dynamics in certain laser devices (e.g., see Haken, 1985). The dynamics of this system can be visualized in a 3-D phase space (\(x\), \(y\), \(z\)) (Figure 1-27) using a "velocity" field (\(\dot{x}\), \(\dot{y}\), \(\dot{z}\)) given by the right-hand side of (3-8.3). The divergence of this "velocity" field shows that differential volume elements of phase space are uniformly contracting, that is, \[\nabla\cdot V=-(\sigma+b+1)\] (3-8.4) A typical set of values for the study of this equation is \(\sigma=10\), \(b=8/3\), \(1<r\leq 28\), which are the ones studied by Lorenz. Unfortunately, these nondimensional groups do not relate to real geoconvection flow parameters.
However, these parameters can be replicated in a laboratory convection experiment called a thermosyphon (see Chapter 4). Many authors have reproduced and extended the original analyses of Lorenz. Below is a summary of some of these results using \(r\) as a control variable. The fixed points of the flow are found by setting \(\dot{x}=\dot{y}=\dot{z}=0\) and are given by \[\begin{array}{rcl}(x,y,z)&=&(0,0,0)\\ (x_{e},y_{e},z_{e})&=&(\pm\sqrt{b(r-1)},\,\pm\sqrt{b(r-1)},\,r-1)\end{array}\] (3-8.5) The stability of the fixed point at the origin may be found by solving the following eigenvalue problem: \[\det\left[\begin{array}{ccc}(\lambda\,+\,\sigma)&-\,\sigma&0\\ -\,r&(\lambda\,+\,1)&0\\ 0&0&(\lambda\,+\,b)\end{array}\right]\,=\,0\] which yields \(\lambda=-b\) and \[\lambda\,=\,-\,\frac{(\sigma\,+\,1)}{2}\pm\frac{1}{2}\,[(\sigma\,+\,1)^{2}\,-\,4\sigma(1\,-\,r)]^{1/2}\] (3-8.6) For \(r<1\) the origin is the only fixed point, but when \(r>1\) two other fixed points are born as given above. The stability of these points can be studied by looking at the linearized equation near each of these fixed points. Also, the global dynamics are bounded in a finite volume (sphere) of phase space (e.g., see Berge et al., 1985). As one increases the temperature difference between upper and lower surfaces (i.e., by increasing \(r\)), the following dynamical bifurcations occur:

1. \(0<r<1\). There is only one stable fixed point at the origin.
2. \(1<r<1.346\). Two new stable nodes are born and the origin becomes a saddle with a one-dimensional, unstable manifold (Figure 3-28).
3. \(1.346<r<13.926\). At the lower value the stable nodes become stable spirals.
4. \(13.926<r<24.74\). Unstable limit cycles are born near each of the spiral nodes, and the basins of attraction of each of the two fixed points become intertwined. The steady-state motion is sensitive to initial conditions.
5. \(24.74<r\). All three fixed points become unstable.
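The critical value \(r\approx 24.74\) in the list above can be recovered from the linearization about the nonzero fixed points (3-8.5). A sketch (\(\sigma=10\), \(b=8/3\) as above; the bracketing interval is chosen by hand): form the Jacobian of (3-8.3) at \(C_{+}\), build its characteristic polynomial \(\lambda^{3}+a_{1}\lambda^{2}+a_{2}\lambda+a_{3}\) from the trace, the principal minors, and the determinant, and bisect on the Routh-Hurwitz margin \(a_{1}a_{2}-a_{3}\), which vanishes at the instability:

```python
import math

sigma, b = 10.0, 8.0 / 3.0

def hurwitz_margin(r):
    """a1*a2 - a3 for the characteristic polynomial at the fixed point C+."""
    x = y = math.sqrt(b * (r - 1.0))
    z = r - 1.0
    J = [[-sigma, sigma, 0.0],
         [r - z, -1.0, -x],
         [y, x, -b]]
    a1 = -(J[0][0] + J[1][1] + J[2][2])                       # -trace
    a2 = (J[1][1] * J[2][2] - J[1][2] * J[2][1] +
          J[0][0] * J[2][2] - J[0][2] * J[2][0] +
          J[0][0] * J[1][1] - J[0][1] * J[1][0])              # principal minors
    a3 = -(J[0][0] * (J[1][1] * J[2][2] - J[1][2] * J[2][1])
           - J[0][1] * (J[1][0] * J[2][2] - J[1][2] * J[2][0])
           + J[0][2] * (J[1][0] * J[2][1] - J[1][1] * J[2][0]))  # -det
    return a1 * a2 - a3    # > 0 while C+ is stable, = 0 at the bifurcation

lo, hi = 15.0, 28.0        # the margin is positive at 15 and negative at 28
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if hurwitz_margin(mid) > 0.0:
        lo = mid
    else:
        hi = mid
print(lo)                  # ~24.74, where all three fixed points are unstable
```

For these parameter values the bisection converges to \(r=\sigma(\sigma+b+3)/(\sigma-b-1)=470/19\approx 24.737\).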
Chaotic motions result. The dynamic orbit shown in Figure 1-27 is best viewed "live" on a computer graphics terminal. There one can see the interplay between the unstable spirals. Lorenz constructed a one-dimensional map from this flow by plotting each relative maximum \(z_{n+1}\) of \(z(t)\) against the previous maximum \(z_{n}\). Note that this "mapping" is not a classic Poincare map because the trajectories do not penetrate a surface of section. In later work, however, researchers did find a Poincare map (e.g., see Guckenheimer and Holmes, 1983, and Sparrow, 1982). This map is shown schematically in Figure 3-30. A plane \(z=r-1\) contains the two fixed points. When the trajectory penetrates this plane, the point is projected onto a line connecting the two fixed points, from which one obtains a one-dimensional map similar to the one shown in Figure 3-31. This map is similar to the Bernoulli map (3-7.3). In both cases, it is remarkable that such complex dynamics can be reduced to one-dimensional maps.

Figure 3-30: Poincaré plane for the construction of a return map using the flow generated by the Lorenz equation [Eq. (3-8.3)].

Figure 3-39: One-dimensional map of Lorenz based on the relative maximum values \(z_{n}\) of the time integration of the thermal convection model equation (3-8.3).

##### Duffing's Equation and the "Japanese Attractor"

A classic differential equation that has been used to model the nonlinear dynamics of mechanical and electrical systems is the harmonic oscillator with a cubic nonlinearity: \[\ddot{x}\ +\ \gamma\dot{x}\ +\ \alpha x\ +\ \beta x^{3}\ =\ F\,\cos\,\Omega t\] (3-8.7) This equation has been named after G. Duffing, a German electrical engineer/mathematician who studied it in the 1930s. With \(\alpha=0\), it is a model for a circuit with a nonlinear inductor (see Chapter 4); and with \(\alpha<0\), \(\beta>0\), it is a model for the postbuckling vibrations of an elastic beam column under compressive loads. We will focus on the case of \(\alpha=0\), which has been extensively studied by a group of engineers at Kyoto University for several decades, first under the leadership of C.
Hayashi (see Hayashi, 1985) and then under Professor Y. Ueda (e.g., see Ueda, 1979, 1991). This equation can be written as a set of three first-order nondimensional differential equations: \[\dot{x} = y\] \[\dot{y} = -ky\,-\,x^{3}\,+\,B\,\cos 2\pi z\] (3-8.8) \[\dot{z} = 1\qquad(\text{mod}\ 1)\] The (mod 1) indicates that the phase space has a cylindrical geometry. If we construct a set of vectors (\(\dot{x},\dot{y},\dot{z}\)) on a three-dimensional grid, we can imagine the flow of a fluid. This is the modern geometric view of solutions of differential equations as a flow in a 3-D vector space, in contrast to the algebraic methods many learned prior to the 1980s. One of the ways to produce a map is to look at the penetration of these flow trajectories through the planes \(z=0,\,1,\,2,\,...\) (in the Cartesian representation, Figure 3-32) or through the \(z=0\) plane in the cylindrical space representation. Shown in Figure 3-32 are three trajectories: One is a fixed point, one lies on the stable manifold of the saddle point of the map, and the other goes through the unstable manifold of the map. Ueda and his co-workers not only were among the first to observe chaotic solutions of the Duffing equation (using analog computers), but were also among the first to relate the tangling of the stable and unstable manifolds of saddle points of stroboscopic maps to the formation of a strange attractor. An example of this is shown in Figure 3-33, which was obtained from analog and digital simulation of (3-8.7). Such maps, originating from differential equations, are not as easy to generate or analyze as those calculated directly from difference equations.

Figure 3-32: Three trajectories in the phase space of a periodically forced oscillator originating from stable and unstable manifolds and the saddle point of a Poincaré map.
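Even so, a stroboscopic map is straightforward to generate numerically. The sketch below integrates (3-8.7) with \(\alpha=0\), \(\beta=1\), \(\Omega=1\) by fourth-order Runge-Kutta and samples the state once per forcing period; the damping and forcing values \(k=0.05\), \(B=7.5\) are values often quoted for Ueda's strange attractor (an assumption here, not taken from the text):

```python
import math

k, B = 0.05, 7.5             # damping and forcing; assumed chaotic values

def deriv(t, x, y):          # x' = y, y' = -k*y - x^3 + B*cos(t)
    return y, -k * y - x ** 3 + B * math.cos(t)

def rk4(t, x, y, h):
    k1x, k1y = deriv(t, x, y)
    k2x, k2y = deriv(t + h / 2, x + h / 2 * k1x, y + h / 2 * k1y)
    k3x, k3y = deriv(t + h / 2, x + h / 2 * k2x, y + h / 2 * k2y)
    k4x, k4y = deriv(t + h, x + h * k3x, y + h * k3y)
    return (x + h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            y + h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y))

T = 2.0 * math.pi            # forcing period (Omega = 1)
steps = 200                  # RK4 steps per forcing period
h = T / steps
x, y, t = 1.0, 0.0, 0.0
strobe = []                  # stroboscopic (Poincare) section points
for period in range(300):
    for _ in range(steps):
        x, y = rk4(t, x, y, h)
        t += h
    if period >= 50:         # discard the transient, sample once per period
        strobe.append((x, y))

print(len(strobe))           # points in the (x, y) plane tracing the attractor
```

Plotting the collected \((x,y)\) points gives a picture of the kind shown in Figure 3-33; many more forcing periods are needed for a well-filled attractor.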
Nevertheless, this example shows the importance of discrete time maps to the understanding of continuous time dynamics. It also shows the role of the unstable manifolds in Poincare maps as organizing topologies for strange attractors.

Figure 3-33: Strange attractors for the periodically forced Duffing oscillator (3-8.8) for a circuit with a nonlinear inductor. [From Ueda (1979).]

### A Map from a 4-D Conservative Flow

The following example illustrates two ideas relating to flows and maps. The first is the idea of chaos in lossless systems, sometimes called _conservative_ or _Hamiltonian_ dynamics (see also Chapter 1). The second idea is how one can obtain a 2-D map from dynamics in a 4-D phase space. Another lesson from this example is that because there are no attractors in conservative systems, each initial condition results in a unique type of motion, namely, periodic, quasiperiodic, or stochastic (chaotic). The example is illustrated in Figure 3-34, which shows a particle on a rotary base (Cusumano, 1990). The motion of the particle is confined to a vertical plane that is fixed to the rotating base. In Figure 3-34\(a\) the restoring force on the rotating particle is a linear spring, whereas in Figure 3-34\(b\),\(c\) the restoring force is gravity. This problem introduces _inertial_ nonlinearities, commonly known as _centripetal_ and _Coriolis_ accelerations, which appear in Newton's laws of motion when written in polar, spherical, or path coordinates.
Leaving the derivation of the equations of motion as an exercise, one can obtain two coupled second-order differential equations from a Lagrange's equation formulation of the problem with the spherical angles \(q_{1}\), \(q_{2}\) as generalized coordinates, nondimensionalized for unit mass and unit radius: \[\ddot{q}_{1}\,+\,\omega_{1}^{2}q_{1}\,-\,\frac{1}{2}\dot{q}_{2}^{2}\sin 2q_{1}\,=\,0\] \[[\varepsilon\,+\,\sin^{2}q_{1}]\ddot{q}_{2}\,+\,\varepsilon\omega_{2}^{2}q_{2}\,+\,\dot{q}_{1}\dot{q}_{2}\sin 2q_{1}\,=\,0\] (3-8.9) where \(\varepsilon\) represents the ratio of base inertia to particle inertia, \(\varepsilon=J/ML^{2}\), and \(\omega_{1}\), \(\omega_{2}\) are the uncoupled natural frequencies of the oscillators when \(q_{1}\), \(q_{2}\) are small. Ostensibly it would appear that the dynamics would naturally be described in a 4-D phase space. But, because energy is conserved, there is an added constraint which can be easily shown to be given by \[[\varepsilon\,+\,\sin^{2}q_{1}]\dot{q}_{2}^{2}\,+\,\dot{q}_{1}^{2}\,+\,\omega_{1}^{2}q_{1}^{2}\,+\,\varepsilon\omega_{2}^{2}\,q_{2}^{2}\,=\,2E=\mbox{constant}\] (3-8.10) where \(E\) is the total energy (kinetic plus potential). Using this expression, one can eliminate one of the four phase-space variables, say \(\dot{q}_{2}\), to obtain a continuous time motion in a 3-D space (\(q_{1}\), \(\dot{q}_{1}\), \(q_{2}\)). To obtain a 2-D map, one sets \(q_{2}=0\), \(\dot{q}_{2}>0\) to obtain a discrete time mapping in the plane (\(q_{1}\), \(\dot{q}_{1}\)). This procedure was carried out for the above equations by integrating them (in the form of first-order equations) and then saving (\(q_{1}\), \(\dot{q}_{1}\)) when \(q_{2}=0\), \(\dot{q}_{2}>0\). These calculations were part of a Ph.D. dissertation of J.
Cusumano (1990) in his study of the out-of-plane chaotic dynamics of an elastic torsion beam (see Cusumano and Moon, 1990), in which this model was used as a simple two-mode approximation to the original partial differential equation. Figure 3-34: Three two-degree-of-freedom oscillators with inertial nonlinearities. [From Cusumano (1990).] The results of these calculations are shown in Figure 3-35 for increasing values of the energy \(E\). Note that for a given energy there are many types of orbits, depending on the initial conditions. Of particular note are the motions near the origin. For small energy, there are quasiperiodic motions about the origin. However, for larger energy the origin becomes unstable and two stable out-of-plane quasiperiodic motions exist away from the origin. Finally, for higher energies the map shows a diffuse set of points which indicate stochastic or chaotic dynamics and which wander about the phase space in an apparently random manner. Figure 3-35: Poincaré map for the lossless system shown in Figure 3-34 for two values of the initial energy. [From Cusumano (1990).] (Note, \(p_{1}=\dot{q}_{1}\).)
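The section-collecting procedure just described is easy to reproduce numerically: integrate the equations of motion as four first-order equations, watch for upward zero crossings of \(q_{2}\), and record \((q_{1},\,\dot{q}_{1})\). The sketch below does this with a fixed-step Runge-Kutta integrator; the parameter values and initial condition are illustrative assumptions, not those of Cusumano (1990), and the conserved energy (3-8.10) serves as an accuracy check.

```python
import math

# Poincare section for the conservative system (3-8.9).
# Parameters and initial condition are illustrative, not from Cusumano (1990).
EPS, W1, W2 = 0.5, 1.0, 1.2

def deriv(s):
    q1, p1, q2, p2 = s          # p1 = dq1/dt, p2 = dq2/dt
    dp1 = 0.5 * p2 * p2 * math.sin(2 * q1) - W1**2 * q1
    dp2 = -(EPS * W2**2 * q2 + p1 * p2 * math.sin(2 * q1)) / (EPS + math.sin(q1)**2)
    return (p1, dp1, p2, dp2)

def rk4(s, h):
    k1 = deriv(s)
    k2 = deriv(tuple(x + 0.5 * h * k for x, k in zip(s, k1)))
    k3 = deriv(tuple(x + 0.5 * h * k for x, k in zip(s, k2)))
    k4 = deriv(tuple(x + h * k for x, k in zip(s, k3)))
    return tuple(x + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(s, k1, k2, k3, k4))

def energy(s):                  # conserved quantity (3-8.10), value E
    q1, p1, q2, p2 = s
    return 0.5 * ((EPS + math.sin(q1)**2) * p2**2 + p1**2
                  + W1**2 * q1**2 + EPS * W2**2 * q2**2)

def poincare_section(s0, h=0.005, t_end=400.0):
    """Collect (q1, dq1/dt) whenever q2 crosses zero with dq2/dt > 0."""
    pts, s, t = [], s0, 0.0
    while t < t_end:
        s_new = rk4(s, h)
        if s[2] < 0.0 <= s_new[2] and s_new[3] > 0.0:
            f = -s[2] / (s_new[2] - s[2])         # linear interpolation
            pts.append((s[0] + f * (s_new[0] - s[0]),
                        s[1] + f * (s_new[1] - s[1])))
        s, t = s_new, t + h
    return pts, s

s0 = (0.3, 0.0, 0.0, 0.4)
pts, s_final = poincare_section(s0)
drift = abs(energy(s_final) - energy(s0)) / energy(s0)
```

Scanning a range of initial conditions at fixed \(E\) and overlaying the resulting point sets reproduces the mixed periodic/quasiperiodic/stochastic structure seen in Figure 3-35.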
## Chapter 4 Chaos in Physical Systems _The world is what it is and I am what I am.... This out there and this in me, all this, everything, the resultant of inexplicable forces. A chaos whose order is beyond comprehension. Beyond human comprehension._ Henry Miller _Black Spring_ ### 4.1 New Paradigms in Dynamics In his book _The Structure of Scientific Revolutions_, Thomas Kuhn (1962) argues that major changes in science occur not so much when new theories are advanced but when the simple models with which scientists conceptualize a theory are changed. A conceptual model or problem that embodies the major features of a whole class of problems is called a _paradigm_. In vibrations, the spring-mass model represents such a paradigm. In nonlinear dynamics the motion of the pendulum and the three-body problem in celestial mechanics represent classical paradigms. The theory that new models are precursors for major changes in scientific or mathematical thinking has no better example than the current revolution in nonlinear dynamics. Here the two principal paradigms are the Lorenz attractor [Eq. (1-3.9)] and the logistic equation [Eq. (1-3.6)]. Many of the features of chaotic dynamics are embodied in these two examples, such as divergent trajectories, subharmonic bifurcations, period doubling, Poincare maps, and fractal dimensions. Just as the novitiate in linear vibrations had to master the subtleties of the spring-mass paradigm to understand vibrations of complex systems, so the budding nonlinear dynamicist of today must understand the phenomena embedded in the Lorenz and logistic models. Other lesser paradigms are also important in dynamical systems, including the forced motions of the Van der Pol equation (1-2.5), the Duffing oscillator models (1-2.4) of Ueda and Holmes (see below, this chapter), the two-dimensional map of Henon (1-3.8), and the circle map (3-5.7).
The implication of this minor revolution in physics is that in this new age of dynamics we will observe dynamical phenomena and conduct dynamics experiments in physical systems from a vastly new perspective. Old experiments will be viewed in a new light, while new dynamical phenomena remain to be discovered. To date, there have appeared many fine books on the subject of chaos. Most of these, however, focus almost entirely on the mathematical analysis of chaotic dynamics. In this chapter, we survey a variety of mathematical and physical models which exhibit chaotic vibrations. An attempt is made to describe the physical nature of the chaos in these examples, as well as to point out the similarities and differences between the physical problems and their more mathematical chaos paradigms mentioned above. ### Early Observations of Chaotic Vibrations Early scholars in the fields of electrical and mechanical vibrations rarely mention nonperiodic, sustained oscillations, with the exception of problems relating to fluid turbulence. Yet chaotic motions have always existed. Experimentalists, however, were not trained to recognize them. Inspired by theoreticians, the engineer or scientist was taught to look for resonances and periodic vibrations in physical experiments and to label all other motions as "noise." Joe Keller, a mathematician at Stanford University, has speculated on the reason for the apparent myopic vision of experimental scientists as regards chaotic phenomena in the last century. He notes that the completeness and beauty of linear differential equations led to their domination of the mathematical training of most scientists and engineers. Examples of nonperiodic oscillations can be found in the literature, however. Three cases are cited here.
First, Van der Pol and Van der Mark (1927), in a paper on oscillations of a vacuum tube circuit, make the following remark at the end of their paper: "Often an irregular noise is heard in the telephone receiver before the frequency jumps." No explanation is offered for these events; and in classical treatises on the Van der Pol oscillator, no further mention is made of "irregular noises." One of the more interesting stories of observing chaotic oscillations before the age of chaos theory is told by Professor Yoshisuke Ueda of Kyoto University.1 Professor Ueda was the student of a very famous professor of nonlinear electrical circuits, C. Hayashi, whose well-known treatise is referenced at the end of this book. Ueda tells of collecting data on an analog computer in November of 1961 when he accidentally came upon a nonperiodic signal from the simulation of frequency entrainment on a second-order, nonlinear, periodically driven oscillator. At the time, he and his colleagues assumed that nonperiodic motions were always quasiperiodic. But Ueda's analysis showed that instead of a nice closed Poincare section, characteristic of quasiperiodicity, he got a ragged picture, what he calls a "shattered egg." However, Ueda's attempts to get his famous professor to acknowledge his observations or publish them met with no success. He attempted to publish some of these results in 1971 in Japan when he was a new professor at Kyoto, but he again met with resistance. Not until 1978 did his famous "Japanese Attractor" paper get published in _Transactions of the Institution of Electrical Engineers in Japan_ [reprinted in English in the _International Journal of Non-Linear Mechanics_, **20** (1980), 481-491]. Footnote 1: This lecture was given by Professor Ueda at the International Symposium "The Impact of Chaos on Science and Society," 15-17 April 1991, organized by the United Nations University and the University of Tokyo.
The moral of this story is clear: Even in one of the most advanced laboratories in nonlinear circuits, chaotic dynamics were rejected because they did not fit in with the mathematical theories of the times. (As evidence of the advanced nature of the experiments at Kyoto, Professor Abe Hack invented an automatic Poincare-map-generating circuit in 1966 to be used with the analog computer.) In a third example, Tseng and Dugundji (1971) studied the nonlinear vibrations of a buckled beam. The beam was rigidly clamped at both ends and then compressed to buckling. This created an arched structure. When the beam was vibrated transverse to its length and the acceleration forces increased, snap-through occurred. In this regime, intermittent oscillations were observed as well as subharmonic responses. The analysis in the paper, however, only dealt with periodic vibrations. Many readers may recall similar phenomena in scientific experiments that they have done or have seen in engineering practice. Chaotic noise has always been around in engineering devices, such as static in old radios and chatter in loose-fitting gears, but until recently we had no models or mathematics to simulate or describe it. ### Particle and Rigid Body Systems #### Multiple-Well Potential Problems A system with a finite number of equilibrium states can often be described by a multiple-well potential energy function \(V(\mathbf{x})\) where, for a particle with unit mass, the equation of motion takes the form \[\ddot{\mathbf{x}}\;+\;\nabla V(\mathbf{x})\;=\;\mathbf{F}(\mathbf{x},\;\dot{\mathbf{x}},\;t) \tag{4-2.1}\] where \(\mathbf{x}\) represents the position of the particle in the configuration space, \(\nabla\) represents the gradient operator, and \(\mathbf{F}(\mathbf{x},\;\dot{\mathbf{x}},\;t)\) represents additional forcing and dissipation forces.
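The gradient structure above is straightforward to simulate. The sketch below uses an illustrative quartic four-well potential (a convenient stand-in, not the magnetic-pendulum potential of Figure 4-1) with a simple damping force for \(\mathbf{F}\); with dissipation and no forcing, every trajectory must settle into one of the four wells, and which well is reached depends sensitively on the initial condition.

```python
import math

# Particle of unit mass in the gradient form x'' + grad V(x) = F, with
# F = -c * x' (pure damping) and the illustrative four-well potential
#   V(x, y) = (x^2 - 1)^2 / 4 + (y^2 - 1)^2 / 4,
# whose minima sit at (+-1, +-1). This is an assumed stand-in potential,
# not the magnet potential of Figure 4-1.
C_DAMP = 0.5

def grad_V(x, y):
    return x * (x * x - 1.0), y * (y * y - 1.0)

def deriv(s):
    x, y, vx, vy = s
    gx, gy = grad_V(x, y)
    return (vx, vy, -gx - C_DAMP * vx, -gy - C_DAMP * vy)

def rk4(s, h):
    k1 = deriv(s)
    k2 = deriv(tuple(a + h / 2 * b for a, b in zip(s, k1)))
    k3 = deriv(tuple(a + h / 2 * b for a, b in zip(s, k2)))
    k4 = deriv(tuple(a + h * b for a, b in zip(s, k3)))
    return tuple(a + h / 6 * (p + 2 * q + 2 * r + u)
                 for a, p, q, r, u in zip(s, k1, k2, k3, k4))

# With damping and no external forcing the trajectory settles into a well.
s = (0.3, -0.2, 0.5, 0.1)       # (x, y, x', y'), arbitrary start
for _ in range(20000):           # integrate to t = 200
    s = rk4(s, 0.01)
x_f, y_f = s[0], s[1]
```

Coloring each initial condition by its final well produces the fractal basin-boundary pictures discussed later in connection with Chapter 7.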
These systems are good candidates for chaotic vibrations because the unforced problem \(\mathbf{F}=0\) has one or more saddle points in the phase space which can lead to horseshoes in the Poincare map of the system. A sketch of a mechanical system with a four-well potential is shown in Figure 4-1 for a spherical pendulum under gravity with four permanent magnets underneath the bob. Figure 4-1: Sketch of a pendulum with a ferromagnetic end mass oscillating above four permanent magnets. Model for a two-degree-of-freedom four-well potential oscillator. #### Double-Well Potential Problems The forced vibrations of a buckled beam were modeled using a Duffing-type equation by Holmes (1979), who showed in analog computer studies that chaotic vibrations were possible. The nondimensional equation derived by Holmes is \[\ddot{x}\,+\,\gamma\dot{x}\,-\,\frac{1}{2}\,x(1\,-\,x^{2})=f_{0}\cos\omega t\] (4-2.2) where \(x\) represents the lateral motion of the beam. [Here a simple one-mode model is used to represent the beam as in Moon and Holmes (1979).] This equation can also model a particle in a two-well potential (Figure 1-2). This model has been used to study plasma oscillations (e.g., see Mahaffey, 1976). Chaotic solutions obtained from an analog computer are shown in Figure 4-2. An experimental realization of this model was discussed in Chapter 2. A Fourier spectrum based on solutions to this equation (Figure 2-7) shows a continuous spectrum of frequencies which is characteristic of chaotic motions. A Poincare map of the strange attractor is shown in Figure 4-3. Figure 4-2: Chaotic vibrations of a periodically forced buckled beam: comparison of analog computer simulation and experimental measurements. [From Moon and Holmes (1979).] Fractal dimensions for chaotic solutions are discussed in later chapters. Numerical studies of the double-well problem have also been published by Dowell and Pezeshki (1986), Moon and Li (1985a,b), and Ueda et al. (1986).
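A Poincaré map like the one described above can be generated by sampling the flow of (4-2.2) once per forcing cycle. The damping and forcing values below are illustrative choices, not the values used in the cited analog or numerical studies.

```python
import math

# Stroboscopic (Poincare) sampling of the two-well Duffing equation (4-2.2):
#   x'' + g*x' - 0.5*x*(1 - x^2) = f0 * cos(w*t).
# Parameter values are illustrative guesses, not those of Holmes (1979)
# or Moon and Holmes (1979).
G, F0, W = 0.25, 0.3, 1.0

def deriv(t, x, v):
    return v, -G * v + 0.5 * x * (1.0 - x * x) + F0 * math.cos(W * t)

def rk4_step(t, x, v, h):
    k1x, k1v = deriv(t, x, v)
    k2x, k2v = deriv(t + h / 2, x + h / 2 * k1x, v + h / 2 * k1v)
    k3x, k3v = deriv(t + h / 2, x + h / 2 * k2x, v + h / 2 * k2v)
    k4x, k4v = deriv(t + h, x + h * k3x, v + h * k3v)
    return (x + h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

def poincare_points(n_periods=300, steps_per_period=256):
    """Sample (x, x') once per forcing period, after a transient."""
    T = 2 * math.pi / W
    h = T / steps_per_period
    t, x, v = 0.0, 1.0, 0.0          # start near the right-hand well
    pts = []
    for p in range(n_periods):
        for _ in range(steps_per_period):
            x, v = rk4_step(t, x, v, h)
            t += h
        if p >= 50:                  # discard transient periods
            pts.append((x, v))
    return pts

pts = poincare_points()
```

For parameter combinations that produce chaos, plotting the collected \((x, \dot{x})\) pairs reveals the layered, fractal structure of the strange attractor; for periodic responses the points collapse onto a finite set.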
In a similar example, Clemens and Wauer (1981) have analyzed the snap-through oscillation of a one-hinged arch. Their equation takes the form \[m\ddot{y}\,+\,\gamma\dot{y}\,+\,2k\left(1\,-\frac{1}{(b^{2}\,+\,y^{2})^{1/2}}\right)y\,=f_{0}\sin\omega t \tag{4-2.3}\] When only cubic nonlinearities are retained, this equation assumes the form of the two-well potential Duffing oscillator (4-2.2). In another two-well problem, Shaw and Shaw (1989) have studied the forced vibration of an inverted pendulum with amplitude constraints. Chaotic motions of an elastoplastic arch have been studied by Poddar et al. (1986). When the length of the arch exceeds the distance between the pinned ends, there are two equilibrium positions. Forced excitation can then result in unpredictable jumping from one arched position to another (see also Symonds and Yu, 1985). Figure 4-3: Poincaré map of chaotic solutions to the forced two-well potential oscillator; 15,000 points. #### Three- and Four-Well Potential Problems Chaotic dynamics of a particle in three-well and four-well potentials have been studied both experimentally and analytically in the M.S. and Ph.D. dissertations of G.-X. Li (see Li and Moon, 1990a,b). A brief discussion of these problems is given in Chapters 6 and 7. A three-well problem can easily be created experimentally by placing three permanent magnets below a cantilevered beam (see Figure 2-2). A four-well potential can also be created by placing four magnets below a spherical pendulum. In this case, the number of degrees of freedom is two (Figure 4-1). #### Chaotic Dynamics in the Solar System The first firm test of the Newtonian model of the physical dynamical world was the correct prediction of the motions of the planets. And, for more than two centuries, students of physics have been taught the predictable nature of Newton's orbital dynamics.
The time history of the planets in our solar system has been used to measure the history of our world. For over three decades, orbital dynamics has been used to predict with remarkable accuracy the motions of our rockets and satellites. Now, more than three centuries after the publication of the _Principia_, some are challenging the notion of absolute predictability in the motions of some of the objects in our solar system. How can this be? Newton's law of gravitational attraction gives a force inversely proportional to the square of the distance between masses, and thus the problem would appear to be strongly nonlinear. For two masses under mutual attraction, however, the problem can be reduced to that of a single mass moving around a fixed center. Furthermore, a change of variable transforms the nonlinear problem into the linear harmonic oscillator (e.g., see Goldstein, 1980). However, when three or more celestial bodies interact, stochastic dynamics are possible. Another departure from the classical Newtonian orbital problem is the effect of mass distribution. When the mass of the planet or moon has certain symmetries, one can reduce the problem to the interaction of point masses. However, for irregularly shaped objects, the angular displacements add complexity similar to that of spinning-top dynamics. In the following, a few examples of rigid body and orbital dynamics are described which may provide clues to chaos in the solar system. An example of chaotic fluid flow on Jupiter may be found in the work of Marcus (1988) as well as Meyers et al. (1989). #### Chaotic Tumbling of Hyperion The NASA mission of Voyager 2 transmitted pictures of an irregularly shaped satellite of Saturn called Hyperion. The pioneering work of J. Wisdom of M.I.T. showed how this nonsymmetric celestial object could exhibit chaotic tumbling in its elliptical orbit around Saturn.
It is well known that an elongated satellite, such as a dumbbell-shaped object orbiting in a circular orbit, can exhibit oscillating planar rotary motions about an axis through the center of mass and normal to the plane of the orbit, with a period \(1/\sqrt{2}\) times the orbital period. When the satellite is asymmetric with three different moments of inertia, \(A<B<C\), Wisdom et al. (1984) show that the planar dynamics are described by \[\frac{d^{2}\theta}{dt^{2}}+\frac{\omega_{0}^{2}}{2r^{3}}\sin 2(\theta-f)=0\] (4-2.4) where time is normalized by the orbital period \(T=2\pi\) and where \(r(t)\) and \(f(t)\) are \(2\pi\) periodic [e.g., see Thompson (1989a) for a review of this work]. Here \(\omega_{0}^{2}=3(B-A)/C\), \(r\) is the radius to the center of mass, and \(\theta(t)\) measures the orientation of the long axis of the satellite. The term \(f(t)\) is called the _true anomaly_ of the orbit. This equation is similar to that of a parametrically forced pendulum, which has been found to exhibit chaotic dynamics. However complex the planar oscillations may be, Wisdom et al. (1984) show that these planar motions can become unstable, with the possibility of three-dimensional tumbling of the satellite in its orbit around Saturn. Imagine living on such a world, where Saturn rise and set are unpredictable and where definitions of east and west, defined on Earth by the fixed axis of rotation, would be hard to determine by intuition. #### Chaotic Orbits of Halley's Comet The dynamics of a celestial object such as Halley's comet about the sun can be approximately described by a two-body problem which is fully integrable. However, several planets, namely Jupiter and Saturn, can exert perturbations on the orbit of Halley's comet when there is a close encounter with these planets. This can be seen as analogous to a pendulum that receives a series of short-time perturbations, similar to the kicked rotor problem.
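Before moving on to the comet problem, note that the tumbling equation (4-2.4) is straightforward to integrate once \(r(t)\) and \(f(t)\) are generated from Kepler's equation. In the sketch below the orbital eccentricity and the value of \(\omega_{0}^{2}\) are assumed, illustrative numbers, not those of Wisdom et al. (1984).

```python
import math

# Integration of the planar tumbling equation (4-2.4),
#   theta'' + (w0^2 / (2 r^3)) * sin 2(theta - f) = 0,
# with r(t), f(t) taken from a Kepler ellipse of unit semimajor axis and
# orbital period 2*pi (matching the normalization in the text).
# Eccentricity and w0^2 below are rough, assumed values.
E_ORB = 0.1      # orbital eccentricity (assumed)
W0SQ = 0.79      # 3(B - A)/C (assumed)

def kepler_E(M, e, iters=8):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = M
    for _ in range(iters):
        E -= (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
    return E

def orbit(t):
    """Radius r and true anomaly f at mean anomaly M = t."""
    E = kepler_E(t % (2 * math.pi), E_ORB)
    r = 1.0 - E_ORB * math.cos(E)
    f = 2.0 * math.atan2(math.sqrt(1 + E_ORB) * math.sin(E / 2),
                         math.sqrt(1 - E_ORB) * math.cos(E / 2))
    return r, f

def deriv(t, th, om):
    r, f = orbit(t)
    return om, -(W0SQ / (2.0 * r**3)) * math.sin(2.0 * (th - f))

def integrate(th0, om0, n_orbits=10, steps_per_orbit=1000):
    h = 2 * math.pi / steps_per_orbit
    t, th, om = 0.0, th0, om0
    for _ in range(n_orbits * steps_per_orbit):
        k1t, k1o = deriv(t, th, om)
        k2t, k2o = deriv(t + h / 2, th + h / 2 * k1t, om + h / 2 * k1o)
        k3t, k3o = deriv(t + h / 2, th + h / 2 * k2t, om + h / 2 * k2o)
        k4t, k4o = deriv(t + h, th + h * k3t, om + h * k3o)
        th += h / 6 * (k1t + 2 * k2t + 2 * k3t + k4t)
        om += h / 6 * (k1o + 2 * k2o + 2 * k3o + k4o)
        t += h
    return th, om

th_end, om_end = integrate(0.0, 0.5)
```

Sampling \((\theta, \dot{\theta})\) once per orbit (at perihelion, say) yields the surface-of-section pictures from which Wisdom et al. diagnosed the chaotic attitude zone.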
In a recent paper, Chirikov and Vecheslavov (1989) have used this technique of reducing the dynamics of Halley's comet under the influence of Jupiter to a two-dimensional iterated map. The determination of possible chaotic dynamics requires a large number of observations. Given the limited number of orbital periods, within a human lifetime, of many of the celestial objects in our solar system, the only tool that we have is to perform a simulation backwards and forwards in time. If one uses the differential equations of Newton for each of the relevant celestial bodies, the calculation using digital computers becomes extremely time-consuming. One solution is to construct an electronic Orrery, a dedicated analog or digital computer in which the equations are hard-wired into the electronics. Another solution is to replace the coupled ordinary differential equations with coupled iterated maps. Sketching only the barest outline of the problem, \(x_{n}\) is chosen to represent the relative phase of the orbit of Jupiter at which Halley's comet reaches its perihelion. Between encounters the comet is assumed to have an energy proportional to \(\omega_{n}\). Using observations of Halley's comet, Chirikov and Vecheslavov (1989) derive a 2-D map of the form \[\begin{array}{l}\omega_{n\,+\,1}=\,\omega_{n}\,+\,F(x_{n})\\ x_{n\,+\,1}=\,x_{n}\,+\,\omega_{n\,+\,1}^{-3/2}\end{array}\] (4-2.5) where \(F(x)\) is the saw-tooth function shown in Figure 4-4\(a\). Iteration of this map for certain initial conditions leads to the stochastic orbit shown in Figure 4-4\(b\). ### Pendulum Problems #### Forced Single-Degree-of-Freedom Pendulum The classical pendulum has a restoring force or torque that is proportional to the sine of the angular displacement \(\theta\) (Figure 4-5). This implies that there are two equilibrium positions, \(\theta=0\), \(\pi\). In the case of zero forcing, there is both a center and a saddle point in the phase plane.
As with multiple-well potential problems, saddles are often clues to the existence of horseshoe maps in the Poincare section when periodic forcing is added to the problem. (See Figures 1-13, 3-5.) #### Chaos in a Pendulum The motion of a particle in both space-periodic and time-periodic force fields serves as a model for several problems in physics. These include the classical pendulum, a charged particle in a moving electric field, synchronous rotors, and Josephson junctions. For example, the equation for the nonlinear dynamics of a particle in a traveling electric force field takes the form (e.g., see Zaslavsky and Chirikov, 1972) \[\ddot{x}\ +\ \delta\dot{x}\ +\ \alpha\sin x\ =\ g(kx\ -\ \omega t) \tag{4-2.6}\] where \(g(\ )\) is a periodic function. Figure 4-4: (\(a\)) Mapping function (4-2.5) for a model of Halley's comet perturbed by the orbit of Jupiter. (\(b\)) Iteration of the map (4-2.5). [From Chirikov and Vecheslavov (1989).] The study of the forced pendulum problem has revealed complex dynamics and chaotic vibrations (see Hockett and Holmes, 1985; Gwinn and Westervelt, 1985): \[\ddot{x}\ +\ \delta\dot{x}\ +\ \alpha\sin x\ =\ f\cos\omega t \tag{4-2.7}\] _Parametric oscillation_ is a term used to describe the vibration of a system with time-periodic changes in one or more of its parameters. For example, a simply supported elastic beam under small periodic axial compression is often modeled by a one-mode approximation which yields an equation of the form \[\ddot{x}\ +\ \omega_{0}^{2}(1\ +\ \beta\cos\Omega t)x\ =\ 0 \tag{4-2.8}\] This linear ordinary differential equation is the well-known Mathieu equation. It is known that for certain values of \(\omega_{0}^{2}\), \(\beta\), and \(\Omega\) the equation admits unstable oscillating solutions. When nonlinearities are added, these growing vibrations saturate in a limit cycle. A similar example is the pendulum with a vibrating pivot point (Figure 4-5).
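The instability regions of the Mathieu equation above can be located numerically by Floquet analysis: integrating two independent solutions across one forcing period yields a 2 x 2 monodromy matrix, and the motion is unstable exactly when the magnitude of its trace exceeds 2. A minimal sketch, with illustrative parameter values:

```python
import math

# Floquet (monodromy-matrix) stability test for the Mathieu equation
#   x'' + w0^2 * (1 + beta*cos(W*t)) * x = 0.
# The 2x2 map over one forcing period T has trace tr; |tr| > 2 means
# exponential growth (parametric instability). Parameters are illustrative.
def monodromy_trace(w0, beta, W, steps=4000):
    T = 2 * math.pi / W
    h = T / steps

    def accel(t, x):
        return -w0**2 * (1.0 + beta * math.cos(W * t)) * x

    def propagate(x, v):
        t = 0.0
        for _ in range(steps):
            k1x, k1v = v, accel(t, x)
            k2x, k2v = v + h / 2 * k1v, accel(t + h / 2, x + h / 2 * k1x)
            k3x, k3v = v + h / 2 * k2v, accel(t + h / 2, x + h / 2 * k2x)
            k4x, k4v = v + h * k3v, accel(t + h, x + h * k3x)
            x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
            v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
            t += h
        return x, v

    x1, v1 = propagate(1.0, 0.0)     # column 1 of the monodromy matrix
    x2, v2 = propagate(0.0, 1.0)     # column 2
    return x1 + v2                   # trace

tr_resonant = monodromy_trace(w0=1.0, beta=0.5, W=2.0)  # principal resonance
tr_detuned = monodromy_trace(w0=1.0, beta=0.5, W=3.5)   # away from resonance
```

At \(\Omega = 2\omega_{0}\), the principal parametric resonance, the trace magnitude exceeds 2 (the multipliers are real and negative, reflecting the subharmonic character of the growth), while well away from resonance it stays below 2.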
Chaotic vibrations for this problem have been studied numerically by Levin and Koch (1981) and McLaughlin (1981). The mathematical equation for this problem is similar to (4-2.8): \[\ddot{\theta}\;+\;\beta\dot{\theta}\;+\;(1\;+\;A\cos\Omega t)\sin\theta\;=\;0\] (4-2.9) Figure 4-5: Parametrically forced pendulum. Period-doubling phenomena have been observed in numerical solutions, and a Feigenbaum number of \(\delta=4.74\) was calculated at the sixth subharmonic bifurcation. Chaotic motions of a double pendulum have been studied by Richter and Scholz (1984). #### Spherical Pendulum The complex dynamics of a spherical pendulum with two degrees of freedom have been examined by Miles (1984a), who found chaotic solutions for this problem in numerical experiments when the suspension point undergoes forced periodic motions. The equations of motion can be derived from the Lagrangian \[L=\frac{1}{2}\,m\,(\dot{x}^{2}\,+\,\dot{y}^{2}\,+\,\dot{z}^{2})\,-\,mg(l\,-\,z)\] (4-2.10) where \(l\) is the length of the pendulum and the coordinates \((x,\,y,\,z)\) satisfy the constraint equation \[(x\;-\;x_{0})^{2}\,+\;y^{2}\,+\;z^{2}\,=\,l^{2}\] The suspension point is \(x_{0}=\varepsilon l\cos\omega t\), and gravity acts in the \(z\) direction. Miles (1984a) used a perturbation technique and transformed the resulting equations of motion using \[\begin{array}{l}x\,=\,[\,p_{\,1}(\tau)\cos\theta\,+\,q_{\,1}(\tau)\sin\theta]l\varepsilon^{1/3}\\ y\,=\,[\,p_{\,2}(\tau)\cos\theta\,+\,q_{\,2}(\tau)\sin\theta]l\varepsilon^{1/3}\end{array}\] (4-2.11) where \(\theta=\omega t\) and \(\tau=\frac{1}{2}\varepsilon^{2/3}\omega t\).
The resulting set of four first-order equations for (\(p_{\,1},\,p_{\,2},\,q_{\,1},\,q_{\,2}\)), with small damping added (represented by \(\alpha\)), is found to be \[\frac{d}{dt}\left[\begin{array}{c}p_{\,1}\\ p_{\,2}\\ q_{\,1}\\ q_{\,2}\end{array}\right]\;=\;\left[\begin{array}{cccc}-\,\alpha&-\,\beta&-\,\delta&0\\ \beta&-\,\alpha&0&-\,\delta\\ \delta&0&-\,\alpha&-\,\beta\\ 0&\delta&\beta&-\,\alpha\end{array}\right]\left[\begin{array}{c}p_{\,1}\\ p_{\,2}\\ q_{\,1}\\ q_{\,2}\end{array}\right]\;+\;\left[\begin{array}{c}0\\ 1\\ 0\\ 1\end{array}\right]\] (4-2.12) where \(\alpha\), \(\beta\), and \(\delta\) depend on the variables \((p_{1},p_{2},q_{1},q_{2})\). The reader is referred to Miles (1984a) for the definitions of \(\alpha\), \(\beta\), and \(\delta\). The divergence of this flow in the four-dimensional phase space is \(\nabla\cdot\mathbf{f}=-4\alpha\). Equilibrium points of the set of equations (4-2.12) correspond to either periodic planar or nonplanar motions. Numerical simulation of this set of equations shows a transition from closed orbit trajectories and discrete spectra to complex orbits and broad spectra characteristic of chaotic motions. #### Experiments on a Magnetic Pendulum The pendulum is a classical paradigm in dynamics. To find out if this paragon of deterministic dynamics can exhibit chaotic oscillations, the author and co-workers at Cornell University (Moon et al., 1987) constructed a magnetic dipole rotor with a restoring torque proportional to the sine of the angle between the dipole axis and a fixed magnetic field (Figure 4-6). A time-periodic restoring torque was provided by placing a sinusoidal voltage across two poles transverse to the steady magnetic field.
The mathematical model for this forced magnetic pendulum becomes \[J\ddot{\theta}\ +\ c\dot{\theta}\ +\ MB_{s}\sin\theta\ =\ MB_{d}\cos\theta\cos\Omega t\] (4-2.13) where \(J\) is the rotational inertia of the rotor, \(c\) is a viscous damping constant, \(M\) is the magnetic dipole strength of the rotor, and \(B_{s}\) and \(B_{d}\) are the intensities of the steady and dynamic magnetic fields, respectively. Figure 4-6: Sketch of a magnetic dipole rotor in crossed static and dynamic magnetic fields, a "magnetic pendulum." Figure 4-7 shows a comparison of periodic and chaotic rotor speeds under periodic excitation. Additional discussion of this experiment may be found in Chapters 5 and 6. Chaos theory has also been used to excite nonperiodic vibrations in a multiple-pendulum mobile sculpture by Viet et al. (1983). A brief discussion of chaos and sculptural assemblages of pendulums, as in the work of Calder, is given in Appendix C, "Chaotic Toys." #### Rigid Body Problems The gyroscopic rotational effects of a free-spinning rigid body, such as precession and nutation, are well known. Under what circumstances, then, can rigid bodies exhibit chaotic dynamics? To answer this we review the equations of motion known as _Euler's equations_. Here the components of the rotation vector \(\boldsymbol{\omega}=(\omega_{1},\,\omega_{2},\,\omega_{3})\) are written with respect to the principal inertial axes centered at the center of mass: \[\frac{d}{dt}\,I_{1}\omega_{1}=(I_{2}-I_{3})\omega_{2}\omega_{3}+M_{1}\] \[\frac{d}{dt}\,I_{2}\omega_{2}=(I_{3}-I_{1})\omega_{1}\omega_{3}+M_{2}\] (4-2.14) \[\frac{d}{dt}\,I_{3}\omega_{3}=(I_{1}-I_{2})\omega_{1}\omega_{2}+M_{3}\] where \([I_{1},I_{2},I_{3}]\) are the principal inertias and \((M_{1},M_{2},M_{3})\) are the applied moments. Figure 4-7: _Top:_ Periodic motion of a magnetic rotor (Figure 4-6). _Bottom:_ Chaotic motion of a magnetic rotor.
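In the force-free, moment-free case, equations (4-2.14) conserve both the rotational kinetic energy and the magnitude of the angular momentum, which makes a convenient correctness check for any numerical integration. A minimal sketch with illustrative inertias and a spin started near the intermediate (unstable) axis:

```python
import math

# Direct integration of Euler's equations (4-2.14) with all M_i = 0.
# The motion is integrable; kinetic energy T = (1/2) sum(I_i w_i^2) and
# L^2 = sum((I_i w_i)^2) must both be conserved. Inertias and the initial
# spin are illustrative choices.
I = (3.0, 2.0, 1.0)

def deriv(w):
    w1, w2, w3 = w
    return ((I[1] - I[2]) * w2 * w3 / I[0],
            (I[2] - I[0]) * w3 * w1 / I[1],
            (I[0] - I[1]) * w1 * w2 / I[2])

def rk4(w, h):
    k1 = deriv(w)
    k2 = deriv(tuple(x + h / 2 * k for x, k in zip(w, k1)))
    k3 = deriv(tuple(x + h / 2 * k for x, k in zip(w, k2)))
    k4 = deriv(tuple(x + h * k for x, k in zip(w, k3)))
    return tuple(x + h / 6 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(w, k1, k2, k3, k4))

def invariants(w):
    ke = 0.5 * sum(Ii * wi**2 for Ii, wi in zip(I, w))
    L2 = sum((Ii * wi)**2 for Ii, wi in zip(I, w))
    return ke, L2

w = (0.05, 1.0, 0.05)      # near the intermediate axis: tumbling motion
ke0, L20 = invariants(w)
for _ in range(20000):     # integrate to t = 20
    w = rk4(w, 0.001)
ke1, L21 = invariants(w)
```

The same integrator, with a nonzero moment model substituted into `deriv`, handles the periodically forced and feedback-coupled cases described next; only in those cases can the conserved-quantity checks fail and chaotic solutions appear.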
In general, the rotational motion could be coupled to the translational motion, or applied forces could be coupled into the applied moments. We focus here on the force-free case in which the center of mass is stationary. In this case, when the applied moments are zero, the motion is integrable. That is, one can write down the motion in terms of elliptic functions (e.g., see Goldstein, 1980). However, there are several cases where freely rotating rigid bodies can exhibit chaotic behavior. The first case is when one of the moments \(M_{i}\) varies periodically in time. The second case is when one has parametric excitation through time-periodic changes in the principal inertias, for example, \(I_{2}=I_{0}+B\cos\Omega t\). The third case is when the applied moments are coupled through some feedback mechanism to the rotation velocities, that is, \[\mathbf{M}=\mathbf{A}\cdot\boldsymbol{\omega}\] This case has been studied by Leipnik and Newton (1981). Under an appropriate choice of constants, they obtained a double strange attractor, each part with its own basin of attraction. The choice of constants used by Leipnik and Newton was \(\mathbf{I}=[3I_{0},2I_{0},I_{0}]\) and \[\mathbf{A}=\left[\begin{array}{ccc}-1.2&0&-\sqrt{6/2}\\ 0&0.35&0\\ \sqrt{6}&0&-0.4\end{array}\right]\] The chaotic dynamics can easily be observed in a three-dimensional phase space. The general equations (4-2.14) are identical to the equations for the bending of a thin elastic rod or tape. This analogy, discovered by Kirchhoff in 1859, is exploited in Chapter 8 to discuss spatially chaotic bending of a thin elastic tape (Davies and Moon, 1992). #### Ship Capsize and Nonlinear Dynamics Ships and submarines constitute one class of rigid body dynamics problems under the influence of gravity and hydrodynamic forces. Two groups have done considerable research on ship dynamics using modern methods of nonlinear analysis: J. M. T. Thompson and co-workers at University College, London and A. H.
Nayfeh and co-workers at Virginia Polytechnic Institute in Blacksburg, Virginia. In spite of such research, current design criteria for ship stability in naval architecture are largely empirical (Thompson et al., 1990). The simplest models assume one-degree-of-freedom rolling subject to a periodic overturning moment in lateral ocean waves, sometimes called _regular beam seas_ (Figure 4-8): \[I\ddot{\theta}\ +\ B(\dot{\theta})\ +\ C(\theta)\ =\ D\sin\omega t\] (4-2.15) where \(\theta\) is the angle of roll and \(I\) is the moment of inertia. The damping \(B(\dot{\theta})\) is usually nonlinear, as is the overturning moment \(C(\theta)\). The London group has extensively analyzed a ship in high winds in which \(C(\theta)\) is derived from a one-well potential function (for a review see Thompson et al., 1990; also see Virgin, 1986): \[C(\theta)\ =\ \beta_{1}\theta\ -\ \beta_{2}\theta^{2}\] (4-2.16) The Virginia Polytechnic Institute group has published papers on the symmetric ship problem using an equation of the form \[\ddot{\theta}\ +\ \omega^{2}\theta\ +\ \alpha_{3}\theta^{3}\ +\ 2\mu\dot{\theta}\ +\ \mu_{3}\dot{\theta}^{3}\ =\ F_{0}\cos\Omega t\] (4-2.17) This group has also studied ship rolling oscillations excited by heave-roll coupling with a parametric forcing term proportional to the roll angle: \(\theta\cos\Omega t\) (Sanchez and Nayfeh, 1990). Thompson's work on dynamic stability for ships in a one-well potential has led to safety criteria based on ideas about fractals and basins of attraction, which are discussed in Chapter 7 (see also Thompson, 1989b and Thompson et al., 1990). Figure 4-8: Sketch of a model for ship dynamics subject to wind and sea wave forces. #### Impact Oscillators Impact-type problems result in explicit difference equations or maps which often yield chaotic vibrations under certain parameter conditions (see also §3.1). A classic impact-type map describes the motion of a particle between two walls.
When one wall is stationary and the other oscillates (Figure 4-9\(a\)), the problem is called the Fermi model for cosmic ray acceleration involving charged particles and moving magnetic fields. This model is discussed in great detail by Lichtenberg and Lieberman (1983) in their readable monograph on stochastic motion. Several sets of difference equations have been studied for this model. In one model, the moving wall imparts momentum changes without change of position. The resulting equations are given by \[\begin{array}{l}v_{n+1}=|v_{n}+\,V_{0}\sin\omega t_{n}|\\ t_{n+1}=t_{n}+\frac{2\Delta}{v_{n+1}}\end{array}\] (4-2.18) where \(v_{n}\) is the velocity after impact, \(t_{n}\) is the time of impact, \(V_{0}\) is the maximum momentum per unit mass that the wall can impart, and \(\Delta\) is the gap between the two walls. Numerical studies of this and similar equations reveal that stochastic-type solutions exist in which thousands of iterations of the map (4-2.18) fill up regions of the _phase space_ (\(v_{n}\), \(t_{n}\)), as illustrated in Figure 4-9\(b\). In some cases, the trajectory does not penetrate certain "islands" in the (\(v_{n}\), \(t_{n}\)) plane. In these islands more regular orbits occur. This system can often be analyzed using classical Hamiltonian dynamics. This system is typical of chaos in low- or zero-dissipation problems. In moderate-to-high dissipation, the chaotic Poincare map becomes localized in a structure with fractal properties, as in Figure 3-11. But in low-dissipation problems, the Poincare map fills up large areas of the phase plane with no apparent fractal structure. The Fermi accelerator model is also similar to models of mechanical devices in which there exists play, as illustrated in Figure 4-10. A mass slides freely on a shaft with viscous damping until it hits stiff springs on either side (see Shaw and Holmes, 1983 and Shaw, 1985).
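The wall map (4-2.18) can be iterated directly. In the sketch below the wall strength \(V_{0}\), the gap \(\Delta\), and the launch condition are illustrative choices, and a guard handles the (rare) case of a vanishing post-impact velocity.

```python
import math

# Iteration of the simplified Fermi accelerator map (4-2.18):
#   v_{n+1} = |v_n + V0 * sin(omega * t_n)|
#   t_{n+1} = t_n + 2*Delta / v_{n+1}
# V0, Delta, and the initial condition are illustrative values.
V0, DELTA, OMEGA = 1.0, 10.0, 1.0

def fermi_orbit(v, t, n=5000):
    pts = []
    for _ in range(n):
        v = abs(v + V0 * math.sin(OMEGA * t))
        if v < 1e-9:                     # particle (numerically) at rest
            break
        t = t + 2.0 * DELTA / v          # free flight across the gap and back
        pts.append(((OMEGA * t) % (2 * math.pi), v))  # phase-velocity plane
    return pts

pts = fermi_orbit(v=5.0, t=0.0)
```

Plotting the recorded (phase, velocity) pairs reproduces the stochastic sea and island structure of Figure 4-9\(b\): scattered points filling large regions, with regular orbits confined to islands the chaotic trajectory does not penetrate.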
Another mathematical model which is closer to the physics is the bouncing ball on a vibrating surface shown in Figure 4-11. This problem has been studied by Holmes (1982). Figure 4-9: (\(a\)) Particle impact dynamics model with a periodically vibrating wall. (\(b\)) Poincaré map \(v_{n}\) versus \(\omega t_{n}\) (mod \(\pi\)) for the impact problem in (\(a\)) using Eqs. (4-2.18). Using an energy loss assumption for each impact, one can show that the following difference equations result: \[\begin{array}{l}\phi_{j+1}=\phi_{j}+v_{j}\\ v_{j+1}=\alpha v_{j}-\gamma\cos(\phi_{j}+v_{j})\end{array} \tag{4-2.19}\] Here \(\phi\) represents a nondimensional impact time, and \(v\) represents the velocity after impact. As shown in Figure 4-11\(a\), a steady sinusoidal motion of the table can result in a nonperiodic motion of the ball. A fractal-looking chaotic orbit for this map is shown in Figure 4-11\(b\). This model suffers from the problem of admitting negative velocities at impact. This problem was addressed in a paper by Bapat et al. (1986). Experiments on the chaotic bouncing ball have been performed by Tufillaro and Albano (1986). Other studies of impact or _bilinear_ oscillator problems have been done by Thompson and Ghaffari (1982), Thompson (1983), Isomaki et al. (1985), and Li et al. (1990). #### Impact Print Hammer Impact-type problems have emerged as an obvious class of mechanical examples of chaotic vibrations. The bouncing ball (4-2.19), the Fermi accelerator model (4-2.18), and a beam with nonlinear boundary conditions all fall into this category. A practical realization of impact-induced chaotic vibrations is the impact print hammer experiment studied by Hendriks (1983) (Figure 4-12). In this printing device, a hammer head is accelerated by a magnetic force and the kinetic energy is absorbed in pushing ink from a ribbon onto paper.
Hendriks uses an empirical law for the impact force as a function of the relative displacement after impact, where \(u\) is the ratio of displacement to ribbon-paper thickness: \[F=\left\{\begin{array}{ll}-AE_{p}u^{2.7},&\dot{u}>0\\ -AE_{p}\beta u^{11},&\dot{u}<0\end{array}\right.\] (4-2.20) where \(A\) is the area of hammer-ribbon contact, \(E_{p}\) acts like a ribbon-paper stiffness, and \(\beta\) is a constant that depends on the maximum displacement. The point to be made is that this force is extremely nonlinear. When the print hammer is excited by a periodic voltage, it responds periodically as long as the frequency is low. But as the frequency is increased, the hammer has little time to damp or settle out, and the impact history becomes chaotic (see Figure 4-13). Thus, chaotic vibrations restrict the speed at which the printer can work. One potential solution under study is the addition of control forces to suppress this chaos; this idea has been explored in the work of Tung and Shaw (1988).

Figure 4-10: Experimental model of a mass with a deadband in the restoring force.

Figure 4-12: Sketch of a pin-actuator for a printer mechanism.

Figure 4-13: Displacement of a printer actuator as a function of time for different input frequencies showing loss of predictable output. [From Hendriks (1983), copyright 1983 by International Business Machines Corporation, reprinted with permission.]

### Chaos in Gears and Kinematic Mechanisms

Kinematic mechanisms are generally input-output devices that convert one form of motion into another. For example, a gear transmission converts a rotary input motion into another rotary motion at a different frequency, and a slider-crank mechanism converts translation into rotary motion. The slider crank is at the heart of tens of millions of automobile engines.
These devices are called kinematic because, in the ideal mechanism, the relation between input and output depends only on geometry or kinematic relationships; that is, inertia does not determine the mechanical gain. However, in real mechanical devices, linear and nonlinear departures from the ideal mechanism, such as elastic members, gaps, play, and friction, bring inertial effects into the dynamic behavior of the mechanism. For example, gear transmissions work fine when under load, but they often produce rattling vibrations when the load becomes small and there is small play in the gears or bearings. The understanding of machine noise in mechanical systems has been a neglected subject (e.g., see Moon and Broschart, 1991). Such noise sometimes leads to fatigue and other material damage, and it creates unwanted acoustic or hydrodynamic noise, as in submarines. The modern developments in nonlinear dynamics have given new tools to attack this hitherto unsolved problem. A number of papers have appeared which treat the possibility of nonlinear and chaotic vibrations in kinematic mechanisms, including gears [Pfeiffer (1988), Karagiannis and Pfeiffer (1991), Singh et al. (1989)] and slider-crank, four-bar, and robotic mechanisms (e.g., see Beletzky, 1990).

#### Gear Rattling Chaos

Two spur gears with diameters \(d_{1}\), \(d_{2}\) and ideal geometries have a frequency or speed ratio \(\omega_{1}/\omega_{2}\) equal to \(d_{2}/d_{1}\). This speed ratio is effected by the meshing of teeth on each gear. However, when elasticity effects in the teeth (or gaps between the teeth) are present, this ideally kinematic problem becomes a dynamic one. Consider, for example, the two gears shown in Figure 4-14, in which a gap \(\varepsilon\) exists between the circumferential spacing of tooth contacts and the actual width of the tooth. Suppose we assume that the motion of one gear is given, while the motion of the other is governed by the dynamics.
For example, we could imagine the left-hand gear rotating with a small oscillatory motion while the right-hand gear tooth exhibits complex dynamic impacts between the two drive gear teeth. This problem is not unlike the Fermi map problem in (4-2.18) (see Figure 4-14_b_). A study of this problem and its extension to more complex gear transmission systems has been given by Pfeiffer and co-workers at the Technical University of Munich [e.g., see Pfeiffer (1988), Karagiannis and Pfeiffer (1991)]. The relation between the gear rattling problem and the Fermi map has been studied by Pfeiffer and Kunert (1989). In the United States, the gear laboratory of R. Singh at the Ohio State University has also looked at various nonlinear vibrations of gear systems, including chaotic dynamics (e.g., see Comparin and Singh, 1990). The Munich group, however, has pioneered the application of Poincaré map techniques for predicting noise in automotive and other gear transmission systems. A typical Poincaré map from two meshed gears with a small gap and periodic excitation on one gear is shown in Figure 4-15. The Munich group has also tried to predict the probability distribution function for the chaotic noise using the Fokker-Planck equation (see Kunert and Pfeiffer, 1991) and to show how different arrangements of gears could possibly reduce gear noise.

Figure 4-14: (_a_) Sketch of two enmeshed gears with excessive play \(\varepsilon\). (_b_) Analogous problem of a mass moving between two vibrating constraints.

##### Control System Chaos

Imagine a mechanical device with a nonlinear restoring force, and suppose a control force is added to move the system from one position to another according to some prescribed reference signal \(x_{r}(t)\).
Such a system can be modeled by the following third-order system: \[\begin{array}{rcl}m\ddot{x}+\delta\dot{x}+F(x)&=&-z\\ \dot{z}+\alpha z&=&\Gamma_{1}[x-x_{r}(t)]+\Gamma_{2}\dot{x}\end{array}\] (4-2.21) Here \(z\) represents a feedback force, and \(\Gamma_{1}\) and \(\Gamma_{2}\) represent position and velocity feedback gains, respectively. This system of equations can be represented by the block diagram in Figure 4-16 with a nonlinear mechanical plant and a linear feedback law.

Figure 4-16: Feedback control system: nonlinear plant with linear feedback control.

Two types of chaotic vibration problems can be explored here. First, if the system is autonomous [i.e., the reference signal is zero, \(x_{r}(t)=0\)], one could explore the gain space (\(\Gamma_{1}\), \(\Gamma_{2}\)) for regions of steady, periodic, and chaotic vibrations. The second problem arises if \(x_{r}(t)\) is periodic; that is, we wish to move the mass through a given path over and over again, as in some manufacturing robotic device. One could then explore the parameters of frequency and gain for which the system is periodic or chaotic, as in Figure 4-17. Chaotic vibrations for an autonomous system of the form (4-2.21) were studied by Holmes and Moon (1983) as well as by Holmes (1984). For example, when \(F(x)=x(x^{2}-1)(x^{2}-B)\), the mechanical system has three stable equilibria. This system has been shown to exhibit both periodic limit cycle oscillations and chaotic motion. The problem of a forced feedback system has been studied by Golnaraghi and Moon (1991). Sparrow (1981) also looked into chaotic oscillations in a system with a piecewise linear feedback function.

Figure 4-17: _Top_: Chaos boundary as a function of feedback gain and input command frequency. _Bottom:_ Trajectories of periodic and chaotic dynamics for a mass with feedback control and nonlinear restoring force with a deadband region (see Figure 4-10). (See Golnaraghi and Moon, 1991.)
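A sketch of how one might probe the gain space of (4-2.21) numerically is given below, integrating the three first-order equations with fourth-order Runge-Kutta. The mass, damping, gains, and the choice \(B=0.5\) are illustrative assumptions, not values from Holmes and Moon (1983); chaotic windows must be hunted for by sweeping \(\Gamma_{1}\) and \(\Gamma_{2}\).

```python
def simulate_control(Gamma1, Gamma2, B=0.5, m=1.0, delta=0.1, alpha=1.0,
                     x_r=lambda t: 0.0, dt=0.01, steps=5000, state=(0.5, 0.0, 0.0)):
    """Integrate m*xdd + delta*xd + F(x) = -z, zd + alpha*z = G1*(x - x_r) + G2*xd
    with F(x) = x*(x**2 - 1)*(x**2 - B); returns the final (x, xdot, z)."""
    def rhs(t, s):
        x, xd, z = s
        F = x * (x * x - 1.0) * (x * x - B)
        return (xd,
                (-delta * xd - F - z) / m,
                -alpha * z + Gamma1 * (x - x_r(t)) + Gamma2 * xd)

    s, t = state, 0.0
    for _ in range(steps):                       # classical RK4 march
        k1 = rhs(t, s)
        k2 = rhs(t + dt / 2, tuple(si + dt / 2 * ki for si, ki in zip(s, k1)))
        k3 = rhs(t + dt / 2, tuple(si + dt / 2 * ki for si, ki in zip(s, k2)))
        k4 = rhs(t + dt, tuple(si + dt * ki for si, ki in zip(s, k3)))
        s = tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                  for si, a, b, c, d in zip(s, k1, k2, k3, k4))
        t += dt
    return s
```

Looping this routine over a grid of \((\Gamma_{1}, \Gamma_{2})\) values, and classifying each run as steady, periodic, or chaotic, is the numerical analog of mapping the chaos boundary in Figure 4-17.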
Many other examples of chaotic control systems have since appeared in the literature. See also Baillieul et al. (1980). A discussion of using the chaotic nature of a strange attractor to control the dynamics of a system is presented in Section 4.9.

### Chaos in Elastic Systems

#### Chaos in Elastic Continua

Many experiments on chaotic vibrations in elastic beams have been carried out by the author and co-workers [e.g., see Moon and Holmes (1979, 1985), Moon (1980a,b, 1984b), Moon and Shaw (1983), and Cusumano and Moon (1990)]. Two types of problems have been investigated. In the first problem, the partial differential equation of motion for the beam is essentially linear, but the body forces or boundary conditions are nonlinear. In the second problem, the motions are sufficiently large that significant nonlinear terms enter the equations of motion. The planar motion of an elastic beam with small slopes and deflections is governed by an equation of the form \[D\,\frac{\partial^{4}v}{\partial x^{4}}+m\,\frac{\partial^{2}v}{\partial t^{2}}=f\bigg{(}v,\frac{\partial v}{\partial t},t\bigg{)}\] (4-3.1) where \(v\) is the transverse displacement of the beam, \(D\) represents an elastic stiffness, and \(m\) is the mass per unit length. The right-hand term represents the effects of distributed body forces or internal damping. In many of the experiments at Cornell University, we used permanent magnets to create nonlinear body force terms. We also used flow-induced forces to produce self-excited oscillation of elastic beams (see Section 4.4). When the displacement and slope of the beam centerline are large, we use the variables (\(u\), \(v\), \(\theta\)) to characterize the horizontal and vertical displacements and the slope, which are related by (see Figure 4-18) \[(1+u^{\prime})^{2}+(v^{\prime})^{2}=1,\qquad\tan\theta=\frac{v^{\prime}}{1+u^{\prime}}\] (4-3.2) where ( )\(^{\prime}\) = \(d(\ )/ds\) and \(s\) is the length along the deformed beam.
The balance of momentum equations then take the form (see Moon and Holmes, 1979) \[\begin{array}{l}m\ddot{v}=f_{v}-G^{\prime}\\ m\ddot{u}=f_{u}+H^{\prime}\end{array}\] (4-3.3) where \[\begin{array}{l}G=D\theta^{\prime\prime}(1+u^{\prime})-Tv^{\prime}\\ H=D\theta^{\prime\prime}v^{\prime}+T(1+u^{\prime})\end{array}\] In these equations, (\(f_{u}\), \(f_{v}\)) represent body force components, while \(T\) represents the axial force in the rod. The nonlinearities in these equations are distinguished from those in fluid mechanics by the fact that no convective or kinematic nonlinearities enter the problem. Also, the local stress-strain relations are linear. The nonlinear terms arise from the change in geometric shape and are known as _geometric nonlinearities_. [See Love (1922) for a discussion of nonlinear rod theory. See also Chapter 8.]

Figure 4-18: Planar deformation of an elastic rod.

##### Elastic Beam with Nonlinear Boundary Conditions

Multiple equilibrium positions are not needed in a mechanical system to get chaotic vibrations; any strong nonlinearity will likely produce chaotic noise with periodic inputs. One example of a system with one equilibrium position is an elastic beam with nonlinear boundary conditions (see Moon and Shaw, 1983). Nonlinear boundary conditions are those that depend on the motion. For example, suppose the end is free for one direction of motion and is pinned for the other direction of motion. The chaotic time history of this beam is shown in Figure 4-19. Another variation of this problem is a two-sided constraint with play, which gives three different linear regimes for the bending of the beam. Experiments in our laboratory also show chaos for this nonlinear boundary condition. Shaw (1985) has performed an analysis of these mechanical oscillations when play or a dead zone is present. Flow-induced chaotic vibrations have also been observed in a cantilevered pipe with nonlinear end constraints (see Section 4.4).
##### Magnetoelastic Buckled Beam

In this example, an elastic cantilevered beam is buckled by placing magnets near the free end of the beam [see Chapters 2 and 4 as well as Moon and Holmes (1979) and Moon (1980a,b; 1984b)]. The magnetic forces destabilize the straight unbent position and create multiple equilibrium positions, as shown in Figure 4-20. In experiments, we have created up to four stable equilibrium positions with four magnets. In the postbuckled state, the system represents a particle in a two (or more)-well potential (Figure 1-2_b_). The whole system is placed on a vibration shaker and oscillates with constant amplitude and frequency. For small oscillations, the beam vibrations occur about one of the equilibrium positions. As the amplitude is increased, however, the beam can jump out of the potential well, and chaotic motions can occur, with the beam jumping from one well to another (Figure 4-2). A Poincaré map of this phenomenon is shown in Figure 4-21. (We call this map the _Fleur de Poincaré_.) The equation used to model this system is a modal approximation to the beam equation (4-3.3) with nonlinear magnetic forces acting at the tip. A one-mode approximation for a damped beam with a free end gives good results. This equation can be rewritten as three first-order equations. Note that here the \(x\) variable refers to nondimensional modal amplitude and _not_ to the distance along the beam. \[\begin{array}{l}\dot{x}=y\\ \dot{y}=-\gamma y+\frac{1}{2}x(1-x^{2})-A_{0}\omega^{2}\cos z\\ \dot{z}=\omega\end{array}\] (4-3.4) This problem is analogous to a particle in a double-well potential \(V=-(x^{2}-x^{4}/2)/4\). This experiment is discussed throughout this book.

Figure 4-19: Chaotic vibrations of an elastic beam with a nonlinear boundary condition.

Figure 4-20: Steel elastic beam on a periodically moving support that is buckled by magnetic body forces.
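A numerical sketch of the two-well system (4-3.4) follows, integrating \(\ddot{x}+\gamma\dot{x}-\tfrac{1}{2}x(1-x^{2})=f\cos\omega t\) and sampling \((x,\dot{x})\) once per forcing period to build a Poincaré map. The damping, forcing amplitude, and frequency below are illustrative assumptions; the chaotic ranges must be found by sweeping them, just as the shaker amplitude is swept in the experiment.

```python
import math

def two_well_poincare(gamma=0.15, f=0.3, omega=0.833, n_periods=200,
                      steps_per_period=200, x0=1.0, y0=0.0):
    """Stroboscopic Poincare points of the two-well oscillator
    xdd + gamma*xd - 0.5*x*(1 - x**2) = f*cos(omega*t)."""
    def rhs(t, x, y):
        return y, -gamma * y + 0.5 * x * (1.0 - x * x) + f * math.cos(omega * t)

    dt = 2.0 * math.pi / omega / steps_per_period
    x, y, t = x0, y0, 0.0
    points = []
    for _ in range(n_periods):
        for _ in range(steps_per_period):        # RK4 across one forcing period
            k1 = rhs(t, x, y)
            k2 = rhs(t + dt / 2, x + dt / 2 * k1[0], y + dt / 2 * k1[1])
            k3 = rhs(t + dt / 2, x + dt / 2 * k2[0], y + dt / 2 * k2[1])
            k4 = rhs(t + dt, x + dt * k3[0], y + dt * k3[1])
            x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
            y += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
            t += dt
        points.append((x, y))                    # one dot of the Poincare map
    return points
```

Scatter-plotting the returned points for chaotic parameter values gives a numerical cousin of the experimental _Fleur de Poincaré_.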
Figure 4-21: Experimental Poincaré map of chaotic motion of the magnetically buckled beam, the _Fleur de Poincaré_.

The Poincaré section (Figure 4-21) has the character of a two-dimensional point mapping. The experiments _do not_ always exhibit period doubling before the motion becomes chaotic; odd subharmonics were often a precursor to chaos. (A description of the experimental apparatus may be found in Appendix C, "Chaotic Toys.") Another variation of this experiment is an inverted pendulum with an elastic spring, reported in the People's Republic of China by Zhu (1983) from Beijing University. For a weak spring, the inverted pendulum has two stable equilibria similar to the two-well potential problem (see also Shaw and Shaw, 1989).

#### Three-Dimensional Elastica and Strings

Under certain conditions, the forced planar motion of the nonlinear elastica described by (4-3.3) becomes unstable and three-dimensional motions result. Similar phenomena are known for the planar motion of a stretched string (Miles, 1984b). At Cornell University, we have performed several experiments with very thin, flexible steel elastica with rectangular cross section (e.g., 0.25 mm \(\times\) 10 mm \(\times\) 20 cm long), known as "Feeler" gauge steel strips (Figure 4-22_a_). For these beams, small motions in the stiff or lateral direction of the unbent beam are nearly impossible without buckling or twisting of the local cross sections. However, when there is significant bending in the weak direction, lateral displacements are possible, accompanied by twisting of the local cross sections. We have shown that planar vibrations of the beam in the weak direction near one of the natural frequencies not only become unstable but can exhibit chaotic motions as well. This is demonstrated in Figure 4-22_b_, where power spectra (fast Fourier transform; see Chapter 5) show a broad spectrum of frequencies even though the driving input contains only a single frequency.
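The line-spectrum-versus-broadband diagnostic used in Figure 4-22_b_ is straightforward to compute; a sketch using NumPy's FFT follows. The 5 Hz test tone and 100 Hz sampling rate are illustrative assumptions standing in for a measured beam signal.

```python
import numpy as np

def power_spectrum(signal, dt):
    """One-sided power spectrum of a uniformly sampled signal: a periodic
    response shows sharp spectral lines, a chaotic one a broad band."""
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()                      # remove the DC offset
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(sig.size, d=dt)
    return freqs, spec

# A pure 5 Hz tone sampled at 100 Hz gives a single spectral line at 5 Hz:
t = np.arange(1000) * 0.01
freqs, spec = power_spectrum(np.sin(2.0 * np.pi * 5.0 * t), 0.01)
```

Fed a chaotic time series instead of the tone, the same routine spreads the spectral energy over many bins, which is the experimental signature cited above.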
Similar phenomena are observed for very thin sheets of paper. In fact, we have shown that chaotic motions of very thin sheets of paper generate a broad spectrum of acoustic noise in the surrounding air. This work is described in the doctoral dissertation of Cusumano (1990). See also Section 7.4 for a discussion of the calculation of fractal dimensions for chaotic attractors in these experiments. Chaotic ballooning motions of a periodically excited string under tension have been studied both analytically and experimentally by O'Reilly (1991).

#### Two-Degree-of-Freedom Buckled Beam

To explore the effects of added degrees of freedom, we built an elastic version of a spherical pendulum (Figure 4-1), in which a beam with circular cross section was used (see Moon, 1980b). Again magnets were used to buckle the beam, but the tip was free to move in two directions. This introduced two incommensurate natural frequencies, and quasiperiodic vibrations occurred which eventually became chaotic (Figure 4-23). This experimental system can be modeled by equations for two coupled oscillators as given by \[\ddot{x}+\gamma\dot{x}-\frac{1}{2}x(1-x^{2})+\beta xy^{2}=f_{2}\] (4-3.5a) \[\ddot{y}+\delta\dot{y}+a(1-\varepsilon y^{2})y+\beta x^{2}y=f_{0}+f_{1}\cos\omega t\] (4-3.5b)

Figure 4-22: (_a_) Regions of chaos for a periodically forced thin elastica. (_b_) Fourier spectra for forced vibrations of a thin elastic beam. Broad-spectrum chaos is the result of out-of-plane vibration. [From Cusumano and Moon (1990).]

Figure 4-23: (_a_) Sketch of an elastic rod undergoing three-dimensional motions in the neighborhood of a double-well potential created by two magnets. (_b_) _Top:_ Simultaneous time trace of phase plane motion and Poincaré map of quasiperiodic motion. _Bottom:_ Poincaré map of chaotic motion.
The terms \(f_{0}\) and \(f_{2}\) account for gravity if the beam is not initially parallel with the earth's gravitational field, and the coupling terms are conservative. If the coupling is small, one can solve for \(y(t)\) from Eq. (4-3.5b), and the equation for \(x(t)\) then looks like that of a parametric oscillator. Miles (1984b) has performed numerical experiments on two quadratically coupled, damped oscillators and has found regions of chaotic motions resulting from sinusoidal forcing. He examined the special case when the two linear natural frequencies \(\omega_{1}\) and \(\omega_{2}\) were related by \(\omega_{2}\simeq 2\omega_{1}\).

### Flow-Induced Chaos in Mechanical Systems

#### Flow-Induced Elastic Vibrations

##### Flow in a Pipe: Fire Hose Chaos

There are many classes of flow-induced vibrations: flow inside flexible bodies such as pipes or rocket fuel tanks, flow around bodies such as wings or heat exchanger tubes, and flow over one surface of a body such as a panel of an aircraft or a rocket. One of the most studied problems has been the steady flow of fluid through flexible tubes or pipes (see Paidoussis, 1980; also see Chen, 1983). This problem is of interest not only because of the nonconservative nature of the fluid forces, but also because of the relevance of the problem to flow-induced vibrations in heat exchange systems. This problem has recently received attention using modern methods of nonlinear analysis and experimentation. Although some of the early classical work goes back to dynamicists in the Soviet Union, research using modern methods has been centered in Europe (Steindl and Troger, 1988) and North America (Bajaj and Sethna, 1984; Sethna and Shaw, 1987; Paidoussis and Moon, 1988; Copeland and Moon, 1992; and Tang and Dowell, 1988). In one study, shown in Figure 4-24, fluid flows with constant velocity out of a cantilevered flexible pipe. At a critical flow speed, small limit cycle vibrations appear.
The nonlinearity in this problem consists of amplitude constraints near the end of the pipe. When the limit cycle oscillations grow to where the pipe hits the constraints, chaotic vibrations appear (Paidoussis and Moon, 1988). This problem has been modeled as two coupled autonomous nonlinear oscillators, with the dynamics living in a four-dimensional phase space. (See Figure 5-24.)

Figure 4-24: Sketch of a flexible tube with nonlinear boundary conditions carrying a steady flow of fluid.

##### Aeroelastic Panel Flutter

An example of chaos in autonomous mechanical systems is the flutter resulting from fluid flow over an elastic plate. This problem is known as _panel flutter_, and readers are referred to two books by Dowell (1975, 1988) for more discussion of the mechanics of this problem. Panel flutter occurred on the outer skin of the early Saturn rocket flights that put men on the Moon in the early 1970s. Dowell and co-workers have done extensive numerical simulation of panel flutter. In early work, Kobayashi (1962) and Fung (1958) had observed nonperiodic motions in their analyses. In one set of problems, they looked at the combined effects of in-plane compression in the plate and fluid flow. More recent numerical results are given in Figure 4-25, showing stable phase plane trajectories for one set of fluid velocity and compressive load conditions and chaotic vibrations for another set of conditions (see also Dowell, 1982, 1984). This example also illustrates a different type of Poincaré map. Because there is no intrinsic time, one must choose a hyperplane in phase space and look at points where the trajectory penetrates that plane. Dowell has done this for the panel flutter problem and has shown strange attractor-type Poincaré maps.

##### Supersonic Panel Flutter

An analytic-analog computer study that uncovered chaotic vibrations, and that predates the Lorenz paper by one year, is that of Kobayashi (1962).
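The hyperplane construction described above can be coded directly: march along a computed trajectory and record, with linear interpolation, each point where it pierces the chosen plane in a fixed direction. The routine below is a generic sketch; the synthetic trajectory used to exercise it is a placeholder for an actual panel-flutter solution.

```python
import numpy as np

def hyperplane_section(trajectory, coord, level, direction=1):
    """Poincare points of an autonomous flow: points where the trajectory
    crosses the plane x[coord] = level moving in the given direction (+1 or -1),
    located by linear interpolation between successive samples."""
    pts = []
    for a, b in zip(trajectory[:-1], trajectory[1:]):
        fa, fb = a[coord] - level, b[coord] - level
        if fa * fb < 0.0 and direction * (fb - fa) > 0.0:
            w = fa / (fa - fb)          # fraction of the step to the crossing
            pts.append(a + w * (b - a))
    return np.array(pts)

# Exercise on a synthetic trajectory whose third coordinate is sin(t);
# downward crossings of zero occur at t = pi, 3*pi, 5*pi on [0, 20]:
t = np.linspace(0.0, 20.0, 2001)
traj = np.stack([t, np.cos(t), np.sin(t)], axis=1)
section = hyperplane_section(traj, coord=2, level=0.0, direction=-1)
```

Restricting to one crossing direction matters: taking both directions would interleave two different sections of the attractor.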
He analyzed the vibrations of a buckled plate with supersonic flow on one side of the plate. Kobayashi expanded the deflection of the simply supported plate in a Fourier series and studied the coupled motion of the first two modes. Denoting the nondimensional modal amplitudes of these two modes by \(x\) and \(y\), the equations studied using an analog computer were of the form \[\begin{array}{l}\ddot{x}+\delta\dot{x}+[1-q+x^{2}+4y^{2}]x-Qy=0\\ \ddot{y}+\delta\dot{y}+4[4-q+x^{2}+4y^{2}]y+Qx=0\end{array}\] (4-4.1) where \(q\) is a measure of the in-plane compressive stress in the plate (which can exceed the buckling value) and \(Q\) is proportional to the dynamic fluid pressure of the supersonic flow upstream of the plate. In the abstract of this 1962 paper, Kobayashi states, "Moreover the following remarkable results are obtained. (i) In some unstable region of a moderately buckled plate, only an _irregular vibration_ is observed" [italics added]. He also refers to earlier experimental studies in 1957 at the NACA (the pre-Sputnik ancestor of NASA) in the United States (see also Fung, 1958).

Figure 4-25: Flow over a buckled elastic plate. _Top left:_ Periodic aeroelastic vibrations. _Top right:_ Chaotic vibrations of the plate. [From Dowell (1982).]

### 4.5 Inelastic and Geomechanical Systems

#### Nonlinear Dynamics of Friction Oscillators

Early models of chaotic dynamics, as presented in the first edition of this book, included mainly polynomial or trigonometric nonlinearities. As the field matures, more realistic physical nonlinearities are being studied. One of these is the dynamics of vibrating systems with dry friction (Popp and Steltzer, 1990; Feeny and Moon, 1989, 1992). The study of friction has a long history (e.g., see Den Hartog, 1940; Stoker, 1950), and we cannot begin to mention all the literature on the subject based on classical dynamical methods.
The skidding of a car on dry pavement, the screeching of chalk on a blackboard, and other experiences of technical devices with friction have always suggested that more complex dynamics are involved than simple periodic or steady motions. To date there is still debate between materials scientists and mechanicians about the nature of the friction force between two solid objects; we cannot resolve that debate here. What we can say is that certain classical models can lead to chaotic dynamics and that the global character of the chaotic attractor in phase space is similar to that measured in experiments. In the classical friction problem, the friction force in the direction tangential to the surface depends on the force that is applied normal to the surface, as shown in Figure 4-26. The equation of motion for a harmonically forced friction oscillator is given by \[\ddot{x}+2\delta\dot{x}+x+\eta(x)f(\dot{x})=A\cos\Omega t\] Here the time and distance are normalized by the mass and linear spring constant. We also allow the normal force effect \(\eta(x)\) to depend on the motion. In many models the tangential friction force is written as a function of velocity. In a study by Feeny and Moon (1992), two models were explored: the classical Coulomb friction law with a discontinuity at \(\dot{x}=0\), \[f(\dot{x})=\mu\ \mathrm{sgn}(\dot{x})\] and a continuous approximation to the Coulomb law, \[f(\dot{x})=[\mu_{d}+(\mu_{s}-\mu_{d})\mathrm{sech}(\beta\dot{x})]\tanh\alpha\dot{x}\] The tanh term models the jump from positive friction to negative friction and approaches a discontinuity as \(\alpha\rightarrow\infty\). The sech term represents the transition from the static friction near \(\dot{x}\simeq 0\) to the dynamic friction.

Figure 4-27: Experimental Poincaré map for a friction oscillator (Figure 4-26). The horizontal part is the sticking region. [From Feeny and Moon (1989) with permission of Elsevier Science Publishers, copyright 1989.]
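The smoothed law is sketched below. Note the coefficient is written as \((\mu_{s}-\mu_{d})\) so that the force magnitude near \(\dot{x}\simeq 0\) approaches the static level \(\mu_{s}\) and decays to the dynamic level \(\mu_{d}\) at speed; the numerical coefficients are illustrative assumptions, not the values used by Feeny and Moon.

```python
import math

def smooth_friction(v, mu_s=0.4, mu_d=0.25, alpha=100.0, beta=50.0):
    """Continuous approximation to the Coulomb law: the tanh factor models the
    sign jump at v = 0, the sech factor the static-to-dynamic transition."""
    sech = 1.0 / math.cosh(beta * v)            # sech has no stdlib function
    return (mu_d + (mu_s - mu_d) * sech) * math.tanh(alpha * v)
```

As \(\alpha\rightarrow\infty\) the tanh factor approaches \(\mathrm{sgn}(\dot{x})\) and the discontinuous Coulomb law is recovered; keeping \(\alpha\) finite is what makes the model integrable by an ordinary ODE solver.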
A comparison of numerical integration of the continuous-function model with experimental observations shows good agreement. In the experiment, the moving mass was designed so that the normal force varied linearly with the displacement (see Feeny, 1990). Color Plate 2 shows a three-dimensional phase space (\(x\), \(v=\dot{x}\), \(\Omega t\) (mod \(2\pi\))). The chaotic attractor is composed of three sections: positive and negative velocities and a sticking region. A Poincaré map is shown in Figure 4-27. This shows a nearly one-dimensional structure with two branches, namely, a positive velocity branch and a sticking branch (lower curve). This 2-D map can be reduced to a 1-D map (see Chapter 5). One can also use bimodal symbol dynamics (+1 for slipping, -1 for sticking), from which a Lyapunov exponent can be calculated (see also Table 3-1 and Chapter 5, Figure 5-17).

#### Chaos, Fractals, and Surface Machining

It has been observed that all surfaces of solid objects are created by dynamic processes, be they mechanical, chemical, or thermal. There is also evidence from measurements of surface topography that the apparently random displacements from the mean obey a scaling law (Feder, 1988), and experiments on fractured surfaces appear to show fractal scaling (Mandelbrot et al., 1984). These _static_ properties of fractal surface topography have led some to propose that nonlinear dynamics may play a role in the machining or surface creation process (Scott, 1989). In a series of papers, Grabec (1986, 1988) has studied a nonlinear model of the cutting dynamics for an elastic tool bit using a friction law between the workpiece and the tool. The equations for this model are given in terms of the two-degree-of-freedom displacement (\(x\), \(y\)) of the tip of the tool bit: \[\begin{array}{rcl}m\ddot{x}+r_{x}\dot{x}+k_{x}x&=&F_{x}\\ m\ddot{y}+r_{y}\dot{y}+k_{y}y&=&F_{y}\end{array}\] where \(F_{y}=kF_{x}\).
A related application of stick-slip dynamics is the modeling of earthquakes, in which tectonic plates interact through effective friction forces along the fault zones where the deformation occurs. One such model, proposed by Carlson and Langer (1989), is shown in Figure 4-28. In this and other models of earthquakes, the energy source is the slow but steady velocity of one of the plates relative to the other. The buildup of elastic energy is then released when the sticking force exceeds some critical value. The unpredictable nature of this stick-slip motion is thought to be a paradigm for the unpredictable nature of earthquakes. A two-block model with friction proposed by Nussbaum and Ruina (1987) at first produced only time-periodic behavior, but a recent adaptation of this model by Huang and Turcotte (1990) using unequal friction forces on each block seems to result in chaotic dynamics of the blocks. In a more physics-based model, which looks at both the spatial and temporal deformations between two elastic plates with friction contact along their common edges (i.e., the fault line), Horowitz and Ruina (1989) show through calculations that complex spatial patterns of slip can develop along the fault line. From a more general view of geomechanics and nonlinear dynamics, Turcotte (1992) has published a monograph which tries to relate scaling-law behavior in geology to the theory of fractals and chaotic dynamics.

Figure 4-28: Earthquake model for the chaotic motions between two tectonic plates. [From Carlson and Langer (1989).]

### Chaos in Electrical and Magnetic Systems

#### Nonlinear Electrical Circuits--Ueda Attractor

One of the first discoveries of chaos in electrical circuits was that of a periodically excited nonlinear inductor studied by Ueda (1979, 1991). The equation for a circuit with nonlinear inductance and a linear resistor, driven by a harmonic voltage, can be written in nondimensional form

Figure 4-29: Poincaré map of a chaotic analog computer simulation of a forced Van der Pol-type circuit. [From Ueda and Akamatsu (1981).]
as follows: \[\ddot{x}+k\dot{x}+x^{3}=B\cos t\] (4-6.1) which is a special case of Duffing's equation (1-2.4). Professor Y. Ueda of Kyoto University in Japan has obtained beautiful Poincaré maps of the chaotic dynamics of this equation using analog and digital simulation (Figure 3-33). Ueda has also modeled a negative resistance oscillator, shown in Figure 4-29. The equation for this system is a modified Van der Pol equation (1-2.5): \[\ddot{x}+(x^{2}-1)\dot{x}+x^{3}=B\cos\omega t\] (4-6.2) It is interesting to note that both the Duffing and Van der Pol equations have been studied for decades, yet nowhere in the standard references on nonlinear vibrations are chaotic solutions reported. Other nonlinear chaotic circuits are discussed in the next section.

#### Nonlinear Circuits

_Periodically Excited Circuits: Chaos in a Diode Circuit._ The idealized diode is a circuit element that either conducts or does not; such on-off behavior represents a strong nonlinearity. A number of experiments on chaotic oscillations have been performed using a particular diode element called a _varactor diode_ (Linsay, 1981; Testa et al., 1982; Rollins and Hunt, 1982), using a circuit similar to the one in Figure 4-30. Both period-doubling and chaotic behavior were reported. The period doubling suggests that an underlying mathematical model is a one-dimensional map in which the absolute value of the maximum current in the circuit during the (\(n+1\))st cycle depends on that in the \(n\)th cycle: \[|I_{\max}|_{n+1}=F(|I_{\max}|_{n})\] (4-6.3) One of the interesting questions regarding this system was the physical origin of the nonlinearity.

Figure 4-30: (_a_) Model for a varactor diode circuit. (_b_) Circuit element when the diode is conducting. (_c_) Circuit element when the diode is off. [From Rollins and Hunt (1982) with permission of the American Physical Society, copyright 1982.]
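Equation (4-6.1) can be integrated numerically to reproduce a stroboscopic Poincaré section of the Ueda attractor. The sketch below uses Runge-Kutta with parameter values (\(k=0.1\), \(B=12\)) often quoted for chaotic behavior of Ueda's equation, though here they should be treated as illustrative starting points.

```python
import math

def ueda_poincare(k=0.1, B=12.0, n_periods=100, steps_per_period=400):
    """Stroboscopic Poincare points of Ueda's circuit equation (4-6.1):
    xdd + k*xd + x**3 = B*cos(t), sampled once per forcing period 2*pi."""
    def rhs(t, x, y):
        return y, -k * y - x ** 3 + B * math.cos(t)

    dt = 2.0 * math.pi / steps_per_period
    x, y, t = 2.5, 0.0, 0.0                     # illustrative initial state
    pts = []
    for _ in range(n_periods):
        for _ in range(steps_per_period):       # RK4 across one forcing period
            k1 = rhs(t, x, y)
            k2 = rhs(t + dt / 2, x + dt / 2 * k1[0], y + dt / 2 * k1[1])
            k3 = rhs(t + dt / 2, x + dt / 2 * k2[0], y + dt / 2 * k2[1])
            k4 = rhs(t + dt, x + dt * k3[0], y + dt * k3[1])
            x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
            y += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
            t += dt
        pts.append((x, y))
    return pts
```

Discarding the first few dozen points as transient and scatter-plotting the rest approximates the structure Ueda obtained on his analog and digital simulations.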
In the earlier work of Linsay, it was proposed that the diode could be modeled as a highly nonlinear capacitance, where \[\begin{array}{rcl}c&=&c_{0}(1-\alpha V)^{-\gamma}\\ \frac{d}{dt}\,c(V)V&=&I\\ L\frac{dI}{dt}&=&-RI-V+V_{0}\sin\omega t\end{array}\] (4-6.4) with \(\gamma=0.44\). Rollins and Hunt (1982), however, have proposed an entirely different model in which the circuit acts as either one of two linear circuits, shown in Figure 4-30_b_,_c_. Each cycle consists of a conducting and a nonconducting phase. The nonlinearity arises in determining when to switch from the conducting circuit with bias voltage \(V_{f}\) to the nonconducting circuit with constant capacitance. The switching time is a function of the maximum current value \(|I_{\max}|\). In this model, exact solutions of the circuit differential equations are known in each interval, with unknown constants to be determined using continuity of current and voltage at the switching times. Rollins and Hunt used this technique to calculate numerically the mapping function shown in Figure 4-31. Later experiments showed that this model accounted for more of the physics than the earlier version using nonlinear capacitance. See also Hunt and Rollins (1984). Another study with a varactor diode was reported by Bucko et al. (1984), who looked at a series circuit with a diode, inductor, and resistor driven by a sinusoidal voltage. They assumed a mathematical model of the form \[L\frac{dI}{dt}+RI+f\bigg{(}I,\int I\,dt\bigg{)}=V_{0}\cos\omega t\] (4-6.5) where the properties of the nonlinear diode term \(f\) were discussed in the previous section. Bucko et al. explored the parameter plane (\(V_{0}\), \(\omega\)) and outlined regions of subharmonic and chaotic response. These results are shown in Figure 4-32. Figure 4-32_a_ shows a driving frequency range \(0.5<\omega/2\pi<4.0\) MHz.
These data show that one can choose a parameter path that results in a period-doubling route to chaos. However, one can also follow paths that apparently do not follow this route. Figure 4-32\(a\) also shows chaotic islands which, when expanded in Figure 4-32\(b\), exhibit further islands of chaos. This example shows that when the basic equations, as in (4-6.5), are equivalent to three first-order differential equations, the Poincare map of the dynamics is _two_-dimensional, and the period-doubling properties of the one-dimensional map may not hold in such systems. For certain parameter regimes, however, the two-dimensional map may look one-dimensional and the dynamics are likely to behave as a one-dimensional noninvertible map. The experimental moral of this is the following: When there is more than one essential nondimensional group in a physical problem, one should explore a region of parameter space to uncover the full range of possibilities in the nonlinear dynamics.

Figure 4-31: Comparison of (\(a\)) calculated and (\(b\)) measured one-dimensional maps for the varactor diode circuit of Figure 4-30. [From Rollins and Hunt (1982) with permission of the American Physical Society, copyright 1982.]

Figure 4-32: (_a_) Subharmonic and chaotic oscillation regions in the driver voltage amplitude–frequency plane for an inductor–resistor–diode series circuit. (_b_) Enlargement of diagram in (_a_). [From Bucko et al. (1984) with permission of Elsevier Science Publishers, copyright 1984.]

#### Nonlinear Inductor. Bryant and Jeffries (1984a) have studied a sinusoidally driven circuit with a linear negative resistor and a nonlinear inductor. In this work, they looked at four circuit elements in parallel: a voltage generator, negative resistor, capacitor, and a coil around a toroidal magnetic core, with typical values of \(C\approx 7.5\ \mu\)F, \(R=-500\ \Omega\), and a forcing frequency around 200 Hz or higher.
The negative resistor was created by an operational amplifier circuit. If \(N\) is the number of turns around the inductor, \(A\) the effective core cross section, and \(l\) the magnetic path length, the equation for the flux density \(B\) in the core is given by \[NAC\ddot{B}+\frac{NA}{R}\dot{B}+\frac{l}{N}H(B)=I(t)\] (4-6.6) where \(H(B)\) is the nonlinear magnetic field constitutive relation of the core material. In their experiments, they used \(N=100\) turns, \(A\approx 1.5\times 10^{-5}\) m\({}^{2}\), and \(l\approx 0.1\) m. Using this circuit, they observed quasiperiodic vibrations, phase-locked motions, period doubling, and chaotic oscillations.

#### Autonomous Nonlinear Circuits--Chua Attractor

Autonomous chaotic oscillations in a tunnel diode circuit have been observed by Gollub et al. (1980) for the circuit shown in Figure 4-33\(a\). The nonlinear elements in this circuit are two tunnel diodes. The current-voltage relation shown in Figure 4-33\(b\) is obviously nonlinear and exhibits a hysteresis loop for cyclic variations in current \(I_{D}\). In this work, the authors use return maps to construct pseudo-phase-plane Poincare maps. That is, they time sample the current \[x_{n}\equiv I_{D_{2}}\left(t_{0}\,+\,n\tau\right)\] (4-6.7) where \(n\) is an integer, and then plot \(x_{n}\) versus \(x_{n\,+\,1}\). The data were sampled when the voltage \(V_{D_{1}}\) passed through a value of 0.42 V in the decreasing sense. The authors also used Fourier spectra and calculation of Lyapunov exponents to measure the divergence rate of nearby trajectories. As noted above, Ueda (1979) studied chaos in a circuit with negative resistance. A novel way to achieve negative resistance in the laboratory is with an operational amplifier.

Figure 4-33: Tunnel diode circuit which admits autonomous chaotic oscillations. [From Gollub et al. (1980) with permission of Plenum Publishing Corp., copyright 1980.]
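The sampling rule (4-6.7) is easy to express in code. The sketch below builds the return-map pairs (x_n, x_{n+1}) from a discretely sampled signal; the synthetic period-doubled waveform is only a stand-in for the measured diode current:

```python
import math

def return_map(signal, t0, tau, dt):
    # Sample x_n = signal value nearest to t0 + n*tau, then pair
    # consecutive samples to form the return map (x_n, x_{n+1}).
    n_max = int((len(signal) * dt - t0) / tau)
    samples = [signal[round((t0 + n * tau) / dt)] for n in range(n_max)]
    return list(zip(samples, samples[1:]))

dt = 0.001
T = 2 * math.pi  # fundamental period of the synthetic signal
# Synthetic stand-in for I(t): a waveform with a period-2T subharmonic.
N = int(40 * T / dt)
sig = [math.sin(i * dt) + 0.5 * math.sin(i * dt / 2) for i in range(N)]

pairs = return_map(sig, t0=0.25, tau=T, dt=dt)
```

Sampled once per fundamental period, the period-doubled signal alternates between two values, so the return map collapses onto two points; a chaotic signal would instead trace out a curve, as in the diode experiments.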
Two examples of experiments on chaotic oscillations in nonlinear circuits using this technique are those by Matsumoto et al. (1984, 1985) and Bryant and Jeffries (1984a,b). The circuit studied by Matsumoto et al. is shown in Figure 4-34\(a\) and consists of three coupled circuits with a nonlinear resistor. This circuit is autonomous; that is, there is no driving voltage. Thus, the system can produce oscillations only if the nonlinear resistor has negative resistance over some voltage range. In their model, Matsumoto et al. (1984) chose the trilinear current-voltage relation shown in Figure 4-34\(b\), which has the form \[g(V_{1})=m_{0}V_{1}+\frac{1}{2}(m_{1}-m_{0})\,|\,V_{1}+b|+\frac{1}{2}(m_{0}-m_{1})|\,V_{1}-b|\] (4-6.8) The resulting circuit equations are obtained by summing currents at nodes A and B in Figure 4-34\(a\) and summing voltages in the left-hand circuit loop: \[C_{1}\dot{V}_{1} =\frac{1}{R}\,(V_{2}-V_{1})-g(V_{1})\] \[C_{2}\dot{V}_{2} =\frac{1}{R}\,(V_{1}-V_{2})+I\] (4-6.9) \[L\dot{I} =\,-V_{2}\] where \(V_{1}\) and \(V_{2}\) are the voltages across the capacitors \(C_{1}\) and \(C_{2}\) and \(I\) is the current through the inductor. Chua and co-workers created the trilinear resistor (4-6.8) by using an operational amplifier with diodes. For small voltages, the nonlinear resistance is negative, the equilibrium position \((V_{1},\,V_{2},I)=(0,\,0,\,0)\) is unstable, and oscillations occur. Chaotic oscillations were found for \(1/C_{1}=9\), \(1/C_{2}=1\), \(1/L=7\), \(G=0.7\), \(m_{0}=-0.5\), \(m_{1}=-0.8\), and \(b=1\) in a set of consistent units. A chaotic time history is shown in Figure 4-35, which has the same character as the Lorenz attractor (Figure 1-27). See also Chua et al. (1986).

Figure 4-34: Circuit with trilinear active circuit elements which leads to autonomous chaotic oscillations. [From Matsumoto et al. (1985), copyright 1985 Institute of Electrical and Electronics Engineers.]
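Equations (4-6.9) with the trilinear resistor (4-6.8) integrate easily. A sketch using the parameter set quoted above, with G = 1/R (after a time rescaling these values correspond to the much-studied double-scroll case, so the trajectory should visit both lobes of the attractor):

```python
C1_INV, C2_INV, L_INV = 9.0, 1.0, 7.0   # 1/C1, 1/C2, 1/L from the text
G, M0, M1, B = 0.7, -0.5, -0.8, 1.0     # conductance and trilinear slopes

def g(v):
    # Trilinear current-voltage relation, Eq. (4-6.8):
    # slope M1 for |v| < B, slope M0 outside.
    return M0 * v + 0.5 * (M1 - M0) * (abs(v + B) - abs(v - B))

def rhs(y):
    # Chua circuit equations (4-6.9).
    v1, v2, i = y
    dv1 = C1_INV * (G * (v2 - v1) - g(v1))
    dv2 = C2_INV * (G * (v1 - v2) + i)
    di = -L_INV * v2
    return (dv1, dv2, di)

def simulate(y0=(0.1, 0.0, 0.0), h=0.005, n=80000):
    # Fourth-order Runge-Kutta integration from near the unstable origin.
    y, hist = tuple(y0), []
    for _ in range(n):
        k1 = rhs(y)
        k2 = rhs(tuple(a + h / 2 * b for a, b in zip(y, k1)))
        k3 = rhs(tuple(a + h / 2 * b for a, b in zip(y, k2)))
        k4 = rhs(tuple(a + h * b for a, b in zip(y, k3)))
        y = tuple(a + h / 6 * (p + 2 * q + 2 * r + s)
                  for a, p, q, r, s in zip(y, k1, k2, k3, k4))
        hist.append(y)
    return hist

hist = simulate()
```

Plotting \(V_1\) against \(V_2\) from `hist` should reproduce the two-lobed double-scroll portrait of Figure 4-35, with the trajectory switching irregularly between scrolls centered near \(V_1 = \pm 1.5\).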
Figure 4-35: Chaotic trajectory for a circuit with a trilinear resistor (see Figure 4-34), from a numerical simulation. This attractor, based on Chua's circuit, is called the _double scroll_. [From Matsumoto et al. (1985), copyright 1985 Institute of Electrical and Electronics Engineers.]

### Magnetomechanical Models

_Dynamo Models._ A physical model that has received considerable attention is the rotating disk in a magnetic field. This system is of interest to geophysicists as a potential model to explain reversals of the earth's magnetic field. A single-disk dynamo is shown in Figure 4-36. The equations governing the rotation \(\Omega\) and the currents \(I_{1}\) and \(I_{2}\) are of the form (see Robbins, 1977) \[J\,\dot{\Omega} =\,-\,k\Omega\,-\,\mu_{2}I_{1}(I_{1}+I_{2})\,+\,T\] \[L_{1}\dot{I}_{1} =\,-\,RI_{1}-R_{3}I_{2}+\,\mu_{1}\Omega I_{1}\] (4-6.10) \[L_{2}\dot{I}_{2} =\,-\,R_{2}I_{2}\,+\,\mu_{2}\Omega I_{1}\] where \(T\) is an applied constant torque. The time traces in Figure 4-36 show that the current (and hence the magnetic field) can reverse in an apparently random manner. [See also Jackson (1990) for a lengthy discussion of an analysis of Eqs. (4-6.10).]

#### Magnetically Levitated Vehicles

Suspension systems for land-based vehicles must provide vertical and lateral restoring forces when the vehicle departs from its straight path. Conventional suspension systems such as pneumatic tires and steel wheels on steel rails, as well as the futuristic systems of air cushion or magnetic levitation, all exhibit nonlinear stiffness and damping behavior and are thus candidates for chaotic vibrations. As an illustration, some experiments performed at Cornell University on a magnetically levitated vehicle are described. [See the book by Moon (1984a, 1993), which describes magnetic levitation transportation mechanics.] In this experiment, permanent magnets were attached to a rigid platform and a continuous L-shaped aluminum guideway was moved
past the model using a 1.2-meter-diameter rotating wheel (Figure 4-37). The induced eddy currents in the aluminum guideway interact with the magnetic field of the magnets to produce lift, drag, and lateral guidance forces. The magnetic drag force is nonconservative and can pump energy into the vibrations of the model. Thus, under certain conditions, the model can undergo limit cycle oscillations. As the speed is increased, damped vibrations change to growing oscillations (see bottom of Figure 4-37). The nonlinearities in the suspension forces limit the vibration, and a limit cycle motion results. [This bifurcation in stability is known in mathematics as a _Hopf bifurcation_ (Chapter 1). In mechanics it is called a _flutter_ oscillation.] In addition to flutter or limit cycle oscillations, the levitated model can undergo static bifurcations. Thus, at certain speeds, the equilibrium state can change from vertical to two stable tilted positions, as shown in Figure 4-37. This latter instability is known in aircraft dynamics as _divergence_ and is analogous to buckling of an elastic column. In our experiments, chaotic vibrations occurred when the system exhibited both divergence (multiple equilibrium states) and flutter. The flutter provides a mechanism to throw the model from one side of the guideway to the other, similar to what occurred in the buckled beam problem discussed in Chapter 2. The mathematical model for this instability, however, has two degrees of freedom. Lateral and roll dynamics were measured from films of the chaotic vibrations (Figure 4-38).

Figure 4-36: _Top:_ Disk dynamo model of Robbins (1977) for reversals of the Earth's magnetic field. _Bottom:_ Chaotic current reversals from numerical solutions of the disk dynamo equations (4-6.10).

Figure 4-37: _Top:_ Sketch of magnetically levitated model on a rotating aluminum guideway. _Bottom:_ Limit cycle bifurcation of levitated model.

Figure 4-38: Chaotic lateral motions of the levitated model.
These vibrations were quite violent, and if they occurred in an actual vehicle traveling at 400-500 km/h, the vehicle would probably derail and be destroyed.

### Optical Systems

We have seen that multiple-well potential problems are a natural source of chaotic oscillations. The creation of coherent light in devices known as _lasers_ involves stimulated transitions of electrons between two or more atomic energy levels. Thus, it is not surprising that chaotic and complex dynamical behavior may be found in laser systems. Many papers have been published in the physics literature on chaotic behavior of laser systems as well as on the chaotic propagation of light through nonlinear optical devices. An extensive review of chaos in light systems has been written by Harrison and Biswas (1986), and a very readable introduction to nonlinear dynamics of lasers may be found in Haken (1985). In elementary laser systems the nonlinearity originates from the fact that the system oscillates between at least two discrete energy levels. The simplest mathematical model for such systems involves three first-order equations for the electric field in the laser cavity, the population inversion, and the atomic polarization. These equations, known as the _Maxwell-Bloch equations_, are similar in structure to the Lorenz equations discussed in Chapters 1 and 3 [see Eq. (3-8.3)]. Chaotic phenomena in lasers have been observed in both the autonomous mode and the modulation mode. The simplest model for laser dynamics is derived from Maxwell's equations of electromagnetics and the semiclassical theory of quantum mechanics. At one level of the theory, electrons are assumed to reside in one of two states, each governed by wave functions \(\varphi_{1}\), \(\varphi_{2}\). The complete wave function is then assumed to be a superposition of these two with time-varying amplitudes \(c_{1}(t)\), \(c_{2}(t)\). Here \(|c_{i}|^{2}\) is the probability of finding the electron in the state '\(i\)'.
The difference \(d=|c_{2}|^{2}-|c_{1}|^{2}\) is called the _electron occupation difference_, and its macroscopic measure, denoted by \(D(t)\), is called the _inversion density_. When there are more electrons in one state than the other, the material has an atomic electric dipole density whose macroscopic measure is the polarization density \(P(t)\). The third dynamical variable in the laser problem is the macroscopic electric field \(E(t)\). In the dynamical equations, only the slowly varying or modulation parts of these variables are of concern (i.e., we filter out the high-frequency dynamics of the light wave). The basic dynamical equations in these "slow" variables take the form described in Haken (1985): \[\begin{array}{l}\dot{P}\,=\,-\,\gamma_{1}P\,+\,\gamma ED\\ \dot{D}\,=\,-\,\gamma_{2}D\,+\,\gamma_{2}(\Lambda\,+\,1)\,-\,\gamma_{2}\Lambda EP \\ \dot{E}\,=\,-\,\kappa E\,+\,\kappa P\end{array}\] (4-6.11) It is left to the reader as an exercise to show that these equations can be related to the Lorenz equations (1-3.9), or see Section 8.3 of Haken's book. Many experimental observations of period doubling and other chaotic phenomena have been reported [e.g., see Milonni et al. (1987) for a review]. The other class of problems discussed in Harrison and Biswas (1986) involves passive nonlinear optics. Here the index of refraction (and hence the speed of light in the medium) depends on the intensity of the light, for example, through the Kerr effect.

### Fluid and Acoustic Systems

#### Chaotic Dynamics in Fluid Systems

Although the primary focus of this book is on low-order mechanical and electrical systems, the major impact of the new dynamics on fluid mechanics warrants mention of at least a few fluid experiments in chaotic motions. We recall from Chapter 1 that the major nonlinearity in fluid problems is the convective acceleration term \(\mathbf{v}\cdot\nabla\mathbf{v}\) in the equations of motion (1-1.3).
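The Lorenz-like character of Eqs. (4-6.11) can be checked numerically. In the sketch below the polarization decay is written as \(-\gamma_{1}P\), the rate constants are illustrative choices rather than values for any real laser, and the pump level \(\Lambda + 1 = 3\) is deliberately below the chaotic threshold, so the model settles to steady coherent emission, the analog of a stable Lorenz fixed point:

```python
GAMMA1, GAMMA, GAMMA2 = 1.0, 1.0, 0.5   # illustrative decay/coupling rates
KAPPA, LAM = 2.0, 2.0                   # cavity loss and pump parameter

def rhs(y):
    # Eqs. (4-6.11), with the polarization decay written as -GAMMA1*P.
    P, D, E = y
    dP = -GAMMA1 * P + GAMMA * E * D
    dD = -GAMMA2 * D + GAMMA2 * (LAM + 1.0) - GAMMA2 * LAM * E * P
    dE = -KAPPA * E + KAPPA * P
    return (dP, dD, dE)

def simulate(y0=(0.1, 0.1, 0.5), h=0.01, n=30000):
    # Fourth-order Runge-Kutta integration; returns the final state.
    y = tuple(y0)
    for _ in range(n):
        k1 = rhs(y)
        k2 = rhs(tuple(a + h / 2 * b for a, b in zip(y, k1)))
        k3 = rhs(tuple(a + h / 2 * b for a, b in zip(y, k2)))
        k4 = rhs(tuple(a + h * b for a, b in zip(y, k3)))
        y = tuple(a + h / 6 * (p + 2 * q + 2 * r + s)
                  for a, p, q, r, s in zip(y, k1, k2, k3, k4))
    return y

P, D, E = simulate()
```

In the change of variables suggested in the exercise, \(\Lambda + 1\) plays the role of the Lorenz parameter \(\rho\); the steady states \(E = P = \pm 1\), \(D = 1\) correspond to the Lorenz fixed points \(C_{\pm}\), and raising the pump far enough destabilizes them and can yield chaotic output.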
However, other nonlinearities may also play a role, such as free surface or interface conditions and non-Newtonian viscous effects. We can classify five types of fluid experiments in which chaotic motions have been observed:

1. Closed-flow systems: Rayleigh-Benard convection, Taylor-Couette flow between cylinders
2. Open-flow systems: pipe flow, boundary layers, jets
3. Fluid particles: dripping faucet
4. Waves on fluid surfaces: gravity waves
5. Reacting fluids: chemical stirred tank reactor, flame jets

Another set of fluid problems has been the collapse of fluid bubbles, which can create acoustic chaos (Lauterborn and Cramer, 1981). One reason for the intense interest in chaotic dynamics and fluids is its potential for unlocking the secrets of turbulence. [For example, see Swinney (1983) for a review and see the edited volume by Tatsumi (1984) for a collection of papers on fluids and chaos.] Some feel that this may be too ambitious a goal for a theory based on a few ordinary differential equations and maps. One view is that dynamical systems theory will provide a good model for the transition to turbulence, but will require major breakthroughs to solve the more difficult problem of fully developed spatial and temporal turbulence (strong turbulence). However, a group at Cornell University has recently studied the dynamics of coherent structures in a turbulent boundary layer for open flow over a wall (Aubry et al., 1988) using modern global bifurcation theory. This work was one of the first to seek to examine both spatial and temporal complexities in fluid problems. Whatever the ultimate progress, nonlinear dynamical theory has added new tools to the study of experimental fluid mechanics.

#### Closed-Flow Systems: Rayleigh-Benard Thermal Convection

We recall from Chapter 1 that a thermal gradient in a fluid under gravity produces a buoyancy force that leads to a vortex-type instability with resulting chaotic and turbulent motions (Figure 4-39).
By far the most studied experimental system is the thermal convection of fluid in a closed box. This is the system that Lorenz tried to model with his famous equations (3-8.3). Experimental studies of Rayleigh-Benard thermal convection in a box have shown period-doubling sequences as precursors to the chaotic state. They have been carried out in helium, water, and mercury for a wide range of nondimensional Prandtl and Rayleigh numbers. These experiments emerged in the late 1970s. For example, Libchaber and Maurer (1978) observed period-doubling convection oscillations in helium. A number of experimental papers have emerged from a group at the French national laboratory at Saclay associated with Berge and co-workers (1980, 1982, 1985); see also Dubois et al. (1982). The experiment is similar to that pictured in Figure 4-39, with silicone oil as the fluid in a rectangular cell with dimensions 2 cm \(\times\) 2.4 cm \(\times\) 4 cm. These authors have observed both the quasiperiodic route to chaos (Newhouse et al., 1978) and intermittent chaos. In the former, they observe the following sequence of dynamic events as the temperature gradient is increased: \[\text{steady state}\rightarrow\text{monofrequency motion}\rightarrow\text{quasiperiodic motion}\rightarrow\text{chaotic motion}\] The frequency range observed in their experiments is very low, for example, 9-30 \(\times\) 10\({}^{-3}\) Hz. They were one of the first groups to obtain Poincare maps in fluid experiments. This was facilitated by their discovery of regions in the flow where one frequency or oscillator was predominant. Thus, they could use one frequency to synchronize the Poincare maps. Two maps are shown in Figure 2-19. The first is quasiperiodic and the frequency ratio is close to 3.

Figure 4-39: Sketch of thermofluid convection rolls.
The second is based on 1500 Poincare points and shows a breakup of the toroidal attractor before chaos sets in. The techniques used to measure the motion included laser Doppler anemometry and a differential interferometric method. More recent work involving mode-locking and chaos in convection problems has been done by Haucke and Ecke (1987). In the classical Rayleigh-Benard problem, one assumes fixed boundary conditions on the top and bottom of the fluid layer. For a small temperature difference \(\Delta T\), no fluid motion takes place, but at a critical \(\Delta T\), convective or circulation flow occurs. This motion is referred to as _Rayleigh-Benard convection_. Truncation of the Fourier expansion to three modes was studied by Lorenz (1963). An earlier study by Saltzman (1962) used a five-mode truncation. In this simplification, the velocity in the fluid (\(\upsilon_{x}\), \(\upsilon_{y}\)) is written in terms of a stream function \(\psi\): \[\upsilon_{x}=\frac{\partial\psi}{\partial y},\qquad\upsilon_{y}=\frac{-\partial\psi}{\partial x}\] (4-7.1) In the Lorenz model, the nondimensional stream function and perturbed temperature are written in the form [see Lichtenberg and Lieberman (1983, pp. 443-446) for a derivation] \[\psi = \sqrt{2}\,x(t)\sin\,\pi ax\,\sin\,\pi y\] (4-7.2) \[\theta = \sqrt{2}\,y(t)\cos\,\pi ax\,\sin\,\pi y\,-\,z(t)\sin\,2\pi y\] (4-7.3) where the fluid layer is taken to have unit depth. The resulting equations for (\(x\), \(y\), \(z\)) are then given by \[\dot{x} = \sigma(y\,-\,x)\] \[\dot{y} = \rho x\,-\,y\,-\,xz\] (4-7.4) \[\dot{z} = \,-\,\beta z\,+\,xy\] The parameter \(\sigma\) is a nondimensional ratio of viscosity to thermal conductivity (Prandtl number), \(\rho\) is a nondimensional temperature gradient (related to the Rayleigh number), and \(\beta=4(1\,+\,a^{2})^{-1}\) is a geometric factor, with \(a^{2}=\frac{1}{2}\).
For the parameter values \(\sigma=10\), \(\rho=28\), and \(\beta=\frac{8}{3}\) (studied by Lorenz), there are three equilibrium points, all of them unstable. The origin is a saddle point, whereas the other two are unstable foci or spiral equilibrium points (see Figure 1-26). However, globally, one can show that the motion is bounded. Thus, the trajectories have no home but remain confined to an ellipsoidal region of phase space. A numerical example of one of these wandering trajectories is shown in Figure 1-27. A discussion of the bifurcation sequence as the thermal gradient is increased is given in Section 3.8.

#### Thermal Convection Model of Moore and Spiegel

It is often the case that discoveries of major significance are not singular and that different people in different places observe new phenomena at about the same time. Such appears to be the case regarding the discovery of low-order models for thermal convection dynamics. Above we discussed the now famous Lorenz (1963) equations, (4-7.4), which later received tremendous attention from mathematicians. Yet around the same time, Moore and Spiegel (1966), of the Goddard Institute and New York University, respectively, proposed a model for unstable oscillations in fluids which rotate, have magnetic fields or are compressible, and have thermal dissipation. The equations derived in their paper, like Lorenz's, are equivalent to three first-order differential equations. If \(z\) represents the vertical displacement of a compressible fluid mass in a horizontally stratified fluid (Figure 4-40\(a\)), restoring forces in the fluid are represented by a spring force and a buoyancy force resulting from gravity. Also, the fluid element can exchange heat with the surrounding fluid. Thus, the dynamics are modeled by a second-order equation (Newton's law) coupled to a first-order equation for heat transfer, leading to a third-order equation.
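The bounded-but-wandering picture is easy to confirm numerically. A minimal Runge-Kutta sketch of Eqs. (4-7.4) at Lorenz's values (not his original integration scheme):

```python
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0  # Lorenz's classical values

def lorenz(y):
    # Lorenz equations (4-7.4).
    x, yv, z = y
    return (SIGMA * (yv - x), RHO * x - yv - x * z, x * yv - BETA * z)

def simulate(y0=(1.0, 1.0, 1.0), h=0.01, n=5000):
    # Fourth-order Runge-Kutta integration, recording x(t) and z(t).
    y, xs, zs = tuple(y0), [], []
    for _ in range(n):
        k1 = lorenz(y)
        k2 = lorenz(tuple(a + h / 2 * b for a, b in zip(y, k1)))
        k3 = lorenz(tuple(a + h / 2 * b for a, b in zip(y, k2)))
        k4 = lorenz(tuple(a + h * b for a, b in zip(y, k3)))
        y = tuple(a + h / 6 * (p + 2 * q + 2 * r + s)
                  for a, p, q, r, s in zip(y, k1, k2, k3, k4))
        xs.append(y[0])
        zs.append(y[2])
    return xs, zs

xs, zs = simulate()
```

The trajectory never escapes a bounded region, yet it never settles down either: plotting \(x\) against \(z\) yields the familiar butterfly of Figure 1-27, with irregular switching between the two unstable spiral equilibria.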
In nondimensional form, this equation becomes \[\dddot{z}\,+\,\ddot{z}\,+\,(T\,-\,R\,+\,Rz^{2})\dot{z}\,+\,Tz\,=\,0\] (4-7.5) where a nonlinear temperature profile of the form \[\theta\,=\,\theta_{0}\left[\,1\,-\,\left(\frac{z}{L}\right)^{\!\!2}\right]\] is assumed. In Eq. (4-7.5), \(T\) and \(R\) are nondimensional groups: \[T\,=\,\left(\frac{\mbox{thermal relaxation time}}{\mbox{free oscillation time}}\right)^{\!\!2}\] \[R\,=\,\left(\frac{\mbox{thermal relaxation time}}{\mbox{free fall time}}\right)^{\!\!2}\] In their numerical studies, Moore and Spiegel discovered an entire region of aperiodic motion, as shown in Figure 4-40\(b\). In a follow-up paper, Baker et al. (1971) analyzed the stability of periodic solutions in the aperiodic regime. They showed that Eq. (4-7.5) can be put into the form \[\begin{array}{l}\ddot{s}\,=\,-\,(1\,-\,\delta)s\,+\,\theta\\ \dot{\theta}\,=\,-\,R^{\,-\,1/2}\theta\,+\,(1\,-\,\delta s^{2})\dot{s}\end{array}\] (4-7.6) The limit \(R\rightarrow\infty\) is the zero-dissipation case. In this limit (\(R\) large), Baker et al. showed that in the range of periodicity, the periodic solutions of (4-7.6) become locally unstable. This property of global stability and local instability seems to be characteristic of chaotic differential equations. In a more recent paper, Marzec and Spiegel (1980) studied a more general class of third-order equations of the form \[\begin{array}{l}\dot{x}\,=\,y\\ \dot{y}\,=\,-\,\frac{dV(x,\lambda)}{dx}\,-\,\varepsilon\mu y\\ \dot{\lambda}\,=\,-\,\varepsilon[\lambda\,+\,g(x)]\end{array}\] (4-7.7) where \(V(x,\,\lambda)\) is thought of as a potential function.

Figure 4-40: (\(a\)) Spring-mass model for thermal convection of Moore and Spiegel (1966). (\(b\)) Region of nonperiodic motions in the nondimensional parameter space for the thermal convection model of Moore and Spiegel (1966), Eq. (4-7.5).
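Equation (4-7.5) integrates readily as three first-order equations in \((z, \dot{z}, \ddot{z})\). The values T = 6 and R = 20 below are an assumption on our part, chosen to lie inside the nonperiodic region of Figure 4-40(b); note that with T < R the origin is linearly unstable, so the motion cannot decay to rest:

```python
T_PAR, R_PAR = 6.0, 20.0  # assumed values inside the nonperiodic region

def rhs(y):
    # Eq. (4-7.5) as three first-order equations: y = (z, z', z'').
    z, zd, zdd = y
    return (zd, zdd,
            -zdd - (T_PAR - R_PAR + R_PAR * z * z) * zd - T_PAR * z)

def simulate(y0=(0.1, 0.0, 0.0), h=0.004, n=50000):
    # Fourth-order Runge-Kutta integration, recording z(t).
    y, zs = tuple(y0), []
    for _ in range(n):
        k1 = rhs(y)
        k2 = rhs(tuple(a + h / 2 * b for a, b in zip(y, k1)))
        k3 = rhs(tuple(a + h / 2 * b for a, b in zip(y, k2)))
        k4 = rhs(tuple(a + h * b for a, b in zip(y, k3)))
        y = tuple(a + h / 6 * (p + 2 * q + 2 * r + s)
                  for a, p, q, r, s in zip(y, k1, k2, k3, k4))
        zs.append(y[0])
    return zs

zs = simulate()
```

The effective damping coefficient \(T - R + Rz^{2}\) is negative near \(z = 0\) and positive for large \(|z|\), so small motions grow while large ones are damped; this is the global-stability, local-instability property noted above.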
They show that both the Moore-Spiegel oscillator (4-7.5) and the Lorenz system (4-7.4) (with a change of variables) can be put into the above form (4-7.7). Strange attractor solutions to specific examples of (4-7.7) were found numerically. The above set of equations also models a second-order oscillator with feedback control \(\lambda\), similar to (4-2.21). It will be an interesting study for historians of science to answer why the Lorenz system received so much study while the Moore-Spiegel model was virtually ignored by mathematicians. Both purported to model convection. Lorenz published his article in the _Journal of Atmospheric Sciences_, whereas Moore and Spiegel published theirs in the _Astrophysical Journal_.

#### Closed-Loop Thermosiphon

It is curious, given the great amount of attention to the Lorenz attractor as a paradigm for convective flow chaos, that only a few attempts were made to design an experiment that incorporated the assumptions of the Lorenz model. One of these is the flow of fluid in a circular channel under gravity, called a _thermosiphon_. The relevance of this experiment to the Lorenz model was pointed out by Hart (1984). Convectively driven flows are of interest as models for geophysical flows such as warm springs or groundwater flow through permeable layers in the Earth's crust, and they are also of interest in applications such as solar heating systems or reactor core cooling. Early experiments by Bau and Torrance (1981) were performed in a rectangular loop thermosiphon. They derived equations that describe flow in a closed circular tube with gravity acting in the vertical plane, as shown in Figure 4-41. Essentially, all variables are assumed to be independent of the radial direction. The principal dependent variables are the circumferential velocity \(v(t)\) and the temperature \(T(\theta,\,t)\).

Figure 4-41: Thermal convection in a vertical one-dimensional fluid circuit. A model for a thermosiphon.

A
viscous wall stress is assumed to act in the fluid. Also, a prescribed wall temperature \(T_{w}(\theta)\) is assumed, with a linear cooling law proportional to \(T\,-\,T_{w}\). The basic equations are the balance of angular momentum for the fluid mass and a partial differential equation for the energy or heat balance law. A buoyancy force or moment is introduced by assuming that the fluid density depends on the temperature, \[\rho\,=\,\rho_{0}[1\,-\,\beta(T\,-\,T_{0})]\] (4-7.8) so that a net torque acts on the fluid proportional to \[g\beta a\int_{-\pi}^{\pi}T(\theta)\cos(\theta\,+\,\alpha)\,d\theta\] (4-7.9) where \(\theta\) is as defined in Figure 4-41. In a method similar to that used in deriving the Lorenz equations (4-7.4), the temperature is expanded in a Fourier series. In this way, the partial differential equation for the heat balance is reduced to a set of ordinary differential equations. Following Hart (1984), one writes \[T(\theta)\,=\,\sum\,[C_{n}(t)\cos\,n\theta\,+\,S_{n}(t)\sin\,n\theta]\] (4-7.10) He shows that only the \(n\,=\,1\) thermal modes determine the dynamics. By redefining variables \(x\,=\,v\), \(y\,=\,C_{1}\), and \(z\,=\,S_{1}\,+\,R_{a}\), where \(R_{a}\) is similar to the Rayleigh number, the resulting coupled first-order equations take the form \[\dot{x}\,=\,P_{r}[\,-\,F(x)\,+\,(\cos\,\alpha)y\,-\,(\sin\,\alpha)(z\,-\,R_{a})]\] \[\dot{y}\,=\,\,-\,xz\,-\,y\,+\,R_{a}x\] (4-7.11) \[\dot{z}\,=\,xy\,-\,z\] where \(F(x)\) is a nonlinear friction law. To obtain the Lorenz equations, one sets \(\alpha\,=\,0\) and \(F(x)\,=\,Cx\). The Lorenz limit corresponds to antisymmetric heating about the vertical. In their experiments, Bau and Torrance (1981) investigated the stability of the flow but did not explore the chaotic regime. Given the close correspondence between Eqs. (4-7.11) and the Lorenz equations (4-7.4), it would appear natural for experimental exploration of the chaotic regime of the thermosiphon to be attempted.
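The claimed reduction to the Lorenz form can be verified mechanically: with α = 0 and F(x) = Cx, C = 1, the right-hand side of Eqs. (4-7.11) agrees term by term with a Lorenz system having σ = Pr, ρ = Ra, and β = 1. A sketch (the numerical values of Pr and Ra are illustrative):

```python
import math

PR, RA, C = 10.0, 28.0, 1.0  # illustrative values; C = 1 for the comparison
ALPHA = 0.0                  # symmetric (Lorenz) heating limit

def thermosiphon(state, alpha=ALPHA, F=lambda x: C * x):
    # Eqs. (4-7.11) with friction law F(x) and heating angle alpha.
    x, y, z = state
    dx = PR * (-F(x) + math.cos(alpha) * y - math.sin(alpha) * (z - RA))
    dy = -x * z - y + RA * x
    dz = x * y - z
    return (dx, dy, dz)

def lorenz_form(state, sigma=PR, rho=RA, beta=1.0):
    # Lorenz equations (4-7.4), here with beta = 1.
    x, y, z = state
    return (sigma * (y - x), rho * x - y - x * z, x * y - beta * z)

# In the alpha = 0, F(x) = x limit the two vector fields coincide:
sample = (1.3, -0.7, 5.2)
a = thermosiphon(sample)
b = lorenz_form(sample)
```

Nonzero α (tilted heating) and a nonlinear friction law F(x) are then the physically motivated perturbations that take the thermosiphon away from the exact Lorenz paradigm.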
Another analysis of the relation between the Lorenz equations and fluid in a heated loop has been reported by Yorke et al. (1985). Earlier experiments with a fluid convection loop by Creveling et al. (1975) did not report chaotic motions. However, recent experiments by Gorman et al. (1984, 1986) have reproduced some of the features of the Lorenz attractor. The working fluid was water, and the apparatus consisted of a 75-cm-diameter loop of 2.5-cm-diameter Pyrex (glass) tubing. The bottom half was heated with electrical resistance tape while the top half was kept in a constant-temperature bath.

#### Taylor-Couette Flow Between Cylinders

A classic fluid mechanics system which exhibits preturbulent chaos is the flow between two rotating cylinders (called _Taylor-Couette flow_), shown in Figure 4-42. Much work has been done on this system [e.g., see Swinney (1983) for a review]. This flow is sensitive to the Reynolds number \(R=(b-a)a\Omega_{i}/\nu\), to the ratios \(b/a\) and \(\Omega_{o}/\Omega_{i}\) (the quotient of the outer cylinder rotation rate to the inner), and to the boundary conditions on the ends. This system exhibits a prechaos behavior of quasiperiodic oscillations before broad-band chaotic noise sets in. Other work includes that of Brandstater and co-workers (1983, 1984, 1987). Taylor-Couette flow also exhibits complex spatiotemporal dynamics that are now under study by the group at the University of Texas at Austin under Professor H. Swinney.

Figure 4-42: Sketch of flow between two rotating cylinders known as _Taylor-Couette flow_.

#### 4.7.3 Pipe Flow Chaos

While closed-flow problems have captured the bulk of the attention vis-a-vis dynamical systems theory, open-flow problems are of great importance to engineering design. These include flows over airfoils, boundary layers, jets, and pipe flow.
Recently, increased attention has been focused on applying the theory of chaotic dynamics to the laminar-turbulent transition problem in open-flow systems. One example is the experiment of Sreenivasan (1986) of Yale University, who studied intermittency in pipe flows. In this problem, low-velocity flow is laminar and steady, whereas for sufficiently high mean flow velocity the flow field becomes turbulent. At some critical velocity, the transition from laminar to turbulent appears to occur in intermittent bursts of turbulence. As the velocity increases, the fraction of time spent in the chaotic state increases until the flow is completely turbulent. Some observations of this phenomenon go back to Reynolds in 1883. The current focus of attention is to try to relate features of the intermittency, such as the distribution of burst times, to dynamical theories of intermittency (e.g., see Pomeau and Manneville, 1980).

#### Fluid Drop Chaos

A simple system with which the reader can observe chaotic dynamics at home is the dripping faucet. This experiment is described by R. Shaw of the University of California--Santa Cruz in a monograph on chaos and information theory (1984). The experiment and a sketch of experimental data are shown in Figure 4-43. The observable variable is the time between drops, as measured with a light source and photocell, and the control variable is the flow rate from the nozzle. In his experiment, Shaw measures a sequence of time intervals \(\{T_{n},\,T_{n+1},\,T_{n+2}\}\) but does not measure the drop size or other physical properties of the drop such as shape. He and his students obtained periodic motion and period-doubling phenomena as well as chaotic behavior. Different maps of \(T_{n+1}\) versus \(T_{n}\) are obtained for different flow rates. The map in Figure 4-43 shows a classic one-dimensional parabolic map similar to the logistic map of Feigenbaum (1978).
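Shaw's parabolic drop-interval map can be mimicked with the logistic map \(x_{n+1} = \lambda x_{n}(1 - x_{n})\) mentioned above, with λ playing the role of the flow rate. A sketch showing the period doubling that precedes chaos:

```python
def iterate(lam, x0=0.3, n_transient=1000, n_keep=8):
    # Iterate the logistic map, discard the transient, return the cycle.
    x = x0
    for _ in range(n_transient):
        x = lam * x * (1 - x)
    out = []
    for _ in range(n_keep):
        x = lam * x * (1 - x)
        out.append(x)
    return out

period2 = iterate(3.2)   # settles onto a stable period-2 cycle
period4 = iterate(3.5)   # settles onto a stable period-4 cycle
chaotic = iterate(3.9)   # chaotic regime: no short repeating cycle
```

Plotting successive pairs \((x_{n}, x_{n+1})\) from the chaotic run traces out the parabola itself, just as Shaw's \((T_{n}, T_{n+1})\) plot traces out the experimental map in Figure 4-43.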
They also observed a more complicated map which is best represented in a three-dimensional phase space, \(T_{n}\) versus \(T_{n+1}\) versus \(T_{n+2}\). This is an example of using discrete data to construct a pseudo-phase-space, and it suggests that another dynamical variable (such as drop size) should be observed.

#### 4.7.4 Surface Wave Chaos

It is well known that waves can propagate on the interface between two immiscible fluids under gravity (e.g., air on water). Such waves can be excited by vibrating a liquid in the vertical direction, in the same way that one can parametrically excite vibrations in a pendulum. Subharmonic excitation of shallow water waves goes back to Faraday in 1831. An analysis of this phenomenon in the context of period doubling has been performed by a group at UCLA (Keolian et al., 1981). They looked at saltwater waves in an annulus of 4.8-cm mean radius with a cross section of 0.8 \(\times\) 2.5 cm. The system is driven in the vertical direction by placing the annulus on an acoustic loudspeaker. By measuring the wave height versus time at several locations around the annulus, the UCLA group measured a subharmonic sequence before chaos that does not follow the classic period-doubling sequence; for example, they observe resonant frequencies \(pf/m\), where \(f\) is the driving frequency, for \(m=1,2,4,12,13,16,18,20,24,28,36\), which differs from the \(2^{n}\) sequence of the logistic equation. In another study of forced surface waves, Ciliberto and Gollub (1985) looked at a cylindrical dish of water of radius 6.35 cm with a depth of about 1 cm. They also used loudspeaker excitation to explore regions of periodic and chaotic motion of the fluid height. In the region around 16 Hz, for example, they obtained chaotic wave motion for a vertical driving amplitude of around 0.15 mm.

Figure 4-43: Experimental one-dimensional map for the time between drops in a dripping faucet. [From Shaw (1984) with permission of Ariel Press, copyright 1984.]
They tried to interpret the results in terms of nonlinear interaction between two linear spatial modes (see Figure 6-8, Section 6.2). A theoretical analysis of this problem has been done by Holmes (1986).

### Acoustic Chaos

At first glance, the propagation of sound waves in air or water would appear to be a linear phenomenon, since they are usually modeled by the linear wave equation. But, as anyone who has tried to blow into a trumpet or a wind instrument has found out, it is not difficult to create noisy, chaotic-like acoustic effects. Chaos in acoustics has at least three possible origins: nonlinearities in the medium itself; the acoustic generator; and the reflection, impedance, or reception of the acoustic waves. A review of chaotic dynamics in some acoustics problems has been given by Lauterborn and Holzfuss (1991) of the Technical University of Darmstadt, Federal Republic of Germany. This group has pioneered the study of chaotic noise from bubbles and cavitation in liquids. In this class of problems, a high-intensity source creates bubbles in the fluid. The nonlinear behavior of the bubbles is then believed to be the source of period-doubling and chaotic acoustic phenomena in the fluid (e.g., see Lauterborn and Holzfuss, 1989, and Lauterborn and Cramer, 1981). Two papers on musical chaos have been published by Maganza et al. (1986), who studied period doubling and chaos in clarinet-like instruments, and by Gibiat (1988), who studied a similar system. Embedding-space or pseudo-phase-space techniques were used to look for qualitative behavior of the clarinet-like resonator. In a study in our laboratory, chaotic modulation in an organ-pipe generator of sound was obtained when a nonlinear mechanical impedance was placed at the open end of the meter-long pipe (Moon, 1986). Other examples of acoustic chaos are discussed in the review by Lauterborn and Holzfuss (1991).
### Chemical and Biological Systems

#### Chaos in Chemical Reactions

Rossler (1976a,b) and Hudson et al. (1984) have observed chaotic dynamics in a small reaction-diffusion system. Also, Schrieber et al. (1980) have observed similar behavior in two coupled stirred-cell reactors. If \((x_{1},\,y_{1})\) represents the chemical concentrations in one cell and \((x_{2},\,y_{2})\) represents the concentrations in the other cell, a set of equations can be derived to model the dynamic behavior:
\[\begin{array}{l}\dot{x}_{1}=A\,-\,(B\,+\,1)x_{1}\,+\,x_{1}^{2}y_{1}\,+\,D_{1}(x_{2}\,-\,x_{1})\\ \dot{y}_{1}=Bx_{1}\,-\,x_{1}^{2}y_{1}\,+\,D_{2}(y_{2}\,-\,y_{1})\\ \dot{x}_{2}=A\,-\,(B\,+\,1)x_{2}\,+\,x_{2}^{2}y_{2}\,+\,D_{1}(x_{1}\,-\,x_{2})\\ \dot{y}_{2}=Bx_{2}\,-\,x_{2}^{2}y_{2}\,+\,D_{2}(y_{1}\,-\,y_{2})\end{array}\] (4-8.1)
A now classic example of chemical chaos is the Belousov-Zhabotinski reaction in a stirred-flow reactor. Subharmonic oscillations and period doubling have been observed by a group under Professor H. Swinney of the University of Texas at Austin (Simoyi et al., 1982). With the input chemical concentrations held fixed, the time history of the concentration of the bromide ion, one of the reaction chemicals, shows complex subharmonic behavior at different flow rates. See Argoul et al. (1987b) for a review. In a more recent study, this group has developed a model to explain spatially induced chaos in Belousov-Zhabotinski-type reaction-diffusion systems (Vastano et al., 1990).

#### Biological Chaos

One of the exciting aspects of the new mathematical models in nonlinear dynamics is the wide applicability of these paradigms to many different fields of science. Thus, it is no surprise that dynamic phenomena in biological systems have been explained by some of the very same equations used to describe chaos in the electrical and mechanical sciences. A collection of papers concerning chaos in biological systems may be found in the book edited by Degn et al. (1987).
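Equations (4-8.1) can be explored numerically. The sketch below integrates the two coupled cells with a hand-rolled fourth-order Runge-Kutta step; the parameter values and initial concentrations are illustrative assumptions, since the text does not give them:

```python
def coupled_cells(state, A=2.0, B=5.9, D1=1.0, D2=4.0):
    """Right-hand side of the two-cell reaction model, Eq. (4-8.1).
    Parameter values here are assumed for illustration only."""
    x1, y1, x2, y2 = state
    return [
        A - (B + 1.0) * x1 + x1**2 * y1 + D1 * (x2 - x1),
        B * x1 - x1**2 * y1 + D2 * (y2 - y1),
        A - (B + 1.0) * x2 + x2**2 * y2 + D1 * (x1 - x2),
        B * x2 - x2**2 * y2 + D2 * (y1 - y2),
    ]

def rk4_step(f, state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = f([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

state = [1.0, 1.0, 1.2, 0.8]   # slightly asymmetric initial concentrations
for _ in range(20000):         # integrate to t = 20
    state = rk4_step(coupled_cells, state, 0.001)
print(state)                   # concentrations remain positive and bounded
```

Plotting \(x_{1}\) against \(y_{1}\) (or \(x_{1}\) against \(x_{2}\)) over a longer run reveals the oscillatory or irregular dynamics of the coupled cells.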
A readable book on this subject has been published by Glass and Mackey (1988). A few other examples are described here.

#### Chaotic Heart Beats

Glass et al. (1983) have performed dynamic experiments on spontaneous beating in groups of cells from embryonic chick hearts. Without external stimuli, these oscillations have a period between 0.4 and 1.3 s. However, when periodic current pulses are sent into the group using microelectrodes, entrainment, quasiperiodicity, and chaotic motions have been observed (see Fig. 2-22). The circle map has been used as a model to explain some of these phenomena [e.g., see Guevara et al. (1990), Glass (1991), and Arnold (1991)]. A discussion of the relevance of nonlinear dynamics and chaotic models to ventricular fibrillation was given by Goldberger et al. (1986) and Goldberger and West (1987a). These papers contain a number of references on cardiac dynamics. In another paper, Rigney and Goldberger (1989) used a parametrically excited pendulum equation to model a period doubling of the heart rate observed in an electrocardiograph experiment. Another work, that of Lewis and Guevara (1990), described the dynamic modeling of ventricular muscle due to periodic excitation in the sinoatrial node of the heart. The authors started with the telegraph equation, a partial differential equation used in electrical engineering, and reduced the dynamics to a one-dimensional map.

#### Nerve Cells

In a similar type of experiment, sinusoidal stimulation of a giant neuron in a marine mollusk by Hayashi et al. (1982) also showed evidence of chaotic behavior.

#### Biological Membranes

Biological membranes control the flow of ions, such as potassium and sodium, into a cell. Time history measurements of the ion current through a channel in a cell of the cornea, as shown in Figure 4-44, seem to indicate a chaotic opening and closing of the channel. Earlier models of this phenomenon were based on intrinsic random processes.
However, a dynamic model of the ion kinetics using an iterated one-dimensional map has been proposed by Liebovitch and Toth (1991), shown in Figure 4-45. The apparent chaotic switching from open to closed is similar to the particle in a two-well potential described in Chapters 1 and 2. In another paper, Liebovitch and Czegledy (1991) used a two-well Duffing oscillator as a possible model for the ion dynamics.

Figure 4-44: Experimental time history of the current through a single-ion-channel protein from a cell in the cornea. [From Liebovitch and Czegledy (1991).]

Figure 4-45: _Top:_ One-dimensional map of chaotic ion channel kinetics. _Bottom:_ Iteration of the map. [From Liebovitch and Toth (1991) with permission of Academic Press, Ltd.]

### Nonlinear Design and Controlling Chaos

Is chaos good for anything? Can engineers use chaos theory to invent new technology? Can one design a chaotic controller? In a time of debate over the role of basic research in enhancing technology and productivity, these questions are being raised, if not by dynamicists themselves, then by their funding sponsors. Certainly the role of chaotic dynamics in the mixing of fluids and chemicals has drawn interest in chemical engineering (see Section 8.4 and Ottino, 1989b). But, in electrical and mechanical systems, chaotic behavior is more often avoided in design. However, recent progress has been made in using nonlinear behavior to advantage in the design of electronic circuits, and even in using the chaotic nature of a strange attractor to design a control system. Much of this recent work stems from the work of a group at the University of Maryland. Research conducted in 1992 points the way toward future applications of what I would call "nonlinear thinking" in the design of dynamic systems. Perhaps the most clever set of papers is that inspired by recent research from the group at the University of Maryland entitled "Controlling Chaos."
In Ott, Grebogi, and Yorke (1990), it was proposed to use the stochastic nature of a chaotically behaved system to lock onto one of the many unstable periodic motions embedded in the strange attractor. Two papers which apply this nonlinear thinking to control chaos in different experimental systems are Spano and Ditto (1991) and Hunt (1991). Spano and Ditto describe how small perturbations can be used to switch between different orbits embedded in a strange attractor, using a control scheme derived from the underlying map for the attractor. They use a flexible beam made of an amorphous ferroelastic material called Metglas (Fe\({}_{81}\)B\({}_{13.5}\)Si\({}_{3.5}\)C\({}_{2}\)) whose effective Young's modulus is sensitive to small magnetic fields. When an applied field is periodically perturbed, the beam undergoes chaotic motions. Embedded in this attractor is an infinite set of unstable periodic orbits. Additional perturbations to the field were then used to control the motion, switching from one formerly unstable periodic orbit to another. In the paper by Hunt (1991) of Ohio University, the system consists of a rectifier-type diode in series with an inductor driven by a sinusoidal voltage. The map generated by the strange attractor is then used to produce a control signal. Using this method, the author was able to stabilize almost all uncontrolled unstable orbits up to period 23. These two papers essentially employ digital control schemes. Extending this idea of using chaotic systems to build useful oscillators, Pecora and Carroll (1991) describe how one might "paste together" nonlinear subsystems to build useful chaotically driven systems. For example, one might build a circuit (the receiver) that can be synchronized with the chaotic output of another subsystem (the transmitter). In this way, one hopes to be able to introduce modulation on the chaotic carrier for secure transmission of information.
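The flavor of these map-based control schemes can be conveyed with a toy one-dimensional version of the Ott-Grebogi-Yorke idea applied to the logistic map; this is an illustration of the principle, not the control law used in either experiment. Near the unstable fixed point \(x^{*}=1-1/r_{0}\), a small parameter perturbation \(\delta r=-\lambda(x_{n}-x^{*})/g\), where \(\lambda=f'(x^{*})\) and \(g=\partial f/\partial r\), cancels the deviation to first order:

```python
def ogy_logistic(r0=3.9, n=8000, window=0.008):
    """Stabilize the unstable fixed point x* = 1 - 1/r0 of the chaotic
    logistic map x -> r x (1 - x) with tiny parameter perturbations dr
    (a toy 1D version of the OGY idea; parameters are illustrative)."""
    xstar = 1.0 - 1.0 / r0
    lam = 2.0 - r0                 # map slope f'(x*); |lam| > 1, so unstable
    g = xstar * (1.0 - xstar)      # sensitivity df/dr at the fixed point
    x, history = 0.3, []
    for _ in range(n):
        dr = 0.0
        if abs(x - xstar) < window:        # control only near the target orbit
            dr = -lam * (x - xstar) / g    # cancels the deviation to 1st order
        x = (r0 + dr) * x * (1.0 - x)
        history.append(x)
    return xstar, history

xstar, hist = ogy_logistic()
print(xstar, hist[-1])   # the orbit wanders chaotically, then locks onto x*
```

The orbit evolves freely until it wanders into the small control window, after which the perturbations (here never larger than about 2% of \(r_{0}\)) hold it on the formerly unstable fixed point, mirroring the small-perturbation philosophy of the experiments described above.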
In another use of the chaotic attractor for control, Shinbrot et al. (1990) of the University of Maryland described how one can devise control algorithms to move a system from one chaotic orbit to another at will in the phase space. This idea uses the exponential sensitivity of a chaotic system and small perturbations (control impulses) to direct a system from one point on an attractor to a target point on the attractor. There are two aspects to the oxymoron "controlling chaos." In one set of problems, one examines how a control parameter can either quench chaos in a dynamic system or produce chaotic dynamics in an otherwise quiet or periodically behaved system. One example of this type of "controlled chaos" is a paper by Golnaraghi and Moon (1991) using a servo-controlled 'pick and place' robotic device. However, the new concepts of "controlling chaos" inspired by the Maryland group illustrate how one might constructively use the exponentially divergent nature of chaotic orbits and the extreme sensitivity of these systems to small perturbations to design useful systems. These problems might also become paradigms in the education of engineers in the new methods of "nonlinear design." This design philosophy recognizes that nonlinear systems exhibit multiple basins of dynamic behavior, which can lead to more creative solutions than those based on linear dynamic systems.

## Problems

* Consider an inverted pendulum: a spherical mass at the end of a massless rod of length \(L\). The pendulum is constrained by a rigid wall on each side. At equilibrium the pendulum mass will rest on one of the two walls. Assume that the rest angles are small and show that for undamped free vibrations the dynamics are governed by (see Shaw and Rand, 1989)
\[\begin{array}{l}\ddot{x}\,-\,x\,=\,0,\qquad|x|<1\\ \dot{x}\rightarrow\,-\,\dot{x},\qquad|x|\,=\,1\end{array}\]
Show that a saddle point exists at the origin of the phase space \((x,\,\dot{x})\). Sketch a few trajectories.
* A mass particle is constrained to move under a constant gravity force in a circular path which lies in a vertical plane. Assume that the plane rotates with frequency \(\Omega_{0}\) about the vertical axis through the center of the circle. Find the value of \(\Omega_{0}\) for which the number of equilibria changes from one to three. Show that this problem is similar to a particle in a two-well potential. * Consider the two-degree-of-freedom system of a linear spring
## Chapter 5 Experimental Methods in Chaotic Vibrations

_Perfect logic and faultless deduction make a pleasant theoretical structure, but it may be right or wrong; the experimenter is the only one to decide, and he is always right._

L. Brillouin, _Scientific Uncertainty and Information_, 1964

### 5.1 Introduction: Experimental Goals

A review of physical systems which exhibit chaotic vibrations was presented in Chapter 4. In this chapter, we discuss some of the experimental techniques that have been used successfully to observe and characterize chaotic vibrations and strange attractors. To a great extent, these techniques are specific to the physical medium--for example, rigid body, elastic solid, fluid, optical, or reacting medium. However, many of those measurements which are unique to chaotic phenomena, such as Poincare maps or Lyapunov exponents, are applicable to a wide spectrum of problems. Since publication of the first edition, some researchers have turned their attention to the spatial as well as the temporal aspects of chaos. In this chapter we shall focus only on temporal chaos. An introduction to spatial dynamics is given in Chapter 8 along with a few experimental examples. A diagram outlining the major components of an experiment is shown in Figure 5-1. The source of the vibration is either (a) an external energy source such as an electromagnetic shaker or (b) an internal source of self-excitation. In the case of an autonomous system, such as the Rayleigh-Benard convection cell, the source of instability is a prescribed temperature difference across the cell, and the nonlinearities reside in the convective terms in the acceleration of each fluid element. The other major elements include _transducers_ to convert physical variables into electronic voltages, a _data acquisition_ and storage system, a _graphical display_ (such as an oscilloscope), and a data analysis computer.
The techniques that must be mastered for experiments in chaotic vibrations depend to some extent on the goals that one sets for the experimental study. These goals could include the following:

1. Establish the existence of chaotic vibration in a particular physical system.
2. Determine critical parameters for bifurcations.
3. Determine criteria for chaos.
4. Map out chaotic regimes.
5. Measure qualitative features of the chaotic attractor--for example, Poincare maps.
6. Measure quantitative properties of the attractor--for example, Fourier spectrum, Lyapunov exponents, probability density function.
7. Seek methods to quench, control, prevent, or exploit chaos in a technical system.

Figure 5-1: Diagram showing components of an experimental system to measure the Poincaré map of a chaotic physical system.

### 5.2 Nonlinear elements in dynamical systems

The phenomenon of chaotic vibrations cannot occur if the system is linear. Thus, in performing experiments in chaotic dynamics, one should understand the nature of the nonlinearities in the system. To refresh one's memory, a linear system is one in which the principle of superposition is valid. Thus, if \(x_{1}(t)\) and \(x_{2}(t)\) are each possible motions of a given system, then the system is linear if the sum \(c_{1}x_{1}(t)\,+\,c_{2}x_{2}(t)\) is also a possible motion. Another form of the superposition principle is more easily described in mathematical terms. Suppose the dynamics of a given system can be modeled by a set of differential or integral equations of the form
\[{\bf L}[{\bf X}]\;=\;{\bf f}(t) \tag{5.1}\]
where \({\bf X}(t)\,=\,(x_{1},\,x_{2},\,\ldots,\,x_{n})\) represents a set of independent dynamical variables that describe the system. Suppose the system is forced by two different input functions \({\bf f}_{1}(t)\) and \({\bf f}_{2}(t)\) with outputs \({\bf X}_{1}(t)\) and \({\bf X}_{2}(t)\).
Then if the system is linear, the effect of two simultaneous inputs can be easily found:
\[{\bf L}[c_{1}{\bf X}_{1}\;+\;c_{2}{\bf X}_{2}]\;=\;c_{1}{\bf f}_{1}(t)\;+\;c_{2}{\bf f}_{2}(t) \tag{5.2}\]
The only way that this property can hold is for the terms in the differential equations (5.1) to be of first power in \({\bf X}\) or \(\dot{\bf X}\), and so on--hence the term _linear system_. Nonlinear systems involve the unknown functions in forms other than the first power, that is, \(x^{2}\), \(x^{3}\), \(\sin x\), \(x^{a}\), \(1/(x^{2}\,+\,b)\), or similar forms for the derivatives or integrals of the function, for example, \(\dot{x}^{2}\), \([\int\!x\;dt]^{2}\). Experimental nonlinearities can be created in many ways, some of them quite subtle. In mechanical or electromagnetic systems, nonlinearities can occur in the following forms:

1. Nonlinear material or constitutive properties (stress versus strain, voltage versus current)
2. Nonlinear acceleration or kinematic terms (e.g., centripetal or Coriolis acceleration terms)
3. Nonlinear body forces
4. Geometric nonlinearities

### Material Nonlinearities

Examples of material nonlinearities in mechanical and electrical systems include the following:

_Solid Materials._ Nonlinear stress versus strain: (1) elastic (e.g., rubber) and (2) inelastic (e.g., steel stressed beyond the yield point; also plasticity, creep).

_Magnetic Materials._ Nonlinear magnetic field intensity **H** versus flux density **B**:
\[\textbf{B}\ =\ \textbf{f}(\textbf{H})\]
(e.g., ferromagnetic materials such as iron, nickel, and cobalt--hysteretic in nature).

_Dielectric Materials._ Nonlinear electric displacement **D** versus electric field intensity **E**:
\[\textbf{D}\ =\ \textbf{f}(\textbf{E})\]
(e.g., ferroelectric materials).

_Electric Circuit Elements._ Nonlinear voltage versus current:
\[V\ =\ f(I)\]
[e.g., Zener and tunnel diodes, nonlinear resistors, field effect transistors (FET), metal oxide semiconductors (MOSFET)].
Nonlinear voltage versus charge:
\[V\ =\ g(Q)\]
(e.g., nonlinear capacitors). Other material nonlinearities include nonlinear optical materials (e.g., lasers), heat-flux-temperature-gradient properties, nonlinear viscosity properties in fluids, voltage-current relations in electric arcs, and dry friction.

### Kinematic Nonlinearities

This type of nonlinearity occurs in fluid mechanics in the Navier-Stokes equations, where the acceleration term includes a nonlinear velocity operator
\[v\,\frac{\partial v}{\partial x}\quad\mbox{or}\quad\mathbf{v}\cdot\nabla\mathbf{v}\]
which represents convective effects. (See Eq. (1-1.3).) In particle dynamics, one often uses local coordinate systems to describe motion relative to some inertial reference frame. When the local frame rotates with angular velocity \(\boldsymbol{\Omega}\) relative to the large frame, the absolute acceleration is given by
\[\mathbf{A}\,=\,\mathbf{a}\,+\,\mathbf{A}_{0}\,+\,\dot{\boldsymbol{\Omega}}\,\times\,\boldsymbol{\rho}\,+\,\boldsymbol{\Omega}\,\times\,(\boldsymbol{\Omega}\,\times\,\boldsymbol{\rho})\,+\,2\boldsymbol{\Omega}\,\times\,\mathbf{v}\] (5-2.3)
where \(\mathbf{a}\) is the acceleration relative to the moving frame, \(\mathbf{A}_{0}\) is the acceleration of the origin of the small frame relative to the reference frame, and \(\boldsymbol{\rho}\) and \(\mathbf{v}\) are the local position vector and velocity, respectively, of the particle. The last two terms are called the _centripetal_ and _Coriolis_ acceleration terms. The last three terms are _nonlinear_ in the variables \(\boldsymbol{\rho}\), \(\mathbf{v}\), \(\boldsymbol{\Omega}\). For a rigid body in pure rotation, these terms appear in Euler's equations for the rotation dynamics [see Eq.
(4-2.14)]:
\[M_{x}\,=\,I_{x}\,\frac{d\omega_{x}}{dt}\,-\,(I_{y}\,-\,I_{z})\omega_{y}\omega_{z}\]
\[M_{y}\,=\,I_{y}\,\frac{d\omega_{y}}{dt}\,-\,(I_{z}\,-\,I_{x})\omega_{z}\omega_{x}\] (5-2.4)
\[M_{z}\,=\,I_{z}\,\frac{d\omega_{z}}{dt}\,-\,(I_{x}\,-\,I_{y})\omega_{x}\omega_{y}\]
where (\(M_{x}\), \(M_{y}\), \(M_{z}\)) are applied force moments and (\(I_{x}\), \(I_{y}\), \(I_{z}\)) are the principal moments of inertia about the center of mass.

#### Nonlinear Body Forces

Electromagnetic forces are represented as follows:
\[\begin{array}{ll}\text{Currents:}&F=\alpha I_{1}I_{2}\quad\text{or}\quad\beta IB\\ \text{Magnetization:}&\mathbf{F}=\mathbf{M}\cdot\nabla\mathbf{B}\\ \text{Moving\ Media:}&\mathbf{F}=q\mathbf{v}\times\mathbf{B}\end{array}\]
(Here \(I\) is current, \(\mathbf{B}\) is the magnetic field, \(\mathbf{M}\) is the magnetization, \(q\) represents charge, and \(\mathbf{v}\) is the velocity of a moving charge.)

#### Geometric Nonlinearities

Geometric nonlinearities in mechanics involve materials with linear stress-strain behavior in which the geometry changes with deformation. A classic example of a geometric nonlinearity is the elastica shown in Figure 5-2. In this problem, the material is linearly elastic, but the large deformations produce a nonlinear force-displacement or moment-angle relation of the form
\[\begin{array}{ll}M&=\ A\kappa\\ \kappa&=\frac{u^{\prime\prime}}{[1\ +\ (u^{\prime})^{2}]^{3/2}}\end{array}\] (5-2.5)
where \(M\) is the bending moment, \(\kappa\) is the curvature of the neutral axis of the beam, and \(u(x)\) is the transverse displacement of the beam.

Figure 5-2: Examples of geometric nonlinearities in elastic structures.

This problem is an interesting one for the study of chaotic vibrations because the elastica can exhibit multiple equilibrium solutions (see Chapter 8). Cylindrical and spherical shells also exhibit geometric elastic nonlinearities (see Evenson, 1967).
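As a check on the quadratic nonlinearity in the rigid-body rotation equations, the torque-free case (\(M_{x}=M_{y}=M_{z}=0\)) can be integrated numerically; kinetic energy and the magnitude of angular momentum should then be conserved. A minimal sketch with illustrative principal moments (the values below are assumptions, not from the text):

```python
def euler_rhs(w, I=(1.0, 2.0, 3.0)):
    """Torque-free Euler equations in principal axes:
       Ix dwx/dt = (Iy - Iz) wy wz, and cyclic permutations."""
    Ix, Iy, Iz = I
    wx, wy, wz = w
    return [(Iy - Iz) * wy * wz / Ix,
            (Iz - Ix) * wz * wx / Iy,
            (Ix - Iy) * wx * wy / Iz]

def rk4(f, w, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(w)
    k2 = f([a + 0.5 * dt * b for a, b in zip(w, k1)])
    k3 = f([a + 0.5 * dt * b for a, b in zip(w, k2)])
    k4 = f([a + dt * b for a, b in zip(w, k3)])
    return [a + dt / 6.0 * (p + 2 * q + 2 * r + s)
            for a, p, q, r, s in zip(w, k1, k2, k3, k4)]

def invariants(w, I=(1.0, 2.0, 3.0)):
    twoT = sum(Ii * wi**2 for Ii, wi in zip(I, w))    # twice kinetic energy
    H2 = sum((Ii * wi)**2 for Ii, wi in zip(I, w))    # |angular momentum|^2
    return twoT, H2

w = [0.1, 1.0, 0.1]          # spin mainly about the unstable middle axis
T0, H0 = invariants(w)
for _ in range(5000):        # integrate to t = 10
    w = rk4(euler_rhs, w, 0.002)
T1, H1 = invariants(w)
print(T1 - T0, H1 - H0)      # both invariants are conserved numerically
```

The initial condition near the intermediate axis produces the familiar tumbling motion; the two invariants confirm that the quadratic terms redistribute, but do not create or destroy, rotational energy.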
### 5.3 Experimental Controls

First and foremost, the experimenter in chaotic vibrations should have control over noise, both mechanical and electronic. If one is to establish chaotic behavior for a deterministic system, the noise inputs to the system must be minimized. For example, mechanical experiments such as vibration of structures or autonomous fluid convection problems should be isolated from external laboratory or building vibrations. This can be accomplished by using a large-mass table with low-frequency air bearings. A low-cost solution is to work at night when building noise is at a minimum. Second, one should build in the ability to control significant physical parameters in the experiment, such as forcing amplitude or temperature gradient. This is especially important if one wishes to observe bifurcation sequences such as period-doubling phenomena. Where possible, one should use continuous element controls and avoid devices with incremental or step changes in the parameters. In some problems, there is more than one dynamic motion for the same parameters. Thus, control over the initial state variables may also be important. Another factor is the number of significant figures required for accurate measurement. For example, to plot Poincare maps from digitally sampled data, an 8-bit system may not be sensitive enough, and one may have to go to 12-bit electronics or better in order to resolve the fine fractal structure in the maps.

#### Frequency Bandwidth

The systems studied in most fluid, solid, or reacting experiments may be viewed as infinite-dimensional continua. However, one often tries to develop a mathematical model with a few degrees of freedom to explain the major features of the chaotic or turbulent motions of the system. This is usually done by making measurements at a few spatial locations in the continuous system and by limiting the frequency bandwidth over which one observes the chaos.
This is especially important if velocity measurements for phase-plane plots are to be made from deformation histories. Electronic differentiation will amplify higher-frequency signals, which may not be of interest in the experiment. Thus, extremely good electronic filters are often required, especially ones that have little or no phase shift in the frequency band of interest. ### Phase-Space Measurements It was pointed out in Chapter 2 that chaotic dynamics are most easily unraveled and understood when viewed from a phase-space perspective. In particle dynamics, this means a space with coordinates composed of the position and velocity for each independent degree of freedom. In forced problems, time becomes another dimension. Thus, the periodic forcing of a two-degree-of-freedom oscillator with generalized positions (\(q_{1}(t)\), \(q_{2}(t)\)) has a phase-space representation with coordinates (\(q_{1}\), \(\dot{q}_{1}\), \(q_{2}\), \(\dot{q}_{2}\), \(\omega t\)), where \(\omega\) is the forcing frequency. (The phase variable \(\omega t\) is usually plotted modulo \(2\pi\).) If one measures displacement \(q(t)\), a differentiation circuit is required. If velocity is measured, the phase space may be spanned by (\(v\), \(\int\!v\,dt\)), which calls for an integrator circuit. As noted above, in building integrator or differentiator circuits, care should be taken that the phase as well as the amplitude is not disturbed within the frequency band of interest. In electronic or electrical circuit problems, the current and voltage can be used as state variables. In fluid convection problems, temperature and velocity variables are important. ### Pseudo-Phase-Space Measurements In many experiments, one has access to only one measured variable {\(x(t_{1})\), \(x(t_{2})\),...} (where \(t_{1}\) and \(t_{2}\) are sampling times, not to be confused with Poincare maps). 
When the time increment is uniform, that is, \(t_{2}=t_{1}\,+\,\tau\) and so on, then a pseudo-phase-space plot can be made using \(x(t)\) and its past (or future) values:
\[(x(t),\,x(t\,-\,\tau))\quad\text{or}\quad(x(t),\,x(t\,+\,\tau))\]
(two-dimensional phase space)
\[(x(t),\,x(t\,-\,\tau),\,x(t\,-\,2\tau))\]
(three-dimensional phase space)

One can show that a closed trajectory in a phase space in \((x,\,\dot{x})\) variables will also be closed in the \((x(t),\,x(t\,-\,\tau))\) variables (one must connect the points when the system is digitally sampled), as shown in Figure 5-3. Likewise, chaotic trajectories in \((x,\,\dot{x})\) look chaotic in \((x(t),\,x(t\,-\,\tau))\) variables. The plots can be carried out after the experiment by a computer, or one may perform on-line pseudo-phase-plane plots using a _sample-and-hold circuit_. The one difficulty with pseudo-phase-space variables is taking a Poincare map. For example, when there is a natural time scale, such as in forced periodic motion of a system with frequency \(\omega\), the sample time \(\tau\) is usually chosen much smaller than the driving period, that is, \(\tau\ll 2\pi/\omega\equiv T\). If \(\tau\) is not an integer fraction of \(T\), Poincare maps may lose some of their fine fractal structure.

### Bifurcation Diagrams

As discussed in Chapter 2, one of the signs of impending chaotic behavior in dynamical systems is a series of changes in the nature of the periodic motions as some parameter is varied. Typically, in a single-degree-of-freedom oscillator, as the control parameter approaches a critical value for chaotic motion, subharmonic oscillations appear. In the now classic "logistic equation," a sequence of period-doubled oscillations appears [Eq. (1-3.6)]. The phenomenon of sudden change in the motion as a parameter is varied is called a _bifurcation_. A sample experimental bifurcation diagram is shown in Figure 5-4.
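Returning to the delay-coordinate (pseudo-phase-space) construction above, it is easy to sketch in code. The signal and delay below are illustrative choices, with \(\tau\) taken as a quarter period in samples:

```python
import math

def delay_embed(series, tau, dim):
    """Build pseudo-phase-space vectors (x(t), x(t - tau), ..., x(t - (dim-1)tau))
    from a uniformly sampled scalar time series; tau is in samples."""
    start = (dim - 1) * tau
    return [tuple(series[i - k * tau] for k in range(dim))
            for i in range(start, len(series))]

# A periodic signal: its delay-coordinate trajectory is a closed curve,
# just as the true (x, xdot) orbit would be.
dt = 0.01
x = [math.sin(2 * math.pi * t * dt) for t in range(301)]  # period = 100 samples
pts = delay_embed(x, tau=25, dim=2)   # tau = quarter period
print(pts[0], pts[100])               # points one full period apart coincide
```

The same function with `dim=3` produces the three-dimensional embedding \((x(t),\,x(t-\tau),\,x(t-2\tau))\); applied to a chaotic record instead of a sinusoid, the trajectory fails to close on itself.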
Such diagrams can be obtained experimentally by time sampling the motion as in a Poincare map and displaying the output on an oscilloscope as shown in Figure 5-4. Here the value of the control parameter--for example, a forcing amplitude or frequency--is plotted on the horizontal axis, and the time-sampled values of the motion are plotted on the vertical axis. This diagram actually represents a series of experiments, where each value of the control parameter is a separate experiment. When the control parameter can be varied automatically, such as by a computer and a digital-to-analog device, the diagram can be obtained quite rapidly. Care must be taken, however, to make sure transients have died out after each change in the control parameter. In the bifurcation diagram of Figure 5-4, the continuous horizontal lines represent periodic motions of various subharmonics. The values in the dashed-line areas represent chaotic regions. The boundary between chaotic and periodic motions can clearly be seen in this diagram. When this process is automated, one must be careful not to mistake a quasiperiodic motion for a chaotic motion. A phase-plane Poincare map is still very useful for distinguishing between quasiperiodic and chaotic motions.

Figure 5-3: Periodic trajectory of a third-order dynamical system using pseudo-phase-space coordinates.

Figure 5-4: Experimental bifurcation diagram for the vibration of a buckled beam: Poincaré map samples of bending displacement versus amplitude of forcing vibrations.

Figure 5-5: _Top:_ Poincaré map sampling times at constant phase of the forcing function. _Bottom:_ Geometric interpretation of Poincaré sections in the three-dimensional phase space.

### 5.6 Experimental Poincare Maps

Poincare maps are one of the principal ways of recognizing chaotic vibrations in low-degree-of-freedom problems (see Table 2-2).
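The automated bifurcation sweep described above — step the control parameter, wait for transients to die out, then record time-sampled values — can be sketched with a one-dimensional map standing in for the physical system (the map and parameter values are illustrative, not from an experiment):

```python
def bifurcation_branches(f, params, x0=0.5, n_transient=1000, n_sample=64):
    """For each control-parameter value, discard transients (as the text
    advises) and record the distinct sampled steady-state values, mimicking
    one vertical slice of a diagram like Figure 5-4 per parameter."""
    diagram = {}
    for p in params:
        x = x0
        for _ in range(n_transient):    # let transients die out
            x = f(x, p)
        samples = set()
        for _ in range(n_sample):       # then sample the steady state
            x = f(x, p)
            samples.add(round(x, 5))    # merge nearly equal samples
        diagram[p] = sorted(samples)
    return diagram

logistic = lambda x, mu: mu * x * (1.0 - x)
d = bifurcation_branches(logistic, [2.8, 3.2, 3.5])
print({p: len(v) for p, v in d.items()})   # branch counts: {2.8: 1, 3.2: 2, 3.5: 4}
```

A fine sweep of the parameter with these recorded values plotted vertically reproduces the familiar branching picture; a chaotic parameter value yields a dense band of samples instead of a few branches.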
We recall that the dynamics of a one-degree-of-freedom forced mechanical oscillator or \(L\)-\(R\)-\(C\) circuit may be described in a three-dimensional phase space. Thus, if \(x(t)\) is the displacement, (\(x\), \(\dot{x}\), \(\omega t\)) represents a point in a cylindrical phase space where \(\phi=\omega t\) represents the phase of the periodic forcing function. A Poincare map for this problem consists of digitally sampled points in this three-dimensional space, for example, (\(x(t_{n})\), \(\dot{x}(t_{n})\), \(\omega t_{n}=2\pi n\)). As discussed in Chapter 2, this map can be thought of as slicing a torus (see Figure 5-5). Experimentally, this can be done in several ways. If one has a storage oscilloscope, the Poincare map is obtained by intensifying the image on the screen at a certain phase of the forcing voltage (sometimes called \(z\)-_axis modulation_) (Figure 5-1). In our laboratory, we were able to generate a 5 to 10 V pulse of 1 to 2 \(\mu\)s duration when the forcing function reached a certain phase:
\[\omega t_{n}=\phi_{0}+2\pi n\] (5-6.1)
This pulse was then used to intensify a phase-plane image, (\(x(t)\), \(\dot{x}(t)\)), using two vertical amplifiers as in Figure 5-6. One can also use a digital oscilloscope in an external sampling rate mode with the same narrow pulse signal used for the analog oscilloscope. A similar technique can be employed using an analog-to-digital (A-D) signal converter by storing the sampled data in a computer for display at a later time. The important point here is that the sampling trigger signal must be exactly synchronous with the forcing function.

#### Poincare Maps--Change of Phase

As noted in Chapter 2, chaotic phase-plane trajectories can often be unraveled using the Poincare map by taking a set of pictures at different phases \(\phi_{0}\) in (5-6.1) (see Figure 5-7). This is tantamount to sweeping the Poincare plane in Figure 5-5.
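The phase-strobed sampling of Eq. (5-6.1) can be mimicked numerically: integrate a forced oscillator and record \((x,\,\dot{x})\) once per drive cycle at a fixed phase. A minimal sketch using a negative-stiffness (buckled-beam type) Duffing equation, with illustrative parameter values chosen in the periodic regime (all values are assumptions, not from the text):

```python
import math

def duffing_rhs(t, y, gamma=0.25, F=0.05, omega=1.0):
    """Forced two-well (buckled-beam type) oscillator:
       x'' + gamma x' - x + x^3 = F cos(omega t)."""
    x, v = y
    return [v, -gamma * v + x - x**3 + F * math.cos(omega * t)]

def rk4(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, [a + dt / 2 * b for a, b in zip(y, k1)])
    k3 = f(t + dt / 2, [a + dt / 2 * b for a, b in zip(y, k2)])
    k4 = f(t + dt, [a + dt * b for a, b in zip(y, k3)])
    return [a + dt / 6 * (p + 2 * q + 2 * r + s)
            for a, p, q, r, s in zip(y, k1, k2, k3, k4)]

def poincare_strobe(n_periods=60, steps=400, omega=1.0):
    """Record (x, xdot) once per forcing period, at phase phi_0 = 0."""
    dt = 2 * math.pi / (omega * steps)
    t, y, pts = 0.0, [1.0, 0.0], []
    for _ in range(n_periods):
        for _ in range(steps):
            y = rk4(duffing_rhs, t, y, dt)
            t += dt
        pts.append(tuple(y))      # one strobe point per drive cycle
    return pts

pts = poincare_strobe()
# For this weak forcing the steady response is period-1, so successive
# strobe points converge to a single fixed point of the Poincare map.
print(pts[-2], pts[-1])
```

Raising the forcing amplitude toward the chaotic regime makes the strobe points scatter into the sheeted, fractal pattern of a strange attractor instead of collapsing to a point.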
While one Poincare map can be used to expose the fractal nature of the attractor, a complete set of maps varying \(\phi_{0}\) from 0 to \(2\pi\) is sometimes needed to obtain a complete picture of the attractor on which the motion is riding. A series of pictures of various cross sections of a chaotic torus motion in a three-dimensional phase space is shown in Figure 5-7. Note the symmetry in the \(\phi_{0}=0^{\circ}\) and \(180^{\circ}\) maps for the special case of the buckled beam.

#### Poincare Maps--Effect of Damping

If a system does not have sufficient damping, then the chaotic attractor will tend to fill up a section of phase space uniformly, and the Cantor set structure which is characteristic of strange attractors will not be evident. An example of this is shown in Figure 2-11 for the vibration of a buckled beam. A comparison of low- and high-damping Poincare maps shows that adding damping to the system can sometimes bring out the fractal structure. On the other hand, if the damping is too high, the Cantor set sheets can appear to collapse onto one curve as in Figure 3-14.

Figure 5-6: Example of an experimental Poincaré map for periodic forcing of a buckled beam.

Figure 5-7: Poincaré maps of a chaotic attractor for a buckled beam for different phases of the forcing function.

#### Poincare Maps--Quasiperiodic Oscillations

Often what appears to be chaotic may very simply be a superposition of two incommensurate harmonic motions, for example,
\[x(t)\ =\ A\ \cos(\omega_{1}t\ +\ \phi_{1})\ +\ B\ \cos(\omega_{2}t\ +\ \phi_{2})\] (5-6.2)
where \(\omega_{1}/\omega_{2}\) is irrational. One can use one frequency to sample a Poincare map, for example, at \(\omega_{1}t_{n}=2\pi n\). Then the phase-plane points (\(x(t_{n})\), \(\dot{x}(t_{n})\)) will fill in an elliptically shaped closed curve (Figure 5-8). If \(\omega_{1}/\omega_{2}\) is rational, a finite set of points will be seen in the Poincare map.
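The elliptical closed curve of Figure 5-8 can be checked directly: sample the two-frequency signal (5-6.2) once per period of \(\omega_{1}\) and verify that every point falls on an ellipse. A minimal sketch with illustrative amplitudes and zero phases:

```python
import math

# Quasiperiodic signal x(t) = A cos(w1 t) + B cos(w2 t), sampled once per
# period of the first frequency: w1 t_n = 2 pi n  (Eq. 5-6.2, phases zero).
A, B = 1.0, 0.5
w1, w2 = 1.0, math.sqrt(2.0)          # irrational frequency ratio

pts = []
for n in range(200):
    t = 2 * math.pi * n / w1
    x = A * math.cos(w1 * t) + B * math.cos(w2 * t)
    v = -A * w1 * math.sin(w1 * t) - B * w2 * math.sin(w2 * t)
    pts.append((x, v))

# At the sampling instants the w1 component is frozen, so every point lies
# on the ellipse ((x - A)/B)^2 + (v/(B w2))^2 = 1: a closed curve in the
# Poincare section rather than a fractal set.
residual = max(abs(((x - A) / B)**2 + (v / (B * w2))**2 - 1.0) for x, v in pts)
print(residual)
```

Replacing \(\sqrt{2}\) with a rational ratio such as 3/2 collapses the same construction to a finite set of points, exactly as stated in the text.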
The case of \(\omega_{1}/\omega_{2}\) irrational can be thought of as motion on a torus or doughnut-shaped figure in a three-dimensional phase space. When three or more incommensurate frequencies are present, one may not see a nice closed curve in the Poincare map, and the Fourier spectrum should be used. The difference between chaotic and quasiperiodic motion can also be detected by taking the Fourier spectrum of the signal. A quasiperiodic motion will have a few well-pronounced peaks as shown in Figure 5-9. Chaotic signals often have a broad spectrum of Fourier components as in Figure 2-7.

Figure 5-8: Poincaré map of a motion with two harmonic signals with different frequencies.

Figure 5-9: Fourier spectrum of an experimental electronic signal from a circuit with a nonlinear inductor. Frequency components are linear combinations of two frequencies. [From Bryant and Jeffries (1984a) with permission of the American Physical Society, copyright 1984.]

##### Position-Triggered Poincare Maps

When one does not have a natural time clock, such as a periodic forcing function, then more sophisticated techniques must be used to get a Poincare map (see also Henon, 1982). Suppose we imagine the motion as a trajectory in a three-dimensional space with coordinates (\(x\), \(y\), \(z\)). To construct a Poincare map, we intercept a trajectory with a plane defined by

\[ax+by+cz=d\] (5-6.3)

as shown in Figure 5-10. The Poincare map consists of those points in the plane for which the trajectory penetrates the plane with the same sense [i.e., if we define a front and back to the plane (5-6.3), then we collect points only on trajectories that penetrate the plane from front to back or from back to front, but not both ways]. Experimentally, one can do this by using a mechanical or electronic _level detector_. Examples of position-triggered Poincare maps are discussed below.
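In simulation, the one-sided plane-crossing rule of (5-6.3) can be sketched as follows. The Lorenz equations serve as a convenient third-order test system, and the plane coefficients, here \((a,b,c,d)=(0,0,1,27)\) so that the plane is \(z=27\), are an illustrative choice; each piercing point is located by linear interpolation between integration steps.

```python
import math

def lorenz_rhs(s, sigma=10.0, r=28.0, b=8.0/3.0):
    x, y, z = s
    return (sigma * (y - x), r * x - y - x * z, x * y - b * z)

def rk4(s, dt):
    def step(a, k, h):
        return tuple(ai + h * ki for ai, ki in zip(a, k))
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(step(s, k1, dt / 2))
    k3 = lorenz_rhs(step(s, k2, dt / 2))
    k4 = lorenz_rhs(step(s, k3, dt))
    return tuple(si + dt / 6 * (a + 2 * b2 + 2 * c + d)
                 for si, a, b2, c, d in zip(s, k1, k2, k3, k4))

def plane_section(n_steps=20000, dt=0.005, z0=27.0):
    # Keep only crossings of the plane z = z0 made from below (one sense only),
    # locating each piercing point by linear interpolation between steps.
    s = (1.0, 1.0, 20.0)
    pts = []
    for _ in range(n_steps):
        s_new = rk4(s, dt)
        if s[2] < z0 <= s_new[2]:
            a = (z0 - s[2]) / (s_new[2] - s[2])
            pts.append((s[0] + a * (s_new[0] - s[0]),
                        s[1] + a * (s_new[1] - s[1])))
        s = s_new
    return pts

pts = plane_section()
```

Collecting crossings of both senses would superimpose two distinct sections; the one-sided test `s[2] < z0 <= s_new[2]` is the software analogue of the level detector.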
In the impact oscillator shown in Figure 5-11, there are three convenient state variables: the position \(x\), velocity \(v\), and phase of the driving signal \(\phi=\omega t\). If one triggers on the position when the mass hits the elastic constraint, the Poincare map becomes a set of values (\(v_{n}^{\pm}\), \(\omega t_{n}\)), where \(v_{n}^{\pm}\) is the velocity before or after impact and \(t_{n}\) is the time of impact. Here the points in the map can be plotted in a cylindrical space where \(0<\omega t_{n}<2\pi\). An example of the experimental technique to obtain a (\(v_{n}\), \(\phi_{n}\)) Poincare map is shown in Figure 5-11. When the mass hits the position constraint, a sharp signal is obtained from a strain gauge or accelerometer. This sharp signal can be used to trigger a data storage device (such as a storage or digital oscilloscope) to store the value of the velocity of the particle. [In the case shown in Figure 5-11, a linear variable differential transformer (LVDT) is used to measure position, and this signal is electronically differentiated to get the velocity.] To obtain the phase \(\phi_{n}\) mod \(2\pi\), we generate a periodic ramp signal in phase with the driving signal where the minimum value of zero corresponds to \(\phi=0\) and the maximum voltage of the ramp corresponds to the phase \(\phi=2\pi\). The impact-generated sharp spike voltage is used to trigger the data storage device and store the value of the ramp voltage along with the velocity signal before or after impact. A Poincare map for a mass bouncing between two elastic walls using this (\(v_{n}\), \(\phi_{n}\)) technique is shown in Figure 5-12.

Figure 5-11: Sketch of experimental setup for a position-triggered Poincaré section.

Figure 5-12: Position-triggered Poincaré map for an oscillating mass with impact constraints (Figure 5-11).
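The ramp trick for recovering \(\phi_{n}\) mod \(2\pi\) amounts to reading a sawtooth voltage locked to the drive at each impact instant and rescaling it to an angle. A minimal sketch, with a hypothetical drive frequency, ramp amplitude, and impact times chosen purely for illustration:

```python
import math

W_DRIVE = 3.0   # drive frequency (rad/s); illustrative value
V_MAX = 5.0     # peak ramp voltage; illustrative value

def ramp_voltage(t):
    # Sawtooth locked to the drive: 0 V at phase 0, rising to V_MAX just below 2*pi.
    return V_MAX * ((W_DRIVE * t / (2 * math.pi)) % 1.0)

def phase_from_ramp(v):
    # Invert the ramp level sampled at the impact instant to get phi_n mod 2*pi.
    return 2 * math.pi * v / V_MAX

impact_times = [0.7, 2.9, 5.3, 8.1]   # hypothetical impact instants (s)
phases = [phase_from_ramp(ramp_voltage(t)) for t in impact_times]
```

Storing each `phase_from_ramp` reading alongside the pre- or post-impact velocity reproduces the (\(v_{n}\), \(\phi_{n}\)) pairs of Figure 5-12.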
(See also Figure 4-14.)

Figure 5-13: Diagram of experimental apparatus to obtain position-triggered Poincaré maps for a periodically forced rotor with a nonlinear torque–angle relation.

Another example of this kind of Poincare map is shown in Figure 5-13 for the chaotic vibrations of a motor. In this problem, the motor has a nonlinear torque-angle relation created by a dc current in one of the stator poles, and the permanent magnet rotor is driven by a sinusoidal torque created by an ac current in an adjacent coil. The equation of motion for this problem is [see Section 4.2, Eq. (4-2.13)]

\[J\ddot{\theta}+\gamma\dot{\theta}+\kappa\sin\theta=F_{0}\cos\theta\cos\omega t\] (5-6.4)

To obtain a Poincare map, we choose a plane in the three-dimensional space \((\theta,\dot{\theta},\omega t)\), where \(\theta=0\) (Figure 5-13). This is done experimentally by using a slit in a thin disk attached to the rotor and using a light-emitting diode and detector to generate a voltage pulse every time the rotor passes through \(\theta=0\) (see Figure 5-13). This pulse is then used to sample the velocity and measure the time. The data can be directly displayed on a storage oscilloscope or, using a computer, can be replotted in polar coordinates as shown in Figure 5-14.

Figure 5-14: Position-triggered Poincaré map for chaos in a nonlinear rotor (see Figure 5-13).

Figure 5-15: Peak amplitude-generated Poincaré maps for a circuit with nonlinear inductance. [From Bryant and Jeffries (1984a) with permission of the American Physical Society, copyright 1984.]

Another variation of the method of Poincare sections is to sample data when some variable attains a _peak_ value. This has been used by Bryant and Jeffries (1984b) of the University of California--Berkeley. They examined the dynamics of a circuit with a nonlinear hysteretic iron core inductor shown in Figure 5-15.
(The nonlinear properties are related to the ferromagnetic material in the inductor.) They sampled the current in the inductor \(I_{L}(t)\) as well as the driving voltage \(V_{s}(t)\) when \(V_{L}=0\). This is tantamount to measuring the _peak_ value of the flux in the inductor \(\varphi\). This is because \(V_{L}=-\dot{\varphi}\), where \(\varphi\) is the magnetic flux in the inductor, and \(\varphi=\varphi(I)\), so that when \(\dot{\varphi}=0\), the flux is at a maximum or minimum. The Poincare map is then a collection of pairs of points (\(V_{sn}\), \(I_{Ln}\)) which can be displayed on a storage or digital oscilloscope.

### Construction of One-Dimensional Maps from Multidimensional Attractors

There are a number of physical and numerical examples where the attracting set appears to have a sheetlike behavior in some three-dimensional phase space as illustrated in Figure 5-16. [The Lorenz equations (1-3.9) are such an example. See also Section 3.8.] This often means that a Poincare section, obtained by measuring the sequence of points which pierce a plane transverse to the attractor, will appear as a set of points along some one-dimensional line. This suggests that if one could parameterize these points along the line by a variable \(x\), it would be possible that a function exists which relates \(x_{n+1}\) and \(x_{n}\):

\[x_{n+1}=f(x_{n})\]

The function (called a _return map_) may be found by simply plotting \(x_{n+1}\) versus \(x_{n}\). One example of this is the experiments of Shaw (1984) on the dripping faucet shown in Figure 4-43 or the nonlinear circuit in Figure 4-31 (see also Simoyi et al., 1982).

Figure 5-16: Construction of a one-dimensional return map in a three-dimensional phase space.
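Since the Lorenz equations are cited above as an example, a return map can be sketched numerically by recording the successive local maxima \(z_{n}\) of \(z(t)\) and plotting \(z_{n+1}\) versus \(z_{n}\) (a sampling closely related to the peak-triggered technique just described). The integration step and run length below are illustrative choices.

```python
import math

def lorenz_rhs(s, sigma=10.0, r=28.0, b=8.0/3.0):
    x, y, z = s
    return (sigma * (y - x), r * x - y - x * z, x * y - b * z)

def rk4(s, dt):
    def step(a, k, h):
        return tuple(ai + h * ki for ai, ki in zip(a, k))
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(step(s, k1, dt / 2))
    k3 = lorenz_rhs(step(s, k2, dt / 2))
    k4 = lorenz_rhs(step(s, k3, dt))
    return tuple(si + dt / 6 * (a + 2 * b2 + 2 * c + d)
                 for si, a, b2, c, d in zip(s, k1, k2, k3, k4))

def successive_maxima(n_steps=40000, dt=0.005):
    # Record the local maxima z_n of z(t); the peak sequence plays the role of
    # the Poincare-sampled variable x_n in the text.
    s = (1.0, 1.0, 20.0)
    z_prev2, z_prev1 = None, s[2]
    maxima = []
    for _ in range(n_steps):
        s = rk4(s, dt)
        z = s[2]
        if z_prev2 is not None and z_prev2 < z_prev1 > z:
            maxima.append(z_prev1)
        z_prev2, z_prev1 = z_prev1, z
    return maxima

maxima = successive_maxima()
pairs = list(zip(maxima, maxima[1:]))   # the return-map points (z_n, z_{n+1})
```

Plotting `pairs` reproduces the well-known tent-like Lorenz return map: the scatter collapses onto what is nearly a single-valued curve, which is exactly the sheetlike property the text describes.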
The existence of such a function \(f(x)\) implies that the mathematical results for one-dimensional maps, such as period doubling and Feigenbaum scaling (Section 3.6), may be applicable to the more complex physical problems in explaining, predicting, or organizing experimental observations. For some problems, the function \(f(x)\), when it exists, appears to cross itself or is tangled. This may suggest that the mapping function can be untangled by plotting the dynamics in a higher-dimensional embedding space using three successive values of the Poincare sampled data [\(x(t_{n})\), \(x(t_{n+1})\), and \(x(t_{n+2})\)]. The three-dimensional nature of the relationship can sometimes be perceived by changing the projection of the three-dimensional curve onto the plane of a graphics computer monitor. This may suggest a special two-dimensional map of the form

\[x_{n+2}=f(x_{n+1},x_{n})\]

which can be recast as the pair of first-order maps

\[x_{n+1}=f(x_{n},y_{n}),\qquad y_{n+1}=Ax_{n}\] (5-6.5)

This form is similar to the Henon map (1-3.8). This method has been used successfully by Van Buskirk and Jeffries (1985) in their study of circuits with \(p\)-\(n\) junctions and by Brorson et al. (1983), who studied a sinusoidally driven resistor-inductor circuit with a varactor diode.

_Example: 1-D Map for a Friction Oscillator._ In Chapter 4, we described experiments with a dry friction oscillator (Figure 4-26) which was modeled as a forced oscillator of the form (see Feeny, 1990)

\[\ddot{x}+F(x,\dot{x})+\omega_{0}^{2}x=f_{0}\sin\Omega t\] (5-6.6)

The natural phase space is thus three-dimensional (\(x\), \(\dot{x}\), \(\Omega t\)), and a Poincare map triggered on the forcing phase would lead to a 2-D map, \((x_{n},\dot{x}_{n})\). A sample of such a Poincare map is shown in Figure 5-17b. It shows an almost 1-D structure and exhibits no obvious fractal patterns. This suggests trying a 1-D map by replotting the 2-D map by projecting the points onto a one-dimensional manifold \(0\leq S\leq 2\).
In the experiment, this resulted in a tent- or hump-type map, as shown in Figure 5-17c:

\[S_{n+1}=F(S_{n})\] (5-6.7)

where \(0\leq S<1\) corresponds to the sticking regime and \(1\leq S\leq 2\) corresponds to the slipping regime.

Figure 5-17: (_a_) Sketch of a one-degree-of-freedom oscillator with dry-friction force. (_b_) Poincaré map of chaotic dynamics; \(0\leq s<1\) represents "sticking" motions, whereas \(1\leq s\leq 2\) represents slipping motions. (_c_) Return map based on the above Poincaré map. [From Feeny and Moon (1989).]

### Double Poincare Maps

So far we have only talked of Poincare maps for third-order systems, such as a single-degree-of-freedom oscillator with external forcing. But what about higher-order systems with motion in a four- or five-dimensional phase space? For example, a two-degree-of-freedom autonomous aeroelastic problem would have motion in a four-dimensional phase space (\(x_{1}\), \(v_{1}\), \(x_{2}\), \(v_{2}\)). A Poincare map triggered on one of the state variables would result in a set of points in a three-dimensional space. The fractal nature of this map, if it exists, might not be evident in three dimensions and certainly not if one projects this three-dimensional map onto a plane in two of the remaining variables. A technique to observe the fractal nature of a three-dimensional Poincare map of a fourth-order system has been developed in our laboratory which we call a _double Poincare section_ (see Figure 5-18). This technique enables one to slice a finite-width section of the three-dimensional map in order to uncover fractal properties of the attractor and hence determine if it is "strange" (see Moon and Holmes, 1985). We illustrate this technique with an example derived from the forced motion of a buckled beam. In this case we examine a system with two incommensurate driving frequencies.
The mathematical model has the form [see also Wiggins (1988), Section 4.2e]

\[\begin{array}{l}\dot{x}=y\\ \dot{y}=-\gamma y+F(x)+f_{1}\cos\theta_{1}+f_{2}\cos(\theta_{2}-\phi_{0})\\ \dot{\theta}_{1}=\omega_{1}\\ \dot{\theta}_{2}=\omega_{2}\end{array}\] (5-6.8)

The experimental apparatus for a double Poincare section is shown in Figure 5-19. The driving signals were produced by identical signal generators and were added electronically. The resulting quasiperiodic signal was then sent to a power amplifier which drove the electromagnetic shaker. The first Poincare map was generated by a 1-\(\mu\)s trigger pulse synchronous with one of the harmonic signals. The Poincare map (\(x_{n}\), \(v_{n}\)) using one trigger results in a fuzzy picture with no structure, as shown in Figure 5-20a. To obtain the second Poincare section, we trigger on the phase of the second driving signal. However, if the pulse width is too narrow, the probability of finding points coincident with the first trigger is very small. Thus, we set the second pulse width to 1000 times the first, at 1 ms. The second pulse width represents less than 1% of the second drive's full phase of \(2\pi\). The (\(x\), \(v\)) points were only stored when the first pulse was _coincident_ with the second, as shown in Figure 5-18. This was accomplished using a digital circuit with a logical NAND gate. Because of the infrequency of the simultaneity of both events, a map of 4000 points took upwards of 10 h, compared to 8-10 min for a conventional Poincare map, for driving frequencies less than 10 Hz.

Figure 5-18: _Top:_ Single Poincaré map of the dynamical system; finite-width slice for the second Poincaré section.

The experimental results using this technique are shown in Figure 5-20, which compares a single with a double Poincare map for the two-frequency forced problem. The single map is fuzzy, whereas
the double section reveals a fractal-like structure characteristic of a strange attractor.

Figure 5-18: _Bottom:_ Poincaré sampling voltages for a second-order oscillator with two harmonic driving functions.

One can of course generalize this technique to five- or higher-dimensional phase-space problems. However, the probability of three or more simultaneous events will be very small unless the frequencies are orders of magnitude higher than 1-10 Hz. Such higher-dimensional maps may be useful in nonlinear circuit problems. This technique can of course be used in numerical simulation and has been employed by Lorenz (1984) to examine a strange attractor in a fourth-order system of ordinary differential equations. Kostelich and Yorke (1985) have also employed this method to study the dynamics of a kicked or pulsed double rotor. They call the method "Lorenz cross sections" (see also Kostelich et al., 1987).

### Experimentally Measured Circle Maps: Quasiperiodicity and Mode-Locking

In Chapter 2 we mentioned briefly the phenomena of quasiperiodicity and mode-locking when two oscillators interact. This can occur when one tries to periodically force a nonlinear oscillator with frequency \(\omega_{2}\) which is initially in a limit cycle with frequency \(\omega_{1}\) (e.g., the forced Van der Pol oscillator; see Chapter 1).

Figure 5-19: Sketch of experimental apparatus to obtain a Poincaré map for an oscillator with two driving frequencies. _Note_: Strain gauges—1; steel beam—2. [From Moon and Holmes (1985) with permission of Elsevier Science Publishers, copyright 1985.]

Figure 5-20: (_a_) Single Poincaré map of a nonlinear oscillator with two driving frequencies. (_b_) Double Poincaré map showing fractal structure characteristic of a strange attractor.
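Before turning to circle maps, the double-section idea can be mimicked in simulation: integrate a two-frequency forced oscillator of the type in (5-6.8), sample once per period of the first drive, and keep only those samples whose second-drive phase falls in a narrow window. The restoring force \(F(x)=x-x^{3}\) and all parameter values below are illustrative assumptions, not the experimental values.

```python
import math

# Two-frequency forced oscillator in the form of (5-6.8), with F(x) = x - x^3
# (buckled-beam type).  All parameter values are illustrative only.
GAMMA, F1, F2 = 0.15, 0.3, 0.2
W1, W2 = 1.0, (math.sqrt(5) - 1) / 2       # incommensurate drive frequencies

def rhs(t, x, v):
    return v, -GAMMA * v + x - x**3 + F1 * math.cos(W1 * t) + F2 * math.cos(W2 * t)

def rk4_step(t, x, v, dt):
    k1x, k1v = rhs(t, x, v)
    k2x, k2v = rhs(t + dt/2, x + dt/2 * k1x, v + dt/2 * k1v)
    k3x, k3v = rhs(t + dt/2, x + dt/2 * k2x, v + dt/2 * k2v)
    k4x, k4v = rhs(t + dt, x + dt * k3x, v + dt * k3v)
    return (x + dt/6 * (k1x + 2*k2x + 2*k3x + k4x),
            v + dt/6 * (k1v + 2*k2v + 2*k3v + k4v))

def sections(n_periods=3000, steps=64, width=0.05):
    # Sample once per period of drive 1 (single section); keep only samples whose
    # second-drive phase lies in a narrow window -- the numerical analogue of
    # demanding coincidence of the two trigger pulses.
    dt = 2 * math.pi / W1 / steps
    t, x, v = 0.0, 1.0, 0.0
    single, double = [], []
    for _ in range(n_periods):
        for _ in range(steps):
            x, v = rk4_step(t, x, v, dt)
            t += dt
        single.append((x, v))
        if (W2 * t) % (2 * math.pi) < 2 * math.pi * width:  # within 5% of the cycle
            double.append((x, v))
    return single, double

single, double = sections()
```

As in the experiment, the double section retains only a small fraction of the samples (here roughly the window width, about 5%), which is why the laboratory version required hours rather than minutes.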
In many problems, the dynamics can be reduced to a circle map

\[\theta_{n+1}=\theta_{n}+\Omega+\frac{k}{2\pi}\sin 2\pi\theta_{n}\quad(\text{mod }1)\] (5-6.9)

An excellent review of mode-locking, quasiperiodicity, and the circle map from an experimentalist's point of view has been given by Glazier and Libchaber (1988). We shall not attempt to reproduce all the material in this paper, but we shall discuss a few techniques from it. These authors also survey the successful application of the circle map to experimental problems, including periodically forced Rayleigh-Benard convection (Jensen et al., 1985) and solid-state systems [e.g., electrically forced germanium (Held and Jeffries, 1986), electrically driven barium sodium niobate (Martin and Martienssen, 1986), and a periodically forced semiconductor laser (Winful et al., 1986)]. In all of these studies, Arnold-tongue mode-locking regimes were found (Chapter 2, Figure 2-23) with dynamic observations qualitatively similar to those of the circle map. If the measured data are sampled at a Poincare section synchronous with the driving frequency \(\omega_{2}\), a set of data is generated \(\{\ldots,x_{i-1},x_{i},x_{i+1},\ldots\}\). The problem for the experimentalist is to find a mapping from \(x_{i}\) to \(\theta_{i}\). In an approximate method, the Poincare points \((x_{n},\dot{x}_{n})\) are plotted. If the closed curve for the quasiperiodic motion is symmetric and elliptic in shape, the points are projected onto a circle centered at the center of the ellipse and the angle is measured on this circle. An example of a periodically excited flexible tube with steady fluid flow is shown in Figure 5-21c. However, if the closed curve of the Poincare map is twisted or badly distorted, another method may be used. Following Glazier and Libchaber (1988), one plots the \(x_{i}\) variable versus a fictitious time variable \(\theta=Wt_{i}\ (\text{mod }1)\).
In general, a random choice of \(W\) will not reveal any pattern in the \(x_{i}\) versus \(Wt_{i}\) curve. (In practice, one starts close to the uncoupled frequency ratio \(\omega_{1}/\omega_{2}\).) If there is an underlying one-dimensional generalized circle map, a unique value of \(W\) will reveal a periodic function. For values of \(W\) close to the critical value, the curve will drift. The procedure is best done in an interactive mode with a computer terminal screen. The experimenter replots the data \(\{x_{i}\}\) for different values of \(W\) until a unique curve is achieved as in Figure 5-21b.

Figure 5-21: (_a_) Sketch of procedure to determine existence of a circle map \(\theta_{n+1}=\theta_{n}+F(\theta_{n})\). (_b_) Determination of the rotation number or winding number from data generated in the flow-induced vibration experiment shown in Figure 5-21c.

The critical value \(W\) is called the _winding number_. If one suspends the "mod" operation on \(\theta_{i}\), then \(W\) is the rate at which the angle \(\theta_{n}\) changes with the discrete Poincare time \(n\), that is,

\[W=\lim_{n\to\infty}\frac{\theta_{n}-\theta_{0}}{n}\] (5-6.10)

For two uncoupled oscillators, \(W=\omega_{1}/\omega_{2}\equiv\Omega\) (the frequency ratio). For the circle map (5-6.9) with \(k=0\), \(\theta_{n+1}=\theta_{n}+\Omega\), so that \(W=\Omega\) is a uniform rate of change of angle. For \(k\neq 0\) in (5-6.9), \(k\) provides a measure of the nonlinear coupling between the oscillators. Once the function \(x_{i}(Wt_{i})\) is found from the above procedure, then one can parameterize the curve from, say, 0 to \(2\pi\) and assign corresponding angle values \(\theta_{i}\). A plot of a return map should reveal a periodic circle-type map

\[\theta_{n+1}=f(\theta_{n})\] (5-6.11)

In addition to ascertaining if the experimental data can be modeled by a circle map, one can plot mode-locked regions as the control frequency is varied (see also Chapter 2, Figure 2-23).
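The winding-number definition (5-6.10) is easy to evaluate for the circle map itself: iterate (5-6.9) without the mod operation and divide the accumulated angle by the number of iterates. The parameter values below are illustrative.

```python
import math

def winding_number(Omega, k, n=20000, theta0=0.2):
    # Iterate the circle map (5-6.9) WITHOUT the mod operation and form
    # (theta_n - theta_0)/n, the estimate of W in eq. (5-6.10).
    theta = theta0
    for _ in range(n):
        theta = theta + Omega + (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)
    return (theta - theta0) / n

W0 = winding_number(0.4, 0.0)   # k = 0: uniform rotation, so W = Omega
W1 = winding_number(0.5, 0.9)   # k != 0: nonlinear coupling can lock W at a rational value
```

Scanning `Omega` at fixed `k` and plotting `W` produces the devil's-staircase picture of mode-locked (Arnold-tongue) intervals referred to in Figure 2-23.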
In some problems, one can also determine multifractal properties of the attractor as described in Chapter 7 (see also Glazier and Libchaber, 1988).

Figure 5-21: (_c_) Sketch of a flexible tube with an end mass carrying a steady flow. Lateral periodic force leads to quasiperiodic motion (Copeland and Moon, 1992).

### Quantitative Measures of Chaotic Vibrations

Poincare maps and phase-plane portraits, when they can be obtained, often provide graphic evidence for chaotic behavior and the fractal properties of strange attractors. However, quantitative measures of chaotic dynamics are also important and in many cases are the only hard evidence for chaos. The latter is especially true for systems with extreme characteristic frequencies, \(10^{6}\)-\(10^{9}\) Hz (as in laser systems), in which Poincare maps may be difficult or impossible to capture. In addition, there are systems with many degrees of freedom where the Poincare map will not reveal the fractal structure of the attractor (see §5.6 on double or multiple Poincare maps) or where the damping is so low that the Poincare map shows no structure but looks like a cloud of points. At this time in the development of the field there are three principal measures of chaos and another of emerging importance:

1. Fourier distribution of frequency spectra
2. Fractal dimension of the chaotic attractor
3. Lyapunov exponents
4. Invariant probability distribution of the attractor

It should be pointed out that while phase-plane pictures and Poincare maps can be obtained directly from electronic laboratory equipment, the above measures of chaos require a computer to analyze the data, with the possible exception of the frequency spectrum measurement. Electronic spectrum analyzers can be obtained but are often expensive, and one might be better off investing in a laboratory micro- or minicomputer which has the capability to perform other data analysis besides Fourier transforms.
If one is to digitally analyze the data from chaotic motions, then usually an _A-D converter_ will be required as well as some means of storing the data. For example, the digitized data can be stored in a buffer in the electronic A-D device and then transmitted directly or over phone lines to a computer. Another option is a _digital oscilloscope_ which performs the A-D conversion, displays the data graphically on the oscilloscope, and stores the data on a floppy or hard disk. Finally, if one has the funds, one can store the output from the A-D converter directly onto a hard disk for direct transfer to a laboratory computer.

### Frequency Spectra: Fast Fourier Transform

This is by far the most popular measure, mainly because the idea of decomposing a nonperiodic signal into a set of sinusoidal or harmonic signals is widely known among scientists and engineers. The assumption made in this method is that the periodic or nonperiodic signal can be represented as a synthesis of sine or cosine signals:

\[f(t)=\frac{1}{2\pi}\int_{-\infty}^{\infty}F(\omega)e^{i\omega t}\,d\omega\] (5-7.1)

where \(e^{i\omega t}=\cos\omega t+i\sin\omega t\). Because \(F(\omega)\) is often complex, the absolute value \(|F(\omega)|\) is used in graphical displays. In practice, one uses an electronic device or computer to calculate \(|F(\omega)|\) from input data from the experiment while varying some parameter in the experiment (see the section in Chapter 2 entitled "Bifurcations: Routes to Chaos"). When the motion is periodic or quasiperiodic, \(|F(\omega)|\) shows a set of narrow spikes or lines indicating that the signal can be represented by a discrete set of harmonic functions \(\{e^{\pm i\omega_{k}t}\}\), where \(k=1,2,\ldots\). Near the onset of chaos, however, a continuous distribution of frequency appears (as shown in Figure 5-22a), and in the fully chaotic regime the continuous spectrum may dominate the discrete spikes.
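The spike-versus-broadband distinction can be seen with a direct discrete Fourier transform of a sampled record. The sketch below evaluates the transform sums directly (an FFT computes the same sums faster); the record length and test frequency are illustrative, and the indexing is 0-based rather than the 1-based convention sometimes used for the discrete transform.

```python
import cmath
import math

def dft(f):
    # Direct O(N^2) evaluation of the discrete Fourier transform; an FFT
    # computes exactly the same sums in O(N log N).  Indices are 0-based.
    N = len(f)
    return [sum(f[I] * cmath.exp(-2j * math.pi * I * J / N) for I in range(N))
            for J in range(N)]

N = 64
cycles = 5                                       # the signal completes 5 cycles in the record
samples = [math.cos(2 * math.pi * cycles * k / N) for k in range(N)]
spectrum = [abs(F) for F in dft(samples)]        # |F| is what is displayed, as in the text
peak = max(range(N), key=lambda J: spectrum[J])  # sharpest line of the spectrum
```

For this periodic record the spectrum is a single pair of narrow lines (at bins 5 and \(N-5\)); feeding in a chaotic record instead spreads \(|F|\) over a broad band of bins.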
Numerical calculation of \(F(\omega)\), given \(f(t)\), can often be very time-consuming even on a fast computer. However, most modern spectrum analyzers use a discrete version of (5-7.1) along with an efficient algorithm called the _fast Fourier transform_ (FFT). Given a set of data sampled at discrete, evenly spaced time intervals, \(\{f(t_{k})\}=\{f_{0},f_{1},f_{2},\ldots,f_{k},\ldots,f_{N}\}\), the discrete-time transform is defined by the formula

\[F(J)=\sum_{I=1}^{N}f(I)\,e^{-2\pi i(I-1)(J-1)/N}\] (5-7.2)

where \(I\) and \(J\) are integers. Several points should be made here which may appear obvious. First, the signal \(f(t)\) is time sampled at a fixed time interval \(\tau_{0}\); thus, information is lost for frequencies above the Nyquist limit \(1/2\tau_{0}\). Second, only a finite set of points is used in the calculation, usually \(N=2^{n}\), and some built-in FFT electronics only do \(N=512\) or \(1024\) points. Thus, information is lost about very low frequencies below \(1/N\tau_{0}\). Finally, the representation (5-7.2), having no information about \(f(t)\) before \(t=t_{0}\) or after \(t=t_{N}\), essentially treats \(f(t)\) as a periodic function. In general, this is not the case, and because \(f(t_{0})\neq f(t_{N})\), the Fourier representation treats this as a discontinuity which adds spurious information into \(F(\omega)\). This spurious spreading of energy is often called _leakage_ (related to, but distinct from, _aliasing_ error, the misidentification of frequencies above the Nyquist limit), and windowing methods exist to minimize its effect on \(F(\omega)\). The reader using the FFT should be aware of this, however, when interpreting Fourier spectra of nonperiodic signals and should consult a signal processing reference for more information about FFTs.

##### Autocorrelation Function

Another signal processing tool that is related to the Fourier transform is the autocorrelation function given by

\[A(\tau)=\int_{0}^{\infty}x(t)x(t+\tau)\,dt\] (5-7.3)

When a signal is chaotic, information about its past origins is lost. This means that \(A(\tau)\to 0\) as \(\tau\to\infty\), or the signal is only correlated with its recent past.
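A discrete, mean-removed, normalized estimate of \(A(\tau)\) makes the decay easy to check: a chaotic record decorrelates quickly, while a periodic record stays correlated at every multiple of its period. The logistic map below is a numerical stand-in for a chaotic experimental signal; all choices (map, sine frequency, record length) are illustrative.

```python
import math

def autocorr(xs, lag):
    # Discrete, mean-removed, normalized autocorrelation estimate:
    # autocorr(xs, 0) = 1, and it decays toward 0 at large lag for chaotic data.
    m = sum(xs) / len(xs)
    d = [x - m for x in xs]
    n = len(d) - lag
    num = sum(d[k] * d[k + lag] for k in range(n)) / n
    var = sum(x * x for x in d) / len(d)
    return num / var

# chaotic record (logistic map iterates) versus a periodic record
x, chaotic = 0.3, []
for _ in range(4000):
    x = 4 * x * (1 - x)
    chaotic.append(x)
periodic = [math.sin(0.3 * k) for k in range(4000)]
```

`autocorr(chaotic, lag)` drops to near zero after a few lags, the discrete analogue of the rapid fall-off in Figure 5-22b, whereas `autocorr(periodic, lag)` keeps returning to values near 1.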
This is illustrated in Figure 5-22b for the chaotic vibrations of a buckled beam. The Fourier spectrum shows a broad band of frequencies, whereas the autocorrelation function has a peak at the origin \(\tau=0\) and drops off rapidly with time.

Figure 5-22: (_a_) Fourier spectrum of a chaotic signal. (_b_) Autocorrelation function of a chaotic signal.

#### Autocorrelation Function Using Symbol Dynamics

In this book we have talked a great deal about iterated maps and differential equations, but not much about symbol dynamics. However, modern dynamical systems theory points out the equivalence between chaotic dynamics and sequences of symbols. This equivalence is more than conceptual and can be used to obtain some quantitative measures of the dynamics. A case in point is the dry friction oscillator discussed above and in Chapter 4 (see Figure 4-26). Lyapunov exponents are very difficult to measure in physical experimental systems. An alternative to this measure of chaos is the use of an autocorrelation function. There has been speculation that the time for zero autocorrelation would be inversely proportional to the Lyapunov exponent (e.g., see Singh and Joseph, 1989). To test these ideas, an autocorrelation function based on symbol dynamics has been applied to a chaotic dry-friction oscillator to estimate the largest Lyapunov exponent (Feeny and Moon, 1989). The friction problem is well-suited for symbol dynamics because two distinct states of motion can be identified: sticking and slipping. The experiment consisted of a mass attached to the end of a cantilevered elastic beam, as shown in Figure 5-17a. The mass had titanium plates on both sides, providing surfaces for sliding friction. Spring-loaded titanium pads rested against the titanium plates.
The titanium plates were not parallel in the direction of sliding; thus, a displacement of the mass changed the force on the spring-loaded pads and hence the normal load and the friction force. The device was excited by periodic acceleration using an electromagnetic shaker. The displacement of the mass was measured with a strain gage attached to the cantilevered beam. The dynamics of the oscillator can be reduced to a noninvertible 1-D map (Figure 5-17c), which has been studied in terms of binary symbol sequences. The 2-D map in Figure 5-17b is reduced to a 1-D map in Figure 5-17c by defining a variable \(s\) along the sticking and slipping curves of the Poincare map. To obtain Figure 5-17c, one plots \(s_{n+1}\) versus \(s_{n}\). The 3-D picture of the experimental attractor is shown in Color Plate 2. Singh and Joseph (1989) have proposed a technique for extracting quantitative information from a binary symbol sequence. First, it is necessary to represent the symbol sequence \(u(k)\) as a string of +1's and \(-1\)'s. These values are chosen so that the expected mean of a random sequence of equally likely symbols is zero. As a trajectory passes through the Poincare section for the \(k\)th time, if it is not sticking, we set \(u(k)=1\); if it is sticking, we set \(u(k)=-1\). An autocorrelation on such a symbol sequence is defined as

\[r(n)=\frac{1}{N}\sum_{k=1}^{N}u(k+n)u(k)\qquad(n=0,1,2,\ldots;\;N\gg n)\] (5-7.4)

If the sequence is chaotic, the autocorrelation should have the property \(r(n)\to 0\) as \(n\to\infty\). If the sequence becomes uncorrelated, an estimate of the largest Lyapunov exponent can be obtained using the binary autocorrelation function.
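The binary autocorrelation (5-7.4) itself is a one-line sum. The sketch below generates a stand-in for the experimental stick/slip string from the logistic map, whose two-symbol partition at \(x=1/2\) yields essentially uncorrelated symbols; the map, seed, and string length are illustrative choices, not the experimental data.

```python
# Numerical stand-in for the experimental stick/slip string: iterate the
# logistic map x -> 4x(1-x) and set u = +1 ("slip") when x >= 1/2 and
# u = -1 ("stick") when x < 1/2.  For this map the two symbols are
# essentially uncorrelated, so r(n) should be near zero for n >= 1.
x = 0.3
u = []
for _ in range(2048):
    x = 4.0 * x * (1.0 - x)
    u.append(1 if x >= 0.5 else -1)

def r(n, u):
    # Binary autocorrelation, eq. (5-7.4), evaluated with N >> n.
    N = len(u) - n
    return sum(u[k + n] * u[k] for k in range(N)) / N
```

Note that converting such an \(r(n)\) estimate into a Lyapunov exponent via (5-7.5) additionally requires the constant \(\alpha\) from the Singh and Joseph derivation, which is not reproduced here.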
The macroscopic Lyapunov exponent, \(\lambda_{m}\), is rewritten via a derivation in Singh and Joseph (1989) as

\[\lambda_{m}=\frac{1}{2}\,\alpha[1-r(1)^{2}]\] (5-7.5)

Application of the equations to a symbol sequence derived from the tent map yields a rapidly decaying autocorrelation and a Lyapunov exponent \(\lambda_{t}=0.787\) for a string of 100,000 symbols, and an exponent of \(\lambda_{t}=0.787\) for a string of 2048 symbols, compared to its exact value, calculated using \(\log_{2}\), \(\lambda_{tc}=1\). The binary autocorrelation function for an experimental sequence of length 2048 was obtained from (5-7.4) as shown in Figure 5-23. Applying Eqs. (5-7.4) and (5-7.5), the resulting Lyapunov exponent is \(\lambda_{\exp}=0.790\). Using Eqs. (5-7.4) and (5-7.5) on numerical smooth-friction-law data (2048 symbols) yields an autocorrelation similar to that in Figure 5-23 and a Lyapunov exponent of \(\lambda_{s1}=0.792\).

Figure 5-23: Autocorrelation based on stick–slip symbol dynamics of a friction oscillator. [From Feeny and Moon (1989) with permission of Elsevier Publishers, copyright 1989.]

### Wavelet Transform

To characterize the fractal nature of dynamic data, a new signal processing technique has been developed, called the _wavelet transform_, which generalizes the Fourier transform [see, e.g., Argoul et al. (1989) and Pezeshki et al. (1992) for applications]. This transform introduces a spectrum of scales to unfold the self-similar nature of the fractal-like data.

### Fractal Dimension

I will not go into too many technical details about fractal dimensions because Chapter 7 is entirely devoted to this topic. However, the basic idea is to characterize the "strangeness" of the chaotic attractor. If one looks at a Poincare map of a typical low-dimensional strange attractor, as in Figure 5-6, one sees sets of points arranged along parallel lines. This structure persists when one enlarges a small region of the attractor.
As noted in Chapter 2, this structure of the strange attractor differs from periodic motions (just a finite set of Poincare points) or quasiperiodic motion, which in the Poincare map becomes a closed curve. In the Poincare map, one can say that the dimension of the periodic map is zero and that the dimension of the quasiperiodic map is one. The idea of the fractal dimension calculation is to attach a measure of dimension to the Cantor-like set of points in the strange attractor. If the points uniformly covered some area on the plane, we might say the dimension was close to two. Because the chaotic map in Figure 5-6 has an infinite set of gaps, its dimension is between one and two--thus the term _fractal dimension_. Another use for the fractal dimension calculation is to determine the lowest-order phase space in which the motion can be described. For example, in the case of some preturbulent convective flows in a Rayleigh-Benard cell (see Figure 4-39), the fractal dimension of the chaotic attractor can be calculated from some measure of the motion \(\{x(t_{n})=x_{n}\}\) (see Malraison et al., 1983). From \(\{x_{n}\}\), pseudo-phase-spaces of different dimension can be constructed (see Section 5.4). Using a computer algorithm, the fractal dimension \(d\) was found to reach an asymptotic value \(d=3.5\) when the dimension of the pseudo-phase-space was four or larger. This suggests that a low-order approximation of the Navier-Stokes equations may be used to model this motion. The reader is referred to Chapter 7 for further details. Although there are questions about the ability to calculate fractal dimensions for attractors of dimension greater than four or five, this technique has gained increasing acceptance among experimentalists, especially for low-dimensional chaotic attractors.
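To make the "between one and two" idea concrete, a crude two-radius estimate of the correlation dimension can be sketched by counting close pairs of points at two length scales. This is only a stand-in for the full Grassberger-Procaccia fit treated in Chapter 7, and the radii and point sets are illustrative choices.

```python
import math
import random

def corr_dim_estimate(pts, r1=0.1, r2=0.2):
    # Two-radius estimate of the correlation dimension:
    #   d ~ log[C(r2)/C(r1)] / log(r2/r1),
    # where C(r) is the number of point pairs closer than r.  A crude stand-in
    # for the full Grassberger-Procaccia fit described in Chapter 7.
    def C(r):
        count, n = 0, len(pts)
        for i in range(n):
            xi, yi = pts[i]
            for j in range(i + 1, n):
                dx, dy = xi - pts[j][0], yi - pts[j][1]
                if dx * dx + dy * dy < r * r:
                    count += 1
        return count
    return math.log(C(r2) / C(r1)) / math.log(r2 / r1)

# a closed curve (dimension ~ 1) versus a cloud filling a square (dimension ~ 2)
curve = [(math.cos(2 * math.pi * k / 400), math.sin(2 * math.pi * k / 400))
         for k in range(400)]
rng = random.Random(0)
square = [(rng.random(), rng.random()) for _ in range(400)]
d_curve = corr_dim_estimate(curve)
d_square = corr_dim_estimate(square)
```

Applied to a Poincaré map like Figure 5-6, such an estimate would fall strictly between the curve value (near one) and the area-filling value (near two), which is the "fractal" signature described above.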
If this trend continues in the future, it is likely that electronic computing instruments will be available commercially that automatically calculate the fractal dimension in the same way as FFTs are done at present.

### Lyapunov Exponents

Chaos in dynamics implies a sensitivity of the outcome of a dynamical process to changes in initial conditions. If one imagines a set of initial conditions within a sphere of radius \(\varepsilon\) in phase space, then for chaotic motions trajectories originating in the sphere will map the sphere into an ellipsoid whose major axis grows as \(d=\varepsilon e^{\lambda t}\), where \(\lambda>0\) is known as a _Lyapunov exponent_. [Lyapunov (1857-1918) was a great Russian mathematician and mechanician.] A number of experimenters in chaotic dynamics have developed algorithms to calculate the Lyapunov exponent \(\lambda\). For regular motions \(\lambda\leq 0\), but for chaotic motion \(\lambda>0\). Thus, _the sign of \(\lambda\) is a criterion for chaos_. The measurement involves the use of a computer to process the data. Algorithms have been developed to calculate \(\lambda\) from the measurement of a single dynamical variable \(x(t)\) by constructing a pseudo-phase-space (e.g., see Wolf, 1984). Another experimental technique is the use of the autocorrelation function discussed above (Singh and Joseph, 1989; Feeny and Moon, 1989). A more precise definition of Lyapunov exponents and techniques for measuring them are given in Chapter 6.

### Probability or Invariant Distributions

If a nonlinear dynamical system is in a chaotic state, precise prediction of the time history of the motion is impossible because small uncertainties in the initial conditions lead to divergent orbits in the phase space. If damping is present, we know that the chaotic orbit lies somewhere on the strange attractor.
Because we lack specific knowledge about the whereabouts of the orbit, there is increasing interest in knowing the probability of finding the orbit somewhere on the attractor. One suggestion is to see if one can use a probability density in phase space to provide a statistical measure of the chaotic dynamics [see Section 3.7, Eq. (3.7-9)]. There is some mathematical and experimental evidence that such a distribution does exist and that it does not vary with time. (See, e.g., Figure 3-27.) Increasingly, measurement of the probability distribution function is being used as a diagnostic tool in chaotic vibrations, especially in periodically forced systems. In general, the dynamic attractor in a three-dimensional phase space would have a probability measure with three variables, \(P(x, y, z)\), one for each of the state variables. For a chaotic attractor with fractal properties, however, the distribution function would also have fractal properties. Experimentally, and even computationally, a small amount of noise will smooth out the distribution function. However, another smoothing operation is to integrate \(P(x, y, z)\) over one or more of the state variables. For example, for a forced single-degree-of-freedom oscillator, integration over the forcing phase (e.g., \(0\leq t\leq 2\pi\)) and the velocity variable will yield an experimental probability density function \(P(x)\) which is piecewise smooth and gives the probability that the motion at any time will lie between \(x_{1}\) and \(x_{2}\): \[\mathcal{P}(x_{1};x_{2}) = \int_{x_{1}}^{x_{2}}P(x)\,dx,\qquad\mathcal{P}(-\infty,\infty) = 1\] Also, modern signal processing systems often have a function (sometimes called a _histogram_) that will partition an interval \(x_{1}\leq x\leq x_{2}\) into \(N\) bins and count the number of times the digitized signal lies in each bin in a given finite-length data record. With suitable normalization, this procedure will yield an approximation to \(P(x)\).
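The binning and normalization step just described can be sketched in a few lines; the bin count and the stand-in test signal below are illustrative assumptions, not features of any particular instrument.

```python
import numpy as np

def pdf_from_histogram(samples, n_bins=64):
    # Partition the sample range into n_bins bins, count the hits in each
    # bin, and normalize so the piecewise-constant density integrates to one.
    counts, edges = np.histogram(samples, bins=n_bins)
    widths = np.diff(edges)
    density = counts / (counts.sum() * widths)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, density, widths

# A stand-in for a digitized record x(t_n) of one state variable:
t = np.linspace(0.0, 200.0, 20001)
x = np.cos(2.1 * t)
centers, density, widths = pdf_from_histogram(x)
print(float(np.sum(density * widths)))   # total probability -> approximately 1.0
```

The same normalization applies to real data records; only the bin widths enter, regardless of how the signal was digitized.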
To function as a good diagnostic tool, a signal processing algorithm must provide qualitatively different patterns for periodic and chaotic signals. In the case of forced systems, this is usually so. A periodic motion usually traces an elliptic shape in the phase plane \((x,\,v=\dot{x})\). If the points on the orbit are projected onto the \(x\)-axis, then the probability density function \(P(x)\) is continuous over a finite interval with singularities at the edges; that is, \[P(x) = \frac{1}{\pi}\frac{1}{\sqrt{A^{2} - x^{2}}}\] for a harmonic orbit centered at the origin. (\(A\) is the maximum amplitude of the limit cycle.) For chaotic signals, the singularities often disappear and \(P(x)\) becomes smoother, though often non-Gaussian. Two examples are shown in Figures 5-24 and 5-25. The first case is the flow-induced vibration of an elastic tube carrying a steady flow of fluid (see Paidoussis and Moon, 1988). A comparison of the probability density function (PDF) for periodic and chaotic states shows a clear distinction (Figure 5-24). The second example is the experimental two-well potential problem using a buckled elastic beam (Moon, 1980a). Here we show the PDF for both displacement and velocity, that is, \(P(x)\) and \(P(\dot{x})\) (Figure 5-25). The PDF for the velocity looks Gaussian, whereas the PDF for the displacement has a double peak. A few attempts have been made to analytically calculate the PDF for chaotic attractors. The solution for a one-dimensional map was outlined in Chapter 3. A special case of the standard map was treated by Tsang and Lieberman (1984) using the Fokker-Planck equation. Applied scientists or engineers should consult either Soong (1973) or Gardiner (1985) for a discussion of this equation.
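The arcsine form of \(P(x)\) for a harmonic orbit can be checked against a normalized histogram of a sampled sinusoid. The amplitude, sample size, and bin count in this sketch are illustrative assumptions, and the comparison is made away from the integrable singularities at \(x=\pm A\).

```python
import numpy as np

A = 1.5                                     # limit-cycle amplitude (illustrative)
phase = np.random.default_rng(0).uniform(0.0, 2.0 * np.pi, 200_000)
x = A * np.cos(phase)                       # harmonic orbit sampled uniformly in phase

density, edges = np.histogram(x, bins=80, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Analytic density for a harmonic orbit centered at the origin:
p_exact = 1.0 / (np.pi * np.sqrt(A**2 - centers**2))

# Compare away from the singular edges x = +/- A:
interior = np.abs(centers) < 0.8 * A
err = np.max(np.abs(density[interior] - p_exact[interior]) / p_exact[interior])
print(float(err))   # maximum relative error in the interior; a few percent here
```

Near the edges the analytic density diverges while the histogram stays finite, which is exactly the singular-edge behavior described above.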
Consider the forced oscillator with some random noise \(W(t)\): \[\ddot{x}\;+\;g(x,\;\dot{x})\;+\;f(x)\;=\;F_{0}\cos\Omega t\;+\;W(t)\] (5-7.7) The Fokker-Planck equation for the PDF \(P(x,\,y=\dot{x},\,t)\) is given by \[\frac{\partial P}{\partial t}+\frac{\partial}{\partial x}(yP)+\frac{\partial}{\partial y}\left[\left(-g(x,y)-f(x)+F_{0}\cos\Omega t\right)P\right]=\frac{1}{2}S_{0}\frac{\partial^{2}P}{\partial y^{2}}\] (5-7.8) where the constant \(S_{0}\) is a measure of the strength of the Gaussian white noise.

Figure 5-24: Probability density function for flow-induced chaotic vibrations of a flexible tube carrying steady fluid flow. [From Paidoussis and Moon (1988).]

Figure 5-25: Experimental probability density function for chaotic vibration of a buckled beam averaged in time over many thousands of forcing periods. (_a_) Distribution of velocities at the beam tip. (_b_) Distribution of positions of the beam tip.

For \(F_{0}=0\), one can find an explicit stationary solution, that is, one with \(\partial P/\partial t=0\). For linear damping, \(g(x,y)=\gamma y\), the solution has the form \[P(x,y)=ce^{-\gamma y^{2}/S_{0}}e^{-\gamma\left[\int_{0}^{x}f(\xi)\,d\xi\right]/S_{0}}\] (5-7.9) For a two-well potential with \(f(x)=-x+x^{3}\), one has \[P(x,y)=ce^{-\gamma y^{2}/S_{0}}e^{\gamma(2x^{2}-x^{4})/4S_{0}}\] (5-7.10) A similar problem has been studied by Kunert and Pfeiffer (1991). What is remarkable is that the analytical solution (5-7.10) for Gaussian white noise also produces a double-hump PDF for the \(x\) variable, qualitatively similar to that seen in experiments on the deterministic, periodically excited problem (i.e., \(W(t)=0\)), as shown in Figure 5-25. Thus, there is some suggestion that an approximate PDF for a deterministic chaotic attractor may be related to the PDF for a randomly excited oscillator. These ideas, however, are speculative at this time and remain an area of potentially fruitful research.
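The double hump implied by (5-7.10) is easy to exhibit numerically: the Gaussian factor in \(y\) integrates out, and the marginal density in \(x\) is proportional to \(e^{\gamma(2x^{2}-x^{4})/4S_{0}}\), whose exponent is extremal where \(4x-4x^{3}=0\), giving peaks at \(x=\pm 1\) with a dip at \(x=0\). The damping and noise values in this sketch are illustrative assumptions.

```python
import numpy as np

gamma, S0 = 0.2, 0.1      # illustrative damping and noise strength

x = np.linspace(-2.0, 2.0, 4001)
dx = x[1] - x[0]

# Marginal of Eq. (5-7.10) in x; the Gaussian y-factor contributes only
# a constant that is absorbed into the normalization below.
p = np.exp(gamma * (2.0 * x**2 - x**4) / (4.0 * S0))
p /= p.sum() * dx         # fix the constant c numerically

i = np.argmax(p)
print(float(x[i]), bool(p[2000] < p[i]))   # peak near x = +/- 1; dip at x = 0 (index 2000)
```

For smaller \(S_{0}/\gamma\) the humps sharpen around the two wells, mirroring the double-peaked experimental displacement PDF of Figure 5-25.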
The use of the probability density function to calculate Lyapunov exponents is discussed in Chapters 3 and 6. The usefulness of probability distributions for chaotic vibrations is similar to that for random vibrations (e.g., see Soong, 1973, or Lin, 1976). If the probability distribution can be determined for a chaotic system, then one can calculate the mean square amplitude, mean zero-crossing times, and the probability of displacements, voltages, or stresses exceeding some critical value. However, much remains to be done in this subject at both the mathematical and experimental levels.

### Cell Mapping Methods

The use of probabilistic methods of analysis in chaotic vibrations has been developed by C. S. Hsu and co-workers at the University of California at Berkeley (Hsu, 1981, 1987; Hsu and Kim, 1985; Kreuzer, 1985; Tongue, 1987). This method, called the _Cell Mapping Method_, divides the phase space into many cells and uses ideas from the theory of Markov processes. This computer-based method may be useful for low-order systems in obtaining a global picture of the possible dynamic attractors. New algorithms have been developed to improve numerical efficiency in this method (see, e.g., Tongue, 1987).

## Problems

### 5-1 Show that the autocorrelation function of a periodic function is periodic.

### 5-2 Suppose a signal has a subharmonic \[x(t)=A_{1}\cos\omega t+A_{2}\cos\frac{\omega}{2}t\] What does the autocorrelation function look like?

### 5-3 Consider a signal with quasiperiodic sinusoidal components. What do the FFT and the autocorrelation function look like?

### 5-4 Sketch the PDF, \(P(x)\), of a two-frequency quasiperiodic signal.

### 5-5 Consider a periodic signal \(x(t)\) with a single frequency. Derive the PDF, \(P(x)\). Sketch the PDF in the phase plane, that is, \(P(x,\,\dot{x})\).

### 5-6 Suppose a signal has a subharmonic \[x(t)=A_{1}\cos\omega t+A_{2}\cos\left(\frac{\omega}{2}t+\varphi_{0}\right)\] Sketch the PDF, \(P(x)\). (Hint: see Figure 3-17.)
### 5-7 Consider the problem of a linear harmonic oscillator linearly coupled to an auto-oscillatory system such as the Van der Pol oscillator (Chapter 1). Show that this represents a fourth-order dynamical system. Numerically find a chaotic parameter regime. How would you define a Poincare map? Write a program to calculate a double Poincare map for this system. How would you define the second map? ### 5-8 Write a program to plot the Henon map (Chapter 1) in the chaotic
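For the last problem, a minimal sketch of such a program (plotting aside) is to iterate the map and collect points after discarding a transient; the parameter values \(a=1.4\), \(b=0.3\) used below are the classic chaotic case and are assumptions of this sketch.

```python
# Henon map: x_{n+1} = 1 - a*x_n**2 + y_n,  y_{n+1} = b*x_n
def henon_orbit(a=1.4, b=0.3, n=10_000, transient=100):
    x, y = 0.0, 0.0
    pts = []
    for k in range(n + transient):
        x, y = 1.0 - a * x * x + y, b * x
        if k >= transient:           # discard the approach to the attractor
            pts.append((x, y))
    return pts

pts = henon_orbit()
xs = [p[0] for p in pts]
print(len(pts), min(xs), max(xs))    # the attractor spans roughly -1.3 to 1.3 in x
```

Feeding the collected points to any scatter-plot routine reproduces the familiar folded, banded structure of the attractor.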
## Criteria for Chaotic Vibrations

_But you will ask, how could a uniform chaos coagulate at first irregularly in heterogeneous veins or masses to cause hills--Tell me the cause of this, and the answer will perhaps serve for the chaos_. Isaac Newton, _On Creation_--from a letter circa 1681

### Introduction

In this chapter, we study how the parameters of a dynamical system determine whether the motion will be chaotic or regular. This is analogous to finding the critical velocity in viscous flow of fluids above which steady flow becomes turbulent. This velocity, when normalized by a characteristic length and by the kinematic viscosity of the fluid, is known as the critical _Reynolds number_, Re. A reliable theoretical value for the critical Re has eluded engineers and physicists for over a century, and for most fluid problems experimental determination of \((\text{Re})_{\text{crit}}\) is necessary. In like manner, the criteria for chaos in mechanical or electrical systems must in most cases be determined by experiment or computer simulation. For such systems the search for critical parameters for deterministic chaos is a ripe subject for experimentalists and theoreticians alike. Despite the paucity of experimentally verified theories for the onset of chaotic vibrations, there are some notable theoretical successes and some general theoretical guidelines. We distinguish between two kinds of criteria for chaos in physical systems: a predictive rule and a diagnostic tool. A _predictive_ rule for chaotic vibrations is one that determines the set of input or control parameters that will lead to chaos. The ability to predict chaos in a physical system implies either that one has some approximate mathematical model of the system from which a criterion may be derived or that one has some empirical data based on many tests.
A _diagnostic_ criterion for chaotic vibrations is a test that reveals whether a particular system was or is in fact in a state of chaotic dynamics, based on measurements or signal processing of data from the time history of the system. We begin with a review of empirically determined criteria for specific physical systems and mathematical models that exhibit chaotic oscillations (Section 6.2). These criteria were determined by both physical and numerical experiments. We examine such cases for two reasons. First, it is of value for the novice in this field to explore a few particular chaotic systems in detail and to become familiar with the conditions under which chaos occurs. Such cases may give clues to when chaos occurs in more complex systems. Second, in the development of theoretical criteria, it is important to have some test case with which to compare theory with experiment. In Section 6.3 we present a review of the principal predictive models for determining when chaos occurs. These include the period-doubling criterion, homoclinic orbit criterion, Shil'nikov criterion, and the overlap criterion of Chirikov for conservative chaos, as well as intermittency and transient chaos. We also review several ad hoc criteria that have been developed for specific classes of problems. Finally, in Section 6.4 we discuss an important diagnostic tool, namely, the Lyapunov exponent. Another diagnostic concept, the fractal dimension, is described in Chapter 7.

### Empirical Criteria for Chaos

In the many introductory lectures the author has given on chaos, the following question has surfaced time and time again: _Are chaotic motions singular cases in real physical problems or do they occur for a wide range of parameters?_ For engineers this question is very important. To design, one needs to predict system behavior. If the engineer chooses parameters that produce chaotic output, then he or she loses predictability.
In the past, many designs in structural engineering, electrical circuits, and control systems were kept within the realm of linear system dynamics. However, the needs of modern technology have pushed devices into nonlinear regimes (e.g., large deformations and deflections in structural mechanics) that increase the possibility of encountering chaotic dynamic phenomena. To address the question of whether chaotic dynamics are singular events in real systems, we examine the range of parameters for which chaos occurs in seven different problems. A cursory scan of the figures accompanying each discussion will lead one to the conclusion that chaotic dynamics are not a singular class of motions and that _chaotic oscillations occur in many nonlinear systems for a wide range of parameter values_. We examine the critical parameters for chaos in the following problems:

1. Circuit with nonlinear inductor: Duffing's equation
2. Particle in a two-well potential or buckling of an elastic beam: Duffing's equation
3. Experimental convection loop: a model for Lorenz's equations
4. Vibrations of nonlinear coupled pendulums
5. Rotating magnetic dipole: pendulum equation
6. Circuit with nonlinear capacitance
7. Surface waves on a fluid

### Forced Oscillations of a Nonlinear Inductor: Duffing's Equation

In Chapter 4, we examined the chaotic dynamics of a circuit with a nonlinear inductor (see also Figure 3-33). Extensive analog and digital simulation for this system was performed by Y. Ueda (1979, 1980) of Kyoto University. The nondimensional equation, where \(x\) represents the flux in the inductor, takes the form \[\ddot{x}\,+\,k\dot{x}\,+\,x^{3}\,=\,B\,\cos\,t\] (6-2.1) The time has been nondimensionalized by the forcing frequency so that the entire dynamics can be determined by the two parameters \(k\) and \(B\) and the initial conditions (\(x(0)\), \(\dot{x}(0)\)). Here \(k\) is a measure of the resistance of the circuit, while \(B\) is a measure of the driving voltage.
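Equation (6-2.1) is easy to explore numerically. The sketch below integrates it with a fixed-step fourth-order Runge-Kutta scheme and samples \((x,\dot{x})\) once per forcing period \(2\pi\) (a Poincare section). The particular values \(k=0.05\), \(B=7.5\) are Ueda's often-reproduced chaotic case; those values, the step size, and the initial conditions are assumptions of this sketch, and, as noted above, periodic attractors can coexist with the chaotic one, so the outcome can depend on the initial conditions.

```python
import math

def ueda_poincare(k=0.05, B=7.5, n_periods=200, steps=400):
    # Integrate x'' + k x' + x^3 = B cos t  [Eq. (6-2.1)] with fixed-step RK4
    # and record (x, x') once per forcing period 2*pi.
    def deriv(t, x, v):
        return v, B * math.cos(t) - k * v - x**3

    x, v, t = 0.0, 0.0, 0.0
    h = 2.0 * math.pi / steps
    section = []
    for _ in range(n_periods):
        for _ in range(steps):
            k1x, k1v = deriv(t, x, v)
            k2x, k2v = deriv(t + h / 2, x + h / 2 * k1x, v + h / 2 * k1v)
            k3x, k3v = deriv(t + h / 2, x + h / 2 * k2x, v + h / 2 * k2v)
            k4x, k4v = deriv(t + h, x + h * k3x, v + h * k3v)
            x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
            v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
            t += h
        section.append((x, v))
    return section

pts = ueda_poincare()
print(len(pts), max(abs(x) for x, v in pts))   # bounded motion; irregular section if chaotic
```

Scanning \(k\) and \(B\) over a grid and inspecting the resulting sections is a crude numerical analog of building a chaos diagram like Figure 6-1.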
Ueda found that by varying these two parameters one could obtain a wide variety of periodic, subharmonic, ultrasubharmonic, and chaotic motions. The regions of chaotic behavior in the (\(k\), \(B\)) plane are plotted in Figure 6-1. The regions of subharmonic and harmonic motions are quite complex, and only a few are shown for illustration. The two different hatched areas indicate either (a) regions of only chaos or (b) regions with both chaotic and periodic motion, depending on initial conditions. A theoretical criterion for this relatively simple equation has been suggested by Szemplinska-Stupnicka and Bajkowski (1986). (See also Color Plate 1 for solutions of (1-2.4).)

Figure 6-1: Chaos diagram showing regions of chaotic and periodic motions for a nonlinear circuit as functions of nondimensionalized damping and forcing amplitude. [From Ueda (1980).]

### Forced Oscillations of a Particle in a Two-Well Potential

This example was discussed in great detail in Chapters 2 and 3. It was first studied by Holmes (1979) and was later studied in a series of papers by the author and co-workers. The mathematical equation describes the forced motion of a particle between two states of equilibrium, which can be described by a two-well potential: \[\ddot{x}\,+\,\delta\dot{x}\,-\,\frac{1}{2}x(1\,-\,x^{2})=f\cos\omega t\] (6-2.2) This equation can represent a particle in a plasma, a defect in a solid, and, on a larger scale, the dynamics of a buckled elastic beam (see Chapter 3). The dynamics are controlled by three nondimensional groups (\(\delta,f,\,\omega\)), where \(\delta\) represents nondimensional damping and \(\omega\) is the driving frequency nondimensionalized by the small-amplitude natural frequency of the system in one of the potential wells. Regions of chaos from two studies are shown in Figures 6-2 and 6-3. The first represents experimental data for a buckled cantilevered beam (Chapter 2).
The ragged boundary is the experimental data, whereas the smooth curve represents a theoretical criterion (see Section 6.3). Recently, an upper boundary has been measured beyond which the motion becomes periodic. The experimental criterion was determined by looking at Poincare maps of the motion (see Chapters 2 and 5). Results from numerical simulation of Eq. (6-2.2) are shown in Figure 6-3. The diagnostic tool used to determine if chaos was present was the Lyapunov exponent, using a computer algorithm developed by Wolf et al. (1985) (see Section 6.4). This diagram shows that there are complex regions of chaotic vibrations in the plane (\(f,\,\omega\)) for fixed damping \(\delta\). For very large forcing, \(f\gg 1\), one expects the behavior to emulate the previous problem studied by Ueda. The theoretical boundary found by Holmes (1979) is discussed in the next section. It has special significance because below this boundary periodic motions are predictable, whereas above this boundary one loses the ability to predict exactly to which of the many periodic or chaotic modes the motion will be attracted. Above the theoretical criterion (based on homoclinic orbits), the motion is very sensitive to initial conditions, even when it is periodic (see Section 7.7).

Figure 6-2: Experimental chaos diagram for vibrations of a buckled beam for different values of forcing frequency and amplitude. [From Moon (1980b), reprinted with permission from _New Approaches to Nonlinear Problems in Dynamics_, edited by P. J. Holmes, copyright 1980 by SIAM.]

### Experimental Convection Loop: Lorenz Equations

Aside from the logistic equation, the Lorenz model for convection turbulence (see Chapters 1 and 4) is perhaps the most studied system of equations that admit chaotic solutions. Yet most mathematicians have focused on a few sets of parameters.
These equations take the form (see also Sparrow, 1982) \[\begin{array}{l}\dot{x}\,=\,\sigma(y\,-\,x)\\ \dot{y}\,=\,rx\,-\,y\,-\,xz\\ \dot{z}\,=\,xy\,-\,bz\end{array}\] (6-2.3)

Figure 6-3: Chaos diagram for vibration of a mass in a double-well potential [Duffing's equation, Eq. (6-2.2)]. The smooth boundary represents the homoclinic orbit criterion (Section 6.3).

An experimental realization of these equations can be obtained in a circular convection loop, also known as the _thermosiphon_ (see Section 4.7) (Figure 6-4_a_). This experiment has received extensive study from a group at the University of Houston (Widmann et al., 1989; Gorman et al., 1986). A qualitative diagram of the various dynamic regimes as a function of the applied heat flux is shown in Figure 6-4_b_, as observed in their experiments. In their 1986 paper, the Houston group showed that the Lorenz equations only gave good predictive results for the steady regime and that an additional degree of freedom was needed to improve the agreement between theory and experiment (see also Yorke et al., 1985).

Figure 6-4: (_a_) Sketch of a toroidal container of fluid under gravity and thermal gradients, otherwise known as the _thermosiphon_. (_b_) Qualitative chaos diagram of the dynamic regimes for the thermosiphon. [From Widmann et al. (1989).]

### Forced Vibrations of Two Coupled Pendulums

Figure 6-5 is a sketch of an experiment with two masses hung on a lightweight cable which is assumed to be inextensible. This problem is equivalent to two coupled spherical pendulums with a constraint. The effective number of degrees of freedom is three: one in-plane mode and two out-of-plane modes. The end points are excited with harmonic excitation. In mechanical engineering this represents a 3-D four-bar linkage, whereas in civil engineering it could represent a model for cable car dynamics.
An experimental chart of four dynamic regimes shows both chaotic and quasiperiodic regions in the parameter space of excitation amplitude and frequency. Quasiperiodic motions are typical in multiple-degree-of-freedom systems of this kind. These experiments were performed in our laboratory by Professor F. Benedettini of the University of L'Aquila, Italy.

Figure 6-5: Experimental chaos diagram for coupled spherical pendulums with a constraint. [From Benedettini and Moon (1992).] (_a_) Experimental model. (_b_) Behavior chart: in-phase excitation. (_c_) Behavior chart: out-of-phase excitation.

### Forced Motions of a Rotating Dipole in Magnetic Fields: The Pendulum Equation

In this experiment, a permanent magnet rotor is excited by crossed steady and time-harmonic magnetic fields (see Moon et al., 1987), as shown in Figure 4-6. The nondimensionalized equation of motion for the rotation angle \(\theta\) resembles that for the pendulum in a gravitational potential: \[\ddot{\theta}\;+\;\gamma\dot{\theta}\;+\;\sin\theta\;=f\cos\theta\,\cos\omega t\] (6-2.4) The regions of chaotic rotation in the \(f\)-\(\omega\) plane, for fixed damping, are shown in Figure 6-6. This was one of the first published examples where both experimental and numerical simulation data are compared with a theoretical criterion for chaos. The theory is based on the homoclinic orbit criterion and is discussed in Section 6.3. As in the case of the two-well potential, chaotic motions are to be found in the vicinity of the natural frequency for small oscillations (\(\omega=1.0\) in Figure 6-6). See Figure 5-13 for a sketch of the experiment.

Figure 6-6: Experimental chaos diagram for forced motions of a rotor with nonlinear torque-angle property. Comparison with homoclinic orbit criterion calculated using the Melnikov method (Section 6.3). [From Moon et al. (1987) with permission of North-Holland Publishing Co., copyright 1987.]

### Circuit with Nonlinear Capacitance
In this example, regions of period doubling are shown as precursors to the chaotic motions. However, in the midst of the hatched chaotic regime, a period-5 subharmonic was observed. Periodic islands in the center of chaotic domains are common observations in experiments on chaotic oscillations. [See a similar study by Bucko et al. (1984). See also Figure 4-32.]

### Harmonically Driven Surface Waves in a Fluid Cylinder

As a final example, we present experimentally determined harmonic and chaotic regions of the amplitude-frequency parameter space for surface waves in a cylinder filled with water, from a paper by Ciliberto and Gollub (1985). A 12.7-cm-diameter cylinder with 1-cm-deep water was harmonically vibrated by a speaker cone (Figure 6-8). The amplitude of the transverse vibration above the flat surface of the fluid can be written in terms of Bessel functions, where the linear mode shapes are given by \(U_{nm}=J_{n}(k_{nm}r)\sin(n\theta\,+\,d_{nm})\). Figure 6-8 shows the driving amplitude-frequency plane in a region where two modes can interact: \((n,\,m)=(4,\,3)\) and \((7,\,2)\). Below the lower boundary, the surface remains flat. Chaotic motions appear in a small region where the regimes of the two modes intersect. Presumably, other chaotic regimes exist where other modes (\(n\), \(m\)) interact. (See also Figure 8-1.)

Figure 6-8: Experimental chaos diagram for surface waves in a cylinder filled with water. The diagram shows where two linear modes interact. [From Ciliberto and Gollub (1985).]

In summary, these examples show that, given periodic forcing input to a physical system, large regions of periodic or subharmonic motions do exist and presumably are predictable using classical methods of nonlinear analysis. However, these examples also show that _chaos is not a singular happening_; that is, it can exist for wide ranges in the parameters of the problem.
Also, and perhaps most important, there are regions where both periodic and chaotic motions can exist and the precise motion that will result may be unpredictable.

### Theoretical Predictive Criteria

The search for theoretical criteria to determine under what set of conditions a given dynamical system will become chaotic has tended to be ad hoc. The strategy thus far has been for theorists to find criteria for specific mathematical models and then use these models as analogs or paradigms to infer when more general or complex physical systems will become unpredictable. An example is the period-doubling bifurcation sequence discussed by May (1976) and Feigenbaum (1978) for the quadratic map (e.g., see Chapters 1 and 3). Although these results were generalized for a wider class of one-dimensional maps using a technique called _renormalization theory_, the period-doubling criterion is not always observed for higher-dimensional maps. In mechanical and electrical vibrations, a Poincare section of the solution in phase space often leads to maps of two or more dimensions. Nonetheless, the period-doubling scenario is one possible route to chaos. In more complicated physical systems, an understanding of the May-Feigenbaum model can be very useful in determining when and why chaotic motions occur. In this section, we briefly review a few of the principal theories of chaos and explore how they lead to criteria that may be used to predict or diagnose chaotic behavior in real systems. These theories include the following:

1. Period doubling
2. Homoclinic orbits and horseshoe maps
3. Shil'nikov criterion
4. Intermittency and transient chaos
5. Overlap criteria for conservative chaos
6.
Ad hoc theories for multiple-well potential problems

### Period-Doubling Criterion

This criterion is applicable to dynamical systems whose behavior can be described exactly or approximately by a first-order difference equation (see Chapter 3): \[x_{n+1}\,=\,\lambda\,x_{n}(1\,-\,x_{n})\] (6-3.1) The dynamics of this equation were studied by May (1976), Feigenbaum (1978, 1980), and others. They discovered solutions whose period doubles as the parameter \(\lambda\) is varied (the period in this case is the number of iterations \(p\) for \(x_{n+p}\) to return to the value \(x_{n}\)). One of the important properties of Eq. (6-3.1) that Feigenbaum discovered was that the sequence of critical parameters \(\{\lambda_{m}\}\) at which the period of the orbit doubles satisfies the relation \[\lim_{m\to\infty}\frac{\lambda_{m+1}-\lambda_{m}}{\lambda_{m}-\lambda_{m-1}}=\frac{1}{\delta},\qquad\delta\,=\,4.6692\,\ldots\] (6-3.2) This important discovery gave experimenters a specific criterion to determine if a system was about to become chaotic by simply observing the prechaotic periodic behavior. It has been applied to physical systems involving fluid, electrical, and laser experiments. Although these problems are often modeled mathematically by continuous differential equations, the Poincare map can reduce the dynamics to a set of difference equations. For many physical problems, the essential dynamics can be modeled further as a one-dimensional map (see, e.g., Chapter 5): \[x_{n+1}\,=\,f(x_{n})\] (6-3.3) The importance of Feigenbaum's work is that he showed how period-doubling behavior is typical of one-dimensional maps that have a hump or zero tangent [i.e., the map is _noninvertible_, or there exist two values of \(x_{n}\) which when put into \(f(x_{n})\) give the same value of \(x_{n+1}\)].
He also demonstrated that if the mapping function depends on some parameter \(\Lambda\) [i.e., \(f(x_{n};\,\Lambda)\)], then the sequence of critical values of this parameter at which the orbit's period doubles, \(\{\Lambda_{m}\}\), satisfies the same relation (6-3.2) as that for the quadratic map. Thus the period-doubling phenomenon has been called _universal_, and \(\delta\) has been called a universal constant (now known quite naturally as the _Feigenbaum number_). The author must raise a flag of caution here. The term "universal" is used in the context of one-dimensional maps (6-3.3). There are many chaotic phenomena which are described by two- or higher-dimensional maps (e.g., see the buckled beam problem in Chapter 2). In these cases, period doubling may indeed be one route to chaos, but there are many other bifurcation sequences that result in chaos besides period doubling (see Holmes, 1984).

#### Renormalization and the Period-Doubling Criterion

There are two ideas that are important in understanding the period-doubling phenomenon. The first is the concept of _bifurcation_ of solutions, and the second is the idea of _renormalization_. The concept of bifurcation was illustrated in Chapter 3. For example, in Figure 6-9 a steady periodic solution \(x_{0}\) becomes unstable at a critical value of \(\lambda\), and the amplitude then oscillates between two values \(x^{+}\) and \(x^{-}\), completing a cycle in twice the time of the previous solution. Further changes in \(\lambda\) make \(x^{+}\) and \(x^{-}\) unstable, and the solution branches to a new cycle with period 4. A readable description of renormalization as it applies to period doubling may be found in Feigenbaum (1980).
The technique recognizes the fact that a cascade of bifurcations exists (Figure 6-10) and that it might be possible to map each bifurcation into the previous one by a change in scale of the physical variable \(x\) and a transformation of the control parameter.

Figure 6-9: Diagram showing two branches of a bifurcation diagram near a period-doubling point.

To illustrate this technique, we outline an approximate scheme for the quadratic map (see also Lichtenberg and Lieberman, 1983). One form of the quadratic map is given by \[x_{n\,+\,1}\,=\,f(x_{n})\] (6-3.4) where \(f(x)\,=\,\lambda\,x(1\,-\,x)\). Period-1 cycles are just constant values of \(x\) given by fixed points of the mapping, that is, \(x_{n}\,=\,f(x_{n})\). Now a fixed point or equilibrium point can be stable or unstable. That is, iteration of \(x\) can move toward or away from the fixed point \(x_{0}\). The stability of the map depends on the slope of \(f(x)\) at \(x_{0}\); that is, \[\left|\frac{df(x_{0})}{dx}\right|\,>1\quad\text{implies instability}\] (6-3.5)

Figure 6-10: Bifurcation diagram for the quadratic map (3-6.2). Steady-state behavior as a function of the control parameter showing the period-doubling phenomenon.

Because the slope \(f^{\prime}=\lambda(1-2x)\) depends on \(\lambda\), \(x_{0}\) becomes unstable at \(\lambda_{1}=\pm 1/(1-2x_{0})\). Beyond this value, the stable periodic motion has period 2. The fixed points of the period-2 motion are given by \[x_{2}=f(f(x_{2}))\quad\text{or}\quad x_{2}=\lambda^{2}x_{2}(1-x_{2})[1-\lambda x_{2}(1-x_{2})]\] (6-3.6) The function \(f(f(x))\) is shown in Figure 6-11. Again there are stable and unstable solutions. Suppose the \(x_{0}\) solution bifurcates and the solution alternates between \(x^{+}\) and \(x^{-}\) as shown in Figure 6-9.
We then have \[x^{+}=\lambda x^{-}(1-x^{-})\quad\text{and}\quad x^{-}=\lambda x^{+}(1-x^{+})\] (6-3.7) To determine the next critical value \(\lambda=\lambda_{2}\) at which a period-4 orbit emerges, we change coordinates by writing \[x_{n}=x^{\pm}+\eta_{n}\] (6-3.8) where the reference point alternates between \(x^{+}\) and \(x^{-}\) on successive iterates. Putting Eq. (6-3.8) into the map and using (6-3.7), we get \[\begin{split}\eta_{n+1}&=\lambda\eta_{n}[(1-2x^{+})-\eta_{n}]\\ \eta_{n+2}&=\lambda\eta_{n+1}[(1-2x^{-})-\eta_{n+1}]\end{split}\] (6-3.9)

Figure 6-11: First and second iteration functions for the quadratic map (6-3.4).

### 6.3 Theoretical predictive criteria

We next solve for \(\eta_{n+2}\) in terms of \(\eta_{n}\), keeping only terms to order \(\eta_{n}^{2}\) (this is obviously an approximation), to obtain \[\eta_{n+2}=\lambda^{2}\eta_{n}[A-B\eta_{n}]\] (6-3.10) where \(A\) and \(B\) depend on \(x^{+}\), \(x^{-}\), and \(\lambda\). Next, we rescale \(\eta\) and define a new parameter \(\overline{\lambda}\) using \[\overline{x}=\alpha\eta,\qquad\overline{\lambda}=\lambda^{2}A,\qquad\alpha=B/A,\qquad\overline{x}_{n+2}=\overline{\lambda}\,\overline{x}_{n}(1-\overline{x}_{n})\] This has the same form as our original equation [Eq. (6-3.1)]. Thus, when the solution bifurcates to period 4 at \(\lambda=\lambda_{2}\), the critical value of \(\overline{\lambda}\) equals \(\lambda_{1}\). We therefore obtain an equation \[\lambda_{1}=\lambda_{2}^{2}A(\lambda_{2})\] (6-3.11) Starting from the point \(x_{0}=0\), there is a bifurcation sequence for \(\lambda<0\). For this case Lichtenberg and Lieberman show that (6-3.11) is given by \[\lambda_{1}=-\lambda_{2}^{2}+2\lambda_{2}+4\] (6-3.12) It can be shown that \(\lambda_{1}=-1\), so that \(\lambda_{2}=1-\sqrt{6}\approx-1.449\).
If one is bold enough to propose that the recurrence relation (6-3.12) holds at higher-order bifurcations, then \[\lambda_{\kappa}=-\lambda_{\kappa+1}^{2}+2\lambda_{\kappa+1}+4\] (6-3.13) At the critical value for chaos, \[\lambda_{\infty}=-\lambda_{\infty}^{2}+2\lambda_{\infty}+4\qquad\text{or}\qquad\lambda_{\infty}=(1-\sqrt{17})/2=-1.562\] (6-3.14) One can also show that another bifurcation sequence occurs for \(\lambda>0\) (Figure 6-10), where the critical value is given by \[\lambda=\hat{\lambda}_{\infty}=2-\lambda_{\infty}=3.56\] (6-3.15) This is close to the exact value \(\hat{\lambda}_{\infty}=3.56994\). Thus, the rescaling approximation scheme is not too bad. This line of analysis also leads to the relation \[\lambda_{\kappa}\simeq\lambda_{\infty}+\alpha\delta^{-\kappa}\] (6-3.16) which results in the scaling law (6-3.2). Thus, knowing two successive bifurcation values gives one an estimate of the chaos criterion \(\lambda_{\infty}\): \[\lambda_{\infty}\simeq\frac{1}{(\delta-1)}[\delta\lambda_{\kappa+1}-\lambda_{\kappa}]\] (6-3.17) A final word before we leave this section: The fact that \(\lambda\) may exceed the critical value (\(|\lambda|>|\lambda_{\infty}|\)) does not imply that chaotic solutions will occur. They certainly are possible. But there are also many _periodic windows_ in the range of parameters beyond the critical value in which periodic motions as well as chaotic solutions can occur. We do not have space to do complete justice to the rich complexities in the dynamics of the quadratic map. It is certainly one of the major paradigms for understanding chaos, and the interested reader is encouraged to study this problem in the aforementioned references. (See also Appendix B for computer experiments.)
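The recurrence (6-3.13) can be iterated directly. Solving it for \(\lambda_{\kappa+1}\) on the negative branch gives \(\lambda_{\kappa+1}=1-\sqrt{5-\lambda_{\kappa}}\), and the spacing ratio between successive bifurcation values converges to \(2\sqrt{5-\lambda_{\infty}}=1+\sqrt{17}\approx 5.12\), the approximate scheme's estimate of the Feigenbaum number. The script below is an illustrative check (not from the original text):

```python
import math

def next_lambda(lam):
    """Invert lam_k = -lam_{k+1}**2 + 2*lam_{k+1} + 4 on the negative branch:
    lam_{k+1} = 1 - sqrt(5 - lam_k)."""
    return 1.0 - math.sqrt(5.0 - lam)

lams = [-1.0]                              # lambda_1 = -1
for _ in range(12):
    lams.append(next_lambda(lams[-1]))

# Accumulation point predicted by the fixed point of the recurrence:
lam_inf = (1.0 - math.sqrt(17.0)) / 2.0    # about -1.5616, Eq. (6-3.14)
# Ratio of successive parameter spacings -> approximate Feigenbaum number:
delta_est = (lams[-2] - lams[-3]) / (lams[-1] - lams[-2])

print(lams[1], lam_inf, delta_est)
```

The quadratic truncation overestimates the exact \(\delta=4.6692\ldots\), just as \(\hat{\lambda}_{\infty}=3.56\) only approximates the exact 3.56994, but the qualitative geometric accumulation is captured.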
### Homoclinic Orbits and Horseshoe Maps

One theoretical technique that has led to specific criteria for chaotic vibrations is a method based on the search for horseshoe maps and homoclinic orbits in mathematical models of dynamical systems. This strategy, together with a mathematical technique called the _Melnikov method_, has led to Reynolds-type criteria for chaos relating the parameters in the system. In two cases, these criteria have been verified by numerical and physical experiments. In keeping with the tenor of this book, we do not derive or go into too much of the mathematical theory of this method, but we do try to convey the rationale behind it and guide the reader to the literature for a more detailed discussion. We illustrate the Melnikov method with two applications: the vibrations of a buckled beam and the rotary dynamics of a magnetic dipole rotor. The homoclinic orbit criterion is a mathematical technique for obtaining a predictive relation between the nondimensional groups in the physical system. It gives one a necessary but not sufficient condition for chaos. It may also give a necessary and sufficient condition for predictability in a dynamical system (see Chapter 7, Section 7.7, "Fractal Basin Boundaries"). Stripped of its complex, somewhat arcane mathematical infrastructure, it is essentially a method to prove whether a model in the form of partial or ordinary differential equations has the properties of a horseshoe or baker's-type map. The horseshoe map view of chaos (see also Chapters 1, 3) looks at the orbits of a collection of initial conditions in some ball in phase space. If a system exhibits horseshoe map behavior, this initial volume of phase space is mapped under the dynamics of the system onto a new shape in which the original ball is stretched and folded (Figure 6-12). After many iterations, this folding and stretching produces a fractal-like structure, and the precise information as to which orbit originated where is lost.
More and more precision is required to relate an initial condition to the state of the system at a later time. For a finite-precision problem (as most numerical or laboratory experiments are), predictability is not possible.

##### Homoclinic Orbits

A good discussion of homoclinic orbits may be found in the books by Lichtenberg and Lieberman (1983), Guckenheimer and Holmes (1983), and Wiggins (1988). We have learned earlier that although many dynamics problems can be viewed as a continuous curve in some phase space (\(x\) versus \(v=\dot{x}\)) or solution space (\(x\) versus \(t\)), the mysteries of nonlinear dynamics and chaos are often deciphered by looking at a digital sampling of the motion such as a Poincaré map. We have also seen that although the Poincaré map is a sequence of points in some \(n\)-dimensional space, it can lie along certain continuous curves. These curves are called _manifolds_.

Figure 6-12: Evolution of an initial condition sphere.

A discussion of homoclinic orbits refers to a sequence of points. This sequence of points is called an _orbit_. In the dynamics of mappings, one can have critical points toward which or away from which orbits move. One example is a saddle point, at which there are (a) two manifold curves along which orbits approach the point and (b) two curves along which the sequence of Poincaré points moves away from the point, as illustrated in Figure 6-13 (see also Sections 3.2, 3.4). Such a point is similar to a _saddle point_ in nonlinear differential equations. To illustrate a homoclinic orbit, we consider the dynamics of the forced, damped pendulum. First, recall that for the unforced, damped pendulum, the unstable branches of the saddle point swirl around the equilibrium point in a vortexlike motion in the \(\theta\)-\(\dot{\theta}\) phase plane, as shown in Figure 6-14.

Figure 6-13: (_a_) Periodic orbit in a Poincaré map. (_b_) Quasiperiodic orbit. (_c_) Homoclinic orbit.
Although it is not obvious, the Poincaré map synchronized with the forcing frequency also has a saddle point in the neighborhood of \(\theta=\pm n\pi\) (\(n\) odd), as shown in Figure 6-15 for the case of the forced pendulum. For small forcing, the stable and unstable branches of the saddle do not touch each other. However, as the force is increased, these two manifolds intersect. It can be shown that _if they intersect once_, they will _intersect an infinite number of times_. [Another example of the intersection of stable and unstable manifolds is given in Section 3.4, Figure 3-10, for the standard map, Eq. (3-4.1).] The points of intersection of stable and unstable manifolds are called _homoclinic points_. A Poincaré point near one of these points will be mapped into all the rest of the intersection points. This is called a _homoclinic orbit_ (Figure 6-13). Now why are these orbits important for chaos? The intersection of the stable and unstable manifolds of the Poincaré map leads to a horseshoe-type map in the vicinity of each homoclinic point. As we saw in Chapter 1, horseshoe-type maps lead to unpredictability, and unpredictability or sensitivity to initial conditions is a hallmark of chaos. To see why homoclinic orbits lead to horseshoe maps, we recall that for a dissipative system areas get mapped into smaller areas. However, near the unstable manifold, areas are also stretched. Because the total area must decrease, the area must contract more than it stretches. Areas near the homoclinic points also get folded, as shown in Figure 6-16\(a\). A dynamic process can be thought of as a transformation of phase space; that is, a volume of points representing different possible initial conditions is transformed into a distorted volume at a later time. Regular flow results when the transformed volume retains a conventional shape.

Figure 6-14: Stable and unstable manifolds for the motion of an unforced, damped pendulum.
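The loss of information under repeated stretching and folding can be caricatured by the simplest baker's-type map, \(x_{n+1}=2x_{n}\ (\mathrm{mod}\ 1)\), which stretches the unit interval by a factor of two and folds it back on itself. The sketch below (an illustration, not from the original text) follows two initial conditions that agree to nine decimal places:

```python
def double_mod_one(x):
    """One stretch-and-fold step: stretch by 2, fold back into [0, 1)."""
    return (2.0 * x) % 1.0

def circle_distance(a, b):
    """Distance on the folded interval (a circle of circumference 1)."""
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

x, y = 0.1, 0.1 + 1e-9   # indistinguishable at nine-decimal precision
for n in range(28):
    x, y = double_mod_one(x), double_mod_one(y)

print(circle_distance(x, y))   # about 0.27: the initial agreement is gone
```

Each step doubles the separation, so nine digits of initial agreement are exhausted in roughly \(\log_{2}10^{9}\approx 30\) iterations; this is the finite-precision loss of predictability described above.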
Chaotic flows result when the volume is stretched, contracted, and folded as in the baker's transformation or _horseshoe_ map.

##### The Melnikov Method

The Melnikov function is used to measure the distance between unstable and stable manifolds when that distance is small [see Guckenheimer and Holmes (1983) or Wiggins (1988, 1990) for a mathematical discussion of the Melnikov method]. It has been applied to problems where the dissipation is small and where the equations for the manifolds of the zero-dissipation problem are known. For example, suppose we consider the forced motion of a nonlinear oscillator where (\(q\), \(p\)) are the generalized coordinate and momentum variables. We assume that both the damping and forcing are small and that we can write the equations of motion in the form \[\begin{array}{l}\dot{q}=\frac{\partial H}{\partial p}+\varepsilon g_{1}\\ \dot{p}=-\frac{\partial H}{\partial q}+\varepsilon g_{2}\end{array}\] (6-3.18)

Figure 6-15: Sketch of stable and unstable manifolds of the Poincaré map for the harmonically forced, damped pendulum.

Figure 6-16: (_a_) The development of a folded horseshoe map for points in the neighborhood of a homoclinic orbit. (_b_) Saddle point of a Poincaré map and its associated stable and unstable manifolds before a homoclinic orbit develops.

where \({\bf g}={\bf g}(q,p,t)=(g_{1},g_{2})\), \(\varepsilon\) is a small parameter, and \(H(q,p)\) is the Hamiltonian for the undamped, unforced problem (\(\varepsilon=0\)). We also assume that \({\bf g}(t)\) is periodic, so that \[{\bf g}(t+T)={\bf g}(t)\] (6-3.19) and that the motion takes place in a three-dimensional phase space (\(q\), \(p\), \(\omega t\)), where \(\omega t\) is the phase of the periodic force and is taken modulo the period \(T\). In many nonlinear problems, a saddle point exists in the unperturbed Hamiltonian problem [\(\varepsilon=0\) in Eq.
(6-3.18)], such as for the pendulum or the double-well potential Duffing's equation, Eq. (6-2.2). When \(\varepsilon\neq 0\), one can take a Poincaré section of the three-dimensional torus flow synchronized with the phase \(\omega t\). It has been shown (see Guckenheimer and Holmes, 1983) that the Poincaré map also has a saddle point, with stable and unstable manifolds \(W^{s}\) and \(W^{u}\), shown in Figure 6-16\(b\). The Melnikov function provides a measure of the separation between \(W^{s}\) and \(W^{u}\) as a function of the phase of the Poincaré map \(\omega t\). This function is given by the integral \[M(t_{0})=\int_{-\infty}^{\infty}{\bf g}^{*}\cdot\nabla H(q^{*},p^{*})\,dt\] (6-3.20) where \({\bf g}^{*}={\bf g}(q^{*},p^{*},t+t_{0})\) and \(q^{*}(t)\) and \(p^{*}(t)\) are the solutions for the unperturbed homoclinic orbit originating at the saddle point of the Hamiltonian problem. The variable \(t_{0}\) is a measure of the distance along the original unperturbed homoclinic trajectory in the phase plane. We consider two examples. _Magnetic Pendulum_. A convenient experimental model of a pendulum may be found in the rotary dynamics of a magnetic dipole in crossed steady and time-periodic magnetic fields, as shown in Figure 4-6 (see also Moon et al., 1987). The equation of motion, when normalized, is given by \[\ddot{\theta}+\gamma\dot{\theta}+\sin\theta=f_{1}\cos\theta\cos\omega t+f_{0}\] (6-3.21) The \(\sin\theta\) term is produced by the steady magnetic field, and the \(f_{1}\) term is produced by the dynamic field. We have also included linear damping and a constant torque \(f_{0}\). In keeping with the assumptions of the theory, we assume that one can write \(\gamma=\varepsilon\overline{\gamma}\), \(f_{0}=\varepsilon\overline{f}_{0}\), and \(f_{1}=\varepsilon\overline{f}_{1}\), where \(0<\varepsilon\ll 1\) and \(\overline{\gamma}\), \(\overline{f}_{0}\), and \(\overline{f}_{1}\) are of order one.
The Hamiltonian for the undamped, unforced problem is given by \[H=\frac{1}{2}v^{2}+(1-\cos\theta)\] where \(q\equiv\theta\) and \(p\equiv v=\dot{\theta}\). The energy \(H\) is constant (\(H=2\)) on the homoclinic orbit emanating from the saddle point (\(\theta=\pm\pi\), \(v=0\)). The unperturbed homoclinic orbit is given by \[\begin{array}{l}\theta^{\star}=2\tan^{-1}(\sinh t)\\ v^{\star}=2\,{\rm sech}\,t\end{array}\] (6-3.22) In Eq. (6-3.18), \(g_{1}=0\) and \(g_{2}=-\overline{\gamma}v+\overline{f}_{0}+\overline{f}_{1}\cos\theta\cos\omega t\). The resulting integral can be carried out exactly using contour integration [e.g., see Guckenheimer and Holmes (1983) for a similar example]. The result gives \[M(t_{0})=-8\overline{\gamma}+2\pi\overline{f}_{0}+2\pi\overline{f}_{1}\omega^{2}{\rm sech}\left(\frac{\pi\omega}{2}\right)\cos\omega t_{0}\] (6-3.23) The two perturbed manifolds will touch transversely when \(M(t_{0})\) has a simple zero, or when \[f_{1}>\left|\frac{4\gamma}{\pi}-f_{0}\right|\frac{\cosh(\pi\omega/2)}{\omega^{2}}\] (6-3.24) where we have canceled the \(\varepsilon\) factors. When \(f_{0}=0\), the critical value of the forcing torque is given by \[f_{1c}=\frac{4\gamma}{\pi\omega^{2}}\cosh\left(\frac{\pi\omega}{2}\right)\] (6-3.25) This function is plotted in Figure 6-6 along with experimental and numerical simulation data. The criterion (6-3.25) gives a remarkably good lower bound on the regions of chaos in the forcing amplitude-frequency plane. _Two-Well Potential Problem_. Forced motion of a particle in a two-well potential has numerous applications, such as the postbuckling behavior of a buckled elastic beam (Moon and Holmes, 1979).
Damped, periodically forced oscillations can be described by a Duffing-type equation \[\ddot{x}+\gamma\dot{x}-x+x^{3}=f\cos\omega t\] (6-3.26) The Hamiltonian for the unperturbed problem is \[H(x,v)=\frac{1}{2}\left(v^{2}-x^{2}+\frac{1}{2}x^{4}\right)\] For \(H=0\), there are two homoclinic orbits originating and terminating at the saddle point at the origin. The variables \(x^{\star}\) and \(v^{\star}\) take on values along the right half-plane curve given by \[x^{\star}=\sqrt{2}\,{\rm sech}\,t\quad\text{and}\quad v^{\star}=-\sqrt{2}\,{\rm sech}\,t\,\tanh t\] In this problem, \(g_{1}=0\) and \(g_{2}=\widehat{f}\cos\omega t-\overline{\gamma}v\), where \(\gamma=\varepsilon\overline{\gamma}\) and \(f=\varepsilon\widehat{f}\) as in the previous example. The Melnikov function (6-3.20) then takes the form \[M(t_{0})=-\sqrt{2}\widehat{f}\int_{-\infty}^{\infty}{\rm sech}\,t\,\tanh t\,\cos\omega(t+t_{0})\,dt-2\overline{\gamma}\int_{-\infty}^{\infty}{\rm sech}^{2}t\,\tanh^{2}t\,dt\] which can be integrated exactly using methods of contour integration. The solution was originally found by Holmes (1979), but an error crept into his paper. The correct analysis is in Guckenheimer and Holmes (1983): \[M(t_{0})=-\frac{4\overline{\gamma}}{3}-\sqrt{2}\,\widehat{f}\pi\omega\,{\rm sech}\,\frac{\pi\omega}{2}\sin\omega t_{0}\] For a simple zero we require \[f>\frac{4\gamma}{3}\,\frac{\cosh(\pi\omega/2)}{\sqrt{2}\,\pi\omega}\] (6-3.27) This lower bound on the chaotic region in (\(f\), \(\omega\), \(\gamma\)) space has been verified in experiments by Moon (1980a) (see also Figures 6-2 and 6-3).

Figure 6-17: (_a_) One-well potential problem with escape barrier. (_b_) Phase plane portrait of unforced motion of a particle in a one-well potential. (_c_) Homoclinic orbit criterion (6-3.28b) for the one-well potential problem (6-3.28a) (from Thompson, 1989b).
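The two coefficients in this Melnikov function can be checked by direct quadrature along the unperturbed homoclinic orbit, without contour integration. The sketch below (illustrative, not from the original text) uses plain trapezoidal quadrature on a truncated window, which is adequate because the integrands decay like \({\rm sech}\,t\); it reproduces \(\int{\rm sech}^{2}t\,\tanh^{2}t\,dt=2/3\) (whence the \(4\overline{\gamma}/3\) term) and \(\int{\rm sech}\,t\,\tanh t\,\sin\omega t\,dt=\pi\omega\,{\rm sech}(\pi\omega/2)\) (whence the forcing term).

```python
import math

def trapezoid(g, a, b, n):
    """Simple trapezoidal quadrature of g on [a, b] with n panels."""
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b))
    for i in range(1, n):
        s += g(a + i * h)
    return s * h

sech = lambda t: 1.0 / math.cosh(t)
omega = 1.0   # illustrative forcing frequency

# Damping part of the Melnikov integral: sech^2 t * tanh^2 t -> 2/3
damp = trapezoid(lambda t: sech(t)**2 * math.tanh(t)**2, -25.0, 25.0, 200000)

# Forcing part: sech t * tanh t * sin(omega t) -> pi*omega*sech(pi*omega/2)
force = trapezoid(lambda t: sech(t) * math.tanh(t) * math.sin(omega * t),
                  -25.0, 25.0, 200000)

exact_force = math.pi * omega * sech(math.pi * omega / 2.0)
print(damp, force, exact_force)
```

Requiring the oscillatory term to outweigh the constant damping term, \(\sqrt{2}\widehat{f}\pi\omega\,{\rm sech}(\pi\omega/2)>4\overline{\gamma}/3\), then gives exactly the criterion (6-3.27).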
Thompson and co-workers at University College, London, have used a one-well potential oscillator with a one-sided escape barrier: \[\ddot{x}+\beta\dot{x}+x-x^{2}=F\sin\omega t\] (6-3.28a) It is straightforward to show that the critical value of the force \(F\) as a function of frequency \(\omega\) for homoclinic tangency is approximately given by (Thompson, 1989b; Thompson et al., 1990) \[F=\frac{\beta\sinh\pi\omega}{5\pi\omega^{2}}\] (6-3.28b) This curve is plotted in Figure 6-17\(c\). Also shown in this figure are the escape regions (capsize of the ship). These data from Thompson (1989b) also show narrow bands of chaotic regions just below the escape or capsize regime. The solid curves show values of (\(F\), \(\omega\)) where a stable and an unstable solution coalesce (called a _fold_) and also where a period-doubling bifurcation occurs (shown as the _flip_ boundary). These results demonstrate that while the Melnikov criterion sometimes provides a lower bound for chaotic or complex motions, classical perturbation and bifurcation analysis may be useful to obtain more precise bounds on chaotic behavior [see Thompson and Stewart (1986) for a discussion of bifurcation theory and chaos].

#### Multiple Homoclinic Criteria: Three- and Four-Well Potentials

The homoclinic orbit criterion for a map is more easily applied if the underlying phase-space flow has homoclinic or heteroclinic orbits. The existence of such infinite-time orbits is usually associated with the presence of saddle points--that is, equilibrium points with both stable (inflow) trajectories and unstable (outflow) trajectories. However, when two or more saddles are present, there exists more than one mechanism for inflow and outflow trajectories in the Poincaré map to get tangled up. Thus, we can derive multiple criteria for chaos.
The implications of such phenomena are illustrated in two problems involving (a) a particle in a one-degree-of-freedom three-well potential and (b) a particle in a two-degree-of-freedom four-well potential. In each case, a small amount of periodic excitation is added along with a small amount of damping (see Li and Moon, 1990a,b). Experimentally, each case can be realized by placing three or four magnets below a steel cantilever beam as in Figure 4-1 (see Appendix B for a description of a two-well potential experiment). These problems represent multi-equilibrium systems when the excitation is absent.

### Three-Well Potential

The three-well problem can be modeled by an equation of the form \[\ddot{x}+\gamma\dot{x}+x(x^{2}-x_{0}^{2})(x^{2}-1)=f\cos\omega t\] (6-3.29) which can be derived from an appropriate sixth-order polynomial potential function. When \(f=0\), this problem possesses three stable equilibrium positions and two unstable ones (saddles). When the forcing is present (\(f\neq 0\)) and small, it can be demonstrated that the Poincaré map also has two saddle points.

Figure 6-18: (\(a\)–\(c\)) The intersections of stable and unstable manifolds of the saddle points of a Poincaré map based on numerical integrations of the forced three-well potential oscillator (6-3.29). [From Li and Moon (1990a).]

There are two ways in which one can get homoclinic orbits in the map: Unstable manifolds from each saddle may intersect stable manifolds of the same saddle point, or an unstable manifold from one saddle may intersect the stable manifold from the other saddle point of the map. This is illustrated in Figure 6-18 for numerical integration of Eq. (6-3.29). Criteria for chaos can then be found by procedures similar to those for the two-well potential problem, but with the aid of numerical computation (see Li and Moon, 1990a,b). An example is shown in Figure 6-19 as a function of driving amplitude and frequency (\(f\), \(\omega\)).
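The equilibrium count quoted above follows from the restoring force in (6-3.29): with \(f=0\), equilibria are the roots of \(x(x^{2}-x_{0}^{2})(x^{2}-1)=0\), i.e., \(x=0,\pm x_{0},\pm 1\), and each is a well or a saddle according to the sign of the stiffness (the derivative of the restoring force) there. The sketch below verifies this numerically; the value \(x_{0}=0.5\) is an illustrative choice, not taken from the original text.

```python
def force(x, x0=0.5):
    """Restoring force in (6-3.29); x0 = 0.5 is an illustrative choice."""
    return x * (x**2 - x0**2) * (x**2 - 1.0)

def stiffness(x, x0=0.5, h=1e-6):
    """Central-difference derivative of the restoring force, i.e., V''(x).
    Positive stiffness -> stable well (center); negative -> saddle."""
    return (force(x + h, x0) - force(x - h, x0)) / (2.0 * h)

x0 = 0.5
for xe in (0.0, x0, -x0, 1.0, -1.0):
    kind = "stable (well)" if stiffness(xe) > 0 else "unstable (saddle)"
    print(f"x = {xe:+.2f}: {kind}")
```

For any \(0<x_{0}<1\) the same count holds: wells at \(x=0,\pm 1\) and saddles at \(x=\pm x_{0}\), the three-stable/two-saddle structure the Poincaré map analysis starts from.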
The interpretation of this diagram is that below both criteria, the problem is predictable--that is, insensitive to small changes in initial conditions. Between the two criteria, there are regions in the initial condition space which are respectively sensitive and insensitive to small changes. Above both criteria, the problem is strongly sensitive to initial conditions. These results were confirmed by experimental observations as well as by numerical studies of basins of attraction (e.g., see color plates CP-7, 8).

### Four-Well Potential

The four-well potential problem with harmonic forcing and damping is similar to a particle on a roulette table, as the level curves of the potential function show (Figure 6-20). The dynamics of this problem takes place in a five-dimensional phase space, in contrast to the three-dimensional phase spaces of the two- and three-well potential problems presented above.

Figure 6-19: Two criteria for homoclinic orbits from the three-well potential oscillator (6-3.29). [From Li and Moon (1990a).]

The Poincaré map triggered on the forcing phase lives in a four-dimensional phase space, which defies the imagination. In this space the symmetric four-well problem has five saddles and many opportunities for entanglement of stable and unstable manifolds. To date we have no systematic way to determine all the possible ways of generating homoclinic orbits in this problem. Instead, we guess at two obvious mechanisms by analogy with the two-well and pendulum examples treated above. [The advanced reader should consult Wiggins (1988) for a treatment of homoclinic orbits in higher-dimensional phase spaces.]
The structure of the mathematical model for this problem is of the form (see Li and Moon, 1990b, and Li, 1987) \[\begin{array}{l}\ddot{x}+\gamma_{1}\dot{x}+\frac{\partial V(x,y)}{\partial x}=f_{1}\cos(\Omega t+\varphi_{0})\\ \\ \ddot{y}+\gamma_{2}\dot{y}+\frac{\partial V(x,y)}{\partial y}=f_{2}\cos\Omega t\end{array}\] (6-3.30) The two criteria are derived from a guess at two restricted classes of motions: radial motion through two of the wells and circumferential motion through four of the potential wells.

Figure 6-20: Level curves of a two-degree-of-freedom oscillator (6-3.30) in a four-well potential for the oscillator shown in Figure 4-1. [From Li (1987).]

Motion restricted to the radial direction is exactly similar to the two-well problem with one degree of freedom studied by Holmes (1979). Therefore, one obtains a criterion for homoclinic tangles in the Poincaré map similar to (6-3.27). Experimental observation of a cantilevered steel rod with four magnets below the end of the rod shows that sometimes the rod will exhibit circumferential motions through each of the wells. However, the precise orbit is not circular. But numerical calculations show that a nearly circular orbit is possible. Thus, we artificially restrict the radial motion (\(\dot{r}=0\), \(r=\sqrt{x^{2}+y^{2}}\)). The problem is then similar to a rotor or pendulum with four potential wells, and a criterion similar to (6-3.25) may be possible. Using these analogies, two criteria were derived numerically, as shown in Figure 6-21. Numerical and experimental observations seem to indicate that chaos results when the parameters (\(f_{1}\), \(\Omega\)) exceed both criteria. Basin boundary studies (see the color plate on this book's jacket) show increasing complexity as each criterion is crossed.
These two studies show that even in simple problems in Newtonian particle mechanics, extremely complex dynamic phenomena are possible; they also show that our knowledge to date is still extremely primitive, especially as regards phase spaces of dimension four or higher (see also Kittel et al., 1990).

### Shil'nikov Chaos

The above discussion shows how a homoclinic orbit in the Poincaré map can lead to a horseshoe map structure and eventually chaos. But what about homoclinic orbits in a flow described by a set of differential equations? For a 2-D phase plane, homoclinic or heteroclinic orbits cannot lead to chaos without periodic forcing. But what about homoclinic orbits in 3-D?

Figure 6-21: Two criteria for homoclinic orbits from the four-well potential oscillator (6-3.30). [From Li and Moon (1990b).]

In 1965, two years after the publication of Lorenz's paper on nonperiodic solutions for fluid convection problems, L. P. Shil'nikov of the Soviet Union proposed a theorem which suggests that the existence of a homoclinic orbit in a 3-D flow would imply the existence of nonperiodic trajectories. In this work he chose a system of three first-order nonlinear differential equations with a fixed point which is characterized by a saddle focus (see Chapter 1; also see Guckenheimer and Holmes, 1983). A saddle focus has eigenvalues (\(\rho\pm i\omega\), \(\lambda\)), as shown in Figure 6-22. Near the fixed point, trajectories spiral out or in on some 2-D surface characterized by \(\rho\pm i\omega\), or they approach or depart the fixed point in an exponential manner with time exponent \(\lambda\) along a direction transverse to the spiral surface. If the trajectory that spirals out (or in) eventually joins the trajectory coming into (or out of) the fixed point, then one has a homoclinic orbit. (Note, however, that from any point on this orbit it takes an infinite time forward or backward to reach the fixed point.)
Shil'nikov proposed a criterion for the existence of these nonperiodic orbits that are generated by the homoclinic orbit: \[|\rho/\lambda|\leq 1\] (6-3.31)

Figure 6-22: Homoclinic orbit in three-dimensional phase space generated by a saddle-focus fixed point.

Several experimental studies have been published which purport to have measured chaotic dynamics originating from a Shil'nikov homoclinic orbit. Argoul et al. (1987a) studied a continuous chemical flow reactor for the Belousov-Zhabotinski reaction, and Bassett and Hudson (1988) performed an experiment on the electrodissolution of a rotating copper disk in a H\({}_{2}\)SO\({}_{4}\)/NaCl solution. In both papers the experimental results were compared to a third-order model of the form \[\begin{array}{l}\dot{x}=y\\ \dot{y}=z\\ \dot{z}=-\eta z-\nu y-\mu x-k_{1}x^{2}-k_{2}y^{2}-k_{3}xy-k_{4}xz-k_{5}x^{2}z\end{array}\] (6-3.32) For example, in Argoul et al. (1987a) the following parameters are chosen: \(k_{1}=-1\), \(k_{2}=1.425\), \(k_{3}=0\), \(k_{4}=-0.2\), \(k_{5}=0.01\). This system has two equilibrium points, one at the origin and the other at \((x,y,z)=(-\mu/k_{1},0,0)\). For \(\eta=1.3\) and \(\mu\geq 1.3\), a spiral-type strange attractor can be found, as shown in Figure 6-23, which is qualitatively similar to that obtained from the experiments using a reconstructed phase space with variables \(C(t)\), \(C(t+T)\), \(C(t+2T)\), where \(C\) represents the concentration of Ce in the continuously stirred tank reactor.

Figure 6-23: Shil'nikov-type strange attractor based on numerical integration of a model for a chemical flow reactor (6-3.32). (\(\eta\), \(\nu\), \(\mu\)) = (1, 1.3, 1.38). [From Argoul et al. (1987a).]

The Poincaré map is obtained by intersecting the attractor with a plane in the 3-D space and shows a linear structure which suggests the use of a 1-D return map.
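The saddle-focus character of the origin in (6-3.32) can be checked by linearization: the nonlinear terms vanish there, so the eigenvalues are the roots of \(s^{3}+\eta s^{2}+\nu s+\mu=0\). The sketch below (illustrative; it assumes the parameter values \((\eta,\nu,\mu)=(1,1.3,1.38)\) quoted in the Figure 6-23 caption) finds the real eigenvalue by bisection, deflates to a quadratic for the complex pair \(\rho\pm i\omega\), and evaluates the Shil'nikov ratio in (6-3.31).

```python
import math

eta, nu, mu = 1.0, 1.3, 1.38   # values quoted in the Figure 6-23 caption

def p(s):
    """Characteristic polynomial of (6-3.32) linearized at the origin."""
    return s**3 + eta * s**2 + nu * s + mu

# Bisection for the real eigenvalue: p(-2) < 0 < p(0)
a, b = -2.0, 0.0
for _ in range(100):
    m = 0.5 * (a + b)
    if p(a) * p(m) <= 0:
        b = m
    else:
        a = m
lam = 0.5 * (a + b)            # real eigenvalue (negative: transverse inflow)

# Deflate: s^3 + eta*s^2 + nu*s + mu = (s - lam)(s^2 + B*s + C)
B = eta + lam
C = -mu / lam
rho = -B / 2.0                 # real part of the complex pair (weakly positive)
omega = math.sqrt(C - B * B / 4.0)

print(lam, rho, omega, abs(rho / lam))
```

For these values the transverse direction contracts strongly while the spiral expands only weakly, so \(|\rho/\lambda|\ll 1\) and the Shil'nikov condition is comfortably satisfied.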
In this numerical simulation of (6-3.32), the 1-D map derived from the Poincaré section is an interesting _multibranched map_ (Figure 6-24), which is also obtained from the experiments. Each branch is labeled with an integer and represents the number of turns the trajectory orbits around the saddle focus between two successive Poincaré map times. Argoul et al. (1987a) also proposed a scaling law for the distance between two successive branches: \[\lim_{n\to\infty}\frac{X^{(n+1)}-X^{(n)}}{X^{(n)}-X^{(n-1)}}=\exp(-2\pi\rho/\omega)=\delta\approx 0.8\] (6-3.33) The dynamics can thus be described in terms of an infinite set of symbols (each representing the number of turns around the saddle focus between mapping times). There have also been claims and counterclaims about Shil'nikov chaos in laser dynamics (e.g., see Arecchi et al., 1987, and Swetits and Buoncristiani, 1988).

Figure 6-24: Multibranched Poincaré map of a Shil'nikov-type strange attractor based on numerical integration of a model for a chemical flow reactor (6-3.32). [From Argoul et al. (1987a).]

### Intermittent and Transient Chaos

Thus far we have discussed what one might call "steady-state" chaotic vibration. Two other forms of unpredictable, irregular motion are intermittency and transient chaos. In the former, bursts of chaotic or noisy motion occur between periods of regular motion (see Figure 6-25). Such behavior was even observed by Reynolds in pipe flow preturbulence experiments in 1883 (see Sreenivasan, 1986). Transient chaos is also observed in some systems as a precursor to steady-state chaos. For certain initial conditions, the system may behave in a randomlike way, with the trajectory moving in phase space as if it were on a strange attractor. However, after some time, the motion settles onto a regular attractor such as a periodic vibration.
Scaling properties of nonlinear motion can sometimes be used to determine experimentally a critical parameter for these two types of chaotic motion. In the case of intermittency, where the dynamic system is close to a periodic motion but experiences short bursts of chaotic transients, an explanation of this behavior has been posited by Manneville and Pomeau (1980) in terms of one-dimensional maps or difference equations. From numerical experiments on maps, the mean time duration of the periodic motion between chaotic bursts \(\langle\tau\rangle\) has been found to be \[\langle\tau\rangle\sim\frac{1}{|\lambda-\lambda_{c}|^{1/2}}\] (6-3.34) where \(\lambda\) is a control parameter (e.g., fluid velocity, forcing amplitude, or voltage) and \(\lambda_{c}\) is the value at which a chaotic motion occurs. As \(\lambda-\lambda_{c}\) increases, the chaotic time interval increases and the periodic interval decreases. Thus, one might call this _creeping chaos_.

Figure 6-25: Sketch of intermittent chaotic motion.

To measure \(\lambda_{c}\) experimentally, one must measure two average times \(\langle\tau\rangle_{1}\) and \(\langle\tau\rangle_{2}\) at corresponding values of the control parameter, that is, \(\lambda_{1}\) and \(\lambda_{2}\). This should determine the proportionality constant in Eq. (6-3.34) as well as \(\lambda_{c}\). Having obtained a candidate value for \(\lambda_{c}\), however, one should then measure other values of (\(\langle\tau\rangle\), \(\lambda\)) to validate the scaling relation (6-3.34). The case of transient chaos has been studied by Grebogi et al. (1983a,b, 1985b) of the University of Maryland in a series of papers describing numerical experiments on two-dimensional maps.
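The two-measurement procedure just described amounts to eliminating the proportionality constant from (6-3.34): squaring \(\langle\tau\rangle=C|\lambda-\lambda_{c}|^{-1/2}\) at both settings gives \(\tau_{1}^{2}(\lambda_{c}-\lambda_{1})=\tau_{2}^{2}(\lambda_{c}-\lambda_{2})\), which solves for \(\lambda_{c}\) directly. The sketch below is illustrative only; the numbers \(C=5\) and \(\lambda_{c}=2\) are hypothetical and are used to generate synthetic "measurements" from the scaling law itself.

```python
def lambda_critical(lam1, tau1, lam2, tau2):
    """Eliminate C from <tau> = C / |lam - lam_c|**0.5 using two
    measurements (lam1, tau1) and (lam2, tau2)."""
    return (tau1**2 * lam1 - tau2**2 * lam2) / (tau1**2 - tau2**2)

# Synthetic data from the scaling law, with hypothetical C = 5, lam_c = 2:
C, lam_c = 5.0, 2.0
lam1, lam2 = 1.90, 1.99
tau1 = C / abs(lam1 - lam_c)**0.5
tau2 = C / abs(lam2 - lam_c)**0.5

print(lambda_critical(lam1, tau1, lam2, tau2))   # recovers lam_c = 2.0
```

With real data the recovered \(\lambda_{c}\) is only a candidate; as the text cautions, further (\(\langle\tau\rangle\), \(\lambda\)) pairs should be checked against the square-root scaling before it is accepted.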
In one study (1983), they investigated a two-dimensional extension of the one-dimensional quadratic difference equation called the _Henon map_ (see also Section 1.3): \[\begin{array}{l}x_{n+1}=1-\alpha x_{n}^{2}+y_{n}\\ y_{n+1}=-Jx_{n}\end{array}\] where \(J\) is the determinant of the Jacobian matrix, which controls the amount of area contraction of the map. In the Maryland group's research on transient chaos, the case of \(J=-0.3\) with the parameter \(\alpha\) varied was investigated. For example, for \(\alpha>\alpha_{0}=1.062371838\), a period-6 orbit gave birth to a six-piece strange attractor that exists in the region \[\alpha_{0}<\alpha<\alpha_{c}=1.080744879\] For \(\alpha>\alpha_{c}\), the orbit under the iteration of the Henon map was found to wander around the ghost of the strange attractor in the \(x\)-\(y\) plane, sometimes for over \(10^{3}\) iterations, before settling onto a period-4 motion. They also discovered that the average time for the transient chaos \(\langle\tau\rangle\) followed a scaling law: \[\langle\tau\rangle\sim(\alpha-\alpha_{c})^{-1/2}\] (6-3.35) The average was found by choosing \(10^{2}\) initial conditions for each choice of \(\alpha\). The initial conditions were chosen in the original basin of attraction of the defunct strange attractor. These transients can be very long. For example, in the case of the Henon map, Grebogi and co-workers found \(\langle\tau\rangle\approx 10^{4}\) for \(\alpha-\alpha_{c}=5\times 10^{-7}\) and \(\langle\tau\rangle\approx 10^{3}\) for \(\alpha-\alpha_{c}=10^{-5}\).

### Chirikov's Overlap Criterion for Conservative Chaos

The study of chaotic motions in conservative systems (no damping) predates the current interest in chaotic dissipative systems.
Because the practical application of conservative dynamical systems is limited to areas such as planetary mechanics, plasma physics, and accelerator physics, engineers have not followed this field as closely as other advances in nonlinear dynamics. In this section, we focus on the bouncing ball chaos described in Chapter 4 (Figure 4-11). However, the resulting difference equations are relevant to the behavior of coupled nonlinear oscillators (e.g., see Lichtenberg and Lieberman, 1983) as well as to the behavior of electrons in electromagnetic fields. The equations for the impact of a mass, under gravity, on a vibrating table are given by (4-19); with a change of variables, these become (see also Section 3.4) \[\begin{array}{l} {\upsilon_{n+1}}={\upsilon_{n}}+K\sin\varphi_{n}\\ {\varphi_{n+1}}={\varphi_{n}}+{\upsilon_{n+1}}\end{array}\] (6-3.36) where \(\upsilon_{n}\) is the velocity before impact and \(\varphi_{n}\) is the time of impact scaled by the frequency of the table (i.e., \(\varphi=\omega t\ \mathrm{mod}\ 2\pi\)). \(K\) is proportional to the amplitude of the vibrating table in Figure 4-11. These equations differ from those in (4-19) by the assumption that there is no energy loss on impact. This implies that regions of initial conditions in the phase space (\(\upsilon\), \(\varphi\)) preserve their area under multiple iteration of the map (6-3.36). Orbits in the (\(\upsilon\), \(\varphi\)) plane for different initial conditions are shown in Figure 6-26 for two different values of \(K\). Consider the case of \(K=0.6\). The dots at \(\upsilon=0\), \(2\pi\) correspond to period-1 orbits; that is, \[\begin{array}{l} {\upsilon_{1}}={\upsilon_{1}}+K\sin\varphi_{1}\\ {\varphi_{1}}={\varphi_{1}}+{\upsilon_{1}}\end{array}\] whose solution is given by \(\varphi_{1}=0\), \(\pi\), \(\upsilon_{1}=0\) (both \(\mathrm{mod}\ 2\pi\)). The solution near \(\varphi=\pi\) is stable for \(|2-K|<2\).
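The map (6-3.36) is easy to explore numerically. A sketch that iterates it in the two regimes shown in Figure 6-26 (the starting points are arbitrary choices for illustration): near the stable point the velocity stays in a small bounded libration, while for \(K=1.2\) an orbit started near the unstable point wanders over the stochastic layer and reaches velocities of order the resonance width:

```python
import math

def standard_map(v, phi, K, n):
    """Iterate (6-3.36): v' = v + K sin(phi), phi' = phi + v' (mod 2 pi)."""
    vs = []
    for _ in range(n):
        v = v + K * math.sin(phi)
        phi = (phi + v) % (2 * math.pi)
        vs.append(v)
    return vs

# Quasiperiodic orbit near the stable point (phi = pi) for K = 0.6:
v_reg = standard_map(0.0, math.pi + 0.1, K=0.6, n=2000)
# Orbit started near the unstable point (phi = 0) for K = 1.2:
v_cha = standard_map(0.0, 0.1, K=1.2, n=20000)
```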
The solution near \(\varphi=0\), \(2\pi\), however, can be shown to be unstable, since \(|2+K|>2\) for all \(K>0\); these points are saddle points of the map. Near \(\upsilon=\pi\) one can see a period-2 orbit given by the solution to \[\begin{array}{l} {\upsilon_{2}}={\upsilon_{1}}+K\sin\varphi_{1}\,,\qquad{\varphi_{2}}=\varphi_{1}+{\upsilon_{2}}\\ {\upsilon_{1}}={\upsilon_{2}}+K\sin\varphi_{2}\,,\qquad{\varphi_{1}}=\varphi_{2}+{\upsilon_{1}}\end{array}\] Again one can show that there are both stable and unstable period-2 points. One can also show that the stable points exist as long as \(K<2\). The rest of the continuous-looking orbits in Figure 6-26 represent quasiperiodic solutions where the ball impact frequency is incommensurate with the driving period. Figure 6-26: (_a_) Poincaré map for elastic motion of a ball on a vibrating table (standard map) for the parameter \(K=0.6\) in Eq. (6-3.36) showing periodic and quasiperiodic orbits. (_b_) The case of \(K=1.2\) showing the appearance of stochastic orbits. Finally, a third type of motion is present in Figure 6-26\(b\) (\(K=1.2\)). Here we see a diffuse set of dots near where the saddle points and the saddle separatrices used to exist. This diffuse set of points represents _conservative chaos_. For \(K<1\), it is localized around the saddle points. However, for \(K>1\), this wandering orbit becomes global in nature. (See also Figures 1-13, 3-35.) The reader should note that in Figure 6-26 (\(K=0.6\)) one can obtain all types of motion by simply choosing different initial conditions (because there is no damping, there are no attractors). A criterion for global chaos in this system was proposed by the Soviet physicist Chirikov (1979). He observed that as \(K\) is increased, the vertical distance between the separatrices associated with the period-1 and period-2 motions decreased. If chaos did not intervene, these separatrices would overlap (Figure 6-27)--thus the name _overlap criterion_.
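The stability statements above follow from the tangent (linearized) map: differentiating (6-3.36) gives the matrix \(\left[\begin{array}{cc}1&K\cos\varphi\\ 1&1+K\cos\varphi\end{array}\right]\), which has determinant 1 (area-preserving) and trace \(2+K\cos\varphi_{1}\); a period-1 point is stable when the eigenvalues lie on the unit circle, i.e., when the trace has magnitude less than 2. A quick numerical check (a sketch):

```python
import math
import cmath

def tangent_eigs(K, phi):
    """Eigenvalues of the standard-map tangent matrix
    [[1, K cos(phi)], [1, 1 + K cos(phi)]]; its determinant is exactly 1."""
    tr = 2.0 + K * math.cos(phi)
    disc = cmath.sqrt(tr * tr - 4.0)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

K = 0.6
eig_elliptic = tangent_eigs(K, math.pi)  # stable point: eigenvalues on unit circle
eig_saddle = tangent_eigs(K, 0.0)        # unstable point: real, one beyond unity
```

At \(\varphi_{1}=\pi\) the trace is \(2-K\), reproducing the stability window \(|2-K|<2\) quoted in the text; at \(\varphi_{1}=0\) it is \(2+K\), so the point is always a saddle.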
If one performs a small-\(K\) analysis of the standard map (6-3.36) near one of these periodic resonances, the size of each separatrix region is found to be \[\begin{array}{l} {\Delta_{1} = 4K^{1/2}} \\ {\Delta_{2} = K} \\ \end{array}\] Each analysis ignores the effect of the other resonance. Figure 6-27: Sketch of period-1 and period-2 orbits and concomitant quasiperiodic orbits for the standard map used in the derivation of Chirikov's criterion. The condition for overlap is that \(\Delta_{1}+\Delta_{2}=2\pi\), or \[4K_{c}^{1/2}+K_{c}=2\pi\] (6-3.38) The solution to this equation is \(K_{c}=1.46\). This value overestimates the critical value of \(K=K_{c}\) for global chaos, which is found numerically to be around \(K_{c}\approx 1.0\). The reader is referred to Lichtenberg and Lieberman (1983) for further details concerning the overlap criterion. The more practical-minded reader might ask: _What happens when we have a small amount of damping present_? For that case, some of the multiperiod subharmonics become attractors, and the ellipses surrounding these attractors become spirals that limit onto the periodic motions. _What of the conservative chaos_? Initial conditions in regions where there was conservative chaos become long chaotic transients which wander around phase space before settling into a periodic motion. _And what about real chaotic motions_? When damping is present, one needs a much larger force, \(K>6\), for which a fractal-like strange attractor appears (see Figure 4-11). Thus, the overlap criterion discussed above is only useful for strictly conservative, Hamiltonian systems.

### Criteria for a Multiple-Well Potential

In this section, we describe an ad hoc criterion for chaotic oscillations in problems with multiple potential energy wells. Such problems include the buckled beam (Chapter 2) and a magnetic dipole motor with multiple poles.
In solid-state physics, interstitial atoms in a regular lattice can have more than one equilibrium position. Often the forces that create such problems can be derived from a potential. Let \(\{q_{i}\}\) be a set of generalized coordinates and \(V(q_{i})\) be the potential associated with the conservative part of the force, such that \(-\partial V/\partial q_{j}\) is the generalized force associated with the \(q_{j}\) degree of freedom. For one degree of freedom, a special case might have the following equation of motion: \[\ddot{q}+\gamma\dot{q}+\frac{\partial V}{\partial q}=f\cos\omega t\] (6-3.39) where linear damping and periodic forcing have been added. \(V(q_{i})\) has as many local minima as stable equilibrium positions, as shown in Figure 6-28. For small periodic forcing, the system oscillates periodically in one potential well. But for larger forcing, the motion "spills over" into other wells and chaos often results. This criterion then seeks to determine _what value of the forcing amplitude will cause the periodic motion in one well to jump into another well_. To illustrate the method, consider the particle in a two-well symmetric potential (i.e., the buckled beam problem of Chapter 2): \[\ddot{q}+\gamma\dot{q}-\tfrac{1}{2}q(1-q^{2})=f\cos\omega t\] (6-3.40) Because we are seeking a criterion that governs the transition from periodic to chaotic motion, we use standard perturbation theory to find a relation between the amplitude of forced motion \(\langle q^{2}\rangle\) (where \(\langle\;\rangle\) indicates a time average) and the parameters \(\gamma\), \(f\), and \(\omega\). We then try to find a _critical_ value of \(\langle q^{2}\rangle\equiv A_{c}\) independent of the forcing amplitude; that is, \[\langle q^{2}\rangle=g(\gamma,\omega,f)=A_{c}(\omega)\] (6-3.41) The left-hand equality in Eq. (6-3.41) is found using classical perturbation theory, whereas the right-hand equality is based on a heuristic postulate.
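The single-well regime described above is easy to exhibit numerically. A sketch integrating Eq. (6-3.40) with a small forcing amplitude (the values \(\gamma=0.25\), \(f=0.05\), \(\omega=1\) are illustrative choices, not from the text); started at the right-hand equilibrium \(q=1\), the motion stays confined to that well:

```python
import math

def step(q, v, t, dt, gamma, f, w):
    """One RK4 step of q'' + gamma q' - (1/2) q (1 - q^2) = f cos(w t)."""
    def acc(q, v, t):
        return -gamma * v + 0.5 * q * (1.0 - q * q) + f * math.cos(w * t)
    k1q, k1v = v, acc(q, v, t)
    k2q, k2v = v + dt/2*k1v, acc(q + dt/2*k1q, v + dt/2*k1v, t + dt/2)
    k3q, k3v = v + dt/2*k2v, acc(q + dt/2*k2q, v + dt/2*k2v, t + dt/2)
    k4q, k4v = v + dt*k3v, acc(q + dt*k3q, v + dt*k3v, t + dt)
    return (q + dt/6*(k1q + 2*k2q + 2*k3q + k4q),
            v + dt/6*(k1v + 2*k2v + 2*k3v + k4v))

q, v, t, dt = 1.0, 0.0, 0.0, 0.01
qs = []
for _ in range(50_000):          # 500 time units of forced motion
    q, v = step(q, v, t, dt, gamma=0.25, f=0.05, w=1.0)
    t += dt
    qs.append(q)
```

Raising \(f\) in this same loop toward the critical value derived below eventually produces cross-well excursions (\(q\) changing sign), the hallmark of the "spill over" described in the text.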
To carry out this program for the two-well potential, we must write Eq. (6-3.40) in coordinates centered about one of the equilibrium positions: \[\eta = q - 1\] Figure 6-28: Multiple-well potential energy function and associated phase plane. To obtain a perturbation parameter, we write \(\eta = \mu X\), so that the equation of motion takes the form \[\ddot{X} + \gamma\dot{X} + X(1 + \tfrac{3}{2}\mu X + \tfrac{1}{2}\mu^{2}X^{2}) = \frac{f}{\mu}\cos(\omega t + \phi_{0})\] (6-3.42) The phase angle \(\phi_{0}\) is adjusted so that the first-order motion is proportional to \(\cos\omega t\). The resulting periodic motion for small \(f\) is assumed to take the form \[X = C_{0}\cos\omega t + \mu(C_{1} + C_{2}\cos\omega t) + \mu^{2}X_{1}(t)\] (6-3.43) Using either Duffing's method or Lindstedt's perturbation method (e.g., see Stoker, 1950), the resulting amplitude-force relation can be found to be \[(\mu C_{0})^{2}\{[(1 - \omega^{2}) - \tfrac{3}{2}(\mu C_{0})^{2}]^{2} + \gamma^{2}\omega^{2}\} = f^{2}\] (6-3.44) Based on numerical experiments, we postulate the existence of a _critical velocity_. We propose that chaos is imminent when the maximum velocity of the motion is near the maximum velocity on the separatrix in the phase plane of the undamped, unforced oscillator. In terms of the original variables, this criterion becomes (see Figure 6-29) \[\mu C_{0} = \frac{\alpha}{2\omega}\] (6-3.45) where \(\alpha\) is close to unity. Substituting Eq. (6-3.45) into Eq. (6-3.44), we obtain a lower bound on the criterion for chaotic oscillations: \[f_{c} = \frac{\alpha}{2\omega}\left\{\left[\left(1 - \omega^{2}\right) - \frac{3\alpha^{2}}{8\omega^{2}}\right]^{2} + \gamma^{2}\omega^{2}\right\}^{1/2}\] (6-3.46) This expression has been checked against experiments by the author (Moon, 1980a), and a factor of \(\alpha \approx 0.86\) seemed to give excellent agreement with experimental chaos boundaries as shown in an earlier figure (Figure 6-2).
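Equation (6-3.46) gives the chaos boundary directly as a function of frequency. A sketch evaluating it with the empirical \(\alpha=0.86\) (the damping value is an illustrative choice):

```python
import math

def f_crit(omega, gamma, alpha=0.86):
    """Lower-bound forcing amplitude for cross-well chaos, Eq. (6-3.46)."""
    detune = (1.0 - omega ** 2) - 3.0 * alpha ** 2 / (8.0 * omega ** 2)
    return (alpha / (2.0 * omega)) * math.sqrt(detune ** 2 + (gamma * omega) ** 2)

# Trace out a chaos boundary in the (omega, f) plane for one damping value
boundary = [(w, f_crit(w, gamma=0.1)) for w in (0.6, 0.8, 1.0, 1.2)]
```

Note that larger damping \(\gamma\) raises the boundary, consistent with the intuition that dissipation delays the escape from one well.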
For low damping, this criterion gives a much better bound than does the homoclinic orbit criterion using the Melnikov function. As illustrated in Figure 6-29, this criterion is similar to the Chirikov overlap criterion--namely, that chaos results when a regular motion becomes too large. The method outlined in this section has also been used on a three-well potential problem, (6-3.29), and has been tested successfully in experiments on a vibrating beam with three equilibria by Li (1984).

### Criteria Derived from Classical Perturbation Analysis

The novitiate to the field of nonlinear dynamics may be misled by the current interest in chaos to conclude that the field lay dormant in the prechaos era. However, a large literature exists describing (a) mathematical perturbation methods for calculating primary and subharmonic resonances and (b) the stability characteristics of solutions to nonlinear systems (e.g., see Nayfeh and Mook, 1979). Thus, it is no surprise that studies are beginning to emerge that attempt to use the more classical analyses in the effort to find criteria for chaotic motion. For example, Nayfeh and Khdeir (1986) use perturbation techniques to predict the occurrence of period-doubling or period-tripling bifurcations as precursors to chaotic oscillations of ships in regular sea waves (see also Chapter 4, Eq. (4-2.17)). In another study, Szemplinska-Stupnicka and Bajkowski (1986) have studied the Duffing oscillator of Ueda [Eq. (4-6.1)]. Figure 6-29: Overlap criteria for a multiple-well problem using semiclassical analytic methods. They found subharmonic solutions using perturbation techniques and link the onset of chaos to the loss of stability of the subharmonics using classical stability analysis. They use analog computer experiments to check their results. They conclude that for the Duffing-Ueda attractor [Eq. (4-6.1)], the chaotic motion is a transition zone between the subharmonic and resonant harmonic solutions. See Szemplinska-Stupnicka (1992).
Although the author believes that the fundamental nature of chaotic motion is more closely related to such mathematical paradigms as horseshoe maps, fractals, and homoclinic orbits, the use of semiclassical methods of perturbation analysis may provide more practical analytic chaos criteria for certain classes of nonlinear systems.

### Lyapunov exponents

Thus far we have discussed mainly predictive criteria for chaos. Here we describe a tool for _diagnosing_ whether or not a system is chaotic. Chaos in deterministic systems implies a sensitive dependence on initial conditions. This means that if two trajectories start close to one another in phase space, they will move exponentially away from each other for small times on the average. Thus, if \(d_{0}\) is a measure of the initial distance between the two starting points, at a small but later time the distance is \[d(t)=d_{0}2^{\lambda t}\] (6-4.1) If the system is described by difference equations or a map, we have \[d_{n}=d_{0}2^{\Lambda n}\] (6-4.2) [The choice of base 2 in Eqs. (6-4.1) and (6-4.2) is convenient but arbitrary.] The symbols \(\lambda\) and \(\Lambda\) are called _Lyapunov exponents_.1 Footnote 1: Lyapunov was a Russian mathematician (1857–1918) who introduced this idea around the turn of the century. An excellent review of Lyapunov exponents and their use in experiments to diagnose chaotic motion is given by Wolf et al. (1985). This review also contains two useful computer programs for calculating Lyapunov exponents. Another review is Abarbanel et al. (1991). The divergence of chaotic orbits can only be locally exponential, because if the system is bounded, as most physical experiments are, \(d(t)\) cannot go to infinity. Thus, to define a measure of this divergence of orbits, we must average the exponential growth at many points along a trajectory, as shown in Figure 6-30. One begins with a reference trajectory [called a _fiduciary_ by Wolf et al.
(1985)] and a point on a nearby trajectory and measures \(d(t)/d_{0}\). When \(d(t)\) becomes too large (i.e., the growth departs from exponential behavior), one looks for a new "nearby" trajectory and defines a new \(d_{0}(t)\). One can define the Lyapunov exponent by the expression \[\lambda=\frac{1}{t_{N}-t_{0}}\sum_{k=1}^{N}\log_{2}\frac{d(t_{k})}{d_{0}(t_{k-1})}\] (6-4.3) Then the criterion for chaos becomes \[\lambda>0\quad\text{(chaotic)}\] (6-4.4) \[\lambda\leq 0\quad\text{(regular motion)}\] The reader by now has surmised that this operation can only be done with the aid of a computer, whether the data are from a numerical simulation or from a physical experiment. Only in a few pedagogical examples can one calculate \(\lambda\) explicitly. To examine one such case, consider the extension of the concept of Lyapunov exponents to a one-dimensional map (see Chapter 3), \[x_{n+1}=f(x_{n})\] (6-4.5) Figure 6-30: Sketch of the change in distance between two nearby orbits used to define the largest Lyapunov exponent. Following Chapter 3, we define the Lyapunov or characteristic exponent as \[\Lambda=\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}\log_{2}\left|\frac{df(x_{n})}{dx}\right|\] (6-4.6) An illustrative example given in Chapter 3 is the Bernoulli map (3-7.3) \[x_{n+1}=2x_{n}\ (\text{mod}\ 1)\] (6-4.7) as shown in Figure 3-24. Except for the switching value at \(x=\frac{1}{2}\), \(|f^{\prime}|=2\). Applying the definition (6-4.6), we find \(\Lambda=1\). Thus, on the average, the distance between nearby points grows as \[d_{n}=d_{0}2^{n}\] (6-4.8) The units of \(\Lambda\) are bits per iteration. One interpretation of \(\Lambda=1\) is that one bit of information about the initial state is lost every time the map is iterated (see Section 3.7). So if we start out with \(m\) bits of information about the initial state, we lose one bit with each iteration.
_After \(m\) iterations we have lost knowledge of the initial state of the system_. Earlier in this chapter, we learned that the solution for the logistic or quadratic map becomes chaotic when the control parameter \(a\) is greater than 3.57: \[x_{n+1}=ax_{n}(1-x_{n})\] (6-4.9) This can be verified by calculating the Lyapunov exponent as a function of \(a\) as shown in Figure 3-25. Beyond \(a=3.57\), the exponent is positive except in the periodic windows within \(3.57<a<4\), where it dips back below zero. When \(a=4\), it has been shown that \(\lambda=\ln 2\) (e.g., see Schuster, 1984). Another example of a map for which one can calculate the Lyapunov exponent is the _tent map_ (3-7.4). As in the Bernoulli map (6-4.7), \(|f^{\prime}(x)|\) is a constant and the Lyapunov exponent is found to be (Lichtenberg and Lieberman, 1983, pp. 416-417) \[\lambda=\log 2r\] where \(2r\) is the slope in (3-7.4). When \(2r>1\), \(\lambda>0\) and the motion is chaotic, but when \(2r<1\), \(\lambda<0\) and the orbits are regular; in fact, all points in \(0<x<1\) are attracted to \(x=0\).

### Numerical Calculation of the Largest Lyapunov Exponent

For every dynamical process, be it a continuous time history or discrete time evolution, there is a spectrum of Lyapunov or characteristic numbers that tells how lengths, areas, and volumes change in phase space. The idea of a spectrum of such numbers is discussed in the following section. However, insofar as a criterion for chaos is concerned, one need only calculate the largest exponent, which tells whether nearby trajectories diverge (\(\lambda>0\)) or converge (\(\lambda<0\)) on the average. As yet there is no analog instrument that will measure the Lyapunov exponent, although if this measure of chaotic motion continues to prove useful, some clever person will probably invent one. At the present time, however, calculations of Lyapunov exponents must be done by digital computer, preferably a midsized laboratory computer.
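As a first taste of such a computation, Eq. (6-4.6) applied to the one-dimensional maps just discussed takes only a few lines. The sketch below estimates the exponent of the logistic map \(x_{n+1}=ax_{n}(1-x_{n})\) at two parameter values and of a tent map of slope \(2r=1.9\) (orbit seeds and iteration counts are arbitrary choices):

```python
import math

def lyapunov_1d(f, df, x0, n=200_000, burn=1_000):
    """Estimate the Lyapunov exponent of x_{k+1} = f(x_k) in bits/iteration
    by averaging log2|f'(x_k)| along an orbit, Eq. (6-4.6)."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = f(x)
    total = 0.0
    for _ in range(n):
        total += math.log2(abs(df(x)))
        x = f(x)
    return total / n

# Logistic map, chaotic at a = 4 (theory: ln 2 nats = 1 bit/iteration)
lam4 = lyapunov_1d(lambda x: 4.0 * x * (1.0 - x),
                   lambda x: 4.0 - 8.0 * x, x0=0.1234)
# Inside the period-2 regime at a = 3.2 the exponent is negative
lam32 = lyapunov_1d(lambda x: 3.2 * x * (1.0 - x),
                    lambda x: 3.2 - 6.4 * x, x0=0.1234)
# Tent map with constant slope 1.9: the average is exactly log2(1.9)
r2 = 1.9
lam_tent = lyapunov_1d(lambda x: r2 * x if x < 0.5 else r2 * (1.0 - x),
                       lambda x: r2 if x < 0.5 else -r2, x0=0.2345)
```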
There are two general methods: One is for data generated by a known set of differential or difference equations (flows and maps), and the other is for experimental time series data. The Wolf et al. (1985) paper discusses both methods, but our experience to date reveals that more research on finding a reliable algorithm for experimental data is needed (see also Abarbanel et al., 1991). We will review briefly techniques for a set of differential equations of the form \[\mathbf{\dot{x}}=\mathbf{f}(\mathbf{x};\mathbf{c})\] (6-4.10) where \(\mathbf{x}\) is a set of \(n\) state variables and \(\mathbf{c}\) is a set of parameters. More complete discussion of these techniques may be found in Shimada and Nagashima (1979), the works of Benettin et al. (see the 1980 reference for a complete list), and Ueda (1979). The main idea in calculating \(\lambda\) using (6-4.3) is to be able to determine the length ratio \(d(t_{k})/d(t_{k-1})\). One method is to numerically integrate the above set of equations to obtain a reference solution \(\mathbf{x}^{*}(t;\mathbf{x}_{0})\), where \(\mathbf{x}_{0}\) is the initial condition. Then at each time step \(t_{k}\) one integrates the equations again, using as an initial condition some nearby point \(\mathbf{x}^{*}(t)+\mathbf{\eta}\). However, a more direct method is to use the equations to find the variation of trajectories in the neighborhood of the reference trajectory \(\mathbf{x}^{*}(t)\). That is, at each time step \(t_{k}\) we solve the variational equations \[\mathbf{\dot{\eta}}=\mathbf{A}\cdot\mathbf{\eta}\] (6-4.11) where \(\mathbf{A}\) is the matrix of partial derivatives \(\nabla\mathbf{f}(\mathbf{x}^{*}(t_{k}))\). We note that, in general, the elements of \(\mathbf{A}\) depend on time. However, if \(\mathbf{A}\) were constant, the solution for \(\mathbf{\eta}(t)\) between \(t_{k}<t<t_{k+1}\) would depend on the initial condition. If this initial condition is chosen at random, then it is likely to have a component along the eigenvector associated with the largest eigenvalue of \(\mathbf{A}\).
It is the change in length in this direction that the largest Lyapunov exponent measures. Thus, the numerical scheme goes as follows. Integrate (6-4.10) to find \(\mathbf{x}^{*}(t)\). Allow a certain time to pass before calculating \(d(t)\) in order to get rid of transients. After all, we are assuming we are on a stable attractor. After the transients are judged to be small, begin to integrate (6-4.11) to find \(\mathbf{\eta}(t)\). One can choose \(|\mathbf{\eta}(0)|=1\), but choose the initial direction to be arbitrary. Then numerically integrate \(\dot{\mathbf{\eta}}=\mathbf{A}(\mathbf{x}^{*}(t))\cdot\mathbf{\eta}\), taking into account the change in \(\mathbf{A}\) through \(\mathbf{x}^{*}(t)\). [In practice one can integrate both (6-4.10) and (6-4.11) simultaneously.] After a given time interval \(t_{k+1}-t_{k}=\tau\), take \[\frac{d(t_{k+1})}{d(t_{k})}=\frac{|\mathbf{\eta}(\tau;t_{k})|}{|\mathbf{\eta}(0;t_{k})|}\] (6-4.12) To start the next time step in (6-4.3), use the direction of \(\mathbf{\eta}(\tau;t_{k})\) for the new initial condition, that is, \[\mathbf{\eta}(0;t_{k+1})=\frac{\mathbf{\eta}(\tau;t_{k})}{|\mathbf{\eta}(\tau;t_{k})|}\] (6-4.13) where we have normalized the initial distance to unity. An example of this calculation is shown in Figure 6-31, where we have numerically integrated the Duffing equation [Eq. (6-2.1)] in the chaotic state as a function of the elapsed time. The equations used were \[\dot{x} = y\] \[\dot{y} = -ky-x^{3}+B\cos z\] (6-4.14) \[\dot{z} = 1\] The resulting matrix becomes \[\mathbf{A}=\begin{bmatrix}0&1&0\\ -3x^{2}&-k&-B\sin z\\ 0&0&0\end{bmatrix}\] (6-4.15) Because this really is a periodically driven oscillator, changes of lengths in the phase space direction \(z=t\) are zero, as manifested by the row of zeroes in the matrix \(\mathbf{A}\). Thus, to find the largest Lyapunov exponent in this problem, one can work in the _projection_ of the phase space (\(x\), \(y\), \(z\)) onto the phase plane (\(x\), \(y\)), using the upper-left \(2\times 2\) block of (6-4.15).
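A compact version of this whole scheme, integrating (6-4.14) together with the planar variational equations, renormalizing every \(\tau\) as in (6-4.13), and averaging the logarithms, might look as follows. The parameter values \(k=0.05\), \(B=7.5\) (Ueda's classic chaotic case) and the step sizes are illustrative assumptions, not the values behind Figure 6-31:

```python
import math

def duffing_lyapunov(k=0.05, B=7.5, dt=0.01, renorm_steps=10, n_renorm=5000, burn=2000):
    """Largest Lyapunov exponent (bits per unit time) of
    x'' + k x' + x^3 = B cos t, using the variational equations (6-4.11)
    with periodic renormalization as in Eqs. (6-4.12)-(6-4.13)."""
    def deriv(s, t):
        x, y, e1, e2 = s
        return (y,
                -k * y - x ** 3 + B * math.cos(t),
                e2,                               # variational eqs: 2 x 2 block of A
                -3.0 * x * x * e1 - k * e2)

    def rk4(s, t, h):
        def shift(s, d, f):
            return tuple(si + f * di for si, di in zip(s, d))
        k1 = deriv(s, t)
        k2 = deriv(shift(s, k1, h / 2), t + h / 2)
        k3 = deriv(shift(s, k2, h / 2), t + h / 2)
        k4 = deriv(shift(s, k3, h), t + h)
        return tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                     for si, a, b, c, d in zip(s, k1, k2, k3, k4))

    s, t = (1.0, 0.0, 1.0, 0.0), 0.0
    for _ in range(burn):                 # let the orbit settle onto the attractor
        s = rk4(s, t, dt)
        t += dt
    x, y, e1, e2 = s
    d = math.hypot(e1, e2)
    s = (x, y, e1 / d, e2 / d)            # unit perturbation before averaging
    total = 0.0
    for _ in range(n_renorm):
        for _ in range(renorm_steps):     # grow the perturbation for time tau
            s = rk4(s, t, dt)
            t += dt
        x, y, e1, e2 = s
        d = math.hypot(e1, e2)
        total += math.log2(d)             # log of the growth ratio, Eq. (6-4.12)
        s = (x, y, e1 / d, e2 / d)        # renormalize, Eq. (6-4.13)
    return total / (n_renorm * renorm_steps * dt)

lam = duffing_lyapunov()
```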
For the data in Figure 6-31, the time step for numerical integration was \(\Delta t=0.01\) and the number of time steps to integrate \(\boldsymbol{\eta}(t)\) was chosen to be 10, or \(\tau=0.1\). The inner matrix in \(\mathbf{A}\), (6-4.15), was updated at every Runge-Kutta time step \(\Delta t\). Figure 6-31: Calculation of the largest Lyapunov exponent for chaotic motion of the Duffing attractor (6-2.1) as a function of the total time record. It is clear from Figure 6-31 that \(\lambda\) is a statistical property of the motion; that is, one must average the changes in lengths over a long time in order to get reliable values. Also, one has to be careful in choosing the Runge-Kutta step size \(\Delta t\) as well as the Lyapunov exponent step size \(\tau\). A comparison of Lyapunov exponents for different parameters in the Duffing equation is shown in Table 6-1. This algorithm for calculating Lyapunov exponents has proved very useful in constructing empirical chaos criteria or chaos diagrams. If one has access to a really fast computer such as the so-called supercomputers, then one can calculate \(\lambda\) as a function of the parameters in the problem [\(\mathbf{c}\) in Eq. (6-4.10)]. For example, one can choose \(\mathbf{c}=(k\), \(B)\) in the Duffing problem and find \(\lambda\) for \(100\times 100\) values of \(k\) and \(B\). If \(\lambda>0\), then one prints out a symbol; otherwise, if \(\lambda\sim 0\) or \(\lambda<0\), one leaves a blank. Such numerically determined chaos diagrams are useful to search for possible regions of parameter space where chaotic motion may exist (see Figure 6-3). Given the vagaries of numerical calculation, however, one should not rely solely on this technique to certify a region as chaotic. Other tests such as spectral analysis, Poincare maps, or fractal dimension should also be used to confirm suspected regions of chaotic motion.
### Lyapunov Exponents and Distribution Functions

The calculation of the Lyapunov exponent (6-4.3) may be thought of as an average over time or over iterates of the mapping (6-4.5). If one has a probability density function that gives the probability that trajectories will visit a given region of phase space, then it is possible to replace this time average by a spatial average in phase space. This idea has been explored by several researchers (Everson, 1986; Hsu, 1987). The idea is illustrated for a two-dimensional map following Everson. The case for a one-dimensional map was discussed in Section 3.6, Eq. (3-7.12). We recall that when the system is chaotic, at least one Lyapunov exponent will be greater than zero. Start with the distance between two neighboring trajectories \(\mathbf{x}_{n}\) and \(\mathbf{y}_{n}\). This distance is given by \(|\mathbf{x}_{n}-\mathbf{y}_{n}|\), and the Lyapunov exponent is given by \[\Lambda=\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}\log\frac{d_{n+1}}{d_{n}}\] (6-4.16) If an invariant probability distribution function \(\rho(\mathbf{x})\) is assumed, then \(\Lambda\) can be calculated by \[\Lambda=\int\int\log\frac{d_{n+1}}{d_{n}}\,\rho(u,v)\;du\;dv\] (6-4.17) where a two-dimensional phase space is assumed with \(\mathbf{x}=(u,v)\). The invariant density function is assumed to satisfy the normalization condition \[\int\int\rho(u,v)\;du\;dv=1\] where the integral is taken over all of phase space.
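In one dimension, the same idea reads \(\Lambda=\int\log_{2}|f^{\prime}(x)|\,\rho(x)\,dx\). For the logistic map at \(a=4\) the invariant density is known in closed form, \(\rho(x)=1/(\pi\sqrt{x(1-x)})\), so the spatial average can be evaluated directly; the substitution \(x=\sin^{2}(\pi u/2)\) turns it into an integral over a uniformly distributed variable \(u\). A sketch (the grid size is an arbitrary choice):

```python
import math

# Spatial-average Lyapunov exponent of the logistic map at a = 4:
#   Lambda = integral of log2|f'(x)| rho(x) dx,  rho(x) = 1/(pi sqrt(x(1-x)))
# With x = sin^2(pi u / 2), u is uniformly distributed on (0, 1).
N = 100_000
total = 0.0
for j in range(N):
    u = (j + 0.5) / N                       # midpoint rule in u
    x = math.sin(math.pi * u / 2.0) ** 2
    total += math.log2(abs(4.0 - 8.0 * x))  # log2|f'(x)| for f(x) = 4x(1-x)
Lam = total / N                             # theory: exactly 1 bit/iteration
```

This reproduces the time-average result \(\lambda=\ln 2\) (1 bit per iteration) quoted earlier, without iterating the map at all.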
Everson (1986) applied this idea to a map related to the bouncing ball problem (4-2.19) and the standard map (6-3.36), \[\begin{array}{l}\theta_{n+1}=\theta_{n}+BV_{n},\qquad{\rm mod}\;2\pi\\ V_{n+1}=\varepsilon V_{n}+(1+\varepsilon)(1+\sin\theta_{n+1})\end{array}\] (6-4.18) This is similar to the problem examined by Holmes (1982), where \(0<\varepsilon<1\) represents dissipation and \(BV_{n}\) represents the velocity of the ball as it leaves the platform at the \(n\)th bounce (see Figure 4-11\(a\)). Everson (1986) used two observations to apply (6-4.17) to (6-4.18) to calculate the largest Lyapunov exponent. First, he notes that from numerical experiments the invariant distribution function appears to be independent of the phase \(\theta\), so that in polar coordinates (\(V\), \(\theta\)) \[\int_{0}^{\infty}\rho\;dV=\frac{1}{2\pi}\] (6-4.19) Second, he was able to obtain an approximate expression for the ratio \(d_{n+1}/d_{n}\); that is, for \(B\gg 1\), \[\frac{d_{n+1}}{d_{n}}\to\left|B(1+\varepsilon)\cos\theta\right|\] (6-4.20) which is independent of the velocity. Using (6-4.19) and (6-4.20), he was able to calculate \[\Lambda=\log\frac{B(1+\varepsilon)}{2}\] (6-4.21) which agrees quite well with numerical calculations. In another application of this technique, Hsu (1987) used (6-4.17) but found the probability density function numerically using a technique called _cell mapping_ [e.g., see Hsu (1981, 1987), Kreuzer (1985), and Tongue (1987)]. Further study of the determination of invariant probability distribution functions may in the future allow more general application of this method of determining Lyapunov exponents.

### Lyapunov Spectrum

Thus far we have talked only of the stretching of the distance between orbits in a chaotic process. However, in three or more dimensions we know that regions of phase space may contract as well as stretch under a dynamic process.
In particular, for dissipative systems, a small volume of initial conditions gets mapped into a smaller volume at a later time. This is illustrated in Figure 6-32, where a small sphere of initial conditions of radius \(\delta\) is mapped at a later time into an ellipsoid with principal axes (\(\mu_{1}\delta\), \(\mu_{2}\delta\), \(\mu_{3}\delta\)). Thus, for every dynamical system there is a spectrum of Lyapunov exponents or numbers \(\{\lambda_{i}\}\), \(\lambda_{i}=\log\mu_{i}\). Computationally, this spectrum can be calculated from a time history of a motion in phase space by finding out how lengths, areas, volumes, and hypervolumes change under a dynamic process. Figure 6-32: Sketch showing the divergence of orbits from a small sphere of initial conditions for a chaotic motion. Wolf et al. (1985) used this idea to develop a computational algorithm to calculate the \(\{\lambda_{i}\}\). If the \(\lambda_{i}\) are ordered such that \(\lambda_{1}>\lambda_{2}>\cdots>\lambda_{n}\), then they show that lengths vary as \(d(t)\approx d_{0}2^{\lambda_{1}t}\), areas (formed from one point on the reference trajectory and two nearby points) vary as \(A(t)\approx A_{0}2^{(\lambda_{1}+\lambda_{2})t}\), small volumes vary as \(V(t)\approx V_{0}2^{(\lambda_{1}+\lambda_{2}+\lambda_{3})t}\), and so on. Farmer et al. (1983) provided an analytic definition for the complete Lyapunov spectrum along with one example for which one can calculate the \(\{\lambda_{i}\}\) exactly. In the remainder of this section we give a sketch of the calculation of Lyapunov exponents for a two-dimensional map. Many of the details are omitted, and the interested reader is referred to the original Farmer et al. paper. To begin, we consider a general \(N\)-dimensional map \[{\bf x}_{n+1}={\bf F}({\bf x}_{n})\] (6-4.22) where \({\bf x}_{n}\) is a vector in an \(N\)-dimensional phase space.
Then the change in shape of some small hypersphere will depend on the derivatives of the functions \({\bf F}({\bf x}_{n})\) with respect to the different components of \({\bf x}_{n}\). The relevant matrix is called a _Jacobian matrix_. For example, if \[{\bf F}=(f(x,y,z),\,g(x,y,z),\,h(x,y,z))\] then \[J=\left[\begin{array}{ccc}\frac{\partial f}{\partial x}&\frac{\partial f}{\partial y}&\frac{\partial f}{\partial z}\\ \frac{\partial g}{\partial x}&\frac{\partial g}{\partial y}&\frac{\partial g}{\partial z}\\ \frac{\partial h}{\partial x}&\frac{\partial h}{\partial y}&\frac{\partial h}{\partial z}\end{array}\right]=[\nabla{\bf F}]\] (6-4.23) After \(n\) iterations of the map, the local shape of the initial hypersphere depends on \[[J_{n}]=[\nabla{\bf F}({\bf x}_{n})][\nabla{\bf F}({\bf x}_{n-1})]\cdots[\nabla{\bf F}({\bf x}_{1})]\] (6-4.24) In general, one can find the eigenvalues of \(J_{n}\), which one orders according to \(j_{1}(n)\geq j_{2}(n)\geq\cdots\geq j_{N}(n)\), where the \(j_{k}(n)\) are the absolute values of the eigenvalues. The Lyapunov exponents are then defined by \[\lambda_{i}=\lim_{n\to\infty}\frac{1}{n}\log_{2}j_{i}(n)\] (6-4.25) Farmer et al. illustrated the use of this definition for a two-dimensional map called a _baker's transformation_ (Figure 6-33), named for its analogy to rolling and cutting pie dough. It is similar to the horseshoe map described in Chapter 1. The equations for this map are \[x_{n+1}=\begin{cases}\lambda_{a}x_{n},&y_{n}<\frac{1}{2}\\ \frac{1}{2}+\lambda_{b}x_{n},&y_{n}>\frac{1}{2}\end{cases}\] (6-4.26) \[y_{n+1}=\begin{cases}2y_{n},&y_{n}<\frac{1}{2}\\ 2(y_{n}-\frac{1}{2}),&y_{n}>\frac{1}{2}\end{cases}\] Figure 6-33: Baker's transformation. This map is a generalization of the Bernoulli map of the previous section, Eq. (6-4.7). In this case, the Jacobian matrix becomes \[J=\left[\begin{array}{cc}S_{1}&0\\ 0&2\end{array}\right]\] (6-4.27) where \(S_{1}=\lambda_{a}\) for \(y<\frac{1}{2}\) and \(S_{1}=\lambda_{b}\) for \(y>\frac{1}{2}\).
For iterations of the map, the magnitudes of the eigenvalues become \[j_{1}(n)=2^{n},\qquad j_{2}(n)=\lambda_{a}^{k}\lambda_{b}^{l},\qquad k+l=n\] where one assumes that there are \(k\) iterations in the left half-plane and \(l\) iterations in the right half-plane. Applying the definition (6-4.25), \[\lambda_{1}=\lim_{n\to\infty}\frac{1}{n}\log_{2}2^{n}\] \[\lambda_{2}=\lim_{n\to\infty}\left\{\frac{k}{n}\log_{2}\lambda_{a}+\frac{l}{n}\log_{2}\lambda_{b}\right\}\] Here we invoke the assumption that after many iterations an orbit spends as much time in the left half-plane as in the right half-plane, or \[\frac{k}{n}=\frac{1}{2},\qquad\frac{l}{n}=\frac{1}{2}\] so that \[\lambda_{1}=1,\qquad\lambda_{2}=\frac{1}{2}\log_{2}\lambda_{a}\lambda_{b}<0\] (6-4.28) Knowing these two Lyapunov exponents, one can then calculate a fractal dimension for this map. \begin{table} \begin{tabular}{l c c c} System & Parameter Values & Lyapunov Spectrum (bits/s) & Lyapunov Dimension (see Chapter 7) \\ _Henon_ & & & \\ \(X_{n+1}=1-aX_{n}^{2}+Y_{n}\) & \(a=1.4\) & \(\lambda_{1}=0.603\) (bits/iteration) & \\ \(Y_{n+1}=bX_{n}\) & \(b=0.3\) & & \\ _Rossler chaos_ & & & \\ \(\dot{X}=-(Y+Z)\) & \(a=0.15\) & \(\lambda_{1}=0.13\) & \\ \(\dot{Y}=X+aY\) & \(b=0.20\) & \(\lambda_{2}=0.00\) & \\ \(\dot{Z}=b+Z(X-c)\) & \(c=10.0\) & \(\lambda_{3}=-14.1\) & \\ _Lorenz_ & & & \\ \(\dot{X}=\sigma(Y-X)\) & \(\sigma=16.0\) & \(\lambda_{1}=2.16\) & \\ \(\dot{Y}=X(R-Z)-Y\) & \(R=45.92\) & \(\lambda_{2}=0.00\) & \\ \(\dot{Z}=XY-bZ\) & \(b=4.0\) & \(\lambda_{3}=-32.4\) & \\ _Rossler hyperchaos_ & & & \\ \(\dot{X}=-(Y+Z)\) & \(a=0.25\) & \(\lambda_{1}=0.16\) & \\ \(\dot{Y}=X+aY+W\) & \(b=3.0\) & \(\lambda_{2}=0.03\) & \\ \(\dot{Z}=b+XZ\) & \(c=0.05\) & \(\lambda_{3}=0.00\) & \\ \(\dot{W}=cW-dZ\) & \(d=0.5\) & \(\lambda_{4}=-39.0\) & \\ \end{tabular} \end{table} Table 6-2: Lyapunov Exponents for Dynamical Models
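The derivation above can be checked by multiplying Jacobians along an actual orbit, as in Eqs. (6-4.24)-(6-4.25). One subtlety: in floating point the doubling map in \(y\) collapses to zero (each iteration discards one binary digit), so the sketch below iterates \(y\) with exact rational arithmetic; the orbit \(y_{0}=1/3\) happens to visit the two half-planes alternately, realizing the \(k/n=l/n=1/2\) assumption exactly (the contraction factors are illustrative choices):

```python
import math
from fractions import Fraction

lam_a, lam_b = 0.3, 0.4            # illustrative contraction factors < 1/2

y = Fraction(1, 3)                 # exact rationals avoid float collapse of 2y mod 1
n = 1000
log_j1 = 0.0                       # log2 of the expanding eigenvalue product
log_j2 = 0.0                       # log2 of the contracting eigenvalue product
for _ in range(n):
    s1 = lam_a if y < Fraction(1, 2) else lam_b
    log_j1 += 1.0                  # each iteration contributes a factor of 2
    log_j2 += math.log2(s1)
    y = 2 * y if y < Fraction(1, 2) else 2 * (y - Fraction(1, 2))

lam1 = log_j1 / n                  # theory: 1 bit/iteration
lam2 = log_j2 / n                  # theory: (1/2) log2(lam_a * lam_b)
```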
The relation between Lyapunov exponents and fractal dimensions has been examined by Farmer et al. (1983) and is discussed briefly in Chapter 7. The spectra of Lyapunov exponents for several dynamical flows and maps are shown in Table 6-2, taken from Wolf et al. (1985). ### Lyapunov Exponents for Continuous Time Dynamics When the dynamics are governed by a set of \(N\) ordinary differential equations \(\mathbf{\dot{x}}=\mathbf{f(x)}\), the spectrum of Lyapunov exponents is related to the integration of the linearly perturbed dynamics, \(\mathbf{\eta}(t)\), about a reference solution \(\mathbf{x}^{*}(t)\), as in (6-4.11). The matrix of partial derivatives \(\mathbf{A}=\mathbf{\nabla f}\) is time-dependent, because it depends in general on the reference solution \(\mathbf{x}^{*}(t)\). The solution at time \(t=\tau\), \(\mathbf{\eta}(\tau)\), can then be written formally in the form \[\mathbf{\eta}(\tau)=\mathbf{\Phi}(\tau;\mathbf{x}^{*})\cdot\mathbf{\eta}(0)\] (6-4.29) where \(\mathbf{\Phi}\) is an \(N\times N\) matrix and \(\mathbf{\eta}(0)\) is the initial perturbation. A formal definition of the Lyapunov exponents \(\lambda_{i}\) follows from the construction of the positive, symmetric matrix \[[\mathbf{\Phi}^{T}\mathbf{\Phi}]^{1/2\tau}\] (6-4.30) whose eigenvalues are denoted by \(\mu_{i}\). Then the Lyapunov exponents are defined by the limit \[\lambda_{i}=\lim_{\tau\to\infty}\log\mu_{i}\] (6-4.31) (e.g., see Geist et al., 1990 and Abarbanel et al., 1991). This formal definition does not provide an obvious and practical way to determine the complete set of \(\lambda_{i}\) from numerical or experimental data. The reader is referred to the above references for the latest techniques, as well as to Wolf et al. (1985) and Parker and Chua (1989); the latter book gives algorithms for calculating many of the measures of chaotic dynamics.
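In practice, the formal limit (6-4.31) is often replaced, for the largest exponent only, by a two-trajectory renormalization method in the spirit of Wolf et al. (1985): integrate a fiducial orbit and a nearby companion, and rescale their separation back to \(d_{0}\) at every step. A minimal sketch for the Lorenz system with the Table 6-2 parameters (the step size, run length, and initial condition are ad hoc choices, not from the text):

```python
import numpy as np

def lorenz(v, sigma=16.0, R=45.92, b=4.0):
    x, y, z = v
    return np.array([sigma * (y - x), x * (R - z) - y, x * y - b * z])

def rk4_step(f, v, dt):
    # one fixed-step fourth-order Runge-Kutta step
    k1 = f(v)
    k2 = f(v + 0.5 * dt * k1)
    k3 = f(v + 0.5 * dt * k2)
    k4 = f(v + dt * k3)
    return v + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def largest_exponent(dt=0.005, n=40_000, d0=1e-8):
    v = np.array([1.0, 1.0, 1.0])
    for _ in range(2000):                 # relax onto the attractor
        v = rk4_step(lorenz, v, dt)
    w = v + np.array([d0, 0.0, 0.0])      # nearby companion orbit
    s = 0.0
    for _ in range(n):
        v = rk4_step(lorenz, v, dt)
        w = rk4_step(lorenz, w, dt)
        d = np.linalg.norm(w - v)
        s += np.log2(d / d0)
        w = v + (w - v) * (d0 / d)        # renormalize the separation
    return s / (n * dt)                   # bits per second

lam1 = largest_exponent()                 # Table 6-2 quotes 2.16 bits/s
```

The base-2 logarithm gives the exponent in bits per second, the unit used in Table 6-2.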
Typically, one has a set of discrete time-sampled data \(x_{k}=x(t=k\tau)\), where \(x(t)\) is a measured state variable and \(\tau\) is the sampling time. One numerical technique for a set of data \(\{\cdots,x_{j-1},x_{j},x_{j+1},\cdots\}\) is to construct a set of vectors in an embedding space of \(N\) dimensions, \(\mathbf{x}_{j}=(x_{j},x_{j+1},\ldots,x_{j+N-1})\), where \(N\) is chosen at least as large as the dimension of the space of the chaotic attractor. Two methods are then used to calculate Lyapunov exponents. One is based on calculating the change in a small hypervolume, as discussed above, as the dynamics evolves. For example, an \(M\leq N\)-dimensional volume will change on the average according to \(V(t)=V_{0}\exp[(\lambda_{1}+\lambda_{2}+\cdots+\lambda_{M})t]\). This method has been adopted by Wolf et al. (1985). Another method is based on estimating the local Jacobian matrices \(\nabla\mathbf{f}\) (e.g., see Abarbanel et al., 1991 or Geist et al., 1990). In most of these methods, several factors can raise questions as to the accuracy of the exponents. These factors include: * Ill-conditioned matrices \(\Phi\) * Small number of data points (e.g., \(<10^{4}\)) * Accuracy of data * Peculiar geometry of the attractor * Spurious exponents when \(N\) is too high To date, reliable algorithms for experimental calculation of all the \(\lambda_{i}\) are wanting. The researcher should always use measured Lyapunov exponents with some suspicion, especially where the dimension of the attractor is six or higher. ## Hyperchaos In the introductory chapter of this book, we defined chaotic dynamics as a sensitivity of the time history of a system to initial conditions. This sensitivity is exemplified by the horseshoe map (Chapter 1) in which a small ball or cube of initial conditions in phase space is stretched and folded back on itself. This stretching is measured by a positive Lyapunov exponent.
Actually, for an \(n\)-dimensional phase space there are \(n\) Lyapunov exponents, each measuring the relative stretching and contraction of the various axes of the ball of initial conditions. In many systems this stretching occurs along one direction and results in one positive Lyapunov exponent. However, in some systems, two or more directions in the phase space suffer stretching under the dynamic process. The occurrence of two or more positive Lyapunov exponents is called _hyperchaos_. One example that has been studied numerically is that of two coupled Van der Pol oscillators (Kapitaniak and Steeb, 1991): \[\ddot{x}-a(1-x^{2})\dot{x}+x^{3}=b(\sin\omega t+y)\] \[\ddot{y}-a(1-y^{2})\dot{y}+y^{3}=b(\sin\omega t+x)\] (6-4.32) Hyperchaos was studied for the values \(a=0.2\), \(\omega=4.0\) and for a variety of control parameters, for example, \(b=6.5\), 7.0, 8.0. This system must be described in a five-dimensional phase space. For \(b=7.0\), Kapitaniak and Steeb numerically calculated a set of Lyapunov exponents (0.69, 0.23, 0, \(-\)0.66, \(-\)0.94). The solutions at \(b=7.0\) also depend on the initial conditions; those that led to hyperchaos were \(x(0)=1.0\), \(\dot{x}(0)=\dot{y}(0)=y(0)=0\). For this same case, the authors show that there are four separate solutions: time \(T\) periodic, \(3T\) periodic, chaotic, and hyperchaotic. Each has its own basin of attraction whose boundaries may be fractal (see Chapter 7 for a discussion of basin boundaries). This example shows the complexity that arises when the dimension of the phase space becomes greater than three. Some mathematicians (e.g., Rossler, 1979) believe that there are new dynamical phenomena to be discovered in four or more dimensions.
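The system (6-4.32) is straightforward to integrate directly; a minimal sketch using fixed-step RK4 with the parameter values and hyperchaotic initial condition quoted above follows (the step size and run length are ad hoc choices). Extracting the full Lyapunov spectrum would additionally require the variational equations, so only the trajectory itself is computed here.

```python
import numpy as np

def coupled_vdp(t, s, a=0.2, b=7.0, w=4.0):
    # state s = (x, xdot, y, ydot) for the system (6-4.32)
    x, xd, y, yd = s
    return np.array([
        xd,
        a * (1 - x * x) * xd - x**3 + b * (np.sin(w * t) + y),
        yd,
        a * (1 - y * y) * yd - y**3 + b * (np.sin(w * t) + x),
    ])

def integrate(s0, dt=0.002, n=50_000):
    # fixed-step RK4 for the nonautonomous system
    s, t = np.array(s0, float), 0.0
    for _ in range(n):
        k1 = coupled_vdp(t, s)
        k2 = coupled_vdp(t + dt / 2, s + dt / 2 * k1)
        k3 = coupled_vdp(t + dt / 2, s + dt / 2 * k2)
        k4 = coupled_vdp(t + dt, s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
    return s

# hyperchaotic initial condition quoted in the text
final = integrate([1.0, 0.0, 0.0, 0.0])
```

The explicit time dependence of the forcing is why the phase space is five-dimensional: four state variables plus the phase of \(\sin\omega t\).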
This author believes that in spite of the flood of books and papers on nonlinear and chaotic dynamics, we are still at the beginning of an era of new knowledge in this field. For novitiates to the field of chaos, there is still much to be discovered. (See Table 6-2 for another example.) ## Problems ### 6-1 _Period Doubling_ Use a small computer to calculate the critical values of \(\lambda\) in the logistic equation (6-3.1) and show that the sequences of values of \(\lambda_{n}\) and \(a_{n}\) approach the universal numbers (6-3.2) and (6-3.5).
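One possible starting point for this problem: rather than hunting the bifurcation values \(\lambda_{n}\) directly (where convergence is slow), locate the superstable parameter values at which \(x=\frac{1}{2}\) lies on a cycle of period \(2^{k}\); their spacings converge with the same universal ratio \(\delta\approx 4.6692\). A sketch in Python, with the bisection brackets checked by hand for a sign change:

```python
def logistic_iter(lam, x, p):
    # p iterations of the logistic map x -> lam*x*(1-x)
    for _ in range(p):
        x = lam * x * (1 - x)
    return x

def superstable(p, lo, hi):
    # bisect g(lam) = f^p(1/2) - 1/2 to find the parameter at which
    # x = 1/2 lies on a period-p cycle (a superstable cycle)
    g = lambda lam: logistic_iter(lam, 0.5, p) - 0.5
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# brackets for periods 1, 2, 4, 8
s = [superstable(1, 1.8, 2.2), superstable(2, 3.1, 3.4),
     superstable(4, 3.47, 3.52), superstable(8, 3.55, 3.56)]
deltas = [(s[i + 1] - s[i]) / (s[i + 2] - s[i + 1]) for i in range(2)]
# the ratios in deltas approach the Feigenbaum constant 4.6692...
```

The period-2 superstable value is \(1+\sqrt{5}\approx 3.23607\), which gives a quick check on the root finder.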
## FRACTALS AND DYNAMICAL SYSTEMS _Do you see O my brothers and sisters? It is not chaos or death--it is form, union, plan--it is eternal life--it is Happiness_. Walt Whitman _Leaves of Grass_ ### 7.1 Introduction Both "chaotic" and "strange" have been used to describe the nonperiodic, randomlike motions that are the focus of this book. Whereas "chaotic" is meant to convey a loss of information or loss of predictability, the term "strange" is meant to describe the unfamiliar geometric structure on which the motion moves in phase space. In Chapter 6, we described a quantitative measure of the chaotic aspect of these motions using Lyapunov exponents. In this chapter, we will describe a quantitative measure of the strangeness of the attractor. This measure is called the _fractal dimension_. To do this, we will have to describe the concept of fractal as it pertains to our applications. In addition to the application of fractal ideas to the description of the attractor itself, it has been discovered that other geometric objects in the study of chaos, such as the boundary between chaotic and periodic motions in initial condition or parameter space, may also have fractal properties. Thus, we will also include a section on _fractal basin boundaries_. At the beginning of this book, we noted that the revolution in nonlinear dynamics has been sparked by the introduction of new geometric, analytic, and topological ideas which have given experimentalists (including numerical analysts) new tools to analyze dynamical processes. This in some ways parallels the earlier Newtonian revolution which introduced the calculus into dynamics. (Of course, Newton contributed much more by proposing new physical laws along with new mathematics.) Thus, in some sense, we are entering the second phase of the Newtonian revolution in dynamics, and new geometric concepts like fractals must be mastered if one is to use the results of the new dynamics in practical problems.
Perhaps the most singular characteristic of chaotic vibrations in dissipative systems is the structure revealed by the Poincare map. These pictures provide a cross section of the attractor on which the motion rides in phase space, and when the motion is chaotic, a mazelike, multisheeted structure appears. We have learned that this threadlike collection of points seems to have further structure when examined on a finer scale. To characterize such Poincare patterns, we have used the term _fractal_. In this chapter we will try to make the mathematical meaning of fractal more precise. However, this treatment is not rigorous. Instead, what follows is one engineer's attempt to understand fractal structures and how to apply them to chaotic dynamics. In the following section, we will begin with a few simple examples of fractal curves and sets, namely, _Koch curves_ and _Cantor sets_. We will also introduce a quantitative measure of fractal qualities: the fractal dimension. Then we will illustrate these concepts in several applications in nonlinear and chaotic vibrations. The author presumes that the reader has no prior knowledge of set theory or topology beyond engineering mathematics at the baccalaureate level. For the reader who wants to study more about fractals, there are now several excellent texts to use. Two books which have already become classics are the treatise by Mandelbrot (1982) and the beautiful colorful tour of fractal sets by Peitgen and Richter (1986). However, those who desire a more mathematical treatment may find a very readable book in Falconer (1990) or Barnsley (1988). The latter book provides both mathematical and computational tools for the reader who wishes to play with fractals on the computer. A very readable introductory text on fractals is that by Peitgen et al. (1992). Finally, there is a treatment oriented toward applications of fractals in the physical sciences by Feder (1988).
### Koch Curve This example is chosen from the book by Mandelbrot (1977) and was originally described by von Koch in 1904. One begins with a geometric construction that starts with a straight line segment of length 1. After dividing the line into three segments, one replaces the middle segment by two lines of length 1/3 as shown in Figure 7-1. Thus, we are left with four sides, each of length 1/3, so that the total length of the new boundary is 4/3. To get a fractal curve, one repeats this process for each of the new four segments and so on. At each step, the total length is multiplied by 4/3, so that it grows without bound. After many steps, one can see that the curve looks fuzzy. In fact, in the limit one has a continuous curve that is nowhere differentiable. In some sense, this new curve is trying to cover an area as would a young child scribbling with crayons. Thus, we have the apparent paradox of a continuous curve that has some properties of an area. It is not surprising that one can define a dimension of this fractal curve which results in a value between 1 and 2. Figure 7-1: Partial construction of a fractal Koch curve. ### Cantor Set The Cantor set is attributed to Georg Cantor (1845-1918), who discovered it in 1883. It is a very important concept in modern nonlinear dynamics. If the Koch curve can be considered a process of adding finer and finer length structure to an initial line segment, then the Cantor set is the complement operation of removing smaller and smaller segments from a set of points initially on a line. The construction begins as in the previous example with a line segment of length 1 which is subdivided into three sections as in Figure 7-2. However, instead of adding two more segments as in the Koch curve, one removes the middle segment of points, so that the total number of segments is increased to two and the total length is reduced to 2/3. This process is continued for the remaining line segments and so on.
At each stage one throws away the middle segments of points, creating twice as many line segments but leaving only 2/3 of the total length. In the limit the total length approaches 0, although as we shall see below, the fractal dimension of this set of points is between 0 and 1. ##### The Devil's Staircase The discontinuous fractal Cantor set can be used to generate a continuous fractal function by integrating an appropriate distribution function defined on the set. For example, we imagine a distribution of mass on the interval \(0\leq x\leq 1\) with total mass equal to 1 in some units. Then if we redistribute the mass on the remaining Cantor intervals, at each step of the limiting process the mass density increases on the shrinking Cantor intervals such that the total mass remains 1. At the \(n\)th step, the number of intervals is \(2^{n}\), each of length \((1/3)^{n}\), so that the density is \((3/2)^{n}\). Integrating the mass density along \(x\), we obtain the mass as a function of \(x\): \[M_{n}(x)=\int_{0}^{x}\rho_{n}(x)\;dx\] where \(\rho_{n}=(3/2)^{n}\) on the Cantor intervals and \(\rho_{n}=0\) otherwise. The limit of this process as \(n\rightarrow\infty\) is a function called the _devil's staircase_, which has an infinite number of steps. One intermediate function \(M_{n}(x)\) is shown in Figure 7-3. In the limit, \(M(x)=\lim_{n\rightarrow\infty}M_{n}(x)\). The expression \(dM(x)/dx\) is an infinite set of delta functions. Figure 7-2: _Top to bottom: Sequential steps in the construction of a Cantor set._ #### Fractal Dimension Thus far we have two examples of fractal sets but do not have any test to determine if a set of points is fractal. To classify the Poincare map of some nonlinear system, we need some quantitative measure of the fractal nature of the attractor. There are many measures of the dimension of a set of points. We will describe a very intuitive or geometric definition called the _capacity_ or _box-counting_ dimension.
Figure 7-3: Devil's staircase function. Other definitions, which incorporate deeper mathematical subtleties, may be found in Mandelbrot (1977), Farmer et al. (1983), or Feder (1988) as well as in the next section. We begin with the measurement of the dimension of points along a line or distributed on some area. First consider a _uniform_ distribution of \(N_{0}\) points along some line or one-dimensional manifold in a three-dimensional space, as shown in Figure 7-4. We then ask how we can _cover_ this set of points with small cubes with sides of length \(\varepsilon\). (One can also use spheres of radius \(\varepsilon\).) To be more specific, we calculate the minimum number of such cubes \(N(\varepsilon)\) to cover the set (\(N(\varepsilon)<N_{0}\)). When \(N_{0}\) is large and \(\varepsilon\) small enough, the number of cubes to cover a line will scale as \[N(\varepsilon)\approx\frac{1}{\varepsilon}\] Similarly, if we distribute points uniformly on some two-dimensional surface in three-dimensional space, one will find that the minimum number of cubes to cover the set will scale in the following way: \[N(\varepsilon)\approx\frac{1}{\varepsilon^{2}}\] If the reader is convinced that this is intuitive, then it is natural to define the dimension by the following scaling law: \[N(\varepsilon)\approx\frac{1}{\varepsilon^{d}}\] (7-1.1) Taking the logarithm of both sides of Eq. (7-1.1) and adding a subscript to denote _capacity dimension_, we have \[d_{c}=\lim_{\varepsilon\to 0}\frac{\log N(\varepsilon)}{\log(1/\varepsilon)}\] (7-1.2) Implicit in this definition is the requirement that the number of points in the set be large or \(N_{0}\to\infty\). A set of points is said to be fractal if its dimension is non-integer--hence the term _fractal dimension_. Figure 7-4: Covering procedure for linear and planar distributions of points. In the two examples of the Koch curve or Cantor set, the fractal dimension can be calculated exactly. For example, consider the \(n\)th
iteration of the generation of the Koch curve where we let the size of the cubes be equal to the length of a straight line segment. At the \(n\)th step in the construction, the number of segments is \[N_{n}\,=\,4^{n}\] where the size \(\varepsilon\) is given by \[\varepsilon_{n}=\left(\frac{1}{3}\right)^{n}\] Replacing the limit \(\varepsilon\to 0\) with \(n\to\infty\) in Eq. (7-1.2), one can easily see that for the \(Koch\)_curve_ \[d_{c}=\frac{\log 4}{\log 3}\,=\,1.26185\,\ldots\] (7-1.3) Similarly, one can show that for the \(Cantor\)_set_ \[d_{c}=\frac{\log 2}{\log 3}\,=\,0.63092\,\ldots\] (7-1.4) One way to interpret the fractal dimension of the Koch curve is that the distribution of points covers more than a line but less than an area. Another fractal-generating process that begins with an area distribution of points is shown in the exercise in Figure 7-5 called the _Sierpinski triangle_ (named after the Polish mathematician Waclaw Sierpinski, 1882-1969). At each step one removes a triangular area, creating three new triangles, but the scale is half the size of the original. One can show that this process leads to a fractal dimension of \(d_{c}\,=\,\log\,3/\log\,2\). The connection between dynamics and fractals may not be evident so far, but in each of the three examples above, one has an iterative process. The relationship between fractals and iterative maps is made more explicit with the following two examples. The _horseshoe map_ has been discussed earlier in Chapters 1, 3 and 6 and is shown graphically in Figure 7-6. It is perhaps the simplest example of an iterative dynamical process in the plane that leads to a loss of information and fractal properties. The calculation of the capacity fractal dimension is similar to that for the Cantor set except that the vertical direction leads to a contribution of one to the dimension, so that \[d_{c}=1+\frac{\log 2}{\log(1/\lambda)}\] (7-1.5) where \(\lambda\) is the horizontal contraction factor. Another example for which one can calculate the fractal properties is the _baker's transformation_ two-dimensional map (Figure 6-33).
This example may be found in Farmer et al. (1983) and is similar to the horseshoe map. Its name derives from the idea of a baker rolling, stretching, and cutting pastry dough as shown in Figure 6-33. In this example, one can write out the specific difference equation or mapping relating a piece of dough at position \((x_{n},y_{n})\) to its new position in one iteration: \[x_{n+1}=\begin{cases}\lambda_{a}x_{n},&y_{n}<\alpha\\ \frac{1}{2}+\lambda_{b}x_{n},&y_{n}>\alpha\end{cases}\] (7-1.6) \[y_{n+1}=\begin{cases}y_{n}/\alpha,&y_{n}<\alpha\\ (y_{n}-\alpha)/(1-\alpha),&y_{n}>\alpha\end{cases}\] where \(0\leq x_{n}\leq 1\) and \(0\leq y_{n}\leq 1\). The article by Farmer et al. (1983) is very readable, so we will not present the details but will quote the results. The problem is used by Farmer et al. to show the difference between different definitions of fractal dimension. They define the following function: \[H(\alpha)=\alpha\log\frac{1}{\alpha}+(1-\alpha)\log\frac{1}{1-\alpha}\] (7-1.7) Using the definition of capacity, they find that \[d_{c}=1+\hat{d}_{c}\] (7-1.8) where \(\hat{d}_{c}\) satisfies a transcendental equation \[1=\lambda_{a}^{\hat{d}_{c}}+\lambda_{b}^{\hat{d}_{c}}\] (7-1.9) When \(\lambda_{a}=\lambda_{b}=\lambda\), \[d_{c}=1+\frac{\log 2}{\log(1/|\lambda|)}\] (7-1.10) which is independent of \(\alpha\) and identical to that for the horseshoe map (7-1.5). Other examples of iterated maps which produce fractal distributions of points are found in Barnsley (1988). Barnsley showed how simple maps can produce natural-looking fractal objects such as trees, ferns, and clouds. It is probably safe to say that artists have intuitively understood the nature of fractal properties of nature, especially the impressionists in the way they used dots of color to achieve different effects of filling Euclidean space.
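Returning to the dimension calculation: when \(\lambda_{a}\neq\lambda_{b}\), equation (7-1.9) has no closed form, but the left-hand side decreases monotonically in \(\hat{d}_{c}\) (both ratios are less than 1), so a bisection solves it readily. A minimal sketch, with illustrative contraction ratios:

```python
def dhat(lam_a, lam_b, tol=1e-12):
    # solve 1 = lam_a**d + lam_b**d  (7-1.9) by bisection;
    # g(d) = lam_a**d + lam_b**d - 1 decreases monotonically for 0 < lam < 1
    g = lambda d: lam_a**d + lam_b**d - 1.0
    lo, hi = 0.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid          # root lies above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# consistency with (7-1.10): dhat(l, l) should equal log 2 / log(1/l)
d_c = 1 + dhat(0.2, 0.3)      # capacity dimension of a skewed baker's map
```

For the symmetric case \(\lambda_{a}=\lambda_{b}=\frac{1}{4}\), the root is exactly \(\hat{d}_{c}=\frac{1}{2}\), i.e. \(d_{c}=1.5\), in agreement with (7-1.10).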
In a more recent example, an advertisement in a popular magazine featured a Japanese artist whose design for a kimono material shows these fractal properties quite clearly (Figure 1-28). ### Measures of Fractal Dimension There are two criticisms of the use of capacity as a measure of fractal dimension of strange attractors--one theoretical and the other computational. First, capacity dimension is a geometric measure; that is, it does not account for the frequency with which the orbit might visit the covering cube or ball. Second, the process of counting a covering set of hypercubes in phase space is very time-consuming computationally. In this section we will discuss three alternative definitions of fractal dimension which will address the shortcomings of the capacity or box-counting dimension. However, it should be pointed out that for many strange attractors these different dimensions give roughly the same value. ##### Pointwise Dimension Let us consider a long-time trajectory in phase space as shown in Figure 7-7. First, we time-sample the motion so that we have a large number of points per orbit. Second, we place a sphere or cube of radius or length \(r\) at some point on the orbit and count the number of points within the sphere \(N(r)\). The probability of finding a point in this sphere is then found by dividing by the total number of points in the orbit \(N_{0}\); that is, \[P(r)=\frac{N(r)}{N_{0}}\] (7-2.1) For a one-dimensional orbit, such as a closed periodic orbit, \(P(r)\) will be linear in \(r\) as \(r\!\to\!0\), \(N_{0}\!\to\!\infty\); \(P(r)\approx br\). If the orbit were quasiperiodic, that is, it moves on a two-dimensional toroidal surface in a three-dimensional phase space, then the probability of finding a point on the orbit in a small cube or sphere of radius \(r\) would be \(P(r)\approx br^{2}\).
This leads one to define a dimension of an orbit at a point \(\mathbf{x}_{i}\) (here \(\mathbf{x}_{i}\) is a vector in phase space) by measuring the relative percentage of time that the orbit spends in the small sphere; that is, \[d_{P}=\lim_{r\to 0}\frac{\log P(r;\mathbf{x}_{i})}{\log r}\] (7-2.2) For some attractors, this definition will be independent of the point \(\mathbf{x}_{i}\). But for many, \(d_{P}\) will depend on \(\mathbf{x}_{i}\) and an averaged pointwise dimension is best used. Also, for some sets of points such as a Cantor set, there will be gaps in the distribution of points so that \(P(r)\) is not a smooth function of \(r\) as \(r\to 0\), as can be seen in the Devil's staircase in Figure 7-3. To obtain an averaged pointwise dimension, one randomly chooses a set of \(M<N_{0}\) points and calculates \(d_{P}(\mathbf{x}_{i})\) at each point. The averaged pointwise dimension is given by \[\hat{d}_{P}=\frac{1}{M}\sum_{i=1}^{M}d_{P}(\mathbf{x}_{i})\] (7-2.3) As an alternative, one can average the probabilities \(P(r;\mathbf{x}_{i})\). Choose a random _subset_ of \(M\) points distributed around the attractor, where \(M<\!<N_{0}\). We then conjecture that \[\lim_{r\to 0}\frac{1}{M}\sum_{i=1}^{M}P(r;\mathbf{x}_{i})=ar^{d_{P}}\] or \[d_{P}=\lim_{r\to 0}\frac{\log\left[(1/M)\sum_{i}P(r;\mathbf{x}_{i})\right]}{\log r}\] In practice, if \(N_{0}\approx 10^{3}\)-\(10^{4}\) points, then \(M\approx 10^{2}\)-\(10^{3}\). Figure 7-7: Long-time trajectory of motion in phase space showing the time-sampled data points and the counting sphere. ### Correlation Dimension This measure of fractal dimension has been successfully used by many experimentalists [e.g., see Malraison et al. (1983), Swinney (1985), Ciliberto and Gollub (1985), and Moon and Li (1985a)] and in some ways is related to the pointwise dimension. An extensive study of this definition of dimension has been given by Grassberger and Procaccia (1983).
As in the definition of pointwise dimension, one discretizes the orbit to a set of \(N\) points \(\{{\bf x}_{i}\}\) in the phase space. (One can also create a pseudo-phase-space; see Chapter 5 and next section.) One then calculates the distances between pairs of points, say \(s_{ij}=|{\bf x}_{i}-{\bf x}_{j}|\), using either the conventional Euclidean measure of distance (square root of the sum of the squares of components) or some equivalent measure such as the sum of absolute values of vector components. A correlation function is then defined as \[C(r)=\lim_{N\to\infty}\frac{1}{N^{2}}\left[\text{number of pairs }(i,j)\text{ with distance }s_{ij}<r\right]\] (7-2.4) For many attractors this function has been found to exhibit a power law dependence on \(r\) as \(r\to 0\); that is, \[\lim_{r\to 0}C(r)=ar^{d}\] so that one may define a fractal or correlation dimension using the slope of the \(\ln C\) versus \(\ln r\) curve: \[d_{G}=\lim_{r\to 0}\frac{\log C(r)}{\log r}\] (7-2.5) It has been shown that \(C(r)\) may be calculated more effectively by constructing a sphere or cube at each point \({\bf x}_{i}\) in phase space and counting the number of points in each sphere; that is, \[C(r)=\lim_{N\to\infty}\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}H(r-|{\bf x}_{i}-{\bf x}_{j}|)\] (7-2.6) where \(H(s)=1\) if \(s>0\) and \(H(s)=0\) if \(s<0\). This differs from the pointwise dimension in that the sum here is performed about _every_ point. ##### Information Dimension Many investigators have suggested another definition of fractal dimension that is similar to the capacity (7-1.2) but tries to account for the frequency with which the trajectory visits each covering cube. As in the definition of capacity, one covers the set of points, whose dimension one wishes to measure, by a set of \(N\) cubes of size \(\varepsilon\). This set of points is again a uniform discretization of the continuous trajectory.
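Before continuing, the correlation sum just defined is simple enough to sketch directly. The fragment below builds a pseudo-phase-space from a scalar signal (as mentioned above; see Chapter 5) and checks that a closed curve gives \(d_{G}\) near 1; the test signal, delay, and radii are illustrative choices, and the brute-force pairwise distance matrix limits it to a few thousand points.

```python
import numpy as np

def embed(series, N, delay=1):
    # pseudo-phase-space vectors (x_j, x_{j+delay}, ..., x_{j+(N-1)*delay})
    m = len(series) - (N - 1) * delay
    return np.column_stack([series[i * delay:i * delay + m] for i in range(N)])

def correlation_dim(X, r_values):
    # slope of log C(r) versus log r, eqs. (7-2.4)-(7-2.6);
    # self-pairs (i = j) are excluded from the count
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    C = np.array([((D < r).sum() - n) / n**2 for r in r_values])
    return np.polyfit(np.log(r_values), np.log(C), 1)[0]

# sanity check: embedding a sine wave gives a closed curve of dimension 1
signal = np.sin(0.05 * np.arange(1500))
X = embed(signal, N=3, delay=10)
d_G = correlation_dim(X, np.array([0.1, 0.15, 0.2, 0.3]))
# d_G should come out near 1
```

For experimental data one would replace the sine wave by the measured series and choose the embedding dimension \(N\) at least as large as the attractor dimension, as discussed earlier.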
(It is assumed that a long enough trajectory is chosen to effectively cover the attractor whose dimension one wants to measure. For example, if the motion is quasiperiodic, the trajectory has to run long enough to "visit" all regions on the toroidal surface of the attractor.) To calculate the information dimension, one counts the number of points \(N_{i}\) in each of the \(N\) cells and determines the probability of finding a point in that cell \(P_{i}\), where \[P_{i}\equiv\frac{N_{i}}{N_{0}},\qquad\sum_{i=1}^{N}P_{i}=1\] (7-2.7) where \(N_{0}\) is the total number of points in the set. Note that \(N_{0}\neq N\). The _information entropy_ is defined by the expression \[I(\varepsilon)=-\sum_{i=1}^{N}P_{i}\log P_{i}\] (7-2.8) [When the log function is with respect to base 2, \(I(\varepsilon)\) has the units of bits.] For small \(\varepsilon\) it is found that \(I\) behaves as \[I\approx d_{I}\log(1/\varepsilon)\] so that for small \(\varepsilon\) we may define a dimension \[d_{I}=\lim_{\varepsilon\to 0}\frac{I(\varepsilon)}{\log(1/\varepsilon)}=\lim_{\varepsilon\to 0}\frac{\sum P_{i}\log P_{i}}{\log\varepsilon}\] (7-2.9) To see that this definition is related to the capacity, we note that if the probabilities \(P_{i}\) were equal for all cells, that is, \[P_{i}=\frac{N_{i}}{N_{0}}=\frac{1}{N}\] (7-2.10) then \[I=-\sum_{i=1}^{N}P_{i}\log P_{i}=-N\cdot\frac{1}{N}\log\frac{1}{N}=\log N\] so that \(d_{I}=d_{c}\). In general, it can be shown that (see Farmer et al., 1983) \[d_{I}\leq\,d_{c}\] (7-2.11) Further discussion of the information dimension may be found in Farmer et al. (1983), Grassberger and Procaccia (1983), and Shaw (1984). The information entropy is a measure of the _unpredictability_ in a system. That is, for a uniform probability in each cell, \(P_{i}=1/N\), \(I\) is at a maximum.
If all the points are located in one cell (maximum predictability), \(I=0\), as can be seen by the calculation \[\text{For}\ P_{i}=1/N,\qquad I=\log N\] \[\text{For}\ P_{1}=1,\ P_{i}=0,\ i\neq 1,\qquad I=-1\cdot\log 1=0\] Definition (7-2.8) and the use of the symbol \(I(\varepsilon)\) are confusing in the literature. Shaw (1981) used the symbol \(H\) to denote entropy and \(I\) to denote the negative entropy (\(-H\)) or _information_. Thus, for Shaw, a more predictable system (i.e., sharper \(P_{i}\) distribution) has _higher_ information. ### Relationship Between Fractal Dimension and Lyapunov Exponents Thus far we have defined the following fractal dimensions: \(d_{c}\): the capacity (7-1.2); \(d_{P}\): pointwise dimension (7-2.2); \(d_{G}\): correlation dimension (7-2.5); \(d_{I}\): information dimension (7-2.9). Grassberger and Procaccia (1983) have shown that the information dimension and the correlation dimension are lower bounds on the capacity definition; that is, \[d_{G}\leq d_{I}\leq d_{c}\] For many of the standard strange attractors, however, all three are very close (see Table 7-1). In summary, one can say that the capacity dimension takes no account of the distribution of points between covering cells, whereas the information entropy dimension measures the probability of finding a point in a cell. Finally, the correlation dimension accounts for the probability of finding two points in the same cell (e.g., see Grassberger and Procaccia, 1984). A further relationship between fractal dimension, information entropy, and Lyapunov exponents was made by Kaplan and Yorke (1978). We recall from Chapter 6 that the Lyapunov exponents measure the rate at which trajectories _on_ the attractor diverge from one another and trajectories _off_ the attractor converge toward the attractor (e.g., see Figure 6-32).
Thus, a small sphere of initial conditions centered at some point on the attractor in phase space is imagined to deform in time under the dynamical process into an ellipse. For example, for a chaotic two-dimensional map, \[{\bf x}_{n+1}=f({\bf x}_{n})\] a circle of initial conditions (with radius \(\varepsilon\)) deforms into an ellipse after \(M\) iterations of the map. The major and minor radii are given by \(L_{1}^{M}\varepsilon\) and \(L_{2}^{M}\varepsilon\). When \(L_{1}\) and \(L_{2}\) are averaged over the whole attractor, they are referred to as _Lyapunov numbers_, and \(\lambda_{i}=\log L_{i}\) are called the _Lyapunov exponents_. Kaplan and Yorke (1978) (see also Farmer et al., 1983) have suggested that one can calculate a dimension for a fractal attractor based on the Lyapunov exponents. For a two-dimensional map this dimension becomes \[d_{L}=1+\frac{\log L_{1}}{\log\left(1/L_{2}\right)}=1-\frac{\lambda_{1}}{\lambda_{2}}\] (7-2.14) \begin{table} \begin{tabular}{l l l l} Name of System & Dimension & Type & Source of Data \\ Henon map (1-3.8) (\(a=1.4\), \(b=0.3\)) & 1.26 & capacity & Grassberger and Procaccia (1983) \\ & 1.21 \(\pm\) 0.01 & correlation & \\ Logistic map (1-3.6) (\(\lambda=3.5699456\)) & 0.538 & capacity & Grassberger and Procaccia (1983) \\ & 0.500 \(\pm\) 0.005 & correlation & \\ Lorenz equations (1-3.9) & 2.06 \(\pm\) 0.01 & capacity & Grassberger and Procaccia (1983) \\ & 2.05 \(\pm\) 0.01 & correlation & \\ Two-well potential [Eq. (6-3.7), \(f=0.16\), \(\omega=0.8333\)] & 2.14 (\(\gamma=0.15\)) & correlation & Moon and Li (1985a) \\ & 2.61 (\(\gamma=0.06\)) & correlation & \\ Chua's circuit & 2.82 & Lyapunov & Matsumoto et al. (1985) \\ \end{tabular} \end{table} Table 7-1: Fractal Dimension of Selected Dynamical Systems For higher-dimensional maps in an \(N\)-dimensional phase space, the relation is more complicated.
First we order the Lyapunov numbers; that is, \[L_{1}>L_{2}>...>L_{k}>...>L_{N}\] (7-2.15) Then find the largest \(k\) such that the product satisfies \[L_{1}L_{2}\cdots L_{k}\geq 1\] The Lyapunov dimension is defined to be \[d_{L}=k+\frac{\log(L_{1}L_{2}\cdots L_{k})}{\log(1/L_{k+1})}\] (7-2.16) Kaplan and Yorke (1978) suggested that this is a lower bound on the capacity dimension; that is, \[d_{L}\leq d_{c}\] (7-2.17) As an example, consider a three-dimensional set of points generated by a Poincare map of a fourth-order set of first-order differential equations with dissipation. If the attractor is strange, we assume \[L_{1}>1,\qquad L_{2}=1,\qquad L_{3}<1\] For example, one principal axis of the ellipsoid of initial conditions grows, one stays the same length, and one axis contracts. Also, because the system is dissipative, the volume of the ellipsoid must be less than that of the original sphere of initial conditions so that \(L_{1}L_{2}L_{3}<1\). This leads us to use \(k=2\) in Eq. (7-2.16) and \[d_{L}=2+\frac{\log L_{1}}{\log(1/L_{3})}=2+\frac{\lambda_{1}}{|\lambda_{3}|}\] (7-2.18) The usefulness of this formula for experimental data is unclear at this time because it is not easy to obtain a measurement of the contraction Lyapunov number \(L_{3}\) (e.g., see Wolf et al., 1985). A comparison of the different definitions of fractal dimension for the baker's transformation (7-1.6) has been given by Farmer et al. (1983). This example is one of the few dynamical systems for which one can analytically calculate the properties of the chaotic dynamics. They show that the Lyapunov dimension (7-2.16) is equal to the information dimension (7-2.9) and is given by \[d_{I}=d_{L}=1+\frac{H(\alpha)}{\alpha\log(1/\lambda_{a})+\beta\log(1/\lambda_{b})}\] (7-2.19) where \(\beta=1-\alpha\).
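The recipe (7-2.15)-(7-2.16) translates directly into a few lines of code. The sketch below is a hedged illustration (not an algorithm from the text), checked against the Lorenz and Rossler spectra of Table 6-2, for which \(k=2\) and (7-2.18) applies:

```python
def lyapunov_dimension(exponents):
    # Kaplan-Yorke formula (7-2.15)-(7-2.16): take the largest k with
    # lambda_1 + ... + lambda_k >= 0, then add the fractional part
    lam = sorted(exponents, reverse=True)
    s, k = 0.0, 0
    while k < len(lam) and s + lam[k] >= 0.0:
        s += lam[k]
        k += 1
    if k == len(lam):
        return float(k)                   # no contracting direction reached
    return k + s / abs(lam[k])

# Lorenz spectrum from Table 6-2 (bits/s): k = 2, so (7-2.18) gives
# d_L = 2 + 2.16/32.4, about 2.07
d_L = lyapunov_dimension([2.16, 0.00, -32.4])
```

The condition on cumulative sums of exponents is equivalent to the product condition on Lyapunov numbers, since \(\lambda_{i}=\log L_{i}\).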
When \(\lambda_{a}=\lambda_{b}\) one can show that \[d_{l}=d_{L}=1+\frac{H(\alpha)}{\log(1/\lambda)}\] (7-2.20) Furthermore, if \(\alpha=\frac{1}{2}\), then \(H(\alpha)=\log 2\) and \[d_{l}=d_{L}=d_{c}\] In some ways, \(\alpha\) and \(\lambda_{a}/\lambda_{b}\) represent inhomogeneity factors in the map. When \(\alpha=\frac{1}{2}\) and \(\lambda_{a}/\lambda_{b}=1\), the map is like the horseshoe or Cantor maps, and all these definitions of dimension \(d_{l}\), \(d_{L}\), \(d_{c}\) become equal. The implication is that the different definitions of fractal dimension are likely to yield different results when the dynamical process leads to a 'nonuniform' Poincare map.

### Fractal-Generating Maps

The title of this text, _Chaotic and Fractal Dynamics_, may be provocative to some dynamicists. The pairing of these two terms has two justifications. First, a modern understanding of chaotic dynamics requires some knowledge of fractal mathematics. Second, whereas fractals can be studied independently of dynamics, the creation of fractal sets is closely linked with iterative processes, as illustrated above for the baker's map. These iterative processes, which lead to the unpredictability inherent in fractal mathematics, are close analogs of the dynamic processes in physics that also lead to fractal structures.

#### Iterated Linear Maps

It is now accepted that many geometric objects in the natural world, such as coastlines, clouds, mountain ranges, certain trees, and leaves, have fractal-like shapes and surfaces. In a recent book, Barnsley (1988) showed how one can recreate these shapes using iterated linear maps, making a very nice connection between static fractal objects and the dynamical equations that generate them. In this section, we outline a few of these ideas in the hope of inspiring the reader to delve deeper into these techniques. One potential application of these dynamical methods of generating fractals is data compression.
Thus, if one wants to send a good picture of a fractal-like object (e.g., a landscape), instead of using a high-resolution image scanner (TV camera) with upwards of \(10^{6}\) pixels of data, Barnsley and his associates propose to send the mathematical equations (with perhaps only \(10^{2}\) bytes of information) which can dynamically generate an approximation of the landscape after transmission. To get an idea of this technique, we have to recall some of the properties of linear maps. These maps take the form \[A^{\prime}\ =\ TA\] where for 2-D planar maps \(A\) represents a point in the initial area and \(A^{\prime}\) represents the new point under the matrix operation \(T\): \[T=\left[\begin{matrix}a&c\\ b&d\end{matrix}\right]\] As discussed in Chapter 3, a linear map can contract or expand, rotate, shear, or reflect a set of points in the plane. Of course, the iteration of one linear map cannot create a fractal object or a chaotic orbit; however, a _sequence_ of different linear maps can. One example is the Cantor set discussed above. The step-by-step process of contracting the current set of points along the line and replicating it twice can be written as two linear maps \(\omega_{1}\), \(\omega_{2}\); that is, \[\begin{array}{l}A^{\prime}=WA\\ W=\bigcup\limits_{i=1}^{2}\omega_{i}\\ \omega_{1}=\frac{1}{3}x\\ \omega_{2}=\frac{1}{3}x\ +\frac{2}{3}\end{array}\] (7-3.3) The notation \(\cup\,\omega_{i}\) means that first \(\omega_{1}\) is applied to the set of points \(A\), then \(\omega_{2}\) is applied to \(A\), and the new set of points \(A^{\prime}\) is the union of the two sets \(\omega_{1}A\), \(\omega_{2}A\). In this case there is no overlap (see Figure 7-8). [See Barnsley (1988) for a discussion of overlapping linear maps.]
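The action of \(W\) in Eq. (7-3.3) on a union of intervals can be simulated directly. The sketch below is our own illustrative Python fragment (not from the text); after \(n\) applications the set consists of \(2^{n}\) intervals, each of length \(3^{-n}\):

```python
def cantor_step(intervals):
    """One application of W from Eq. (7-3.3): apply w1(x) = x/3 and
    w2(x) = x/3 + 2/3 to every interval and take the union."""
    out = []
    for (a, b) in intervals:
        out.append((a / 3, b / 3))                  # w1: contract toward 0
        out.append((a / 3 + 2 / 3, b / 3 + 2 / 3))  # w2: contract, shift by 2/3
    return out

A = [(0.0, 1.0)]          # initial set A_1: the whole unit interval
for _ in range(3):
    A = cantor_step(A)
# A now holds 2^3 = 8 intervals of length 3^-3 = 1/27
```

Iterating further shrinks the intervals toward the Cantor set.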
Thus, the dynamical process that generates the Cantor set can be written as a map that acts on a _set_ of points: \[A_{n+1}=WA_{n}\] (7-3.4) This differs from Chapter 3, where the map acts on a position vector **x**, thereby generating a single orbit \(\{\textbf{x}_{n};\ n=1,2,\ldots,\infty\}\). The map (7-3.4) generates a _dense bundle_ of orbits. Under suitable assumptions, repeated application of the mapping \(A_{n+1}=WA_{n}\) leads to an attractor. This means that starting from different sets \(A_{1}\), \(A_{1}^{\prime}\) one ends up with the same set \(A\). This property is illustrated in Figure 7-10 for the Sierpinski triangle. After many iterations, each point in the initial set \(A_{1}\) undergoes an orbit. However, any attempt to trace this orbit back through the order of transformations [\(\omega_{i}\), \(\omega_{j}\), \(\omega_{k}\),...] is very complex (see Figure 7-9). It has been shown (Barnsley, 1988) that such an orbit looks random. This is similar to the result for the horseshoe map.

Figure 7-8: Sketch of the action of a set of nonoverlapping linear maps.

Another example of the generation of a fractal set in the plane is given by the following map (Barnsley, 1988): \[\left\{\begin{array}{c}x^{\prime}\\ y^{\prime}\end{array}\right\}=\left[\begin{array}{cc}A&B\\ C&D\end{array}\right]\left\{\begin{array}{c}x\\ y\end{array}\right\}+\left\{\begin{array}{c}E\\ F\end{array}\right\}\] (7-3.5) \[\begin{array}{ccccccc} &A&B&C&D&E&F\\ \omega_{1}\colon&0.5&0&0&0.5&1&1\\ \omega_{2}\colon&0.5&0&0&0.5&50&1\\ \omega_{3}\colon&0.5&0&0&0.5&50&50\end{array}\] Iteration of this set of linear transformations generates the fractal called the _Sierpinski gasket_, which is somewhat like a planar sponge (Figures 7-5, 7-9, 7-10).
This method of generating a fractal can be computationally very time-consuming, because at every iteration cycle all three linear maps must act on all the points which define \(A\). Another method, however, makes use of the chaotic nature of the orbits in this iteration process. The generation of the Cantor set using the union of two linear transformations is not unlike the dynamical process of the horseshoe map described in Chapters 1 and 3. Here the contraction operation of the horseshoe is represented by the \(\frac{1}{3}x\) terms in \(\omega_{1}\), \(\omega_{2}\) [Eq. (7-3.3)], whereas the bending operation is represented by adding the second map \(\omega_{2}\), which replicates the set \(\omega_{1}A\) and shifts it by \(\frac{2}{3}\). It can be shown (e.g., Guckenheimer and Holmes, 1983 or Barnsley, 1988) that horseshoe-type maps contain an infinite set of chaotic orbits which jump from one half of the domain (\(0\leq x\leq\frac{1}{2}\)) to the other half (\(\frac{1}{2}<x\leq 1\)) as if governed by a random coin toss.

Figure 7-9: Sierpinski triangle generated by application of a set of three linear maps (7-3.5).

Using this property, Barnsley then constructed an algorithm to generate fractal sets based on a single orbit \(\mathbf{x}_{n}=(x_{n},y_{n})\). If the generating functions contain \(k\) linear maps \(\{\omega_{i};\,i=1,\,2,\,...,\,k\}\), then the orbit is given by \[\mathbf{x}_{n+1}\ =\ \omega_{I}\mathbf{x}_{n}\] (7-3.6) where the particular linear map at each iteration step is chosen at random; that is, \[I\ =\ 1\ +\ \text{Integer}[k\cdot\text{Random Number}\,[0,\,1]\ -\ 10^{-4}]\] One can also bias some of the \(\omega_{i}\) more than others by using a set of probabilities \(\{p_{i}\}\), where \(\Sigma p_{i}=1\), so that each \(\omega_{i}\) is given a different probability weight. An example is shown in Figure 7-5, a sequence of images as the iteration progresses.
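The random-iteration algorithm just described takes only a few lines of code. The sketch below is our own illustrative Python fragment (equal probabilities \(p_{i}=\frac{1}{3}\) are assumed) using the coefficients of (7-3.5); the fixed points of the three maps are (2, 2), (100, 2), and (100, 100), and after a short transient the single orbit traces out the Sierpinski gasket spanned by these vertices:

```python
import random

# Affine maps from (7-3.5): x' = A x + B y + E, y' = C x + D y + F
MAPS = [
    (0.5, 0.0, 0.0, 0.5, 1.0, 1.0),    # w1, fixed point (2, 2)
    (0.5, 0.0, 0.0, 0.5, 50.0, 1.0),   # w2, fixed point (100, 2)
    (0.5, 0.0, 0.0, 0.5, 50.0, 50.0),  # w3, fixed point (100, 100)
]

def chaos_game(n, seed=0):
    """Generate n points of a single orbit (Eq. 7-3.6), choosing one of
    the three maps at random with equal probability at each step."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    pts = []
    for _ in range(n):
        a, b, c, d, e, f = rng.choice(MAPS)   # random index I
        x, y = a * x + b * y + e, c * x + d * y + f
        pts.append((x, y))
    return pts
```

Plotting the points after discarding the first hundred or so reproduces the gasket of Figure 7-9.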
Further discussion of these fascinating ideas is beyond the scope of this book, and the reader is encouraged to look at the many color images in the Barnsley text.

Figure 7-10: Sequence of images generated by a system of iterated linear maps (7-3.5) starting from two different initial sets of points (From Barnsley, 1988).

It is interesting to note that fractal objects can be created with both deterministic and random dynamic processes. It is this author's belief that the random processes are substitutes for unknown deterministic dynamics, as is the case with the two iterated-map algorithms above. What is amazing, in either the deterministic chaotic models or the random models, is the global fractal structure that results. These mysterious connections among determinism, chaos, randomness, and fractals will keep both dynamicists and philosophers busy into the next century.

### Analytic Maps on the Complex Plane

Many readers have perhaps seen the beautiful multicolor fractal pictures associated with the name of Mandelbrot (1982) (e.g., see Peitgen and Richter, 1986). These pictures are associated with a two-dimensional map involving the complex variable \(z=x+iy\), \[z_{n+1}=z_{n}^{2}+c\] (7-3.7) where \(c=a+ib\) is complex. In terms of real variables, this map becomes \[\begin{split}x_{n+1}&=x_{n}^{2}-y_{n}^{2}+a\\ y_{n+1}&=2x_{n}y_{n}+b\end{split}\] (7-3.8) This map looks similar to other 2-D maps studied in this book, where \[\begin{split}x_{n+1}&=f(x_{n},y_{n})\\ y_{n+1}&=g(x_{n},y_{n})\end{split}\] However, in the case of the complex map (7-3.8), \(F=f+ig\) is an analytic function of \(z\). This means that a derivative \(dF(z)/dz\) exists and that the functions \(f(x,y)\) and \(g(x,y)\) satisfy \[\begin{split}\frac{\partial f}{\partial x}&=\frac{\partial g}{\partial y}\\ \frac{\partial f}{\partial y}&=-\frac{\partial g}{\partial x}\end{split}\] (7-3.9) or \[\nabla^{2}f\,=\,\nabla^{2}g\,=\,0\] (7-3.10) In general, the 2-D maps studied earlier in the book do not satisfy these conditions.
Thus, the quadratic complex map (7-3.7) and more general complex maps \(z_{n+1}=F(z_{n})\) are very special maps and have been found to have incredibly complex dynamics and geometric properties (e.g., see Devaney, 1989). Because this is an introductory book, we will briefly describe two geometric properties of complex maps: Julia sets and the Mandelbrot set.

#### Julia Sets

As with other maps, one can define fixed points and periodic or cycle points by the relations \[z\,=\,F(z)\quad\text{and}\quad z\,=\,F^{p}(z)\] (7-3.11) where the superscript \(p\) indicates the application or composition of the map \(p\) times. Also, one can study the stability of each of these fixed points by looking at the derivative \[\lambda\,=\,\frac{d}{dz}\,F^{p}(z)\] (7-3.12) It can be shown that the fixed point is attracting or repelling depending on whether \(|\lambda|<1\) or \(|\lambda|>1\) (see Devaney, 1989). The Julia set of a complex map \(F(z)\), sometimes denoted by \(J(F)\), is the set of all the repelling fixed or periodic points. In the case of the map \(F(z)=z^{2}\), one can show that \(J(F)\) is a circle about the origin. That a dynamical system can have a continuous ring of unstable fixed points is not unusual. For example, a particle in a cylindrically symmetric potential \(U=ar^{2}(1\,-\,\frac{1}{2}r^{2})\) has a circle of unstable saddles on \(r\,=\,1\). The interesting property of these complex maps, however, is that on adding a constant, \(F(z)=z^{2}\,+\,c\), the Julia set becomes wrinkled or fractal. For example, if \(|c|<\frac{1}{4}\), then \(J(F)\) is still a closed curve but contains no smooth arcs (Devaney, 1989). This is illustrated in Figure 7-11. For larger values of \(|c|\) the Julia set becomes even more interesting, as illustrated by the case \(c\,=\,-\,1\) in Figure 7-12. Here we see a fractal necklace with infinitely many loops.
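These properties can be explored numerically with an escape-time test (an illustrative Python fragment of our own; the function name, iteration budget, and escape radius are arbitrary choices). Classifying a grid of starting points \(z_{0}\) this way reveals the filled-in region of bounded orbits, whose boundary is \(J(F)\); the same test with \(z_{0}=0\) and varying \(c\) underlies the Mandelbrot set discussed below:

```python
def stays_bounded(z0, c, max_iter=200, radius=2.0):
    """Iterate z -> z**2 + c starting from z0; return True if the orbit
    has not left the disk |z| <= radius after max_iter steps.  The
    boundary between bounded and escaping initial points is the Julia
    set J(F) of F(z) = z**2 + c."""
    z = z0
    for _ in range(max_iter):
        if abs(z) > radius:
            return False
        z = z * z + c
    return True

# For c = 0, J(F) is the unit circle: starting points inside the circle
# stay bounded, points outside escape to infinity.
```

Counting the iterations to escape, rather than returning a Boolean, gives the color bands seen in published Julia- and Mandelbrot-set images.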
One can also show that once on the Julia set, further iterations of the map keep one on the set [i.e., \(J(F)\) is invariant]. It has also been shown that the dynamics on this set are chaotic; that is, there is sensitivity to initial conditions. Because the real and imaginary parts of the mapping function \(F(z)\) satisfy Laplace's equation [Eq. (7-3.10)], attempts have been made to interpret these Julia sets in terms of electric charge potentials. However, it is difficult to find a physical analog to the dynamic equations \(z_{n+1}=F(z_{n})\) when \(z\) is complex. On the other hand, the chaotic dynamics of repeller potentials, as in particle scattering problems, have received attention in recent years. Although the study of chaos and complex maps appears to be a modern subject, the mathematics of repelling sets in complex maps has its origins in the work of the mathematicians Julia and Fatou early in the 20th century.

#### Mandelbrot Sets

Whereas the Julia set is described in the plane of the state variables (\(x\), \(y\)) of the complex map \(z_{n+1}=F(z_{n})\), the Mandelbrot set is described in the parameter space (\(a\), \(b\)) of the control variable \(c=a+ib\) in the complex quadratic map \(F(z)=z^{2}+c\). In constructing the Mandelbrot set, one fixes the initial condition \(z_{0}=0\), or (\(x\), \(y\)) = (0, 0), and looks for complex parameter values for which the iterates of the map do _not_ go to infinity. This set is shown as the dark pattern in Figure 7-13 and Color Plate 3. Each color outside the set represents a given number of iterations for the iterate \(z_{n}\) to go beyond a certain prefixed radius. What is remarkable about this set is the fractal nature of the boundary, which contains smaller versions of the Mandelbrot set as one looks at the surface with a larger and larger computer microscope. When \(c\) is real, initial conditions on the real \(z\) axis, \(y=0\), yield a one-dimensional map \(x_{n+1}=x_{n}^{2}+a\).
One can show that this map is equivalent to the logistic map \(x_{n+1}=\lambda x_{n}(1-x_{n})\) and that period-doubling bifurcations occur as one moves along the real axis in the Mandelbrot set. Again, although the physical relevance of these complex maps is not transparent at this time, they have served as a dramatic visual paradigm of the intimate connection between dynamical systems and fractals and of how incredible patterns of complexity can arise from simple mathematical models.

Figure 7-12: (_a_) Julia set for the case \(c=-1\) for the map \(z_{n+1}=z_{n}^{2}+c\); (_b_) Enlargement of points in the box in (_a_).

Figure 7-13: Mandelbrot set for the map \(z_{n+1}=z_{n}^{2}+c\). (Courtesy of J. Hubbard, Cornell University.)

### Fractal Dimension of Strange Attractors

There are two principal applications of fractal mathematics to nonlinear dynamics: characterization of strange attractors and measurement of fractal boundaries in initial condition and parameter space. In this section, we discuss the use of the fractal dimension in both numerical and experimental measurements of motions associated with strange attractors. As yet, there are no instruments, electronic or otherwise, which will produce an output proportional to the fractal dimension, although electro-optical methods may achieve this end in the future (see Section 7.5). To date, in both numerical and experimental measurements, the fractal dimension and Lyapunov exponents are found by discretizing the signals at uniform time intervals and processing the data with a computer. There are three basic methods:

1. Time discretization of phase-space variables
2. Calculation of fractal dimension of Poincare maps
3. Construction of a pseudo-phase-space using single-variable measurements (sometimes called the _embedding space method_)
In both the first and third methods, the variables are measured and stored at uniform time intervals \(\{\mathbf{x}(t_{0}\,+\,n\tau)\}\), where \(n\) is a set of integers. The time interval \(\tau\) is chosen to be a fraction of the principal forcing period or characteristic orbit time. If the Poincare map in the second method is based on a time signal, then \(\tau\) is just the period of the time-based Poincare map. However, if the Poincare map is based on other phase-space variables, then the data are collected at variable times depending on the specific type of Poincare map (see Chapter 5). There are three principal definitions of fractal dimension used today: averaged pointwise dimension, correlation dimension, and Lyapunov dimension. In most of the experience with actual calculation of fractal dimension, 20,000 or more points are used, though several papers claim to have reliable algorithms based on as few as 1000 points (e.g., see Abraham et al., 1986). Direct algorithms for calculating fractal dimension based on \(N_{0}\) points generally require on the order of \(N_{0}^{2}\) operations, so that superminicomputers or mainframe computers are often used. However, clever use of basic machine operations can reduce the number of operations to order \(N_{0}\ln N_{0}\) and significantly speed up the calculation (e.g., see Grassberger and Procaccia, 1983).

#### Discretization of Phase-Space Variables

Suppose we know or suspect a chaotic system to have an attractor in three-dimensional phase space based on the physical variables \(\{x(t),\,y(t),\,z(t)\}\). For example, in the case of the forced motion of a beam or particle in a two-well potential (see Chapter 2), \(x\) is the position, \(y=\dot{x}\) is the velocity, and \(z=\omega t\) is the phase of the periodic driving force. In this method, time samples of \((x(t),\,y(t),\,z(t))\) are obtained at a sampling interval smaller than the period of the driving force.
To each time interval there corresponds a point \(\mathbf{x}_{n}=(x(n\tau),\,y(n\tau),\,z(n\tau))\) in phase space. To calculate an averaged pointwise dimension, one chooses a number of random points \(\mathbf{x}_{n}\). About each point one calculates the distances from \(\mathbf{x}_{n}\) to the nearest points surrounding \(\mathbf{x}_{n}\). (Note that these points are not the nearest in time, but in distance.) One does not need to use a Euclidean measure of distance. For example, the sum of absolute values of the components of \((\mathbf{x}_{n}\,-\,\mathbf{x}_{m})\) could be used; that is, \[s_{nm}=\left|x(n\tau)\,-\,x(m\tau)\right|\,+\,\left|y(n\tau)\,-\,y(m\tau)\right|\,+\,\left|z(n\tau)\,-\,z(m\tau)\right|\] (7-4.1) Then the number of points within a ball, cube, or other geometric shape of order \(\varepsilon\) is counted and a probability measure is found as a function of \(\varepsilon\): \[P_{n}(\varepsilon)=\frac{1}{N_{0}}\sum_{m=1}^{N_{0}}H(\varepsilon\,-\,s_{nm})\] (7-4.2) where \(N_{0}\) is the total number of sampled points and \(H\) is the Heaviside step function: \(H(r)=1\) if \(r>0\); \(H(r)=0\) if \(r<0\). The averaged pointwise dimension, following Eq. (7-2.3), is then \[\begin{split} d_{n}&=\lim_{\varepsilon\to 0}\frac{\log P_{n}(\varepsilon)}{\log\varepsilon}\\ d&=\frac{1}{M}\sum_{n=1}^{M}d_{n}\end{split}\] (7-4.3) provided the limit defining \(d_{n}\) exists. For some attractors, the function \(P_{n}\) versus \(\varepsilon\) is not a power law but has steps or abrupt changes in slope. Then one can calculate a modified averaged pointwise dimension by first averaging \(P_{n}\). For example, let \[\begin{split}\hat{C}(\varepsilon)&=\frac{1}{M}\sum_{n=1}^{M}P_{n}(\varepsilon)\\ d&=\lim_{\varepsilon\to 0}\frac{\log\hat{C}(\varepsilon)}{\log\varepsilon}\end{split}\] (7-4.4) This is similar to the correlation dimension discussed in the previous section. The example of the two-well potential (6-2.2) is shown in Figure 7-14\(a\),\(b\) using the correlation dimension.
This dimension is computed from numerically generated data using the equations \(\dot{x}=y\), \(\dot{y}=-\delta y+\frac{1}{2}x(1\,-\,x^{2})\,+\,f\cos z\), \(\dot{z}=\omega\) for values of \(\delta\), \(f\), \(\omega\) in the chaotic regime. Figure 7-14\(a\) shows the logarithm of the correlation function, whereas Figure 7-14\(b\) shows the local slope versus the logarithm of the size of the test volume. The slope for the intermediate values of \(\varepsilon\) is around 2.5. This is consistent with the fact that the attractor lives in a three-dimensional space (\(x\), \(y\), \(z\)). In practice, \(N_{0}\approx 3\,\times\,10^{3}\)-\(10^{4}\) points and \(M\approx 0.2N_{0}\). One should experiment with the choice of \(M\) by starting with a small value and increasing it until \(d\) reaches some limit. The choice of \(\varepsilon\) also requires some judgment. The upper limit of \(\varepsilon\) should be much smaller than the maximum size of the attractor yet large enough to capture the large-scale structure in the vicinity of the point \(\mathbf{x}_{n}\). The smallest value of \(\varepsilon\) must be such that the associated sphere or cube contains at least one sample point. Another constraint on the minimum size of \(\varepsilon\) is the 'real' noise or uncertainty in the measurements of the state variables (\(x\), \(y\), \(z\)). In an actual experiment, there is a sphere of uncertainty surrounding each measured point in phase space. When \(\varepsilon\) becomes smaller than the radius of this sphere, the theory of fractal dimension discussed above comes into question, because for smaller \(\varepsilon\) one cannot expect a self-similar structure.

Figure 7-14: (_a_) Log \(C\) versus log \(\varepsilon\) for chaotic motion in a two-well potential (4-2.2). Data obtained from numerical integration. (_b_) Local slope of (_a_) showing fractal dimension in linear region of (_a_) of around 2.5.
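The procedure of Eqs. (7-4.1)-(7-4.4) can be sketched in a few lines. The fragment below is our own illustrative Python version (using the direct \(O(N_{0}^{2})\) distance computation mentioned earlier, not an optimized algorithm, and averaging over all points rather than a random subset). As a sanity check, points lying along a smooth curve should give a slope near 1:

```python
import numpy as np

def correlation_dim(points, eps_values):
    """Estimate a fractal dimension as the slope of log C(eps) versus
    log eps, where C is the averaged probability of Eq. (7-4.4) and
    distances are the sum-of-absolute-values measure of Eq. (7-4.1)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # all pairwise taxicab distances (direct O(n^2) method)
    dist = np.abs(pts[:, None, :] - pts[None, :, :]).sum(axis=-1)
    log_c = []
    for eps in eps_values:
        inside = (dist < eps).sum() - n        # drop the n self-pairs
        log_c.append(np.log(inside / (n * (n - 1))))
    slope, _ = np.polyfit(np.log(eps_values), log_c, 1)
    return slope

# sanity check: points along a line segment should give a slope near 1
t = np.linspace(0.0, 1.0, 400)
d_line = correlation_dim(np.column_stack([t, t]), [0.05, 0.1, 0.2])
```

In practice one would inspect the local slope over a range of \(\varepsilon\), as in Figure 7-14\(b\), rather than fit a single line.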
#### Fractal Dimension of Poincare Maps

In systems driven by a periodic excitation, as in the Duffing-Ueda strange attractor (4-6.1) or the two-well potential strange attractor (4-2.2), time or the phase \(\phi=\omega t\) becomes a natural phase-space variable. In most cases, this time variable will lie in the attractor subspace, and time can be considered as one of the contributions to the dimension of the attractor. In the case of a periodically forced, nonlinear, second-order oscillator, the Poincare map based on periodic time samples produces a distribution of points in the plane. To calculate the fractal dimension of the complete attractor, it is sometimes convenient to calculate the fractal dimension \(D\) of the Poincare map, \(0<D<2\). If \(D\) is independent of the phase of the Poincare map (remember \(0\leq\omega t\leq 2\pi\)), then the dimension of the complete attractor is just \[d\ =\ 1\ +\ D\] (7-4.5) As an example, we present numerical and experimental data for the two-well potential or Duffing-Holmes strange attractor (Chapter 2): \[\ddot{x}\ +\ \gamma\dot{x}\ -\ \tfrac{1}{2}x(1\ -\ x^{2})\ =\ f\cos\omega t\] (7-4.6) In this example, we are interested in two questions:

1. Does the fractal dimension of the strange attractor vary with the phase of the Poincare map?
2. How does the fractal dimension vary with the damping \(\gamma\)?

The fractal dimension was calculated for a set of Poincare maps, and the results are listed in Table 7-2. This table shows an almost constant value around the attractor. Thus, the assumption \(d=1\,+\,D\) in Eq. (7-4.5) appears to be a good one. A numerically generated Poincare map for the case of a particle in a two-well potential under periodic excitation is shown in Figure 7-15. The correlation function \(C(\varepsilon)\) versus \(\varepsilon\) (Figure 7-16_a_) is plotted on a log-log scale and shows the linear dependence assumed in the theory. The data in Figure 7-15 were the same as those used in Figure 7-14.
From Figure 7-16\(b\), \(D\simeq 1.5\) or \(d=2.5\), which agrees with the value calculated directly from the attractor in the phase space (\(x\), \(\dot{x}\), \(\omega t\)) as in Figure 7-14. The effect of damping on the fractal dimension of the two-well potential strange attractor was determined from Runge-Kutta numerical simulation. This dependence is shown in Figure 7-17. The data show that low damping yields an attractor that fills phase space (\(D=2\), \(d=3\)), as would a Hamiltonian (zero damping) system. As damping is increased, however, the Poincare map looks one-dimensional and the attractor has a dimension close to \(d=2\), as in the case of the Lorenz equations. The fractal dimension of a chaotic circuit (diode, inductor, and resistor in series driven with an oscillator) has been measured by Linsay (1985) using a Poincare map. He measured the current at a sampling time equal to the period of the oscillator and constructed a three-dimensional pseudo-phase-space using (\(I(t)\), \(I(t\,+\,\tau)\), \(I(t\,+\,2\tau)\)) (see next section). He obtained a fractal dimension of the Poincare map of \(D=1.58\) and inferred a dimension of the attractor of 2.58.

#### Pseudo-Phase-Space Measurements

The two methods above assume the ability to measure all the state variables. However, in many experiments, the time history of only one state variable may be available or possible. Also, in continuous systems involving fluid or solid continua, the number of degrees of freedom or minimum number of significant modes contributing to the chaotic dynamics may not be known a priori. In fact, one of the important applications of fractal mathematics is to allow one to determine the smallest number of first-order differential equations that may capture the qualitative features of the dynamics of continuous systems. This has already had some success in thermofluid problems such as Rayleigh-Benard convection (see Malraison et al., 1983).
In early theories of turbulence (e.g., Landau, 1944), it was thought that chaotic flow was the result of the interaction of a very large or infinite set of modes or degrees of freedom in the fluid. At the present time, however, it is believed that some forms of turbulence can be modeled by a finite set of ordinary differential equations (see, e.g., Aubry et al., 1988).

Figure 7-16: (_a_) Log \(C\) versus log \(\varepsilon\) for the set of points in the Poincaré map in Figure 7-15. (_b_) Local slope of (_a_) showing a fractal dimension in the linear region of (_a_) of around 1.5.

Figure 7-17: Dependence of fractal dimension on the damping for the two-well potential oscillator (4-2).

Thus, suppose that the number of first-order equations required to simulate the dynamics of a dissipative system is \(N\). Then the fractal dimension of the attractor would be \(d\leq N\), and if we could determine \(d\) by some means, we would determine the minimum \(N\). Not knowing \(N\), we cannot know how many physical variables (\(x(t)\), \(y(t)\), \(z(t)\),...) to measure. Instead we construct a pseudo-phase-space using time-delayed measurements of one physical variable, say (\(x(t)\), \(x(t\,+\,\tau)\), \(x(t\,+\,2\tau)\),...) (see Chapter 5; see also Packard et al., 1980). For example, three-dimensional pseudo-phase-space vectors are calculated using three successive components of the digitized \(x(t)\) (Figure 7-18); that is, \[\mathbf{x}_{n}\,=\,\{x(t_{0}\,+\,n\tau),\,x(t_{0}\,+\,(n\,+\,1)\tau),\,x(t_{0}\,+\,(n\,+\,2)\tau)\}\] (7-4.7) With these position vectors, one can use the correlation function (7-2.5) or averaged probability function (7-2.3) to calculate a fractal dimension.
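Constructing the vectors of Eq. (7-4.7) from a sampled record is straightforward. A minimal sketch (our own Python fragment; the function name and the convention that the delay is given in samples are assumptions) is:

```python
import numpy as np

def delay_embed(x, m, tau=1):
    """Build m-dimensional pseudo-phase-space vectors from a scalar
    time series, as in Eq. (7-4.7): row n of the result is
    (x[n], x[n + tau], ..., x[n + (m - 1) * tau])."""
    x = np.asarray(x, dtype=float)
    n_vectors = len(x) - (m - 1) * tau
    if n_vectors <= 0:
        raise ValueError("time series too short for this embedding")
    return np.column_stack([x[i * tau: i * tau + n_vectors]
                            for i in range(m)])

# e.g., embed a measured signal for m = 2, 3, ..., 8 and watch the
# estimated fractal dimension for an asymptote
```

The same routine serves for every embedding dimension \(m\); only the number of available vectors shrinks as \(m\) grows.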
To determine the minimum \(N\), one constructs higher-dimensional pseudo-phase-spaces based on the time-sampled \(x(t)\) measurements until the value of the fractal dimension reaches an asymptote, say, \(d\,=\,M\,+\,\mu\), where \(0\,<\,\mu\,<\,1\). Then the minimum phase-space dimension for this chaotic attractor is \(N\,=\,M\,+\,1\). In reconstructing a dynamical attractor from the time history measurements of a single variable, the question arises of how many dimensions are required in the embedding space in order to capture all the topological features of the original attractor. The mathematician Takens has proved several theorems about this question. If the original phase-space attractor lives in an \(N\)-dimensional space, then in general one must reconstruct an embedding space (our pseudo-phase-space) of dimension \(2N\,+\,1\).

Figure 7-18: Sketch of an orbit in a three-dimensional pseudo-phase-space constructed from a single time series measurement.

Figure 7-19: (\(a\)) Log \(C\) versus log \(\varepsilon\) for the two-well potential problem for different dimension embedding spaces. The time history data are identical to those in Figures 7-14 and 7-16. (\(b\)) Fractal dimension of the attractor versus the dimension of the embedding space.

To illustrate these ideas we have applied the embedding space method to find the dimension of the two-well potential (or buckled beam) attractor (4-2.2). Earlier we saw that this attractor lives in a three-dimensional phase space (\(x\), \(\dot{x}\), \(\omega t\)) and has a fractal dimension of \(d=2.5\) (Figure 7-14). Using the same data we also saw that we could calculate \(d\) from the Poincare map (Figures 7-15 and 7-16). Using the same numerical data from a Runge-Kutta integration, we reconstructed the motion in a pseudo-phase-space using digitized values of \(x(t)\) and embedding space dimensions of \(m=2\)-\(8\).
The graphs in Figure 7-19\(a,b\) show the correlation function as well as the calculated dimension of the attractor in each embedding space. One can see in Figure 7-19\(a,b\) that the dimension reaches an asymptote of \(d=2.5\) after \(M\sim 4\)-\(5\), which is in agreement with Takens's theorem. An example of calculating the fractal dimension from experimental data is shown in Figure 7-20 for the periodic excitation of a long, thin cantilevered beam with rectangular cross section (Cusumano and Moon, 1990). In this problem, the resonant vibrations near the natural frequencies can couple into the torsional out-of-plane modes. The result is a dynamic snapping back and forth from one torsion-bending motion to another in a chaotic way. The data were obtained from both strain gage measurements on the beam and optical measurements of the tip displacement. In these calculations a random set of 20,000-25,000 points was selected from 100,000 time series points. As another example using experimental data, we describe the work of a group at the French research laboratory at Saclay (e.g., see Malraison et al., 1983 and Berge et al., 1984). They measured the fractal dimension of a convective fluid cell under a thermal gradient (Rayleigh-Benard convection; see Chapter 4). They calculated the fractal dimension using an averaged pointwise dimension (7-2.3) for different sizes of pseudo-phase-spaces. The fractal dimension saturated at a value of \(d=2.8\) when the embedding dimension of the phase space reached 5 or greater. They used 15,000 points and averaged \(P_{n}(\varepsilon)\) over 100 random points. However, they also found regimes of chaotic flow where no clear slope of log \(C(\varepsilon)\) versus log \(\varepsilon\) existed.

Figure 7-20: Calculation of fractal dimension from experimental data for periodic vibration of a thin cantilever beam. [From Cusumano and Moon (1990).]
Similar results for the flow between two cylinders (Taylor-Couette flow) have been reported by a group from the Soviet Union (L'vov et al., 1981). They claim to measure the information dimension. Figure 7-21 shows the value of the slope of log \(C(\varepsilon)\) versus log \(\varepsilon\) as a function of \(\varepsilon\). This is characteristic of these measurements. The slope values at small \(\varepsilon\) reflect instrumentation noise, whereas the values at large \(\varepsilon\) are those for which the size of the covering sphere or hypercube reaches the scale of the attractor. Using such techniques, one can determine how the fractal dimension changes as some control parameter in the experiment is varied. For example, in the case of Taylor-Couette flow (see Figure 4-42), Swinney and co-workers have measured the change in \(d\) as a function of Reynolds number (see Swinney, 1985). In another fluid experiment, Ciliberto and Gollub (1985) have studied chaotic excitation of surface waves in a fluid. The surface wave chaos was excited by vertical excitation of the fluid at 16 Hz; 2048 points were sampled with a sampling time of 1.5 s, or around 300 orbits. Using the embedding space technique, they measured both the correlation dimension (\(d_{c}=2.20\pm 0.04\)) and the information dimension (\(d_{I}=2.22\pm 0.04\)), both of which reached asymptotic values when the embedding space dimension was 4 or greater. Holzfuss and Mayer-Kress (1986) have examined the probable errors in estimating dimensions from a time series data set. The three methods studied involved the correlation dimension, the averaged pointwise dimension, and the averaged radius method of Termonia and Alexandrowicz (1983). They tested each on a set of 20,000 points from a quasiperiodic motion on a 5-torus, which consists of a time history with five incommensurate frequencies.
Using the pseudo-phase-space method for embedding dimensions of 2-20, they found that the averaged pointwise dimension had the smallest standard deviation of the three. The average was taken over 20% of the reference points, and curves that did not show scaling behavior over a significant portion of the range of \(r\) were rejected.

Figure 7-21: Calculation of fractal dimension for chaotic flow of fluid between two rotating cylinders: Taylor–Couette flow (see Chapter 4). [From L'vov et al. (1981) with permission of Elsevier Science Publishers, copyright 1981.]

### Multifractals

#### Fractals Within Fractals

As we have seen above, the fractal dimension measures the way in which a distribution of points fills a geometric space on the average. But what if the distribution is highly inhomogeneous? Can a set of points have a distribution of fractal dimensions? A set of points with multiple fractal dimensions is not only possible, but is also common in a number of experimental systems as well as simple mathematical maps. [See Feder (1988) or Falconer (1990) for a more complete discussion of multifractals.] One remarkable example is the observation of multifractal properties of a Poincare map from a quasiperiodic route to chaos in a Rayleigh-Benard thermal convection experiment reported by Jensen et al. (1985). Equally intriguing about this example is the ability of the circle map (Chapter 3) to quantitatively predict the correct distribution of fractal dimensions. In a discussion of multifractal properties of a distribution of points, one must distinguish between the amplitude or _measure_ of the distribution and the geometric set or so-called _support_ of the distribution. To illustrate these ideas we describe two examples of simple dynamical systems that generate multifractals and show another in Figure 7-22.

#### Example I

Imagine a uniform mass distribution along a line [0, 1].
The iteration rule that creates a multifractal set of points is as follows: Divide the line into two segments, say \([0,\tfrac{1}{2})\) and \((\tfrac{1}{2},1]\), and redistribute the mass so that \(P\) is distributed uniformly on the right segment and \((1-P)\) is distributed uniformly on the left segment. As this process is iterated, the original uniform mass distribution becomes highly inhomogeneous. The dimension of the support, which remains a continuous line, is unity, yet the distribution clearly has a fractal nature to it. For example, by picking some small interval of this distribution and rescaling the abscissa and ordinate scales, one can recover the overall distribution; that is, the distribution obeys the following scaling relation: \[f(x)=\lambda^{B}f(bx)\] (7-5.1)

#### Example II

Take the same example as in Example I, but instead of redistributing the mass over the whole line, distribute the mass elements \(P\) and \((1-P)\) over the left third and right third, respectively, as in the Cantor set. After iterating this rule, both the distribution and the geometric support on which it lives will look fractal. In fact, the fractal dimension of the support is \(D_{0}=\log 2/\log 3\). But clearly \(D_{0}\) does not describe the measure or distribution of mass itself. A third example, with two length scales, is shown in Figure 7-22.

Figure 7-22: Construction of a binomial distribution function with two length scales [see also Feder (1988), Chapter 6].

The above examples are called "binomial multiplicative processes." Suppose we denote the fraction of the mass on the \(i\)th line segment as \(\mu_{i}\).
Then for Example I with equal length segments \(\delta=(\tfrac{1}{2})^{n}\), one can show that \(\mu_{i}\) has the following form after \(n\) iterations: \[\mu_{i}=P^{k}(1-P)^{(n-k)},\qquad k=0,1,2,\ldots,n\] Now define \(k=\xi n\) (\(0\leq\xi\leq 1\)) so that \[\mu_{i}(\xi)=[P^{\xi}(1-P)^{(1-\xi)}]^{n}\] To examine the multifractal properties, look at the set of \(\mu_{i}(\xi)\) for \(\xi=\) constant, and count the number of cells or segments with the same \(\xi\) value or the same measure \(\mu_{i}(\xi)\equiv\mu_{\xi}\). Then for \(n\) very large, the number of cubes of length \(\delta=(\tfrac{1}{2})^{n}\) needed to cover the set of line segments of _equal measure_ \(\mu_{\xi}\) is assumed to scale as \[N(\xi)\sim\delta^{-d(\xi)}\] One can show that \(d(\xi)=f(\xi)\) for equal line segments is given by \[f(\xi)=-[\xi\ln\xi+(1-\xi)\ln(1-\xi)]/\ln 2\] (7-5.3) (see Figure 7-23 for Example I; also see Feder, 1988). It is remarkable that a simple binomial process leads to a continuous distribution of fractal dimensions. Note that the maximum value of \(f(\xi)\) equals the fractal dimension of the entire support, which remains the original line element [0, 1]. This description, however, is not suitable for experiments, so a change of variables is performed using \[\mu_{\xi}=\delta^{\alpha}\quad\text{or}\quad\alpha=\log\mu_{\xi}/\log\delta\] where \(\delta\) is the length of the line segment after the \(n\)th iteration. To determine \(f(\alpha)\) experimentally, one uses the moments of the distribution \(\mu_{i}(\alpha)\). Remember that it is assumed that there is a probability or mass measure which is approximated by partitioning the domain of support into cells of size \(\delta\). For a given \(\delta\), one can calculate a probability moment function \[\chi(q,\delta)=\sum_{i}\mu_{i}(\delta)^{q}\] (7-5.5) (e.g., see Halsey et al., 1986).
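The counting argument behind the logarithmic formula for \(f(\xi)\) above can be checked directly. The number of segments sharing a given \(\xi=k/n\) is the binomial coefficient \(\binom{n}{k}\), independent of \(P\), so \(\ln N/(n\ln 2)\) should converge to \(f(\xi)\) as \(n\) grows. A small sketch (the depth \(n=200\) is an arbitrary choice):

```python
import math

def f_exact(xi):
    # f(xi) = -[xi ln xi + (1 - xi) ln(1 - xi)] / ln 2  (equal-length segments)
    return -(xi * math.log(xi) + (1 - xi) * math.log(1 - xi)) / math.log(2)

def f_counted(n, k):
    # there are C(n, k) segments of length delta = (1/2)^n that carry the
    # measure P^k (1-P)^(n-k); from N(xi) ~ delta^(-f), f = ln N / (n ln 2)
    return math.log(math.comb(n, k)) / (n * math.log(2))

n = 200
for k in (20, 60, 100):
    print(k / n, round(f_counted(n, k), 3), round(f_exact(k / n), 3))
```

The two columns agree to within a few percent already at \(n=200\), with the maximum \(f=1\) (the dimension of the full support) approached at \(\xi=\tfrac{1}{2}\).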
These moments generate a set of dimensions by assuming a scaling relation \[\chi(q,\delta)\sim\delta^{-\tau(q)}\] (7-5.6) or \[\tau(q)=-\lim_{\delta\to 0}\frac{\log\chi(q,\delta)}{\log\delta}\] (7-5.7) The motivation for taking moments and introducing another variable \(q\) is not obvious at a first reading, but suffice it to say that taking moments of the distribution gives one more information. Thus, using (7-5.5) and (7-5.6), one finds a spectrum of dimensions \(\tau(q)\). If one treats \(q\) as a continuous variable, it has been shown (e.g., see Feder, 1988) that one can derive the \(f(\alpha)\) curve from the implicit equations \[\alpha(q)=-\frac{d\tau}{dq},\qquad f(\alpha)=q\alpha(q)+\tau(q)\] (7-5.8)

Figure 7-23: Distribution of fractal dimensions \(d(\xi)\) [Eq. (7-5.3)] for a binomial distribution function in Example I. (From Feder, 1988.)

Some authors introduce another symbol, \(D_{q}\), which is related to \(\tau(q)\): \[D_{q}=\frac{\tau(q)}{1-q}\] (7-5.9) Then it has been shown that \(D_{0}\) is the fractal dimension of the support (our box-counting dimension), \(D_{1}\) is the information dimension, and \(D_{2}\) is the correlation dimension (see Hentschel and Procaccia, 1983).

#### Multifractals, Quasiperiodicity and the Circle Map

To illustrate the application of multifractals to dynamics, consider the circle map \[\theta_{n+1}=\theta_{n}+\Omega-\frac{K}{2\pi}\sin(2\pi\theta_{n})\] (7-5.10) When the winding number \(\Omega\) is chosen as the so-called "golden mean" \(\Omega=(\sqrt{5}-1)/2\), the critical value for chaos is \(K=1\). Iteration of this map then shows an inhomogeneous distribution of points around the circle whose probability measure (i.e., the mass density of points) has multifractal dimensions.

Figure 7-24: Spectrum of fractal dimensions for the circle map (7-5.10) at the critical value \(K=1\) and golden mean winding number. (From Halsey et al., 1986.)
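A minimal numerical sketch of the moment method applied to the circle map (7-5.10): iterate the map at \(K=1\) with golden-mean \(\Omega\), bin the orbit into cells of size \(\delta\), and estimate the generalized dimensions from the scaling of the moment sums. Here \(D_q\) is computed directly from the slope of \(\log\chi\) versus \(\log\delta\) divided by \(q-1\) (the Hentschel–Procaccia form, which is independent of the sign convention chosen for \(\tau\)); the box sizes and iterate count are arbitrary choices.

```python
import numpy as np

def circle_map_orbit(n, K=1.0, Omega=(np.sqrt(5) - 1) / 2, theta0=0.1, burn=500):
    # iterate the circle map (7-5.10), mod 1, at the critical value K = 1
    th, out = theta0, np.empty(n)
    for i in range(n + burn):
        th = (th + Omega - K / (2 * np.pi) * np.sin(2 * np.pi * th)) % 1.0
        if i >= burn:
            out[i - burn] = th
    return out

def D_q(orbit, q, deltas=(1 / 32, 1 / 64)):
    # generalized dimension: D_q = (1/(q-1)) d(log chi)/d(log delta),
    # with chi(q, delta) = sum_i mu_i^q over the occupied cells of size delta
    logchi = []
    for d in deltas:
        counts = np.bincount((orbit / d).astype(int), minlength=round(1 / d))
        mu = counts[counts > 0] / len(orbit)
        logchi.append(np.log(np.sum(mu ** q)))
    slope = (logchi[1] - logchi[0]) / (np.log(deltas[1]) - np.log(deltas[0]))
    return slope / (q - 1)

orbit = circle_map_orbit(50000)
D0, D2 = D_q(orbit, 0), D_q(orbit, 2)
print(f"D_0 = {D0:.3f}, D_2 = {D2:.3f}")
```

With these settings \(D_0\) comes out equal to 1 (the support is the whole circle), while \(D_2<1\) reflects the inhomogeneity of the invariant measure.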
Application of the above formulas, in fact, yields the spectrum of dimensions shown in Figure 7-24. One notes that the maximum value of \(f(\alpha)\) is \(D_{0}=1\), which is the dimension of the support, i.e., the circle.

##### Experimental Multifractals in Dynamics

One of the criteria for selection of topics in this book was the applicability of new mathematical methods in dynamics to experiments. The periodically forced Rayleigh-Benard convection problem is a beautiful example of the application of multifractals to fluid dynamics. In this work, first published by Jensen et al. (1985), a parallelepiped of mercury \(0.7\times 0.7\times 1.4\) cm\({}^{3}\) was subjected to a thermal gradient and at the same time periodically forced by pumping a small amount of current through the fluid in the presence of a magnetic field. The idea was to generate a quasiperiodic motion based on a natural convection oscillation (0.23 Hz), with the driving oscillation chosen so that the ratio of the frequencies was at the "golden mean." The measured state variable was obtained from a thermal probe placed on the bottom plate of the cell. A time series was generated by taking a Poincare map synchronous with the forcing period. The resulting set of data \(\{\ldots T_{n-1},T_{n},T_{n+1}\ldots\}\) was plotted as a return map, \(T_{n}\) versus \(T_{n+1}\) (shown in Figure 7-25), which has the character of a cross section of a torus. What is not characteristic, however, is the inhomogeneous distribution of the density of points around the section of the torus. To avoid spurious bunching effects due to projection of the map onto the plane, the actual calculations were carried out in a three-dimensional space (\(T_{n}\), \(T_{n+1}\), \(T_{n+2}\)).

Figure 7-25: Experimental return map for periodically forced Rayleigh–Benard thermal convection showing the toroidal nature of the attractor. [From Jensen et al. (1985).]
To analyze the multifractal nature of this distribution, a set of cubes of size \(\delta\) was used to cover the attractor and the probability of being in each cube was measured. The resulting spectrum of fractal dimensions \(f(\alpha)\) is shown in Figure 7-26, in the same manner as for the _circle map_. What is truly remarkable is that the spectrum curves for the circle map and for the Rayleigh-Benard convection experimental data are identical. This suggests that there is something "universal" in the way dynamical systems make the transition from quasiperiodic motion to chaotic motion.

Figure 7-26: Spectrum of fractal dimensions \(f(\alpha)\) for the periodically forced thermal convection experiment of Figure 7-25. [From Jensen et al. (1985).]

Another application of multifractal or interwoven sets of different fractal dimensions has been published by Meneveau and Sreenivasan (1987), who applied the theory to fully developed turbulence in the wake of a circular cylinder (see also Sreenivasan, 1991). The same fractal mathematics was also applied to a nonlinear electronic solid-state device using a crystal of \(p\)-type Ge by Gwinn and Westervelt (1987). The crystal was biased with a dc voltage to operate in a region of negative resistance where it achieved a stable limit cycle. A second sinusoidal signal was applied to the circuit so that the ratio of the limit cycle frequency to the driving frequency was the "golden mean." Excellent agreement with the \(f(\alpha)\) spectrum of the circle map was also obtained.

### Optical Measurement of Fractal Dimension

All the methods for calculating the fractal dimension of strange attractors discussed above require the use of a powerful micro- or minicomputer.
From an experimental point of view, however, it is natural to ask whether the fractal properties of dynamical systems can be directly measured using _analog devices_ in the same way that other dynamical properties such as velocity or acceleration are measured. For general, multiple-degree-of-freedom systems, the answer is not known; but for simple nonlinear problems, the fractal dimension of a two-dimensional Poincare map can be measured using optical techniques (Lee and Moon, 1986). This method is based on an optical interpretation of the correlation function (7-2.4). The use of the scattering of waves to measure the fractal dimension of material fractals in three dimensions is described by several authors (e.g., see Schaefer and Keefer, 1984a,b, 1986). A diagram illustrating this optical method for planar fractals embodied in Poincare maps is shown in Figure 7-27. We recall that the correlation function involves counting the number of points in a cube or ball surrounding each point in the fractal set of points.

Figure 7-27: Experimental setup of the optical method for measuring fractal dimension. [From Lee and Moon (1986) with permission of Elsevier Science Publishers, copyright 1986.]

The optical method uses a parallel-processing feature to perform all the sums at once. Light coming from one film creates a disk of light on another film. If each film is an identical copy of the Poincare map of the strange attractor, the total light emanating from the second film is proportional to the correlation function. By changing the distance between the two films in Figure 7-27, the radius of the small circles changes and one can obtain the correlation sum as a function of the radius \(r\). A plot of log \(C(r)\) versus log \(r\) then yields the fractal dimension \(D\) of the Poincare map. If the map is a time-triggered Poincare map, the dimension of the attractor is \(1+D\).
### An Optical Parallel Processor for the Correlation Function

A sketch of the experimental setup is shown in Figure 7-27, displaying the optical path of light in this method. The method makes use of two properties of classical optics. First, if light is passed through a small aperture of diameter \(D\) in the region of Fraunhofer diffraction (where \(\lambda\) is the wavelength and \(D\gg\lambda\)), then the light will cast a circle of radius \(r\), with uniform intensity, on a plane located at a distance \(L\) from the aperture. This radius is given by \(r=1.22L\lambda/D\). In our method, the aperture originates from a small dot on the negative of a planar Poincare map, and the small circle of light falls on an identical copy of this negative located at a distance \(L\) (Figure 7-27). Second, for incoherent light, the amount of light that emanates from the second negative is proportional to the number of small dots or circles within the circle of illumination. The total amount of light passing through both films is thus proportional to the correlation function \(C(r)\). To calculate or vary \(r\), we simply measure and vary \(L\), the distance between the two negatives. To make these ideas more concrete, let \(\Phi({\bf x},r)\) be the radiant flux behind film #2 due to the flux \(\Phi_{in}({\bf x})\) entering the circular aperture at \({\bf x}\) on film #1: \[\Phi({\bf x},r)=n({\bf x},r)A\,\frac{\Phi_{in}({\bf x})}{\pi r^{2}}\] (7-6.1) where \(n({\bf x},r)=\sum_{j}H(r-|{\bf x}-{\bf x}_{j}|)\) is the number of apertures located within the circle of light illuminated by the flux in the aperture at \({\bf x}\) (\(H\) is the unit step function), and \(A\) is the area of the aperture of a point on film #1. One can see that \(\Phi\) depends on both \(n\) and \(r\) explicitly. However, we would like a measure of \(n\) alone. Using the linear relation between \(r\) and \(L\), we define an adjusted radiant flux \(\Phi^{*}=(r/r_{0})^{2}\Phi\), where \(r_{0}\) is the radius of the illuminated area when \(L=L_{0}\) (\(L_{0}\) is a convenient reference distance).
Summing over all points in film #1, we obtain \[\sum_{k=1}^{N}\Phi^{*}({\bf x}_{k},r)=\left(\frac{r}{r_{0}}\right)^{2}\sum_{k=1}^{N}\Phi({\bf x}_{k},r)=\frac{A}{\pi r_{0}^{2}}\sum_{k=1}^{N}\Phi_{in}({\bf x}_{k})n({\bf x}_{k},r)\] (7-6.2) When the incident light intensity is uniform over film #1, we find \[\left(\frac{L}{L_{0}}\right)^{2}\sum_{k=1}^{N}\Phi({\bf x}_{k},r)\approx\sum_{k=1}^{N}n({\bf x}_{k},r)\approx C(r)\] (7-6.3) The maps can be obtained either from a numerical solution of a third-order system of equations or from experimental data. The light passing through film #2 was focused onto a photocell for the light flux measurement. A light filter (orange-amber color filter) was used at the light source to optimize the photocell response around 6328 Å. The dot size on the negatives was less than 0.2 mm; thus \(D/\lambda\simeq 300\), which satisfies the Fraunhofer diffraction criterion. The output voltage from the photocell contained a lot of noise. To extract the signal from the noise, a mechanical light chopper and a lock-in amplifier were used in the signal processing. The chopper was operated at approximately 100 Hz to avoid power-line noise. The radiant flux behind film #2 was measured at the photocell as a function of the distance between films, and the adjusted radiant flux (7-6.2) versus \(L\) was plotted on a log-log scale as shown in Figure 7-28. Theoretically, the slope of this curve should give the fractal dimension (7-2.5).

Figure 7-28: Radiant flux versus distance between two films of Poincaré maps on a log–log scale for data from the vibration of a buckled beam. [From Lee and Moon (1986) with permission of Elsevier Science Publishers, copyright 1986.]

The data were obtained from a Runge-Kutta simulation of the forced two-well potential equation (7-4.6). The 4000 points were generated by taking a Poincare map synchronous with the driving frequency. The adjusted radiant flux output was measured at approximately 200 values of \(L\).
However, only the linear section of log \(C\) versus log \(L\) is plotted in Figure 7-28. A comparison of the optically measured fractal dimension with those calculated from the numerical data of Moon and Li (1985a) is shown in Table 7-3 for several values of the damping. The results, as one can see, are remarkably good.

\begin{table}
\begin{tabular}{c c c}
\hline
\multicolumn{3}{c}{Numerical Poincaré Map [Eq. (7-4.6)]} \\
\hline
Damping & Calculated\({}^{a}\) & Measured \\
\hline
0.075 & 1.565\({}^{b}\) & 1.558 \\
0.105 & 1.393 & 1.417 \\
0.135 & 1.202 & 1.162 \\
\hline
\end{tabular}

\begin{tabular}{c c c c}
\hline
\multicolumn{4}{c}{Experimental Poincaré Map} \\
\hline
Phase Angle & \multicolumn{2}{c}{Calculated\({}^{a}\)} & Measured \\
\hline
0\({}^{\circ}\) & 1.741\({}^{b}\) & 1.628\({}^{c}\) & 1.678 \\
45\({}^{\circ}\) & 1.751 & 1.627 & 1.671 \\
90\({}^{\circ}\) & 1.742 & 1.638 & 1.631 \\
135\({}^{\circ}\) & 1.748 & 1.637 & 1.676 \\
180\({}^{\circ}\) & 1.730 & 1.637 & 1.635 \\
\hline
\end{tabular}
\({}^{a}\) Moon and Li (1985a).
\({}^{b}\) Based on the four smallest log \(r\) points in log \(C\) versus log \(r\).
\({}^{c}\) Based on the seven smallest log \(r\) points in log \(C\) versus log \(r\).
\end{table}
Table 7-3: Optically Measured Fractal Dimension for Computer-Simulated and Experimental Poincaré Maps

A comparison of the optical and numerical methods for experimental Poincare maps of the buckled beam is also shown in Table 7-3. In this set of tests, the phase of the Poincare map trigger was changed. The optical measurement of fractal dimension confirms the results of the numerical method, namely, that the dimension is independent of the phase of the map. This implies that the dimension of the strange attractor itself is \(1+D\), where \(D\) is the planar map dimension.

### Fractal Basin Boundaries

#### Basins of Attraction

In most physical _linear_ systems, there is just one possible motion for a given input.
For example, the response of a linear mass-spring-damper system to an initial impulse is a decaying oscillation in which the mass eventually comes to rest. Such a system has but one attractor, namely, the equilibrium point. However, in nonlinear systems, it is possible for more than one outcome to occur depending on the inputs, such as force level or initial conditions. For example, the system may have more than one equilibrium position, or it may have more than one periodic or nonperiodic motion as in certain self-excited systems. Equilibrium positions and periodic or limit cycle motions are called _attractors_ in the mathematics of dissipative dynamical systems. The range of values of certain input or control parameters for which the motion tends toward a given attractor is called a _basin of attraction_ in the space of parameters. If there are two or more attractors, the transition from one basin of attraction to another is called a _basin boundary_ (see Figure 7-29). In classical problems, we expect the basin boundary to be a smooth, continuous line or surface as in Figure 7-29. This implies that when the input parameters are away from the boundary, small uncertainties in the parameters will not affect the outcome.

Figure 7-29: Sketch of two dynamic attractors in phase space and the boundary between their basins of attraction in initial condition space.

However, it has been discovered that in many nonlinear systems this boundary is nonsmooth. In fact, it is fractal--hence the term _fractal basin boundary_. The existence of fractal basin boundaries has fundamental implications for the behavior of dynamical systems, because small uncertainties in initial conditions or other system parameters may lead to uncertainties in the outcome of the system. Thus predictability in such systems is not always possible (see the papers by Grebogi et al., 1983b, 1985a,b, 1986).
#### Sensitivity to Initial Conditions: Transient Motion in a Two-Well Potential

Before we examine a problem with a fractal basin boundary, it is instructive to look at a case where the basin boundary is smooth but the outcome is sensitive to initial conditions. This is the case of the _transient_ dynamics of a damped particle in a two-well potential. This one-degree-of-freedom example is a simple model for the postbuckling behavior of an elastic beam. The equation of motion for this problem is \[\ddot{x}+\gamma\dot{x}-\tfrac{1}{2}x(1-x^{2})=0\] (7-7.1) Unlike the related problem with periodic forcing, the complete dynamics can be described in a two-dimensional phase plane (\(x\), \(y=\dot{x}\)). The displacement and time have been normalized such that the two stable equilibrium positions in the phase plane are (\(\pm 1\), 0) and the undamped natural frequency is one radian per second. The control parameters are the damping \(\gamma\) and the initial conditions \(x(0)=x_{0}\), \(\dot{x}(0)=y_{0}\). Although there are three equilibrium positions, \(x=0,\pm 1\), only the latter two are stable, and thus we have _two competing basins of attraction_. Dowell and Pezeshki (1986) have examined the basins of attraction for this problem, as illustrated in Figure 7-30. They subdivided the basins according to how many times the particle's orbit crosses the \(x=0\) axis before settling down to \(x=\pm 1\). One can see that for large initial conditions there are alternating bands from which the particle will eventually go to the left or the right attractor. Although these boundaries are smooth, the width of the bands approaches zero as the damping \(\gamma\to 0\). Thus, if there is some finite uncertainty in the initial conditions, as denoted by the circle of radius \(\epsilon\) in Figure 7-30, one has no certainty of which attractor the particle will approach if \(\epsilon>\epsilon_{0}(\gamma)\), where \(\epsilon_{0}\to 0\) as \(\gamma\to 0\).
For finite damping, we can be certain of the end state only if we have accurate enough information about the initial state. In the next example, we will show a fractal basin boundary where the outcome is always uncertain no matter how small \(\epsilon\) is; that is, \(\epsilon_{0}=0\).

#### Fractal Basin Boundary: Forced Motion in a Two-Well Potential

In this section, we will examine the periodic forcing of a particle in a two-well potential: \[\begin{array}{rl}\dot{x}&=y\\ \dot{y}&=-\gamma y+\tfrac{1}{2}x(1-x^{2})+f_{0}\cos\omega t\end{array}\] (7-7.2) As discussed in earlier chapters, the dynamics of the particle can be described in a three-dimensional phase space (\(x\), \(y\), \(z=\omega t\)). In the earlier discussions, however, we focused on chaotic motions of this system. Here we will only consider motions which are _periodic_ about either the left or right equilibrium position, \(x=\pm 1\). Thus, the attractors in this problem may be considered limit cycles. [If we take a Poincare map of the asymptotic motion, we will have a finite set of points near one of the equilibrium positions (\(\pm 1\), 0).] Here we do not distinguish between period-1 or period-2 subharmonics. We assume that the forcing \(f_{0}\) is small enough to avoid chaotic vibrations and high-period subharmonics. In this example, we fix \(\gamma\), \(f_{0}\), and \(\omega\) and vary the initial conditions.

Figure 7-30: Basins of attraction for the unforced, damped motion of a particle in a two-well potential. The numbers indicate the number of times the trajectory crosses \(x=0\) before going to one of the two equilibrium points at \(x=\pm 1\). [From Dowell and Pezeshki (1986).]

Figure 7-32: Fractal-like basins of attraction for the forced, two-well potential problem for forcing amplitude above the Melnikov criterion (7-7.3). [From Moon and Li (1985b) with permission of the American Physical Society, copyright 1985.]
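The numerical experiment just described (a grid of initial conditions integrated forward with Runge-Kutta and labeled by the well each trajectory settles into) can be sketched as follows. This is an illustration only: the parameter values, grid size, and integration time are assumptions chosen to run quickly, not the values used by Moon and Li.

```python
import numpy as np

GAMMA, F0, OMEGA = 0.25, 0.05, 0.8   # illustrative values, forcing below the fractal threshold

def rhs(x, y, t):
    # Eq. (7-7.2): x' = y, y' = -gamma*y + x(1 - x^2)/2 + f0 cos(omega t)
    return y, -GAMMA * y + 0.5 * x * (1.0 - x * x) + F0 * np.cos(OMEGA * t)

def basin_grid(nx=21, ny=21, t_end=40 * np.pi, dt=0.05):
    # integrate a whole grid of initial conditions at once (vectorized RK4)
    # and label each one by the well, x = +1 or x = -1, it settles into
    X, Y = np.meshgrid(np.linspace(-2, 2, nx), np.linspace(-2, 2, ny))
    x, y = X.ravel().copy(), Y.ravel().copy()
    t = 0.0
    while t < t_end:
        k1x, k1y = rhs(x, y, t)
        k2x, k2y = rhs(x + dt / 2 * k1x, y + dt / 2 * k1y, t + dt / 2)
        k3x, k3y = rhs(x + dt / 2 * k2x, y + dt / 2 * k2y, t + dt / 2)
        k4x, k4y = rhs(x + dt * k3x, y + dt * k3y, t + dt)
        x = x + dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        y = y + dt / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        t += dt
    return np.sign(x).reshape(ny, nx)

basins = basin_grid()
print(basins[10, 15], basins[10, 5])   # initial conditions (1, 0) and (-1, 0)
```

Refining the grid (Moon and Li used \(400\times 400\) initial conditions) and raising \(f_{0}\) above the critical value reveals the smooth-to-fractal transition of the boundary discussed below.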
The results are shown in Figures 7-31 to 7-33 and are obtained from numerical simulation using a fourth-order Runge-Kutta integration algorithm (see Moon and Li, 1985b for details). The results in Figure 7-31 show that when \(f_{0}\) is small enough, the basin boundary is smooth; but when \(f_{0}\) is greater than some critical value, the boundary becomes fractal-looking, as shown in Figure 7-32. (This figure is based on integration of \(400\times 400\) initial conditions.) To ascertain whether this boundary is fractal, we have taken a small region of initial condition space and have expanded this region. The results are shown in Figure 7-33. Thus, we see that on a finer and finer scale the boundary shows evidence of fractal structure. These results have important implications for classical dynamics insofar as predictability goes. See also Color Plates 5 and 6.

Figure 7-33: Enlargement of a small rectangular region of initial condition space in Figure 7-32 showing fractal-like structure on a finer scale. [From Moon and Li (1985b) with permission of the American Physical Society, copyright 1985.]

Two other examples of basin boundary calculations are illustrated in Color Plates 7 and 8 and on the jacket for a particle in a three-well and a four-well potential. These problems were described in Chapter 6 (see also Li and Moon, 1990a,b). In the three-well potential problem the particle has one degree of freedom and is excited by a periodic force. The two photos in Color Plates 7 and 8 show the evolution of the basin boundaries as the force level is increased. The four-well problem is a two-dimensional one. The fractal nature of the basins of attraction is shown on the jacket for a force level high enough to produce homoclinic orbits in the Poincare map.
#### Homoclinic Orbits: A Criterion for Fractal Basin Boundaries

Although the main theme of this book has been chaotic dynamics, the results of the previous section demonstrate that one of the properties of chaotic dynamics, namely, parameter sensitivity and unpredictability, may also be characteristic of certain nonchaotic motions. This prospect stirs terror in the hearts of those engineers involved in numerical simulation of nonlinear systems. In such systems, the output of a calculation may be sensitive to small changes in variables such as initial conditions, control parameters, round-off errors, and numerical algorithm time steps. This lack of robustness may exist even when the problem is a transient one or has a periodic output. First, we expect that the systems most susceptible to fractal basin boundary behavior will be those with multiple outcomes, such as multiple equilibrium states or periodic motions. For example, if we consider the impact of an elastic-plastic arch (see Symonds and Yu, 1985 and Poddar et al., 1986) or the periodic excitation of a rotor or pendulum, there are at least two possible outcomes. In the case of the arch, the end state could be either the arch bent up or bent down. In the case of the rotor, one could have rotation clockwise or counterclockwise. The second clue to establishing the possibility of fractal basin boundaries is more subtle and requires more mathematical intuition. We have seen in Chapters 1, 3, and 6 that nonlinear systems which tend to stretch and fold regions of phase space in what are called _horseshoe maps_ have a certain element of sensitivity to initial conditions as well as a variety of subharmonic solutions. As discussed in Chapter 6, horseshoe map properties result when the Poincare map associated with the flow in phase space develops homoclinic points in dissipative nonlinear systems. A criterion was derived by Holmes (see Guckenheimer and Holmes, 1983) using a method by Melnikov [Eq.
(6-3.20)]. In the case of the forced motion of a particle in a two-well potential, it turns out that this criterion gives a very good indication of fractal basin boundaries even when the motion is _not_ chaotic. The criterion for the equation of motion (7-7.2) is given by \[f_{0}>\frac{\gamma\sqrt{2}}{3\pi\omega}\cosh\biggl{(}\frac{\pi\omega}{\sqrt{2}}\biggr{)}\] (7-7.3) Evidence for this conclusion is given in Figure 7-34 (e.g., see Moon and Li, 1985b). This figure summarizes the results of many calculations of basin boundaries similar to those in Figures 7-31 to 7-33. Below the Holmes-Melnikov criterion the numerically calculated basin boundary appears to be smooth, whereas above the criterion curve the boundary appears fractal.

Figure 7-34: Homoclinic orbit criterion (7-7.3) for the two-well potential problem with fractal-like and smooth basin boundary observations from numerical studies. [From Moon and Li (1985b) with permission of the American Physical Society, copyright 1985.]

The connection between homoclinic orbits and fractal basin boundaries is not entirely a mystery, especially if we examine the results in Figure 7-35, in which we have superimposed two calculations. The first is the basin boundary for the two-well potential for a force amplitude just below the Holmes-Melnikov curve. We can see that the boundary has developed a long finger as compared with that in Figure 7-31 for a smaller force. The second calculation in Figure 7-35 is the determination of the stable and unstable manifolds of the Poincare map which emanate from the saddle point near the origin. The first observation is that the basin boundary is _identical_ to the stable manifold of the Poincare map. The second observation is that the unstable manifolds, shown as the dashed curves, are just touching the stable manifolds. This is to be expected because at the criterion the two manifolds touch and form homoclinic points.
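The Holmes-Melnikov threshold (7-7.3) is straightforward to evaluate; a small sketch (the \(\gamma\) and \(\omega\) values here are arbitrary illustrations):

```python
import numpy as np

def f0_critical(gamma, omega):
    # Holmes-Melnikov threshold (7-7.3) for the two-well equation (7-7.2):
    # above this forcing amplitude the basin boundary is observed to be fractal
    return gamma * np.sqrt(2) / (3 * np.pi * omega) * np.cosh(np.pi * omega / np.sqrt(2))

for omega in (0.5, 0.8, 1.0):
    print(f"omega={omega}: critical f0 = {f0_critical(0.25, omega):.4f}")
```

Note that the threshold is linear in the damping \(\gamma\), so halving the damping halves the forcing amplitude needed for homoclinic tangency.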
In theory, beyond this criterion the two manifolds of the Poincare map must touch an infinite number of times, which results in an infinite folding of the stable manifold and hence an infinite folding of the basin boundary, with the resulting fractal properties. The idea that basins of attraction of different motions can become intertwined is not a new concept in nonlinear dynamics, as can be seen in the classic book by Hayashi (1953) on nonlinear oscillations.

Figure 7-35: Superimposed plots of the basins of attraction of the forced, two-well potential problem and the associated stable and unstable manifolds of the Poincaré map at the critical force level (7-7.3). [From Moon and Li (1985b) with permission of the American Physical Society, copyright 1985.]

Professor Hayashi's book illustrates the intertwining of three basins of attraction, each associated with a particular subharmonic motion of the oscillator. These diagrams were obtained by Hayashi and his co-workers using analog computers, and they showed how a small change in initial conditions could switch the output from one attractor to another. Although this knowledge was available in the 1950s, and perhaps earlier, the relationship of these basin boundary diagrams to fractals and chaos was not made until around 1980. (See also the discussion in Chapter 4 on Hayashi, Ueda, and the "Japanese attractor.") The above discussion assumes the existence of only two attractors. However, even in the two-well potential problem, within each well there may be two or more attractors; for example, there could be two subharmonic solutions in the vicinity of each well. When there are more than two attractors, it is possible for one basin boundary to become fractal and another basin boundary to remain smooth. This also suggests that there could be multiple homoclinic tangencies or homoclinic criteria. A discussion of multiple coexisting attractors and basin boundaries is presented in a paper by Battelino et al.
(1988) in which they treat the forced motion of two coupled van der Pol oscillators. Two other studies on multiple basin boundaries have been presented in two Cornell University dissertations by G.-X. Li (1984, 1987) (see also Li and Moon 1990a,b). These studies examine a particle in three- and four-well potentials. A brief discussion of multiple homoclinic orbit criteria for these problems was presented in Chapter 6. Color plates of basins of attraction for both the three-well and four-well potential problems are shown in Color Plates 7 and 8 and on the jacket. In these multiple-well potential problems, mixed fractal and smooth basin boundaries can arise when there is more than one saddle point in the Poincare map. Thus, the outflow trajectory (unstable manifold) of one saddle can intersect either its own inflow trajectory or that of another saddle. Each such entanglement is bound to generate a horseshoe map structure that in turn produces a fractal basin boundary. However, these multiple entanglements may occur for different values of the control parameter, and hence there is the possibility of multiple homoclinic tangency criteria, each leading to greater and greater sensitivity to initial conditions.

### Fractal Basin Boundaries and Robust Design

In the design of most practical engineering devices, the system is usually assumed to operate near one stable dynamic attractor. Thus, a design which is sensitive to either initial conditions or control parameters is not robust. But how does one quantify robustness? Professor J. M. T. Thompson has attempted to answer this question in a series of papers relating to the dynamic capsize of ships (Thompson, 1989a,b; Thompson et al., 1990; Soliman and Thompson, 1989).
These studies are based on a model of a particle in a one-well potential with a one-sided escape barrier (see Figure 6-17 and Color Plate 4): \[\ddot{x}+B\dot{x}+x-x^{2}=F\sin\omega t\] (7-7.4) For zero forcing, \(F=0\), this system has a saddle point at \(x=1\) and a stable spiral attractor at \(x=0\). The basin of attraction for \(x=0\), in fact, is determined by the stable manifold of the saddle point in the phase plane (\(x\), \(\dot{x}\)). For small enough forcing, \(F\neq 0\), there is a saddle point in the Poincare map, and the inflow curve to this saddle (stable manifold) also defines the basin of attraction. This boundary can be found numerically by iterating the Poincare map backwards in time for a set of initial conditions lying along the stable eigenvector of the saddle point of the map. A set of four such basin boundaries is shown in Figure 7-36 for the one-well potential for four different forcing levels (Thompson, 1989b). As the force increases, the fractal tongues invade the basin. Thompson then defines robustness in terms of the degree of erosion of the area of the basin of attraction. This is illustrated in Figure 7-37. Figure 7-36: A set of basin boundaries for the periodically forced one-well potential oscillator. [From Thompson (1989b).] One can see that even when \(F\) is increased past the homoclinic tangle value calculated from Melnikov's theory (Eq. (6-3.28b)), the basin area is robust until a critical point at which the safe area erosion is accelerated by a small increase in \(F\). These ideas and other so-called safety integrity measures for dynamical systems show how the concept of fractal geometry can be used to quantify intuitive features of design of nonlinear engineering devices (see Soliman and Thompson, 1989).

### Dimension of Basin Boundaries and Uncertainty

Yorke and co-workers at the University of Maryland have produced numerous studies of basin boundaries, fractals, and chaos.
In one study they have shown that the fraction \(\phi\) of uncertain initial conditions in the phase space as a function of the radius of uncertainty \(\epsilon\) is related to the fractal dimension of the basin boundary (e.g., see McDonald et al., 1985): \[\phi \approx \epsilon^{D - d}\] where \(D\) is the dimension of the phase space and \(d\) is the capacity fractal dimension of the basin boundary. When the boundary is smooth, \(d=D-1\), or \[\phi \sim \epsilon\] For example, if the relative uncertainty in initial conditions were \(\epsilon=0.05\), then the uncertainty of the outcome as a fraction of all initial conditions would be \(\phi\approx 22\%\) when \(d=1.5\) and \(D=2\). Figure 7-37: Basin boundary area function as the forcing amplitude is increased. _Insets:_ (_a_) \(F=0.0725\); (_b_) \(F=0.0872\). [From Thompson et al. (1990).] A technique for calculating \(d\) for basin boundaries is described in a number of the Maryland group papers. The technique differs from that for trajectories because the boundary points are not given explicitly but are formed from the set of points that lie in neither of the two attracting sets. Such fractal sets have been labeled 'fat fractals.' [See Grebogi et al. (1985c) for a discussion of fat fractals and their application to basin boundary calculations.]

### Transient Decay Times: Sensitivity to Initial Conditions

In the preceding discussion we described how the development of a fractal basin boundary leads to uncertainty about which attractor the system will approach as \(t\to\infty\). However, one may also be interested in how much time it takes to approach the attractor. Pezeshki and Dowell (1987) have calculated an initial-condition-transient-time plot for the two-well potential as shown in Figure 7-38. In this diagram each point is coded in color or shade to represent the transient time to approach a periodic orbit around either the left or right potential well. The two wells are not distinguished, only the transient times.
They observed fractal-looking patterns when the forcing amplitude was above the homoclinic orbit criterion (7-7.3). This means that, given some uncertainty in initial conditions, both the transient decay time and the particular attractor are unpredictable for certain nonlinear problems.

### Fractal Boundaries for Chaos in Parameter Space

We have seen how small changes in initial conditions can dramatically change the type of output from a dynamical system. It is natural to ask whether a similar sensitivity exists in the other parameters that control the dynamics, such as the forcing amplitude or frequency, the damping, or the resistance in a circuit. One example is discussed here--a fractal experimental boundary between chaotic and periodic motions in a forced one-degree-of-freedom oscillator. When two or more types of motions are possible in a system, one usually determines the range of parameters for which one or another type of motion will exist. In the case of the forced motion of a particle in a two-well potential (see Chapters 2 and 6), it is of great interest to know when chaotic motions or periodic motions will occur when the input force is periodic. The equation that describes this oscillation is by now familiar to the reader [Eq. (7-7.2)]. In this problem, we have used a nondimensionalization procedure to eliminate all but three parameters (\(\gamma,f,\omega\)). As discussed in Chapter 6, both Holmes (1979) and Moon (1980a) derived criteria relating (\(\gamma,f,\omega\)) for when chaotic motion would occur. These relations [Eqs. (6-3.27) and (6-3.46)] have the form \[f > F(\omega,\gamma)\] Fixing the nondimensional damping \(\gamma\), both criteria are smooth curves in the (\(f,\omega\)) plane as shown in Figure 7-39. When these criteria are compared with experimental data (see Moon, 1984b), however, two differences are obvious: The theoretical criteria are lower bounds, and the experimental criterion looks ragged and may therefore be fractal.
Figure 7-39: Fractal-like boundary between chaotic and periodic motion in the forcing-amplitude–frequency plane. Experimental data are from the vibration of a buckled beam. [From Moon (1984b) with permission of the American Physical Society, copyright 1984.] The experiments were carried out on the now familiar buckled steel cantilever beam placed above two permanent magnets (Figure 2-2_b_). The elastic beam, magnets, and support are placed on an electromagnetic shaker which drives the system at a given amplitude \(A_{0}\) and frequency \(\omega\). The nondimensional force in (7-7.2) is related to this forcing amplitude by \[f_{0} = -A_{0}\omega^{2}\] The experiments were carried out by fixing the forcing frequency and slowly increasing the driving amplitude of the shaker. With the beam vibrating initially with periodic motion about one of the buckled equilibrium positions, the amplitude was increased until the tip of the beam jumped out of the initial potential well. To determine whether the motion was chaotic or periodic, Poincare maps were used. The motion was measured by strain gauges attached to the beam at the clamped end, and the strain versus strain rate served as the phase plane. Poincare maps of these signals were synchronized at the driving frequency. Chaos was identified when the finite set of points of the Poincare map (as observed on an oscilloscope; see Chapter 5) became unstable and a Cantor-set-like pattern appeared on the screen. At least five sets of data for chaotic boundaries were taken for different beam-magnet configurations, and all showed a nonsmooth behavior. In the data shown in Figure 7-39, approximately 70 frequencies were sampled between 4 and 9 Hz. To determine if the boundary between chaotic and periodic motions is fractal, the fractal dimension of the set of experimental points was measured. First, we connected the points with straight line segments.
Second, we used the caliper method to measure the length of the boundary as a function of caliper size. This is the same method described by Mandelbrot (1977) to measure the fractal dimension of the coastlines of various countries. Thus, we are approximating the experimental boundary by \(N\) line segments, each of length \(\varepsilon\). As we decrease the caliper size \(\varepsilon\), the number \(N\) of line segments needed to approximate the curve increases. The total length is then \[L = N(\varepsilon)\varepsilon\] For a nonfractal curve, \(N \simeq \varepsilon^{-1}\) or \(N = \lambda/\varepsilon\); thus \(\lambda\) becomes a measure of the length of the boundary. However, for fractal curves, such as the Koch curve, \(N = \lambda\varepsilon^{-D}\), where \(\varepsilon\) is small and \(D\) is not an integer. Thus, by measuring \(L\) versus \(\varepsilon\), \[L=\lambda\varepsilon^{1-D}\] we can obtain the fractal dimension by measuring the slope of the log \(L\) versus log \(\varepsilon\) curve, or \[D=\lim_{\varepsilon\to 0}\left(1+\frac{\log L}{\log(1/\varepsilon)}\right)\] One can show that this procedure is equivalent to the idea of covering the set of points with small squares as discussed in the definition of the capacity fractal dimension [Eq. (7-1.2)]. The results of this series of measurements are shown in Figure 7-40 for two sets of data. The lengths of the boundary curves appear to increase as the caliper size decreases, and they imply a fractal dimension of between 1.24 and 1.28. Thus, there is convincing evidence that the boundary curve between periodic and chaotic regimes in the parameter space of (\(f\), \(\omega\)) is fractal. It should be noted, however, that while the single-mode description of the chaotic elastic beam [Eq. (7-7.2)] agrees very well with the experimental results insofar as Poincare maps are concerned, the actual experiment has infinitely many degrees of freedom which one hopes do not influence the low-frequency behavior.
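The caliper procedure just described is easy to automate. The sketch below builds a Koch curve, whose exact capacity dimension is \(\log 4/\log 3 \approx 1.26\), and recovers an estimate of \(D\) from the slope of \(\log L\) versus \(\log(1/\varepsilon)\); it is only an illustration of the method, not the code used for the experimental data:

```python
import math

def koch(points, level):
    """Refine a polyline 'level' times with the Koch construction."""
    for _ in range(level):
        new = [points[0]]
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            dx, dy = (x1 - x0) / 3.0, (y1 - y0) / 3.0
            ax, ay = x0 + dx, y0 + dy            # 1/3 point
            cx, cy = x0 + 2 * dx, y0 + 2 * dy    # 2/3 point
            bx = ax + 0.5 * dx - dy * math.sqrt(3) / 2.0   # bump peak
            by = ay + dx * math.sqrt(3) / 2.0 + 0.5 * dy
            new += [(ax, ay), (bx, by), (cx, cy), (x1, y1)]
        points = new
    return points

def caliper_length(pts, eps):
    """Walk a divider of opening eps vertex-to-vertex along the polyline
    and return (number of steps) * eps, as in Mandelbrot's coastline method."""
    n, i, steps = len(pts), 0, 0
    while i < n - 1:
        j = i + 1
        while j < n and math.hypot(pts[j][0] - pts[i][0],
                                   pts[j][1] - pts[i][1]) < eps:
            j += 1
        if j == n:
            break
        i, steps = j, steps + 1
    return steps * eps

curve = koch([(0.0, 0.0), (1.0, 0.0)], 6)
eps_list = [1.0 / 3.0, 1.0 / 9.0, 1.0 / 27.0]
xs = [math.log(1.0 / e) for e in eps_list]
ys = [math.log(caliper_length(curve, e)) for e in eps_list]
mx, my = sum(xs) / 3, sum(ys) / 3
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
D_est = 1.0 + slope    # log L = const + (D - 1) log(1/eps)
```

The estimate is crude at these few caliper sizes, but the length clearly grows as the caliper shrinks, which is the signature of a fractal boundary.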
However, it may be possible that higher modes could influence or are even essential to the fractal nature of the boundary curve in Figure 7-39. Figure 7-40: Calculation of the fractal dimension of the chaos boundary of Figure 7-39. Further research on this question is necessary to provide a clear answer. In any event, these results suggest that a clear-cut criterion for chaos may not be possible. The apparent fractal nature of the criterion boundary may be inherent in many systems, and one may have to settle for upper or lower bounds on the chaotic regimes.

### Application of Fractals in the Physical Sciences

We have seen in this chapter the way in which fractals enter dynamics and the dynamic basis of creating fractals. However, there are many applications of this modern branch of mathematics where dynamics is not the central issue. This is especially the case for the characterization of geometric forms in the natural and manufactured world by the use of fractal concepts. A very good introduction to some of these applications may be found in the book by Feder (1988). In this section we briefly describe a few of these applications to geology, fluid mechanics, materials characterization, and fracture mechanics. This section is written to show the broader applications of fractals in physics beyond dynamics. But we also present these examples of fractal physical forms to suggest that in each there may have been a dynamic, chaotic process, yet to be discovered, that created them.

#### Geology

One of the early applications of fractals was the observation that the measurement of some geological features such as coastlines or rivers depends on the size of one's measurement instrument (Mandelbrot, 1982); the coastlines of Britain and Norway are clear examples.
In this way of thinking, the conventional definition of the length of a natural geological feature may not be applicable because the length, \(L\), depends on the basic unit of the caliper, \(\varepsilon\); that is, \[L = N(\varepsilon)\varepsilon\] As discussed in the previous sections, for fractal-like geological features, the number of caliper lengths \(N(\varepsilon)\) may be proportional to \(\varepsilon^{-D}\), where for \(D=1\) the curve is not fractal. Thus, the length-caliper size relation takes the form \(L=\lambda\varepsilon^{1-D}\), where \(D\) is the fractal dimension. Parts of the British coastline boast a fractal dimension of \(D=1.2\), whereas Norway's ragged shoreline has a dimension \(D=1.5\). In another geological application, Feder (1988) describes the consequences of multiple branching of rivers, for which the relation between the drainage area \(A\) and the length of the longest branch of the river \(L\) suggests a fractal scaling \(L=\beta A^{1/D}\). In the states of Virginia and Maryland in the United States, Hack (1957) found that \(D=1.2\). Again, we see that there must have been underlying geologic dynamical processes that created these fractal features in the earth's surface topography. A recent discussion of fractals and chaos in the geological sciences may be found in the book by Turcotte (1992).

#### Viscous Fingering: Hele-Shaw Flow

The case of fluid flow between two plates was studied by Hele-Shaw in 1898. Recent interest in this experiment revolves around the fractal-like interface between a gas under pressure and the fluid in the cell [see Feder (1988) and Chapter 4; also see Homsy (1987) for a review]. A sketch of the geometry is shown in Figure 7-41\(a\), and a typical picture of the fractal viscous fingers that develop is shown in Figure 7-41\(b\). In the theory for this phenomenon, inertial forces are small, and the equilibrium involves a balance of viscous stress, surface tension, and applied pressure.
The basic mechanism involves an instability of a gas-fluid interface in which a wrinkled surface becomes more stable than a flat surface, which leads to a dendritic-type growth pattern. These problems are of importance, for example, in understanding fluid flow in porous materials, as in oil recovery processes. The physical picture of the development of the fractal figure in the fluid in the Hele-Shaw cell looks like that of a dynamic model called _DLA_ (_diffusion-limited aggregation_). The _DLA_ model involves particles that move in a random way until they reach some surface where they cluster. _DLA_ models are used to model many growth phenomena in fractal physics, including electrochemical dissolution processes, dielectric breakdown (Murat and Aharony, 1986), and possibly fracture of metals (Louis et al., 1986). In experiments that simulate flow through porous media, a viscous fluid such as epoxy is placed between two plates with a monolayer of small glass beads (\(\sim\)1 mm diameter). The flow of air into this cell produces fractal patterns similar to those of _DLA_ models (Maloy et al., 1985) as shown in Figure 7-42. Figure 7-41: (_a_) Sketch of experiment for viscous fingering of a fluid in a Hele-Shaw cell. (_b_) Fractal-like fingering of a thin fluid layer under pressure in a Hele-Shaw cell. [From Homsy (1987) © 1987 IEEE.] It is interesting that this ostensibly static, deterministic experiment should be modeled by a dynamic process with underlying randomness. The questions that arise in this phenomenon are not unlike those that are surfacing in the discussion of spatiotemporal chaos (see Chapter 8).

#### Fractal Materials and Surfaces

In classical continuum mechanics one takes for granted certain scaling relations such as area-length relations \(A\sim L^{2}\) or mass-length behavior \(M\sim L^{3}\).
However, there are certain classes of materials such as silica gels, polymers, or porous solids which at some length scales do not follow the classic mass-length relation but instead behave as \[M \sim L^{D}\] where \(1\leq D\leq 3\) for chain-like molecular structures, or \(2\leq D\leq 3\) for plate-like structures. Figure 7-42: Fractal pattern from air displacing liquid epoxy in a glass sphere porous medium monolayer. [From Maloy et al. (1985).] The value of the fractal dimension, \(D\), for different materials has been measured in a number of laboratories (e.g., see Schaefer and Keefer, 1984a,b, 1986) using the scattering of light, x-rays, or neutrons. In the classical theory of linear wave scattering from a geometric object (e.g., a sphere or cylinder), one usually assumes an incident plane wave field \(u_{I}=Ae^{i(\omega t-kx)}\) and then adds a scattered wave field \(u_{s}=e^{i\omega t}S(\mathbf{r};\,k)\), where \(\mathbf{r}\) is the position vector from the center of the scatterer. For uncorrelated wave sources (e.g., nonlaser light), one can then calculate the energy scattered out of the incident wave field by the scatterer. One of the first such calculations was that by Lord Rayleigh, who, in 1871, showed that the scattering cross section \(\sigma\) for wavelengths \(2\pi/k \gg L\) (\(L\) is the size of the scatterer) satisfies a power law: \[\sigma \sim (kL)^{4}\] Using a technique called _small-angle x-ray scattering_ (SAXS), it has been found that for fractal objects the scattering intensity \(I\) follows a noninteger power law (Schaefer and Keefer, 1984a,b), that is, \[I \sim \kappa^{-x}\] where \(\kappa=(4\pi/\lambda)\sin(\theta/2)\), \(\lambda\) is the wavelength of the incident wave, \(\theta\) is the scattering angle, and \(x\) is called the _Porod exponent_. For mass fractals where \(M\sim L^{D}\), the scattering exponent \(x\) equals \(D\). For porous materials, \(x\) is found to be a function of the porosity.
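Extracting a Porod exponent from scattering data is just a log-log slope fit. The sketch below demonstrates the fit on synthetic, noise-free data with a preset exponent (all values here are made up for illustration, not measured):

```python
import math

# Synthetic scattering curve I ~ kappa**(-x) with a preset exponent,
# e.g., a mass fractal with D = x = 2.5
x_true = 2.5
kappas = [0.01 * 1.5 ** k for k in range(12)]
intensity = [kap ** (-x_true) for kap in kappas]

# Recover x as minus the slope of log I versus log kappa (least squares)
lk = [math.log(k) for k in kappas]
li = [math.log(i) for i in intensity]
n = len(lk)
mk, mi = sum(lk) / n, sum(li) / n
slope = sum((a - mk) * (b - mi) for a, b in zip(lk, li)) \
        / sum((a - mk) ** 2 for a in lk)
x_fit = -slope
```

On real SAXS data the fit would be restricted to the intermediate range of \(\kappa\) where the power law actually holds.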
#### Fracture of Solids

If turbulence remains one of the unsolved problems in the physics of fluids, then fracture and fatigue are its counterparts in the physics of solids. However, unlike turbulence, the process of fracture has not received much attention from the new dynamicists, except for a few examples. One of these is a study by Mandelbrot et al. (1984) in which fractured surfaces in steel were characterized using fractal measures (see also Feder, 1988). In this experimental work, the fractured surface was coated with a nickel layer and then polished to expose islands of steel surrounded by a nickel sea. In the characterization of the surface, the perimeter of these islands and their area were assumed to be related by a noninteger exponent. Using a series of maraging steel specimens, each heat-treated at a different temperature and fractured by a short-time impact force, they found a distribution of fractal dimensions of the surface from 2.10 to 2.28. From these data they found that the impact energy required to fracture the steel was inversely proportional to this fractal dimension of the surface. An explanation of this result in terms of a dynamical model or molecular physics is still wanting. In a more recent study of dynamic crack propagation in solids by a group at the University of Texas, Fineberg et al. (1991) have observed the dynamics of crack propagation in the brittle plastic polymethyl methacrylate (PMMA). These cracks were observed to propagate at speeds of up to 600 m/s. The computer visualization of surface profilometer data reveals a complex fractal-looking surface, whereas the crack velocity time history shows an erratic, chaotic-looking behavior. Such studies may lead to nonlinear dynamic models that may give clues to the often unpredictable dynamics of fracture and fatigue. Not all fractured surfaces are created under dynamic impact.
When a metallic structure is taken through a stress cycle many times (\(10^{2}\)-\(10^{7}\) cycles), small flaws in the material can sometimes develop first into microcracks and then link up into a catastrophic failure. In these problems, inertial effects are often not very large. However, one may be able to view the advance of a crack as an iterated dynamic map. For example, suppose a crack in a thin plate is described by a position vector \({\bf r}_{n}=(X_{n},\,Y_{n})\), where \(X_{n},\,Y_{n}\) are the Cartesian components at the end of the \(n\)th stress cycle. While \(X_{n},\,Y_{n}\) may grow without bound, we can assume that the incremental crack advance displacements are bounded; that is, suppose we define \(X_{n}=X_{n-1}\,+\,u_{n}\), \(Y_{n}=Y_{n-1}\,+\,\upsilon_{n}\). Then it might be possible that a function exists which relates \((u_{n},\upsilon_{n})\) to \((u_{n+1},\,\upsilon_{n+1})\), that is, \[u_{n+1} = F(u_{n},\upsilon_{n})\] \[\upsilon_{n+1} = G(u_{n},\upsilon_{n})\] Of course, this is just speculation at this time. But careful observation of crack tip advance often exhibits an unpredictable time history. It will remain for future research to see if such models can be found. The fractal nature of fractured surfaces gives credence to the belief in an underlying nonlinear dynamic model [see also Lung (1986), Markworth et al. (1988), and Russel et al. (1991)].

## Problems

* Consider the construction of a Cantor set that starts with a uniformly dense distribution of points on a line and begins by
## SPATIOTEMPORAL CHAOS

_Down beneath the spray, down beneath the whitecaps, that beat themselves to pieces against the prow, there were jet-black invisible waves, twisting and coiling their bodies. They kept repeating their patternless movements, concealing their incoherent and perilous whims._ Yukio Mishima, _The Sound of Waves_

### 8.1 INTRODUCTION

If the history of events is written in time, then the history of time is often written in space: contrails behind a jet, the wake of a passing boat, fissures after an earthquake, tracks in the sand from a snake or a worm. These are common experiences of spatial patterns that record some dynamic events. The phonograph cylinder of Edison is an obvious example of how dynamic data can be stored in spatial patterns, as is, of course, the stroke of an artist's brush. If temporal patterns are regular and periodic, we should expect to see regular spatial patterns. However, if temporal events are chaotic, how does this manifest itself in space? In some physical systems, all the particles are spatially coherent even if they behave chaotically in time, whereas in the case of fluid turbulence, one has both spatial and temporal complexity. Until recently (circa 1987), most of the research on chaos was confined to temporal dynamics. In fact, all the previous discussion in this book has been about temporal dynamics only. But for physical systems, described by the partial differential equations of physics, one must deal simultaneously with both space and time. In fact, the lack of any discussion of spatial patterns so far means that we have made implicit assumptions about the spatial or modal distribution in the physical phenomenon. The study of spatiotemporal chaotic dynamics is still in the exploratory stage. It has not generated the kind of generic tools and results that can be applied to different physical problems in the same way that temporal nonlinear dynamics can.
The field spans a wide range of physical problems, ranging from surface waves in a stationary fluid, electrohydrodynamic instabilities in liquid crystals, and solid-state plasmas in Ge crystals to complex twists and knots in an elastic tape or yarn. A simple experiment in spatiotemporal chaos may be performed by pointing a video camera at the video display terminal (TV) it is connected to (e.g., see Crutchfield, 1988, and Peitgen et al., 1992, pp. 21-27). And, of course, the mother of all spatiotemporal chaos is the fully developed turbulence we are familiar with in everything from weather patterns to flows through jet engines. One of the strategies of many physical scientists with regard to the problem of turbulence, however, is not to tackle it head on (although jet engine designers must deal directly with the problem), but to study the transition from low-dimensional dynamics to high-dimensional phase space behavior by looking at the development of increasingly complex spatial patterns. At present a clear definition of spatial 'chaos' or complexity is not universally accepted. For some it represents an increase in the fractal dimension of the dynamic attractor associated with an increasing number of coupled spatial modal functions. For others the measure involves a loss of spatial correlation or an increase in spatial entropy (e.g., see Kaneko, 1990, and Dowell and Virgin, 1990). An example of spatiotemporal complexity can be seen in the generation of surface wave patterns in a harmonically excited fluid. This problem, which was described earlier in Chapter 6, has been studied by Gollub and Ramshankar (1990) and involves a shallow layer of fluid in a container under vertical excitation. The problem goes back to Michael Faraday in 1831 (see Gollub and Ramshankar, 1991, for a review). The phenomenon can sometimes be observed in a coffee cup placed on a vibrating surface. For high enough frequencies, short-wavelength wave patterns appear.
For certain excitation amplitude and frequency parameters the wave pattern can be regular (Figure 8-1), but for other parameters the patterns can become more complex and can change in time. The surface wave patterns appear to suffer defects similar to those found in solid crystals, such as disclinations and dislocations. One of the principal questions that scientists want to answer in these spatiotemporal studies is how different patterns are selected when many are possible and how they evolve in time. Similar patterns can be seen in convective flow patterns in a shallow fluid layer heated from below. Another example is a thin layer of nematic liquid crystal with an applied electric field (Rehberg et al., 1989). Many other experiments are beginning to appear that attempt to quantify spatiotemporal dynamics. For example, an Italian group has studied spatial patterns in a Rayleigh-Benard convection cell using a laser scanning measurement system (Rubio et al., 1989). They observe two types of spatiotemporal regimes representing localized oscillations and traveling wave-type patterns. (See also Figure 1-1.) In another work, Tam and Swinney (1990) have investigated spatiotemporal patterns in a reaction-diffusion system. Figure 8-1: Photographs of surface wave patterns due to vibration of a fluid in a circular container. (_a_) \(\varepsilon=0.07\); (_b_) \(\varepsilon=0.11\); (_c_) \(\varepsilon=0.14\); (_d_) \(\varepsilon=0.35\). [From Gollub and Ramshankar (1991). \(\varepsilon\) is the relative forcing level beyond the flat surface instability.] The system is based on the Belousov-Zhabotinsky reaction in a cylindrical Couette reactor.

### Spatial Chaos--The Wave Guide Paradigm

As a simple example of how spatial chaos can arise naturally in physical systems, we consider the model in Figure 8-2\(a\). Here a nonlinear oscillator is connected to a linear nondispersive semi-infinite wave guide such as a taut string or an electrical transmission line.
If the oscillator is weakly connected to the wave guide, then chaotic motions of the oscillator in time \(W(t)\) will be spatially stored in the wave guide in the form of linear right-running waves \(u(x,\,t)=f(x\,-\,c_{\text{o}}t)\). Boundary conditions between the oscillator and the wave guide can result in the temporal history being stored as information in the spatial wave \[u(x,\,t)=A\,\cdot\,W\left(t\,-\,\frac{x}{c_{\text{o}}}\right)\] (8-1.1)

### Spatial Chaos--The Edison Phonograph Model

Thomas Edison invented a device that stored information on the surface of a cylinder in response to the motion of a needle. By moving the cylinder in the axial direction, a spatial record of the temporal history of the oscillating needle could be made. In many mechanical systems a similar mechanism is involved, such as the cutting of metal from a cylindrical workpiece on a machine lathe. As we have seen from the work of Grabec (1988) (see Chapter 4), the vibration of the tool can leave a record in the machined surface. Another example involves roller bearings. This model, however, involves "writing over" the past deformation history on the surface of the bearing (Figure 8-2_b_). Thus, one can imagine that chaotic motions in the device attached to the bearing can be recorded in the spatial deformations on the surface of the bearing due to plasticity effects. Aside from physical experiments in _continuous_ media, another strategy has been to look at _discrete_ mathematical and computational models with a large number of coupled cells. Geometrically, these take the form of either (a) periodic chains of identical oscillators or one-dimensional maps or (b) two- and three-dimensional lattice structures. The models also range from those with discretized space, time, and state variables (cellular automata) to coupled cell maps with discretized space and time, to coupled differential equations with only space discretized. These models are discussed in Section 8.3.
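Of the discrete models just listed, the coupled map lattice is the easiest to write down. A minimal sketch in the style of Kaneko's diffusively coupled logistic maps (the lattice size and parameter values are illustrative, not from any cited study):

```python
import math

def cml_step(x, a=1.9, eps=0.1):
    """One update of a diffusively coupled logistic-map lattice with
    periodic boundaries:
        x'(i) = (1 - eps) f(x(i)) + (eps/2) [f(x(i-1)) + f(x(i+1))],
    where f(x) = 1 - a x**2 is the local chaotic map."""
    f = [1.0 - a * v * v for v in x]
    n = len(x)
    return [(1.0 - eps) * f[i] + 0.5 * eps * (f[(i - 1) % n] + f[(i + 1) % n])
            for i in range(n)]

# Start from a smooth spatial profile and iterate the whole lattice in time
lattice = [0.1 * math.sin(2.0 * math.pi * i / 64.0) for i in range(64)]
for _ in range(200):
    lattice = cml_step(lattice)
```

With \(a = 1.9\) each local map is chaotic and the smooth initial profile breaks up into a spatially irregular pattern; lowering \(a\) or raising the coupling produces frozen or pattern-selected regimes, which is the kind of phenomenology studied with these lattice models.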
### Spatial Complexity in Static Systems

#### The Twisted Elastica and the Spinning Top

The central idea of this section is that spatially complex patterns can be found in familiar systems in static equilibrium. Our example focuses on a long, thin, flexible, tapelike continuum called in the classical literature the _elastica_ (e.g., see Love, 1922). Such stringlike objects are familiar as audio, film, or videotape and as yarn, wire, or measuring tape. Spatial complexity is often familiar in such objects in the form of despooled film tape or in the form of twisted yarn or fishing line. Macromolecular structures may also exhibit such complexity. Such complex static patterns in space may have exact analogs in the chaotic dynamic orbits of a top or pendulum in phase space. The analogy between the temporal dynamics of a rotating body and the static deformations of a long, thin elastic body goes back more than a century to Kirchhoff (1859) (see also Love, 1922). The simplest version of this analogy is that between a pendulum and a buckled rod. The equations of motion of a planar pendulum (Figure 8-3), written in terms of the angular displacement \(\theta\) and the angular momentum \(L\), are given by \[\begin{array}{rcl}J\,\frac{d\theta}{dt}&=&L\\ \frac{dL}{dt}&=&-r_{c}mg\,\sin\,\theta\end{array}\] (8-2.1) where \(J\) is the moment of inertia about the point of rotation, \(r_{c}\) is the distance to the center of mass, \(m\) is the mass, and \(g\) is the gravity constant. The equations of static equilibrium for a buckled elastic rod, written in terms of the slope angle \(\theta\) and bending moment \(M\), have precisely the same form as those of the pendulum, that is, \[\begin{array}{rcl}D\,\frac{d\theta}{ds}&=&M\\ \frac{dM}{ds}&=&-P\,\sin\,\theta\end{array}\] (8-2.2) where \(D\) is the bending modulus and \(P\) is the compressive end load on the rod.
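As a numerical illustration of the analogy, one can integrate the rod equations (8-2.2) in the arclength variable \(s\), allowing the bending modulus to vary with \(s\); this is the spatial counterpart of parametrically forcing the pendulum. A sketch with illustrative parameter values (not drawn from any of the cited studies):

```python
import math

def elastica_shape(theta0, M0, D0=1.0, D1=0.6, k=2.0, P=1.0,
                   ds=0.005, s_max=50.0):
    """Integrate D(s) d(theta)/ds = M, dM/ds = -P sin(theta)  [Eq. (8-2.2)]
    by RK4, with a spatially modulated bending modulus
    D(s) = D0 + D1 cos(k s); D stays positive since D1 < D0."""
    def rhs(th, M, s):
        return M / (D0 + D1 * math.cos(k * s)), -P * math.sin(th)
    th, M, s = theta0, M0, 0.0
    out = [(s, th, M)]
    for _ in range(int(round(s_max / ds))):
        k1 = rhs(th, M, s)
        k2 = rhs(th + 0.5 * ds * k1[0], M + 0.5 * ds * k1[1], s + 0.5 * ds)
        k3 = rhs(th + 0.5 * ds * k2[0], M + 0.5 * ds * k2[1], s + 0.5 * ds)
        k4 = rhs(th + ds * k3[0], M + ds * k3[1], s + ds)
        th += ds * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        M += ds * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
        s += ds
        out.append((s, th, M))
    return out

modulated = elastica_shape(0.5, 0.0)
uniform = elastica_shape(0.5, 0.0, D1=0.0)   # constant modulus: elliptic-integral case
```

Plotting \(\theta(s)\) for the two runs shows the uniform rod undulating periodically while the modulated rod wanders irregularly, the spatial analog of the forced pendulum's chaotic swinging.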
If all the parameters in either (8-2.1) or (8-2.2) are constant in time or space, then the solutions can be found in terms of elliptic integrals (Love, 1922), and no chaos exists, as in Figure 5-2. However, it is known that a pendulum under external or parametric time-periodic forcing may exhibit chaotic dynamics (Koch and Levey, 1985). This suggests that if the bending modulus \(D\) in (8-2.2) were to vary periodically in space (e.g., \(D=D_{0}+D_{1}\cos ks\)), then spatially chaotic equilibrium solutions might be found for the buckled elastica. This idea has been studied by several authors since the first edition of this book, including Mielke and Holmes (1988), Thompson and Virgin (1988), El Naschie (1990), and El Naschie and Kapitaniak (1990). The first authors presented a detailed mathematical study, whereas the others presented numerical and qualitative experimental evidence for spatial chaos in the buckled elastica. The extension of this analogy to a rigid body spinning in three dimensions and a thin elastic tape twisted in space is straightforward and may be found in the classic text on elasticity by Love (1922). To sketch the analogy, we note that the angular momentum of a rigid body referred to its center of mass may be written in terms of the components of its angular velocity vector \(\omega=(\omega_{1},\,\omega_{2},\,\omega_{3})\): \[{\bf L}\;=\;J_{1}\omega_{1}{\bf e}_{1}\;+\;J_{2}\omega_{2}{\bf e}_{2}\;+\;J_{3 }\omega_{3}{\bf e}_{3}\] (8-2.3) where \(\{{\bf e}_{1},\,{\bf e}_{2},\,{\bf e}_{3}\}\) are an orthogonal triad of principal axes of the inertia matrix and \(\{J_{1},\,J_{2},\,J_{3}\}\) are the three principal inertias. Figure 8-3: Analogy between the temporal dynamics of the pendulum and the spatial deformation of the elastica. Under applied
moments (which may vary in time) \(\{M_{1},\,M_{2},\,M_{3}\}\), the equations of motion take the form \[\begin{array}{l}\frac{d}{dt}\,(J_{1}\omega_{1})\,=\,(J_{2}\,-\,J_{3})\omega_{2} \omega_{3}\,+\,M_{1}\\ \\ \frac{d}{dt}\,(J_{2}\omega_{2})\,=\,(J_{3}\,-\,J_{1})\omega_{1}\omega_{3}\,+\, M_{2}\\ \\ \frac{d}{dt}\,(J_{3}\omega_{3})\,=\,(J_{1}\,-\,J_{2})\omega_{1}\omega_{2}\,+\, M_{3}\\ \end{array}\] (8-2.4) These equations are called _Euler's equations_ (e.g., see Goldstein, 1988). When the \(J_{i}\) are constants and \(M_{i}=0\), the solution is known in terms of three constants of the motion. Two of these are the kinetic energy and the angular momentum. However, if any \(M_{i}\) is periodic in time or one of the principal inertias \(J_{i}\) varies in time (e.g., \(J_{2}=J_{0}\,+\,A\,\cos\,\omega t\)), then chaotic motions are possible, as in the suspected tumbling of one of the moons of Saturn, Hyperion (see Chapter 4). The analogous equations for the spatial deformation of a long, thin elastic rod or tape (Figure 8-4) are governed by equations for the internal bending moment \(\mathbf{G}\), produced by bending stresses on the cross section, and its relation to the curvatures of the centerline of the rod (\(\kappa_{1}\), \(\kappa_{2}\), \(\tau\)). The curvatures (\(\kappa_{1}\), \(\kappa_{2}\)), as one recalls from analytic geometry, are inversely proportional to the radius of bending, while the torsion \(\tau\) is a measure of the twist about the centerline of the rod. Figure 8-4: Three-dimensional deformation patterns in elastica-type structures.
In analogy to the spinning top, the bending moment is written in components along the three principal geometric axes of the cross section \(\{\mathbf{e}_{1},\,\mathbf{e}_{2},\,\mathbf{e}_{3}\}\), one of which lies along the centerline, that is, \[\mathbf{G}\,=\,A(\kappa_{1}\,-\,\kappa_{10})\mathbf{e}_{1}\,+\,B(\kappa_{2}\,- \,\kappa_{20})\mathbf{e}_{2}\,+\,C(\tau\,-\,\tau_{0})\mathbf{e}_{3}\] (8-2.5) where \(A\), \(B\) are bending moduli, \(C\) is the torsion modulus, and (\(\kappa_{10},\,\kappa_{20},\,\tau_{0}\)) are the initial curvatures when the elastica is moment-free. The resulting equations of equilibrium are simplified under the assumption of zero net force at each cross section: \[\frac{d}{ds}\,A(\kappa_{1}\,-\,\kappa_{10})\,=\,(B\,-\,C)\kappa_{2}\tau\] \[\frac{d}{ds}\,B(\kappa_{2}\,-\,\kappa_{20})\,=\,(C\,-\,A)\kappa_{1}\tau\] (8-2.6) \[\frac{d}{ds}\,C(\tau\,-\,\tau_{0})\,=\,(A\,-\,B)\kappa_{1}\kappa_{2}\] One can see that these equations have the same structure as those of the spinning top. Thus, several possibilities for spatial chaos follow from the analogy to (8-2.4). First, one of the bending moduli could be periodic in space, that is, \(B\,=\,B_{0}\,+\,b\,\cos\,ks\). Or, one could have an initial periodically varying curvature, that is, \(\kappa_{20}\,=\,\kappa_{0}\cos\,ks\). In either case, \(2\pi/k\) is the wavelength of the disturbance and is assumed to be larger than the geometric scale of the cross section.
In order to get a description of deformation in space, however, one must add to these equations the so-called Frenet-Serret equations of differential geometry: \[\frac{d\mathbf{e}_{i}}{ds}=\,\mathbf{\Omega}\,\times\,\mathbf{e}_{i},\qquad \mathbf{\Omega}\,=\,(\kappa_{1},\kappa_{2},\tau)\] (8-2.7) Finally, to get the position \(\mathbf{r}\) of the centerline of the elastica in its deformed shape, another relation is required: \[\frac{d\mathbf{r}}{ds}=\,\mathbf{e}_{3}\] (8-2.8) With a periodic disturbance in space, these equations constitute a four-dimensional phase-space system (\(\kappa_{1}\), \(\kappa_{2}\), \(\tau\), \(ks\)). Thus, it is natural to use a Poincare section synchronized with the spatial disturbance, \(ks_{i}=2\pi i\,+\,\phi_{0}\). The resulting three-dimensional map is still difficult to visualize. However, this system has the same structure as a Hamiltonian dynamical system. In our case the conserved quantity is the magnitude of the moment vector \({\bf G}\) along the rod. This relation can be used to eliminate one of the variables in the Poincare map, so that a two-dimensional map is possible. An example of the numerical integration of these equations is shown in Figures 8-5 and 8-6 (see Davies and Moon, 1992). Figure 8-5 shows a Poincare map in which several different solutions are possible for the same bending moment. One can see that both a quasiperiodic and a chaotic spatial solution (diffuse set of points) can exist. This case is for a spatially varying bending modulus. Spatial twisting deformations of the elastica are shown in Figure 8-6 and in the color plates (CP-12-14). These show that remarkable spatial complexity is possible in the elastica. One point should be noted here. These numerical solutions require some attention to computational errors that can arise. Namely, one has to adjust the integration step size in order to keep \({\bf G}\cdot{\bf G}\) constant along the rod.
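As a simpler numerical sketch in the same spirit, one can integrate the planar elastica equations (8-2.2) along the arc length with a spatially periodic bending modulus \(D(s)=D_{0}+D_{1}\cos ks\) and sample a spatial Poincare section once per period of the modulus. All parameter values here (\(P\), \(D_{0}\), \(D_{1}\), \(k\), the initial slope) are illustrative assumptions, not values from the studies cited:

```python
import math

def elastica_step(theta, M, s, ds=0.01, P=1.0, D0=1.0, D1=0.5, k=1.0):
    """One RK4 step of the planar elastica equations (8-2.2) with a
    spatially periodic bending modulus D(s) = D0 + D1*cos(k*s).
    Parameter values are illustrative, not taken from the text."""
    def f(th, m, ss):
        D = D0 + D1 * math.cos(k * ss)
        return m / D, -P * math.sin(th)    # d(theta)/ds, dM/ds
    a1, b1 = f(theta, M, s)
    a2, b2 = f(theta + 0.5 * ds * a1, M + 0.5 * ds * b1, s + 0.5 * ds)
    a3, b3 = f(theta + 0.5 * ds * a2, M + 0.5 * ds * b2, s + 0.5 * ds)
    a4, b4 = f(theta + ds * a3, M + ds * b3, s + ds)
    theta += ds / 6 * (a1 + 2 * a2 + 2 * a3 + a4)
    M += ds / 6 * (b1 + 2 * b2 + 2 * b3 + b4)
    return theta, M

# integrate along the arc length; sample once per spatial period 2*pi/k
theta, M, ds = 0.1, 0.0, 0.01
section = []
for i in range(200000):
    s = i * ds
    theta, M = elastica_step(theta, M, s, ds)
    if (s + ds) % (2 * math.pi) < ds:
        section.append((theta, M))
```

For \(D_{1}=0\) the section points fall on the smooth curves of the elliptic-integral solutions; for \(D_{1}\neq 0\) diffuse point sets of the kind sketched in Figure 8-5 can appear.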
To complete the analogy with the spinning top, one should imagine oneself riding on a small cart that travels along the twisted tape with constant velocity. Then the chaotic rotations of the cart will be precisely those of a spinning top when one of its inertias varies in time. Figure 8-5: Spatial Poincaré map of the deformation of a long elastica with spatially periodic change in cross section. These chaotic solutions are thought to be related to the homoclinic tangling of stable and unstable manifolds that emanate from saddle points of the Poincare maps (e.g., see Mielke and Holmes, 1988; also see Chapter 6). Again, as we have tried to emphasize from the beginning in Chapter 1, evolutionary laws that create horseshoes in the phase space seem to be one of the principal mechanisms for creating spatial as well as temporal chaos in physical systems. Another example of the role of horseshoe maps and spatial chaos is discussed in Section 8.4 (see the subsection entitled "Chaotic Mixing of Fluids"). ### 8.3 Coupled Cell Models A classic approach to approximating a spatially continuous medium is the use of coupled-cell or lattice models (e.g., see Brillouin, 1946). Figure 8-6: Spatially complex twisting deformations of the elastica. In inertial models a periodic array of masses is assumed to interact with neighboring masses, and one constructs an infinite set of coupled ordinary differential equations. In linear models one derives a relationship between frequency and wavelength, the so-called dispersion relation, and a superposition principle is valid. Nonlinear coupled lattice models have been used to describe such nonlinear phenomena as solitons and shock waves (e.g., see Tasi, 1990, or Toda, 1989) and as models for nonlinear electrical transmission lines. Interest in such models has recently been revived in order to explore spatiotemporal chaos. There are three basic mathematical types of coupled cell models: 1. Cellular automata (Wolfram, 1984, 1986) 2.
Coupled maps (Crutchfield and Kaneko, 1987) 3. Coupled differential equations (Umberger et al., 1989; Moon et al., 1991) Figure 8-7: Coupled cell model with discretized space and time variables and a state space of a finite set of symbols (0, 1), using a rule of the form (8-3.1) and randomly chosen initial conditions. ### Cellular Automata This model is depicted in Figure 8-7 for a one-dimensional chain of cells with nearest-neighbor interaction. Space and time are discretized, as is the state variable. In fact, the state variable is only allowed to take on a finite set of states represented by a set of symbols. For example, in a binary symbol set, the state variable could take on either 0 or 1, black or white, L or R. To define a dynamic on the lattice, a rule must be chosen governing the new state variable at time \(n\,+\,1\) at lattice site \(\alpha\), in terms of the old state variables at time \(n\). Two sets of integers are thus needed, one for space and one for time. A nearest-neighbor law might take the form \[A_{n\,+\,1}^{\alpha}\,=\,F(A_{n}^{\alpha},A_{n}^{\alpha\,+\,1},A_{n}^{\alpha\,- \,1})\] (8-3.1) For example, for a binary symbol pair (R, L) one could adopt a rule such as the so-called 122 rule in Wolfram (1986). Wolfram (1986) and others have shown that such symbol dynamics on a lattice can exhibit very complex patterns and behavior, as illustrated in Figure 8-7. For example, a spatially simple initial row of symbols can generate a chaotic-looking pattern, or a randomly chosen initial row can generate a regular pattern after several iterations. While simple in concept, these models have a major drawback: the lack of a rigorous connection between the physical laws in the continuum one is trying to model (such as a turbulent fluid) and the symbol rule that generates the iterated coupled symbol map.
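A minimal sketch of such symbol dynamics, using Wolfram's convention in which an elementary rule is labeled by the byte formed from its eight output bits (rule 122 shown; the lattice size, seed, and periodic boundary conditions are our choices):

```python
import random

def ca_step(row, rule=122):
    """One update of an elementary cellular automaton of the form (8-3.1)
    on a periodic ring: each new cell value depends on the old values of
    the cell and its two nearest neighbors."""
    n = len(row)
    return [(rule >> (4 * row[(i - 1) % n] + 2 * row[i] + row[(i + 1) % n])) & 1
            for i in range(n)]

random.seed(0)
row = [random.randint(0, 1) for _ in range(64)]   # random initial symbols
history = [row]
for _ in range(32):
    row = ca_step(row)
    history.append(row)

# print the space-time pattern in the style of Figure 8-7
for r in history:
    print("".join(".#"[c] for c in r))
```

Scanning down the printed rows corresponds to advancing in time \(n\); scanning across corresponds to the lattice index \(\alpha\).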
### Coupled Maps In this model, one discretizes space and time, but allows the state variable to take on continuous values: \[x_{n\,+\,1}^{\alpha}\,=\,F(x_{n}^{\alpha}\,,\,x_{n}^{\alpha\,+\,1},x_{n}^{\alpha \,-\,1})\] (8-3.2) For example, a popular model is to assume that each cell is governed by a simple one-dimensional map such as the logistic map with coupling to nearest neighbors: \[x_{n\,+\,1}^{\alpha}\,=\,(1\,-\,\varepsilon)f(x_{n}^{\alpha})\,+\,\frac{\varepsilon}{ 2}\,[f(x_{n}^{\alpha\,+\,1})\,+\,f(x_{n}^{\alpha\,-\,1})]\] (8-3.3) where \(f(x)\,=\,1\,-\,ax^{2}\). This so-called coupled map lattice law has a diffusive interaction between cells (Kaneko, 1989). Using such models one is able to define Lyapunov exponents, entropy measures, correlation functions, mutual information, and other thermodynamic quantities of spatiotemporal chaos (Kaneko, 1989). ### Coupled Differential Equations These models are in the tradition of the classical studies of Brillouin (1946) or Toda (1981), but new tools have been used to try to characterize spatiotemporal dynamics, especially chaotic-looking spatial patterns. One such study is the work of the group at the University of Maryland (Umberger et al., 1989). The model is shown in Figure 8-8 and is similar to the earthquake model of Carlson and Langer (1989) in Chapter 4. The Maryland group uses a chain of Duffing oscillators, similar to a set of buckled beams or a set of two-well potential oscillators, \[\ddot{x}_{\alpha}\,=\,-\,\gamma\dot{x}_{\alpha}\,+\,\frac{1}{2}\,x_{ \alpha}[a\,-\,x_{\alpha}^{2}]\,+\,f\cos\omega t\,+\,\varepsilon D[x_{\alpha}]\] (8-3.4) where the coupling operator is defined as \[D[x_{\alpha}]\,=\,x_{\alpha\,+\,1}\,-\,\,2x_{\alpha}\,+\,x_{\alpha\,-\,1}\] (Note that we are using the subscript \(\alpha\) to denote spatial position in the chain.)
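The coupled map lattice (8-3.3) is simple enough to sketch directly. In this fragment the lattice size, the map parameter \(a\), and the coupling \(\varepsilon\) are illustrative choices, and periodic boundary conditions are assumed:

```python
def cml_step(x, a=1.9, eps=0.1):
    """One iteration of the diffusively coupled logistic lattice (8-3.3)
    with periodic boundaries and f(x) = 1 - a*x**2 (a, eps illustrative)."""
    n = len(x)
    fx = [1.0 - a * xi * xi for xi in x]
    return [(1.0 - eps) * fx[i] + 0.5 * eps * (fx[(i + 1) % n] + fx[(i - 1) % n])
            for i in range(n)]

# a spatially smooth initial profile develops irregular spatial structure
x = [0.1 + 0.01 * i for i in range(32)]
for _ in range(500):
    x = cml_step(x)
```

Because \(f\) maps \([-1,1]\) into itself for this parameter range and the update is a convex combination of \(f\) values, the lattice state stays bounded while the spatial profile becomes irregular.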
##### Chain of Toda Oscillators Another example of a chain of coupled oscillators with a nonlinear force interaction between masses is a model used to describe anharmonic intermolecular forces in a crystal lattice (Toda, 1981). This model has received both numerical and analytical study by Geist and Lauterborn (1988). The equations take the form \[\begin{array}{l}\dot{d}_{i}=v_{i}-v_{i+1}\\ M\dot{v}_{i}=\exp(d_{i-1})-\exp(d_{i})+\gamma[v_{i+1}-2v_{i}+v_{i-1}]+F_{i}(t) \end{array}\] (8-3.5) where \(F_{i}=0\) for \(i\neq i_{0}\) and \(F_{i_{0}}=A\,\sin\,\omega_{0}t\). In this model, \(d_{i}\) represents the distance between neighboring masses, \(v_{i}\) is the velocity of the \(i\)th mass, and \(\gamma\) is a damping constant. A periodic force is applied to one of the masses in the chain. The numerical integration of these equations for 15 masses is shown in Figure 8-9, from the paper of Geist and Lauterborn (1988). The spatial complexity in Figure 8-9 is clearly evident. ##### Experiments on Coupled Lattice Chaos A few experiments using coupled cell lattices are beginning to emerge. For electrical circuits one should look at the papers of Purwins et al. (1987, 1988). Another two-dimensional lattice, based on cellular neural networks, has been studied by Chua and Yang (1988). In the following we describe a mechanical experiment based on coupled masses on a taut string (Moon et al., 1991). The physical problem is shown in Figure 8-10. Eight small aluminum spheres sit on a string under tension. The string is fixed at one end and periodically excited at the other end with an electromagnetic shaker. The nonlinearity in this case is represented by an amplitude constraint: if the masses exceed a certain amplitude, they impact a fairly rigid wall. This problem is not entirely academic, because in high-speed printers a chain of masses with typeface characters is moved across the paper. The unique character of this experiment is the signal processing.
The format of the data output was designed to provide a link to cellular automata. That is, if the \(k\)th mass did not hit the constraint within a certain time period, a 0 would be stored in the \(k\)th register, whereas if it hit the wall a 1 would be stored. The time interval chosen was the first quarter cycle of each forcing cycle. Thus, this represented a finite Poincare window. Finally, eight masses were chosen so that the eight symbols of 0's or 1's could be coded into a binary number. Thus, at the end of each Poincare time window, a binary number \(S_{n}\) from 0 to 255 would code the spatial impact pattern. In this way a large amount of statistical data could be obtained for both space and time. #### Spatial Return Maps One of the features of this experiment was the use of a spatial pattern return map. After each Poincare window the spatial pattern number \(S_{n}\) was plotted versus the previous pattern number \(S_{n-1}\). In this way one could track the changes in the impact pattern as one varied, say, the forcing amplitude or frequency. Figure 8-10: Sketch of experiment with eight coupled masses on a string and an amplitude constraint. Figure 8-9: Numerical integration of the dynamics of 15 coupled masses with Toda potential forces (8-3.5). [From Geist and Lauterborn (1988).] The results of these experiments are shown in Figures 8-11 and 8-12. Figure 8-11a shows which beads hit the constraint; Figure 8-11b shows the pattern history. The next figure (Figure 8-12) shows the pattern number return map. For example, when only two masses hit, one can see that there are two different spatial patterns which alternate. When more masses hit, one can see that many spatial patterns are involved. Along with the measurements, a numerical simulation was performed (Figure 8-13) (see Moon et al., 1991). Using the numerical data, an entropy measure was computed (e.g., see Crutchfield and Kaneko, 1987).
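The pattern coding, the return map, and the entropy measure of the next paragraph are easy to make concrete. A minimal sketch (the helper names are ours, not from Moon et al.):

```python
import math
from collections import Counter

def pattern_number(impacts):
    """Code an eight-mass impact pattern into a binary number S_n in 0..255:
    bit k is 1 if the k-th mass hit the constraint during the window."""
    assert len(impacts) == 8
    return sum(bit << k for k, bit in enumerate(impacts))

def return_map(patterns):
    """Pairs (S_{n-1}, S_n) for a spatial-pattern return map (cf. Figure 8-12)."""
    return list(zip(patterns[:-1], patterns[1:]))

def pattern_entropy(patterns):
    """Entropy S = -sum p_i log p_i of a pattern-number sequence, with p_i
    estimated from the observed frequency of each pattern."""
    total = len(patterns)
    return -sum((c / total) * math.log(c / total)
                for c in Counter(patterns).values())

# two alternating patterns, as when only two masses hit in turn
p1 = pattern_number([1, 0, 0, 0, 0, 0, 0, 0])   # only mass 0 hits -> S = 1
p2 = pattern_number([0, 1, 0, 0, 0, 0, 0, 0])   # only mass 1 hits -> S = 2
pairs = return_map([p1, p2, p1, p2, p1])
entropy = pattern_entropy([p1, p2] * 50)        # periodic sequence: S = ln 2
```

A strictly periodic impact sequence gives a low entropy, while a sequence that visits many of the 256 patterns gives a higher value, mirroring the trend in Figure 8-13.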
The entropy is based on probability measures \(\{P_{i}\}\), that is, \[S=-\sum P_{i}\log P_{i}\] (8-3.6) where \(P_{i}\) measures the probability that one of the 256 configuration patterns will occur. In the numerical experiment, 4300 cycles of data were taken and the first 600 were discarded. Of the remaining 3700 cycles, 330 groups of 400 cycles were used to calculate \(P_{i}\), so that the entropy was averaged over 330 sets of spatiotemporal data. The resulting entropy was then plotted as a function of the ratio of the mass-wall gap to the driving amplitude (Figure 8-13). One can see that the entropy increases as the gap is made smaller, which seems to measure the increase in complexity of the spatiotemporal impact patterns. These experiments and others are still exploratory. We are still looking for a new "Feigenbaum number" that will relate the observations in one experiment to some universal mathematical model. In the language of Kuhn's theory of scientific revolutions, we are still looking for the right spatiotemporal paradigm that will unite these otherwise disparate experiments. ### 8.4 Lagrangian Chaos #### Chaotic Mixing of Fluids In looking at the beautiful color pictures of fractal basin boundaries (see color plates), one is struck by the similarity to the mixing of paints of different colors. Whereas the bending and stretching operations that are responsible for temporal, fractal dynamics can be seen only in abstract phase space, in the mixing of fluids the folding and stretching can be seen directly in physical space. Figure 8-11: Space-time symbol plots for a periodically driven string with eight masses (Figure 8-10). (_a_) Two-bead impact. (_b_) Multiple-bead impact. [From Moon et al. (1991).] Figure 8-12: Spatial pattern number return map for the driven string with eight masses (Figures 8-10 and 8-11). (_a_) Periodic impact. (_b_) Chaotic impact. [From Moon et al. (1991).] Flow patterns in fluids can be visualized in two ways.
One can fix attention on one location \(\mathbf{r}\) and describe the velocity as it changes in time, \(\mathbf{v}(\mathbf{r},\,t)\). Alternatively, one can fix one's eye on a single fluid particle and follow its position in space \(\mathbf{r}(t)\), with velocity \(\mathbf{v}=(\dot{x},\,\dot{y},\,\dot{z})\). The fixed spatial reference flow description is called _Eulerian_, whereas the particle-based description is called _Lagrangian_. In general, one obtains three equations relating the particle velocity to the spatial velocity functions: \[\begin{array}{rcl}\dot{x}&=&V_{x}(x,y,z,t)\\ \dot{y}&=&V_{y}(x,y,z,t)\\ \dot{z}&=&V_{z}(x,y,z,t)\end{array}\] (8-4.1) The similarities of these equations to the Lorenz equations (1-3.9) are clear, and chaotic trajectories are known to exist, as can be seen in any turbulent flow. However, in most turbulent flows the spatial (Eulerian) patterns change in time with as much complexity as the chaotic particle trajectories. What is remarkable, however, is that it is possible for the Eulerian velocity patterns to be regular (i.e., either stationary in 3-D or periodically time-varying in 2-D) while the individual particle trajectories are chaotic. Figure 8-13: Entropy measure for complexity in space-time patterns in a string with eight masses (Figure 8-10). [From Moon et al. (1991).] These problems are important in chemical and other related technologies where mixing, stirring, or advection is important. Also, in fluid or gaseous combustion, the mixing of fuel and oxygen is important (e.g., see Gouldin, 1987). In fact, these problems constitute examples where chaos is desirable. Modern studies of chaotic mixing using dynamical systems ideas have been carried out by Aref and Balachandar (1986), Ottino (1989a), and Chaiken et al. (1987), to name a few of the principal researchers. A very readable description can be found in an article by Ottino (1989b), and a review of chaotic advection of fluids may be found in Aref (1990).
To illustrate the basic ideas, we describe the phenomenon as it occurs in 2-D physical fluid flow (as contrasted with "flows" in phase space). In particular we consider the case of an incompressible fluid (such as water or oil), for which the divergence of the velocity field is zero, that is, \[\nabla\cdot\mathbf{v}=0\] (8-4.2) In this case the velocity in two dimensions can be described by a scalar function called a _stream function_, \(\psi(x,\,y,\,t)\): \[\mathbf{v}=\nabla\times\psi\mathbf{e}_{z}\quad\text{or}\quad v_{x}=\frac{\partial\psi}{\partial y},\quad v_{y}=-\frac{\partial\psi}{\partial x}\] (8-4.3) \(\psi\) is then found by solving the momentum equation of fluid mechanics. When \(\psi\) is independent of time, we have a stationary flow pattern in space. Equations (8-4.3) are then used to describe the particle paths, that is, \[\begin{array}{l}\dot{x}=\frac{\partial\psi(x,y,t)}{\partial y}\\ \dot{y}=-\frac{\partial\psi(x,y,t)}{\partial x}\end{array}\] (8-4.4) These equations are precisely the same as those for a single particle with position \(q=x\) and momentum \(p=y\) and an energy or Hamiltonian function \(H(q,\,p)=\psi\). (See also Eq. (6-3.18).) Of course, a steady 2-D flow cannot produce chaos, so one introduces a time disturbance by slowly varying the flow pattern periodically in time, that is, \(\psi(x,\,y,\,t\,+\,T)=\psi(x,\,y,\,t)\). One then has the analog of a periodically forced oscillator without dissipation. Two examples have received extensive study in the literature, namely Stokes flow (Figure 8-14) and flow in a rectangular cavity (Figure 8-15). #### Stokes Flow In this problem, an incompressible, viscous fluid flows between two rotating cylinders whose centers are displaced (Figure 8-14). It can be shown that when the rotation rates of the cylinders are steady, an exact solution can be found when viscous forces dominate the Figure 8-15: Sketch of geometry of fluid flow experiment in a rectangular cavity with moving walls.
[From Leong and Ottino (1989).] inertial forces. In these solutions, particles travel in closed orbits in the plane. However, if one allows one of the cylinders to undergo a slow time-periodic change in rotation, then chaotic orbits of the particles can occur (see Color Plates 10, 11). Analysis of this periodic cylinder rotation problem has been given by Aref and Balachandar (1986) and Chaiken et al. (1987). A beautiful experimental study of chaotic mixing in the Stokes problem has been presented by Chaiken et al. (1986). [See also Tabor (1989, Section 4.8) for a description of this experiment.] In either numerical or experimental studies, one can see the effect of folding and stretching in chaotic mixing. An example from the paper by Chaiken et al. (1987) for the Stokes flow problem is shown in Figure 8-16. An initially short, straight segment of fluid particles is stretched and folded after several cycles of the periodic rotation of the inner cylinder. #### Rectangular Cavity A fine experimental study of fluid mixing in a rectangular cavity with moving walls has been reported by Leong and Ottino (1989) (see Figure 8-15). Figure 8-16: Experimental picture of stretching and folding of an initially straight segment of fluid particles in a Stokes flow field. [From Chaiken et al. (1987).] In this study a viscous fluid is confined in a rectangular cavity in which two of the walls can move. If the velocity of the walls is steady, then the streamlines form time-independent closed orbits. However, in the experiments the authors slowly varied the wall velocity periodically, resulting in complex particle motions which exhibit the stretching and folding seen in the Stokes flow problem. In these experiments, one can tag an initial group of fluid particles with a different color. After several cycles of wall motion, one can observe complex folded patterns in the tracer particles, as shown in Figure 8-17.
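The particle-path equations obtained from a time-periodic stream function can be integrated directly. A minimal sketch using the illustrative stream function \(\psi=\sin x\,\sin y\,(1+\epsilon\cos\omega t)\); this flow field is our own assumption for demonstration, not the Stokes or cavity flow of the experiments above:

```python
import math

def advect_step(x, y, t, dt=0.01, eps=0.25, omega=2.0):
    """One Euler step of the particle-path equations x' = dpsi/dy,
    y' = -dpsi/dx for the assumed time-periodic stream function
    psi = sin(x)*sin(y)*(1 + eps*cos(omega*t))."""
    amp = 1.0 + eps * math.cos(omega * t)
    vx = math.sin(x) * math.cos(y) * amp      # d(psi)/dy
    vy = -math.cos(x) * math.sin(y) * amp     # -d(psi)/dx
    return x + dt * vx, y + dt * vy

# follow one tracer particle through many forcing periods
x, y, t = 1.0, 1.3, 0.0
path = [(x, y)]
for _ in range(5000):
    x, y = advect_step(x, y, t)
    t += 0.01
    path.append((x, y))
```

For \(\epsilon=0\) the tracer circulates on a closed streamline; switching on the periodic modulation is what opens the door to chaotic particle paths, exactly as in the cylinder and cavity experiments.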
The role of symmetry in chaotic mixing for this problem is discussed in the paper by Franjione et al. (1989). #### Three-Dimensional Problems and Turbulence If the Eulerian flow field \(V(x,\,y,\,z)\) has a three-dimensional character, then the particle trajectories can become chaotic without the need for time-periodic boundary conditions. One example of this is the flow in a twisted pipe, which has been studied by Jones et al. (1989). In this problem a viscous fluid flows in a circular pipe whose centerline is bent in a semicircular arc. However, each planar section of the pipe is twisted out of the plane relative to the previous section. Iterating this idea in space, these authors obtained a two-dimensional mapping of the fluid particle position as it passes from one circularly bent section to another. Iteration of this 2-D mapping then leads to stochastic or chaotic orbits of the fluid particle. A finite number of such twisted pipe segments may be useful as a practical device for mixing fluids. It should be noted, however, that these examples of Lagrangian fluid chaos constitute a limited class of fluid motions and cannot be understood as "solving" the general problem of fluid turbulence (Aref, 1990), which remains a distant, if slightly closer, target. Figure 8-17: Complex mixing patterns for an oscillatory fluid flow process. [From Leong and Ottino (1989).] ##### Chaotic Mixing in Plastic Material Mixing is usually associated with fluids, but one can also discover mixing problems in more solid materials. Solids are distinguished from fluids in that they can resist shear. However, some solid materials will yield or flow when the shear stress exceeds some critical value. In the somewhat academic example discussed here, it is shown how chaotic trajectories of particles under alternating shear deformation in a plastic material can lead to spatial complexity similar to the folding and stretching processes in the mixing of fluids.
(Such elasto-plastic mixing may have taken place in the earth's mantle over a geological time scale.) Consider a sheet of plastic material to which we apply a tangential body torque distribution centered about point A in Figure 8-18 (Feeny et al., 1992). The body force is such that the shear stress is a constant value: \(\tau_{r\theta}=\kappa\). In the theory of plasticity, the material will flow when the shear stress reaches a yield value \(\tau_{0}\). The strain rate \(\dot{\gamma}\) is directly related to the shear stress, so that \(\dot{\gamma}=\lambda\tau_{0}\). Figure 8-18: Sketch of torsional deformation mechanics of a thin plastic sheet. It can also be shown that the strain rate is related to the velocity field so that, if \(v\) is the circumferential deformation rate, under circular symmetry \[\dot{\gamma}=\frac{dv}{dr}-\frac{v}{r}=\lambda\tau_{0}\] (8-4.5) We assume that the shearing body force is applied within a radius normalized to unity, so that \(v(r=1)=0\). Then the velocity field is given by \[v(r)=\lambda\tau_{0}r\,\ln\,r\] (8-4.6) In terms of the angular deformation, one can then write \[\begin{split}\theta&=\tau\ln r,\qquad 0\leq r\leq 1\\ \theta&=0,\qquad\qquad 1<r<\infty\end{split}\] (8-4.7) where the parameter \(\tau\) is a measure of both the yield stress and the time of application of the body torque. To obtain a chaotic mixing deformation, we use an adaptation of the "blinking vortex" model of Aref (1984). We first apply the torque at point A, then at point B, and so on, alternating the deformation process in a periodic way. This process leads to the iterated map \[\begin{split}\theta_{\alpha}^{n\,+\,1}&=\,\theta_{ \alpha}^{n}+\tau\ln r_{\alpha}^{n}\\ r_{\alpha}^{n\,+\,1}&=\,r_{\alpha}^{n}\end{split}\] (8-4.8) where \(\alpha=\) A or B and the superscripts \(n\), \(n\,+\,1\) indicate the value of the variable at the \(n\)th and \((n\,+\,1)\)st cycles. This map can be shown to be area-preserving.
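The alternating-torque map (8-4.8) can be sketched directly: each half-cycle rotates points inside the unit radius about one center by \(\tau\ln r\) and leaves points outside untouched. The values of \(\tau\), the center separation, and the tracked line element are illustrative assumptions:

```python
import math

def twist(x, y, cx, cy, tau):
    """Apply the plastic twist (8-4.7) about center (cx, cy): points inside
    the unit radius rotate by an angle tau*ln(r); points outside stay put.
    Since r is unchanged, each twist is area-preserving."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if r == 0.0 or r > 1.0:
        return x, y
    dth = tau * math.log(r)
    c, s = math.cos(dth), math.sin(dth)
    return cx + c * dx - s * dy, cy + s * dx + c * dy

def blink(points, tau=8.0, sep=0.5):
    """One full cycle of the alternating-torque map (8-4.8): twist about
    center A, then about center B (tau and sep are illustrative values)."""
    pts = [twist(x, y, -sep / 2, 0.0, tau) for x, y in points]
    return [twist(x, y, +sep / 2, 0.0, tau) for x, y in pts]

# iterate a short line element and watch it stretch and fold (cf. Figure 8-19)
line = [(-0.2 + 0.4 * i / 200, 0.1) for i in range(201)]
for _ in range(8):
    line = blink(line)
```

Plotting `line` after 8 and 16 cycles reproduces, qualitatively, the stretching and folding of the line element shown in Figure 8-19.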
In the fluid blinking vortex model, one has two co-rotating vortices, one of which turns on while the other turns off in a periodic manner. Three types of graphical results are presented. In the first we show what happens to a line element in the plastic sheet after 8 or 16 iterations of the cycle (Figure 8-19). It is clear that there are stretching and folding operations in the two-dimensional space which in temporal dynamics would lead to horseshoes and homoclinic tangles. The second graphic looks at a few initial points in the plane after many iterations of the map (Figure 8-20). This is effectively a Poincare map of the alternating plastic deformation process. Here we see structures that remind one of Hamiltonian temporal dynamics, with quasiperiodic orbits and stochastic (diffuse) orbits that give evidence for chaos (similar to Figure 8-5 or Figure 3-35). Figure 8-19: Numerical experiments of chaotic mixing due to alternating torques on a plastic material, showing the deformation of a line element after 8 and 16 iterations of (8-4.8). [From Feeny et al. (1992).] The final graphics are some color plates (CP-15, 16). Here we assign different colors to four quadrants of the plastic sheet and look at the patterns after several deformation cycles. These pictures graphically show the folding and stretching processes that occur in these mixing problems and are similar to those observed experimentally in fluid mixing problems. Although this example is artificial, the availability of an explicit map allows one to look at the spatial mixing in a straightforward way. It also shows how simple velocity fields can lead to spatially complex particle trajectories. Furthermore, it suggests that processes such as the forging of metals, the kneading of baker's dough, geomechanical deformations, and the deformation of clay in the making of pottery may have in them mechanisms for chaotic mixing.
## Problems * **8-1**: Give six examples of how the history of temporal events is written in spatial patterns. * **8-2**: _Periodic Chain Structures_. As a prelude to understanding nonlinear dynamics in periodic chain oscillator structures, examine the propagation of harmonic waves in a linear chain of equal masses \(m\) and springs of stiffness \(k\). If \(\omega\) is the frequency and \(\kappa\) is the wave number (\(\lambda=2\pi/\kappa\) is the wavelength), show that in a linear chain \(\omega\) and \(\kappa\) must be related by a dispersion relation. Figure 8-20: Poincaré map for several initial conditions for the alternating-torque plasticity mixing model (8-4.8), showing quasiperiodic and stochastic orbits. [From Feeny et al. (1992).]
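A sketch of the calculation asked for in Problem 8-2, taking \(a\) as the lattice spacing and assuming a traveling-wave solution:

```latex
m\,\ddot{x}_{j} = k\,(x_{j+1} - 2x_{j} + x_{j-1}), \qquad
x_{j}(t) = A\,e^{i(\kappa j a - \omega t)}
\;\Longrightarrow\;
-m\omega^{2} = k\left(e^{i\kappa a} - 2 + e^{-i\kappa a}\right)
             = -4k\,\sin^{2}\!\frac{\kappa a}{2},
\qquad
\omega(\kappa) = 2\sqrt{k/m}\,\left|\sin\frac{\kappa a}{2}\right|
```

The resulting \(\omega(\kappa)\) is the dispersion relation of the linear chain: long waves (\(\kappa a\ll 1\)) travel nondispersively with speed \(a\sqrt{k/m}\), while short waves are dispersive.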
_Ante mare et terras et quod tegit omnia caelum_ _Unus erat toto naturae vultus in orbe,_ _Quem dixere Chaos, rudis indigestaque moles_ _Nec quicquam nisi pondus iners congestaque_ _eodem_ _Non bene iunctarum discordia semina rerum._ Ovid ## Introduction It seems appropriate to begin a book entitled "Deterministic Chaos" with an explanation of both terms. According to the Encyclopaedia Britannica, the word _"chaos"_ is derived from the Greek "χάος" and originally meant the infinite empty space which existed before all things. The later Roman conception interpreted chaos as the original crude, shapeless mass into which the Architect of the world introduces order and harmony. In the modern usage which we adopt here, chaos denotes a state of disorder and irregularity. In the following, we shall consider physical systems whose time dependence is _deterministic_, i.e., there exists a prescription, either in terms of differential or difference equations, for calculating their future behavior from given initial conditions. One could assume naively that deterministic motion (which is, for example, generated by continuous differential equations) is rather regular and far from being chaotic, because successive states evolve continuously from each other. But it was already discovered at the turn of the century by the mathematician H. Poincare (1892) that certain mechanical systems whose time evolution is governed by Hamilton's equations could display chaotic motion. Unfortunately, this was considered by many physicists as a mere curiosity, and it took another 70 years until, in 1963, the meteorologist E. N. Lorenz found that even a simple set of three coupled, first-order, nonlinear differential equations can lead to completely chaotic trajectories. Lorenz's paper, the general importance of which is recognized today, was also not widely appreciated until many years after its publication.
He discovered one of the first examples of _deterministic chaos_ in dissipative systems. In the following, deterministic chaos denotes the irregular or chaotic motion that is generated by nonlinear systems whose dynamical laws uniquely determine the time evolution of a state of the system from a knowledge of its previous history. In recent years - due to new theoretical results, the availability of high speed computers, and refined experimental techniques - it has become clear that this phenomenon is abundant in nature and has far-reaching consequences in many branches of science (see the long list in Table 1 which is far from complete). We note that nonlinearity is a necessary, but not a sufficient condition for the generation of chaotic motion. (Linear differential or difference equations can be solved by Fourier transformation and do not lead to chaos.) The observed chaotic behavior in time is neither due to external sources of noise (there are none in the Lorenz equations) nor to an infinite number of degrees of freedom (in Lorenz's system there are only three degrees of freedom) nor to the uncertainty associated with quantum mechanics (the systems considered are purely classical). The actual source of irregularity is the property of the nonlinear system of separating initially close trajectories exponentially fast in a bounded region of phase space (which is, e. g., three-dimensional for Lorenz's system). It becomes therefore practically impossible to predict the long-time behavior of these systems, because in practice one can only fix their initial conditions with finite accuracy, and errors increase exponentially fast. If one tries to solve such a nonlinear system on a computer, the result depends for longer and longer times on more and more digits in the (irrational) numbers which represent the initial conditions. 
Since the digits in irrational numbers (the rational numbers are of measure zero along the real axis) are irregularly distributed, the trajectory becomes chaotic. Lorenz called this _sensitive dependence on the initial conditions_ the butterfly effect, because the outcome of his equations (which also describe, in a crude sense, the flow of air in the earth's atmosphere, i. e. the problem of weather forecasting) could be changed by a butterfly flapping its wings. This also seems to be confirmed sometimes by daily experience.

The results described above immediately raise a number of fundamental questions: Can one predict (e. g. from the form of the corresponding differential equations) whether or not a given system will display deterministic chaos? Can one specify the notion of chaotic motion more mathematically and develop quantitative measures for it? What is the impact of these findings on different branches of physics?

\begin{table} \begin{tabular}{l} \hline \hline Forced pendulum [1] \\ Fluids near the onset of turbulence [2] \\ Lasers [3] \\ Nonlinear optical devices [4] \\ Josephson junctions [5] \\ Chemical reactions [6] \\ Classical many-body systems (three-body problem) [7] \\ Particle accelerators [8] \\ Plasmas with interacting nonlinear waves [9] \\ Biological models for population dynamics [10] \\ Stimulated heart cells (see Plate IV at the beginning of the book) [11] \\ \hline \hline \end{tabular} \end{table} Table 1: Some nonlinear systems which display deterministic chaos. (For numerals, see "References" on page 247.)

Does the existence of deterministic chaos imply the end of long-time predictability in physics for some nonlinear systems, or can one still learn something from a chaotic signal? The last question really goes to the foundations of physics, namely the problem of predictability.
The shock which was associated with the discovery of deterministic chaos has therefore been compared to that which spread when it was found that quantum mechanics only allows statistical predictions. The questions mentioned above, to which some answers already exist, will be discussed in the remainder of this book. It should be clear, however, that there are still many more unsolved than solved problems in this relatively new field.

The rest of the introduction takes the form of a short survey which summarizes the contents of this book. Fig. 1 shows that one has to distinguish between deterministic chaos in dissipative systems (e. g. a forced pendulum with friction) and conservative systems (e. g. planetary motion, which is governed by Hamilton's equations). The first six chapters are devoted to dissipative systems. We begin with a review of some representative experiments in which deterministic chaos has been observed by different methods. As a next step, we explain the mechanism which leads to deterministic chaos for a simple model system and develop quantitative measures to characterize a chaotic signal. This allows us to distinguish different types of chaos, and we then show that, up to now, there are at least three routes or transitions by which nonlinear systems can become chaotic if an external control parameter is varied. Interestingly enough, all these routes can be realized experimentally, and they show a fascinating universal behavior which is reminiscent of the universality found in second-order equilibrium phase transitions. (Note that the transitions to chaos in dissipative systems only occur when the system is driven externally, i. e. is open.) In this context, universality means that there are basic properties of the system (such as critical exponents near the transition to chaos) that depend only on some global features of the system (for example, the dimensionality).

Figure 1: Classification of systems which display deterministic chaos.
(We consider in the following only classical dissipative systems, i. e. no quantum systems with dissipation.) The most recent route to chaos was found by Grossmann and Thomae (1977), Feigenbaum (1978), and Coullet and Tresser (1978). They considered a simple difference equation which, for example, has been used to describe the time dependence of populations in biology, and found that the population oscillates in time between stable values (fixed points) whose number doubles at distinct values of an external parameter. This continues until the number of fixed points becomes infinite at a finite parameter value, where the variation in time of the population becomes irregular. Feigenbaum showed, and this was a major achievement, that these results are not restricted to this special model but are in fact _universal_ and hold for a large variety of physical, chemical, and biological systems. This discovery has triggered an explosion of theoretical and experimental activity in the field. We will study this route in Chapter 3 and show that its universal properties can be calculated using the functional renormalization group method.

A second approach to chaos, the so-called intermittency route, was discovered by Manneville and Pomeau (1979). Intermittency means that a signal which behaves regularly (or laminarly) in time becomes interrupted by statistically distributed periods of irregular motion (intermittent bursts). The average number of these bursts increases with the variation of an external control parameter until the motion becomes completely chaotic. It will be shown in Chapter 4 that this route also has universal features and provides a universal mechanism for \(1/f\)-noise in nonlinear systems.

Yet a third possibility was found by Ruelle and Takens (1971) and Newhouse (1978). They suggested a transition to turbulent motion which was different from that proposed much earlier by Landau (1944, 1959).
Landau considered turbulence in time as the limit of an infinite sequence of instabilities (Hopf bifurcations), each of which creates a new basic frequency. However, Ruelle, Takens, and Newhouse showed that after only two instabilities, in the third step the trajectory becomes attracted to a bounded region of phase space in which initially close trajectories separate exponentially, such that the motion becomes chaotic. These particular regions of phase space are called _strange attractors_. We will explain this concept in Chapter 5, where we will also discuss several methods of extracting information about the structure of the attractor from a measured chaotic time signal. The Ruelle-Takens-Newhouse route is (as are the previous two routes) well verified experimentally, and we will present some experimental data which show explicitly the appearance of strange attractors in Chapter 6. To avoid the confusion which might arise from the use of the word turbulence, we note that what is meant here is only turbulence in _time_. The results of Ruelle, Takens, and Newhouse concern only the _onset_ of turbulence or chaotic motion in time. It is in fact one of the _aims_ (but not yet a result) of the study of deterministic chaos in hydrodynamic systems to understand the mechanisms for _fully developed_ turbulence, which implies irregular behavior in _time and space._

We now come to the second branch in Fig. 1, which denotes chaotic motion in conservative systems. Many textbooks give the incorrect impression that most systems in classical mechanics can be integrated. But as mentioned above, Poincare (1892) was already aware that, e. g., the nonintegrable three-body problem of classical mechanics can lead to completely chaotic trajectories.
About sixty years later, Kolmogorov (1954), Arnold (1963), and Moser (1967) proved, in what is now known as the KAM theorem, that the motion in the phase space of classical mechanics is neither completely regular nor completely irregular, but that the type of trajectory depends sensitively on the chosen initial conditions. Thus, stable regular classical motion is the exception, contrary to what is implied in many texts. A study of the long-time behavior of conservative systems, which will be discussed in Chapter 7, is of some interest because it touches on such questions as : Is the solar system stable? How can one avoid irregular motion in particle accelerators? Is the self-generated deterministic chaos of some Hamiltonian systems strong enough to prove the ergodic hypothesis? (The ergodic hypothesis lies at the foundations of classical statistical mechanics and implies that the trajectory uniformly covers the energetically allowed region of classical phase space such that time averages can be replaced by the average over the corresponding phase space.) Finally, in the last chapter we consider the behavior of quantum systems whose classical limit displays chaos. Such investigations are important, for example, for the problem of photodissociation, where a molecule is kicked by laser photons, and one wants to know how the incoming energy spreads over the quantum levels. (The corresponding classical system could show chaos because the molecular forces are highly nonlinear.) For several examples we show that the finite value of Planck's constant leads, together with the boundary conditions, to an almost-periodic behavior of the quantum system even if the corresponding classical system displays chaos. Although the difference between integrable and nonintegrable (chaotic) classical systems is still mirrored in some properties of their quantum counterparts (for example in the energy spectra), many problems in this field remain unsolved. 
## Chapter 1 Experiments and Simple Models

In the first part of this chapter, we review some experiments in which deterministic chaos has been detected by different methods. In the second part, we present some simple systems which exhibit chaos and which can be treated analytically.

### 1.1 Experimental Detection of Deterministic Chaos

In the following sections, we will discuss the appearance of chaos in four representative systems.

### Driven Pendulum

Let us first consider the surprisingly simple example of a periodically driven pendulum. Its equation of motion is

\[\ddot{\theta}\,+\,\gamma\,\dot{\theta}\,+\,\sin\theta\,=\,A\,\cos\left(\omega\,t\right) \tag{1.1}\]

where the dot denotes the derivative with respect to time \(t\), \(\gamma\) is the damping constant, and the right hand side describes a driving torque with amplitude \(A\) and frequency \(\omega\). (The coefficients of \(\ddot{\theta}\) and \(\sin\theta\) have been normalized to unity by choosing appropriate units for \(t\) and \(A\).) This equation has been numerically integrated for different sets of parameters (\(A\), \(\omega\), \(\gamma\)), and Fig. 2 shows that the variation of the angle \(\theta\) with time simply "looks chaotic" if the amplitude \(A\) of the driving torque reaches a certain value \(A_{\rm c}\). This is a possible, but rather imprecise, criterion for chaos. Before we proceed to improvements, three comments are in order. First, we would like to recall the well-known fact that the linearized version of the pendulum equation can be integrated exactly and does not lead to chaos. The emergence of chaos in the solutions of eq. (1.1) is, therefore, due to the nonlinear term \(\sin\theta\). Second, it follows from Figs. 2b and 2d that chaos sets in if the pendulum is driven over the summit, where the system displays sensitive dependence on initial conditions (a tiny touch determines, at \(\theta=\pi\), whether the pendulum makes a left or right turn). Third, we would like to point out that, as a function of the parameters \(A\) and \(\omega\), the behavior of the pendulum switches rather wildly between regular and chaotic motion, as shown in Fig. 2e.

Figure 2: Transition to chaos in a driven pendulum. a) Regular motion at small values of the amplitude \(A\) of the driving torque. b) Chaotic motion at \(A=A_{c}\) (note the different scales for \(\theta\)). c) and d) Regular and irregular trajectories in phase space (\(\dot{\theta}\), \(\theta\)) which correspond to a) and b). e) Phase diagram of the driven pendulum (\(\gamma=0.2\), \(\theta(0)=0\), \(\dot{\theta}(0)=0\)). Black points denote parameter values (\(A\), \(\omega\)) for which the motion is chaotic. (After Bauer, priv. comm.)

### Rayleigh-Benard System in a Box

Chaotic motion means that the signal displays an irregular and aperiodic behavior in time. To distinguish between multiply periodic behavior (which can also look rather complicated) and chaos, it is often convenient to Fourier-transform the signal \(x(t)\):

\[x(\omega)\,=\,\lim_{T\to\infty}\int\limits_{0}^{T}{\rm d}t\,{\rm e}^{i\omega t}\,x(t)\,. \tag{1.2}\]

For multiply periodic motion, the power spectrum

\[P(\omega)\,\equiv\,|x(\omega)|^{2} \tag{1.3}\]

consists only of discrete lines at the corresponding frequencies, whereas chaotic motion (which is completely aperiodic) is indicated by broad noise in \(P(\omega)\) that is mostly located at low frequencies. Such a transition from periodic motion to chaos is presented in the second line of Table 2, which shows the power spectrum of the velocity of the liquid in the \(x\)-direction for a Benard experiment. In the Benard experiment, a fluid layer (with a positive coefficient of volume expansion) is heated from below in a gravitational field, as shown in Fig. 3.
The heated fluid at the bottom "wants" to rise, and the cold liquid at the top "wants" to fall, but these motions are opposed by viscous forces. For small temperature differences \(\Delta T\), viscosity wins; the liquid remains at rest, and heat is transported by uniform heat conduction. This state becomes unstable at a critical value \(R_{u}\) of the Rayleigh number \(R\) (which is proportional to \(\Delta T\), see Appendix A), and a state of stationary convection rolls develops. If \(R\) increases further, a transition to chaotic motion is observed beyond a second threshold \(R_{c}\). In order to avoid the appearance of complex spatial structures, actual experiments to detect chaos (in time) in a Rayleigh-Benard system are usually performed in a small cell (see Fig. 3c). The boundary conditions limit the number of rolls, i. e. the number of degrees of freedom, which are counted by the number of Fourier components needed to describe the spatial structure of the fluid pattern. Besides \(\Delta T\), the observed dynamical behavior depends sensitively on the liquid chosen and on the linear dimensions (\(a\), \(b\), \(c\)) of the box (see, for example, Libchaber and Maurer, 1982). Table 2 shows the power spectrum of the velocity in the \(x\)-direction, measured via the Doppler effect in light scattering experiments (Swinney and Gollub, 1978; see also Plate I [at the beginning of the book] for a set of interferometric pictures of a Benard cell).
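The two criteria met so far — a chaotic-looking signal and broadband noise in the power spectrum — can be explored numerically. The sketch below (ours, not part of the original text; all parameter values are illustrative) integrates the pendulum equation (1.1) with a fourth-order Runge-Kutta scheme and evaluates a finite-time version of eqs. (1.2)-(1.3): for a small driving amplitude the steady-state spectrum is dominated by a sharp line at the driving frequency, whereas above \(A_{\rm c}\) a broad noisy background would appear.

```python
import math

# Illustrative sketch: driven pendulum (1.1) and its power spectrum (1.2)-(1.3).
# Parameter values (gamma, A, omega) are arbitrary choices, not from the book.

def pendulum_rhs(t, theta, v, gamma, A, omega):
    # theta'' + gamma*theta' + sin(theta) = A*cos(omega*t), as a first-order system
    return v, -gamma * v - math.sin(theta) + A * math.cos(omega * t)

def integrate(gamma, A, omega, t_end, dt=0.005):
    t, theta, v = 0.0, 0.0, 0.0          # start at rest, theta(0) = 0
    ts, thetas = [], []
    while t < t_end:
        k1 = pendulum_rhs(t, theta, v, gamma, A, omega)
        k2 = pendulum_rhs(t + dt/2, theta + dt/2*k1[0], v + dt/2*k1[1], gamma, A, omega)
        k3 = pendulum_rhs(t + dt/2, theta + dt/2*k2[0], v + dt/2*k2[1], gamma, A, omega)
        k4 = pendulum_rhs(t + dt, theta + dt*k3[0], v + dt*k3[1], gamma, A, omega)
        theta += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v     += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += dt
        ts.append(t); thetas.append(theta)
    return ts, thetas

def power(ts, xs, omega):
    # finite-time estimate of P(omega) = |integral of x(t) exp(i omega t) dt|^2
    dt = ts[1] - ts[0]
    re = sum(x * math.cos(omega * t) for t, x in zip(ts, xs)) * dt
    im = sum(x * math.sin(omega * t) for t, x in zip(ts, xs)) * dt
    return re*re + im*im

gamma, A, omega = 0.2, 0.3, 2.0/3.0      # small A: regular motion expected
ts, thetas = integrate(gamma, A, omega, t_end=40 * 2*math.pi/omega)
n = len(ts) // 2                          # discard the transient half
P_drive = power(ts[n:], thetas[n:], omega)
P_off   = power(ts[n:], thetas[n:], 2 * omega)
print(P_drive > 10 * P_off)               # sharp line at the driving frequency
```

Increasing \(A\) toward the chaotic region of Fig. 2e would spread the spectral weight into a broad background instead of a single dominant line.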
To describe the Benard experiment theoretically, Lorenz truncated the complicated differential equations which describe this system (see Appendix A) and obtained the equations of the so-called Lorenz model:

\[\dot{X} = -\sigma X+\sigma Y \tag{1.4a}\]
\[\dot{Y} = rX-Y-XZ \tag{1.4b}\]
\[\dot{Z} = XY-bZ \tag{1.4c}\]

where \(\sigma\) and \(b\) are dimensionless constants which characterize the system, and \(r\) is the control parameter, which is proportional to \(\Delta T\). The variable \(X\) is proportional to the circulatory fluid flow velocity, \(Y\) characterizes the temperature difference between ascending and descending fluid elements, and \(Z\) is proportional to the deviations of the vertical temperature profile from its equilibrium value.

\begin{table}
\begin{tabular}{l l l}
\hline \hline
System & Equation of Motion & Indication \\
\hline
Pendulum & \(\ddot{\theta}+\gamma\dot{\theta}+g\sin\theta=A\cos\omega t\) & Signal \\
 & \(x=\theta\), \(y=\dot{\theta}\), \(z=\omega t\): & \\
 & \(\dot{x}=y\) & \\
 & \(\dot{y}=-\gamma y-g\sin x+A\cos z\) & \\
 & \(\dot{z}=\omega\) & \\
\hline
Benard & \(\dot{x}=-\sigma x+\sigma y\) & Power spectrum \\
experiment & \(\dot{y}=rx-y-xz\) & \\
 & \(\dot{z}=xy-bz\) & \\
\hline
Belousov- & \(\dot{\vec{x}}=\vec{F}(\vec{x},\lambda)\); \(\vec{x}=[c_{1},c_{2},\ldots,c_{d}]\), & Autocorrelation \\
Zhabotinsky & measured variable \(c=[\mathrm{Ce}^{4+}]\) & function \\
reaction & & \\
\hline
Henon-Heiles & \(H=\frac{1}{2}\sum\limits_{i=1}^{2}(p_{i}^{2}+q_{i}^{2})+q_{1}^{2}\,q_{2}-\frac{1}{3}\,q_{2}^{3}\) & Poincaré map \\
system & \(\dot{\vec{p}}=-\frac{\partial H}{\partial\vec{q}}\), \(\dot{\vec{q}}=\frac{\partial H}{\partial\vec{p}}\) & \\
\hline \hline
\end{tabular}
\end{table} Table 2: Detection of chaos in simple systems.
A numerical analysis of this apparently simple set of nonlinear differential equations shows that its variables can exhibit chaotic motion above a threshold value \(r_{c}\) (see Appendix B). It should be noted, however, that the Lorenz equations describe the Benard experiment only in the immediate vicinity of the transition from heat conduction to convection rolls, because the spatial Fourier coefficients retained by Lorenz only describe simple rolls. The chaos found by Lorenz in eqns. (1.4a-c) is, therefore, different from the chaos seen in the experimental power spectrum in Table 2. To describe the experimentally observed chaos, many more spatial Fourier components have to be retained.

Figure 3: The Rayleigh-Bénard instability: a) and b) Transition from heat conduction to convection rolls in an infinitely extended two-dimensional fluid layer. c) Experiments to detect deterministic chaos in time are performed in a "match box".

### Stirred Chemical Reactions

Another system in which chaotic motion has been studied experimentally in great detail is the Belousov-Zhabotinsky reaction. In this chemical process, an organic molecule (e. g. malonic acid) is oxidized by bromate ions; the oxidation is catalyzed by a redox system (Ce\({}^{4+}\)/Ce\({}^{3+}\)). The reactants, which undergo 18 elementary reaction steps (see Epstein et al., 1983), are:

\[\text{Ce}_{2}(\text{SO}_{4})_{3},\ \text{NaBrO}_{3},\ \text{CH}_{2}(\text{COOH})_{2},\ \text{and }\text{H}_{2}\text{SO}_{4}.\]

It is not our aim to describe these reactions in detail, but to demonstrate that stirred chemical reactions provide convenient model systems for studying the onset of chaos. Fig. 4 shows how a chemical reaction is maintained in a steady state away from equilibrium by continuously pumping the chemicals into a flow reactor where they are stirred to ensure spatial homogeneity. For example, the reaction

\[\text{A}\ +\ \text{B}\ \overset{k_{1}}{\underset{k_{2}}{\rightleftharpoons}}\text{C} \tag{1.5}\]

is described by the equations:

\[\dot{c}_{\rm A}\,=\,-\,k_{1}\,c_{\rm A}\,c_{\rm B}\,+\,k_{2}\,c_{\rm C}\,-\,r\,[c_{\rm A}\,-\,c_{\rm A}(0)] \tag{1.6a}\]
\[\dot{c}_{\rm B}\,=\,-\,k_{1}\,c_{\rm A}\,c_{\rm B}\,+\,k_{2}\,c_{\rm C}\,-\,r\,[c_{\rm B}\,-\,c_{\rm B}(0)] \tag{1.6b}\]
\[\dot{c}_{\rm C}\,=\,k_{1}\,c_{\rm A}\,c_{\rm B}\,-\,k_{2}\,c_{\rm C}\,-\,r\,c_{\rm C} \tag{1.6c}\]

where eq. (1.6a) can be interpreted as follows. The concentration \(c_{\rm A}\) decreases due to collisions between A and B (which generate C), increases due to decays of C (into A and B), and decreases if the flow rate \(r\) increases, since for \(k_{1}=k_{2}=0\) eq. (1.6a) can be integrated to give \(c_{\rm A}(t)\,-\,c_{\rm A}(0)\,\propto\,\exp(-rt)\). Generalizing, the reactions of \(M\) chemicals with concentrations \(c_{i}\) can be described by a set of first-order nonlinear differential equations

\[\dot{c}_{i}\,=\,g_{i}\{c_{j}\}\,-\,r\,[c_{i}\,-\,c_{i}(0)]\,\equiv\,F_{i}\{c_{j},\,\lambda\} \tag{1.7}\]

where the function \(g_{i}\{c_{j}\}\) involves nonlinear terms of the form \(c_{i}^{2}\) and \(c_{i}\,c_{j}\) if three-body collisions are neglected. The reactions can be studied as a function of the set of control parameters \(\lambda\,=\,\{c_{i}(0),\,k_{j},\,r\}\), which involves the initial concentrations \(\{c_{i}(0)\}\), the temperature-dependent reaction velocities \(\{k_{j}\}\), and the flow rate \(r\). Since \(r\) influences all individual reactions and can easily be manipulated by changing the pumping rate of the chemicals, it is usually used as the only control parameter. Let us now come back to the Belousov-Zhabotinsky reaction. The variable which signals chaotic behavior in this system is the concentration \(c\) of the Ce\({}^{4+}\) ions. It is measured by the selective light absorption of these ions.
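The interpretation of the flow-reactor equations can be checked with a quick numerical sketch (ours, not from the book; rate constants, feed concentrations, and the assumption that the product C is not pumped in are all illustrative). With the reaction switched off (\(k_{1}=k_{2}=0\)) every concentration must relax exponentially, at rate \(r\), toward its feed value; with the reaction on, the system settles into a nonequilibrium steady state.

```python
# Illustrative integration of flow-reactor rate equations of the type (1.6a-c).
# Assumption (ours): only A and B are fed into the reactor, so the feed value of C is 0.

def rhs(c, k1, k2, r, feed):
    cA, cB, cC = c
    fA, fB, fC = feed
    reac = k1*cA*cB - k2*cC                   # net rate of A + B -> C
    return (-reac - r*(cA - fA),
            -reac - r*(cB - fB),
             reac - r*(cC - fC))

def integrate(c, k1, k2, r, feed, t_end, dt=0.001):
    for _ in range(int(t_end / dt)):
        d = rhs(c, k1, k2, r, feed)           # simple Euler step suffices here
        c = tuple(ci + dt*di for ci, di in zip(c, d))
    return c

feed = (1.0, 1.0, 0.0)                        # feed concentrations c_i(0)
# no reaction: c_A(t) - c_A(0) decays like exp(-r t) toward the feed value
cA, cB, cC = integrate((2.0, 0.5, 0.3), 0.0, 0.0, 1.0, feed, t_end=10.0)
print(abs(cA - 1.0) < 1e-3)                   # relaxed to the feed value
# reaction on: a steady state away from equilibrium, with c_C > 0
cA, cB, cC = integrate((1.0, 1.0, 0.0), 1.0, 0.1, 1.0, feed, t_end=50.0)
print(cC > 0.01)
```

With many coupled species, as in the Belousov-Zhabotinsky reaction, the same scheme applied to eq. (1.7) can produce periodic or chaotic concentration signals as \(r\) is varied.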
The mean residence time of the substances in the open reactor (i. e. \(r^{-1}\)) acts as an external control parameter corresponding to \(\Delta T\) in the previous experiment. Table 2 shows a transition to chaos in this system which is detected via the change in the autocorrelation function

\[C(\tau)\,=\,\lim_{T\to\infty}\,\frac{1}{T}\,\int\limits_{0}^{T}{\rm d}t\,\hat{c}(t)\,\hat{c}(t\,+\,\tau)\,;\quad\hat{c}(t)\,=\,c(t)\,-\,\lim_{T\to\infty}\,\frac{1}{T}\,\int\limits_{0}^{T}{\rm d}t\,c(t)\,. \tag{1.8}\]

This function measures the correlation between subsequent signals. It remains constant or oscillates for regular motion and decays rapidly (mostly with an exponential tail) if the signals become uncorrelated in the chaotic regime (Roux et al., 1981). It should be noted that the power spectrum \(P(\omega)\) is proportional to the Fourier transform of \(C(\tau)\),

\[P(\omega)\,=\,|\hat{c}(\omega)|^{2}\,\propto\,\lim_{T\to\infty}\,\int\limits_{0}^{T}{\rm d}\tau\,{\rm e}^{i\omega\tau}\,C(\tau)\,, \tag{1.9}\]

that is, both quantities contain the same information. Eq. (1.9) can be derived by the usual rules for Fourier transformations if one continues \(\hat{c}(t)\) periodically in \(T\), so that \(\hat{c}(t)\,=\,\hat{c}(t\,+\,nT)\) where \(n\) is an integer, which leads to

\[\hat{c}(\omega)\,=\,\lim_{T\to\infty}\int\limits_{0}^{T}{\rm d}t\,{\rm e}^{i\omega t}\,\hat{c}(t)\,=\,\lim_{T\to\infty}\,\int\limits_{-T}^{T}{\rm d}t\,\cos(\omega t)\,\hat{c}(t)\,. \tag{1.10}\]

### Henon-Heiles System

Let us finally have a look at a simple nonintegrable example from classical mechanics that displays chaotic motion. In 1964 Henon and Heiles numerically studied the canonical equations of motion of the Hamiltonian

\[H\,=\,\frac{1}{2}\,(p_{1}^{2}\,+\,p_{2}^{2})\,+\,\frac{1}{2}\,(q_{1}^{2}\,+\,q_{2}^{2})\,+\,q_{1}^{2}\,q_{2}\,-\,\frac{1}{3}\,q_{2}^{3}\,. \tag{1.11}\]

This equation describes, in Cartesian coordinates \(q_{1}\) and \(q_{2}\), two nonlinearly coupled harmonic oscillators and, in polar coordinates \((r,\theta)\), a single particle in a noncentrosymmetric potential

\[V(r,\theta)\,=\,\frac{r^{2}}{2}\,+\,\frac{r^{3}}{3}\,\sin(3\theta) \tag{1.12}\]

that is obtained from \(\frac{1}{2}(q_{1}^{2}+q_{2}^{2})+q_{1}^{2}\,q_{2}-\frac{1}{3}\,q_{2}^{3}\) via \(q_{1}=r\cos\theta\) and \(q_{2}=r\sin\theta\) (see Fig. 5).

Figure 5: Equipotential lines \(V(r,\theta)=\) const. of the Henon-Heiles system (eq. 1.12) in polar coordinates.

Their investigation was motivated by empirical evidence that a star moving in a weakly disturbed cylindrically symmetric potential should have, in addition to the energy \(E\), another constant of the motion \(I\). This would imply that, for bounded motion, the trajectory of the Henon-Heiles system in phase space

\[\vec{x}(t)\ =\ [p_{1}(t),\,p_{2}(t),\,q_{1}(t),\,q_{2}(t)] \tag{1.13}\]

where \(p_{1},p_{2}\) are the momenta, is confined (via \(E[\vec{x}(t)]=\) const. and \(I[\vec{x}(t)]=\) const.) to a two-dimensional closed surface. In order to check this proposal, Henon and Heiles followed a method introduced by Poincare (1893) and plotted the points in which the trajectory \(\vec{x}(t)\) cuts the \((p_{2},q_{2})\) plane. If the motion were confined to a two-dimensional manifold, these points should form closed curves corresponding to the cut of the two-dimensional closed surface with the \((p_{2},q_{2})\) plane. The last line in Table 2 shows that, at low energies, different initial conditions in the Henon-Heiles system indeed lead to closed curves in the Poincare map. However, for high enough energy (which acts as control parameter for this system) the lines decay, and the points in the Poincare map of the Henon-Heiles model become plane-filling.
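The Poincaré-section construction just described is easy to reproduce numerically. The sketch below (ours, not from the book; the energy and initial condition are illustrative) integrates Hamilton's equations for (1.11) with Runge-Kutta and records the points \((q_{2},p_{2})\) where the trajectory crosses the plane \(q_{1}=0\) with \(p_{1}>0\); it also checks that the energy is conserved along the way.

```python
# Illustrative Poincaré section for the Henon-Heiles Hamiltonian (1.11).

def f(s):
    p1, p2, q1, q2 = s
    # Hamilton's equations for (1.11)
    return (-q1 - 2.0*q1*q2,            # dp1/dt = -dH/dq1
            -q2 - q1*q1 + q2*q2,        # dp2/dt = -dH/dq2
            p1, p2)                     # dq_i/dt = p_i

def energy(s):
    p1, p2, q1, q2 = s
    return 0.5*(p1*p1 + p2*p2) + 0.5*(q1*q1 + q2*q2) + q1*q1*q2 - q2**3/3.0

def rk4(s, dt):
    def add(a, k, h): return tuple(ai + h*ki for ai, ki in zip(a, k))
    k1 = f(s); k2 = f(add(s, k1, dt/2)); k3 = f(add(s, k2, dt/2)); k4 = f(add(s, k3, dt))
    return tuple(si + dt/6*(a + 2*b + 2*c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

E = 1.0/12.0                            # low energy: regular motion expected
q2, p2 = 0.1, 0.0
p1 = (2.0*(E - 0.5*q2*q2 + q2**3/3.0))**0.5   # fix p1 from H = E at q1 = 0
s, dt, section = (p1, p2, 0.0, q2), 0.01, []
E0 = energy(s)
for _ in range(20000):                  # integrate up to t = 200
    prev = s
    s = rk4(s, dt)
    if prev[2] < 0.0 <= s[2] and s[0] > 0.0:   # q1 crossed zero upward
        section.append((s[3], s[1]))           # record (q2, p2)
print(len(section), abs(energy(s) - E0))
```

At this low energy the recorded points trace out a closed curve; raising \(E\) toward the chaotic regime would scatter them over a two-dimensional region, as described in the text.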
Plane-filling points in the Poincare map indicate, according to Fig. 6, highly irregular chaotic motion in phase space and the absence of an additional constant of the motion \(I\).

Figure 6: Qualitatively different trajectories can be distinguished by their Poincaré sections: a) chaotic motion; b) approach of a fixed point; c) cycle; d) cycle of period two.

To summarize:

1. We have presented four possible criteria for chaotic motion:
* The time dependence of the signal "looks chaotic".
* The power spectrum exhibits broadband noise.
* The autocorrelation function decays rapidly.
* The Poincare map shows plane-filling points.

In all four criteria, chaos is indicated by a qualitative change. Later, we will introduce some more quantitative measures to characterize deterministic chaos.

2. A common feature of the systems listed in Table 2 is that they can be characterized by low-dimensional first-order differential equations

\[\dot{\vec{x}}\,=\,\vec{F}(\vec{x},\lambda);\quad\vec{x}\,=\,(x_{1},\,\ldots,\,x_{d}) \tag{1.14}\]

that are autonomous (i. e. \(\vec{F}\) does not contain the time explicitly) and nonlinear (\(\vec{F}\) is a nonlinear function of the \(\{x_{j}\}\)). These equations lead to chaotic motion if an external control parameter \(\lambda\) (which can be the amplitude of the driving torque for the pendulum or the temperature difference \(\Delta T\) in the Lorenz model, etc.) is varied. One distinguishes between _conservative systems_, for which a volume element in phase space \(\{\vec{x}\}\) only changes its shape but retains its volume in the course of time (an example is the Henon-Heiles Hamiltonian system, for which the Liouville theorem holds), and _dissipative systems_, for which volume elements shrink as time increases (see also Chapter 6).
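Whether a flow of the form (1.14) is conservative or dissipative can be read off directly from the divergence of \(\vec{F}\), since a small phase-space volume \(V\) carried along by the flow evolves as \(\dot{V}/V\,=\,\sum_{i}\partial F_{i}/\partial x_{i}\). As a short explicit check (added here for illustration), the Lorenz model (1.4a-c) gives

\[\sum_{i}\frac{\partial F_{i}}{\partial x_{i}}\,=\,\frac{\partial\dot{X}}{\partial X}\,+\,\frac{\partial\dot{Y}}{\partial Y}\,+\,\frac{\partial\dot{Z}}{\partial Z}\,=\,-\,(\sigma\,+\,1\,+\,b)\,<\,0\,,\]

i. e. volume elements contract at a constant rate and the system is dissipative, whereas for any Hamiltonian flow, such as the Henon-Heiles system (1.11),

\[\sum_{i}\left(\frac{\partial\dot{q}_{i}}{\partial q_{i}}\,+\,\frac{\partial\dot{p}_{i}}{\partial p_{i}}\right)\,=\,\sum_{i}\left(\frac{\partial^{2}H}{\partial q_{i}\,\partial p_{i}}\,-\,\frac{\partial^{2}H}{\partial p_{i}\,\partial q_{i}}\right)\,=\,0\,,\]

which is just the Liouville theorem mentioned above.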
It is often convenient to study the flow described by the equations of motion (1.14) via the corresponding \((d-1)\)-dimensional Poincare map

\[\vec{x}(n+1)\,=\,\vec{G}\,[\vec{x}(n),\,\lambda]\,;\quad\vec{x}(n)\,=\,[x_{1}(n),\ldots,x_{d-1}(n)] \tag{1.15}\]

which is generated by cutting the trajectory in \(d\)-dimensional phase space with a \((d-1)\)-dimensional hyperplane (see Fig. 6) and denoting the points which are generated with increasing time by \(\vec{x}(1),\vec{x}(2),\ldots\) etc. The classifications "conservative" and "dissipative" can then be generalized from flows to maps (see Chapter 5, eqns. (5.6a, b)).

Let us finally comment on the way in which we shall proceed with our description of real physical systems. One can generally distinguish several levels of description, as shown in Fig. 7. A typical example of such a reduction process is given in Appendix A, where the Navier-Stokes equations (which already represent a coarse-grained description of molecular motion) are, for the boundary conditions of a Benard experiment, reduced to the three differential equations of the Lorenz model, which lead in turn to _different_ Poincare maps (see Figs. 49, 67) corresponding to different parameter values. Another example has been given by Haken (1975), who reduced the quantum mechanical equations for a single-mode laser to a system of three rate equations (which is equivalent to the Lorenz system) by concentrating on macroscopic photon densities and using the adiabatic approximation ("slaving principle"). In the following, we shall not be concerned with the details of this reduction process, since the step from microscopic equations to differential equations for macroscopic variables has already been covered in several excellent books (Haken 1982, 1984), and the reduction of differential equations to Poincare maps can be done numerically.
It should also be clear that this reduction of a many-constituent system to a map which describes only a few degrees of freedom is not always possible; a counterexample would be fully developed spatio-temporal turbulence. Nevertheless, since it has been found to be effective for many physical systems (see the following chapters), we shall in the remainder of this book concentrate mostly on the last level in Fig. 7, where the dynamics of a system has been reduced to a one- or two-dimensional Poincare map.

Figure 7: Hierarchy for the levels of description of dynamical systems.

We shall use these maps as starting points for our description of chaotic systems, in the same sense as one uses the (coarse-grained) Ginzburg-Landau Hamiltonian to derive universal properties of second-order phase transitions (Wilson and Kogut 1974). It will then be shown that only some general features of these maps (such as, for example, the existence of a simple maximum) determine how chaos emerges. The various "routes to chaos" differ in the way in which the signal behaves before becoming completely chaotic. Although universal features of several routes to chaos have been discovered and verified experimentally, it should be stated explicitly that it is at present practically impossible to predict theoretically, for example from the Navier-Stokes equations with given boundary conditions, the route to chaos for a given experimental hydrodynamic system. This situation can be compared to ordinary second-order phase transitions, where one knows a lot about universality classes and critical exponents (for example, of magnetic systems) but where it is still a formidable and often unsolved problem to predict the transition temperature of a given magnet (Ma 1976). However, this limitation should not disappoint us. The beauty of physics reveals itself only after asking the right questions, and it seems, from the results summarized in this book (see especially
table 12 on page 12) that it is equally so for dynamical systems, where the question about universal features has led to the discovery of a beautiful unifying pattern behind different phenomena in this field.

### 1.2 The Periodically Kicked Rotator

One of the simplest dynamical systems which displays chaotic behavior in time is the periodically kicked damped rotator shown in Fig. 8. Its equation of motion is

\[\ddot{\varphi}\,+\,\Gamma\dot{\varphi}\,=\,F\,\equiv\,Kf(\varphi)\,\sum\limits_{n\,=\,0}^{\infty}\delta(t\,-\,nT)\,,\qquad n\ \ \mbox{integer} \tag{1.16}\]

where the dot denotes the time derivative, \(\Gamma\) is the damping constant, \(T\) is the period between two kicks, and we normalize the moment of inertia to unity. If we make the substitutions \(x=\varphi\), \(y=\dot{\varphi}\), \(z=t\), equation (1.16) can be rewritten as a system of first-order nonlinear autonomous differential equations

\[\dot{x}\,=\,y \tag{1.17a}\]
\[\dot{y}\,=\,-\,\Gamma y\,+\,Kf(x)\,\sum\limits_{n\,=\,0}^{\infty}\delta(z\,-\,nT) \tag{1.17b}\]
\[\dot{z}\,=\,1\,. \tag{1.17c}\]

Figure 8: Rotator kicked by a force \(F\).

These can be reduced to a two-dimensional map for the variables \((x_{n},y_{n})\,=\,\lim_{\epsilon\to 0}\,[x(nT-\epsilon),\,y(nT-\epsilon)]\) by integration. The general solution of (1.17b) for \((n+1)T-\epsilon\,>\,t\,>\,nT-\epsilon\) is

\[y(t)\,=\,y_{n}\,{\rm e}^{-\Gamma(t-nT)}\,+\,K\sum\limits_{m\,=\,0}^{\infty}f(x_{m})\int\limits_{nT-\epsilon}^{t}{\rm d}t^{\prime}\,{\rm e}^{-\Gamma(t-t^{\prime})}\,\delta(t^{\prime}-mT)\,.\]

This yields

\[y_{n+1}\,=\,{\rm e}^{-\Gamma T}\,[y_{n}\,+\,Kf(x_{n})] \tag{1.18a}\]

and by integrating (1.17a) using (1.18a) we obtain:

\[x_{n+1}\,=\,x_{n}\,+\,\frac{1\,-\,{\rm e}^{-\Gamma T}}{\Gamma}\,[y_{n}\,+\,Kf(x_{n})]\,. \tag{1.18b}\]

Equations (1.18a) and (1.18b) are the main results of this section.
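As a quick consistency check (ours, not from the book; parameter values are illustrative), the map (1.18a, b) can be iterated directly. For \(K=0\) the rotator is simply damped, and the exact solution \(\varphi(t)\,=\,\varphi(0)\,+\,[\dot{\varphi}(0)/\Gamma]\,(1-{\rm e}^{-\Gamma t})\) must be reproduced, so \(x_{n}\to x_{0}+y_{0}/\Gamma\) as \(n\to\infty\).

```python
import math

# Iterating the kicked-rotator map (1.18a, b); parameters are illustrative.

def kicked_rotator(x, y, K, Gamma, T, f, n_steps):
    decay = math.exp(-Gamma * T)
    for _ in range(n_steps):
        kick = y + K * f(x)                                  # y_n + K f(x_n)
        x, y = x + (1.0 - decay) / Gamma * kick, decay * kick  # eqs. (1.18b), (1.18a)
    return x, y

Gamma, T = 0.5, 1.0
x0, y0 = 0.3, 2.0
x, y = kicked_rotator(x0, y0, 0.0, Gamma, T, math.sin, 100)  # K = 0: free damped rotator
print(abs(x - (x0 + y0 / Gamma)) < 1e-10)    # x converges to x0 + y0/Gamma
print(abs(y) < 1e-10)                        # angular velocity decays away
```

Switching on a nonlinear kick, e.g. \(f(x)=-\sin x\) with large \(K\), turns the same two lines of iteration into a generator of chaotic orbits.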
They reduce the initial set of three-dimensional differential equations to a two-dimensional discrete map, which yields a stroboscopic picture of the variables. Below, we list several important limits of this two-dimensional map which will be discussed in some detail in the following sections.

### Logistic Map

This is a one-dimensional quadratic map defined by

\[x_{n\,+\,1}\,=\,r\,x_{n}(1\,-\,x_{n}) \tag{1.19}\]

where \(r\) is an external parameter, and the range of \(x_{n}\) is changed from a circle to the interval [0, 1]. It can be obtained from (1.18b) in the strong damping limit (\(\Gamma\to\infty\)) if also \(K\to\infty\), so that \(K/\Gamma\,=\,1\), and \(f(x_{n})\,=\,(r\,-\,1)\,x_{n}\,-\,r\,x_{n}^{2}\).

### Henon Map

This can be considered as a two-dimensional extension of the logistic map (Henon, 1976):

\[x_{n\,+\,1}\,=\,1\,-\,ax_{n}^{2}\,+\,y_{n} \tag{1.20a}\]
\[y_{n\,+\,1}\,=\,bx_{n} \tag{1.20b}\]

where \(a\) and \(|\,b\,|\,\leq\,1\) are external parameters. To obtain this map from (1.18a-b), we rewrite these equations as

\[y_{n\,+\,1}\,=\,\mathrm{e}^{-\,\Gamma\,T}\,[y_{n}\,+\,Kf(x_{n})] \tag{1.21a}\]
\[x_{n\,+\,1}\,=\,x_{n}\,+\,\frac{\mathrm{e}^{\,\Gamma\,T}\,-\,1}{\Gamma}\,y_{n\,+\,1} \tag{1.21b}\]

and solve (1.21b) for \(y_{n+1}\):

\[y_{n+1}\,=\,(x_{n+1}\,-\,x_{n})\,\Gamma/(\mathrm{e}^{\,\Gamma\,T}\,-\,1)\,. \tag{1.22}\]

If we put \(y_{n+1}\) and \(y_{n}\) back into (1.21a), this becomes for \(T\,=\,1\):

\[x_{n+1}\,+\,\mathrm{e}^{-\,\Gamma}x_{n-1}\,=\,(1\,+\,\mathrm{e}^{-\,\Gamma})\,x_{n}\,+\,\frac{1\,-\,\mathrm{e}^{-\,\Gamma}}{\Gamma}\,Kf(x_{n})\,. \tag{1.23}\]

Choosing

\[\frac{1\,-\,\mathrm{e}^{-\,\Gamma}}{\Gamma}\,Kf(x_{n})\,\equiv\,-\,(1\,+\,\mathrm{e}^{-\,\Gamma})\,x_{n}\,+\,1\,-\,ax_{n}^{2}\,;\quad b\,\equiv\,-\,\mathrm{e}^{-\,\Gamma} \tag{1.24}\]

equation (1.23) yields

\[x_{n+1}\,=\,1\,-\,ax_{n}^{2}\,+\,bx_{n-1} \tag{1.25}\]

which is equivalent to (1.20a-b). (Our derivation holds only for \(b\,<\,0\), but the map is mathematically defined for \(-1\,\leq\,b\,\leq\,1\).)
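As a quick numerical illustration (a sketch, not part of the original text), one can iterate (1.20a-b) at the parameter values \(a=1.4\), \(b=0.3\) studied by Henon (1976) and observe that the orbit neither escapes to infinity nor settles onto a fixed point, but wanders over a bounded (strange) attractor:

```python
# Iterates of the Henon map (1.20a-b) at a = 1.4, b = 0.3 (Henon, 1976).
def henon(x, y, a=1.4, b=0.3):
    return 1.0 - a * x * x + y, b * x

x, y = 0.0, 0.0
points = []
for n in range(10000):
    x, y = henon(x, y)
    if n >= 100:               # discard the transient
        points.append((x, y))

# the orbit stays bounded but spreads over an extended set
print(min(p[0] for p in points), max(p[0] for p in points))
```

Plotting the collected `points` in the \((x, y)\)-plane reproduces the well-known picture of the Henon attractor.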
### Chirikov Map

This is simply the map of an undamped (\(\Gamma\,\rightarrow\,0\)) rotator that is kicked by an external force \(Kf(x_{n})\,=\,-\,K\,\sin x_{n}\) (Chirikov, 1979). In this limit, eqns. (1.18a-b) reduce to

\[p_{n+1}\,=\,p_{n}\,-\,K\sin\theta_{n} \tag{1.26a}\]
\[\theta_{n+1}\,=\,\theta_{n}\,+\,p_{n+1} \tag{1.26b}\]

where we have chosen \(T\,=\,1\) and introduced the conventional notation \(x_{n}\,=\,\theta_{n}\) and \(y_{n}\,=\,p_{n}\). We shall see in the following chapters that despite the apparent simplicity of all three maps, their iterates exhibit extremely rich and physically interesting structures.

## Chapter 2 Piecewise Linear Maps and Deterministic Chaos

The nonlinear Poincare maps introduced in the previous chapter still lead to a rather complicated dynamical behavior (as we shall see in Chapter 3). In this chapter, we therefore study some simple one-dimensional piecewise linear maps. Although these maps are not directly connected to physical systems, they are extremely useful models which, in the first part of this chapter, allow us to explain the mechanism which leads to deterministic chaos. In the second part, we introduce three quantitative measures which characterize chaotic behavior and calculate these quantities explicitly for a triangular map. Finally, in Section 2.3 we show that the iterates of certain one-dimensional maps can display deterministic diffusion.

### 2.1 The Bernoulli Shift

Let us consider the one-dimensional map

\[x_{n+1}=\sigma(x_{n})\equiv 2x_{n}\bmod 1\,;\qquad n=0,\,1,\,2\ldots \tag{2.1}\]

which is shown in Fig. 9.
If we start with a value \(x_{0}\), the map generates a sequence of iterates \(x_{0}\), \(x_{1}=\sigma(x_{0})\), \(x_{2}=\sigma(x_{1})=\sigma(\sigma(x_{0}))\ldots\) In order to investigate the properties of this sequence, we write \(x_{0}\) in binary representation:

\[x_{0}=\sum_{v=1}^{\infty}a_{v}\,2^{-v}\,\triangleq\,(0,a_{1}\,a_{2}\,a_{3}\ldots) \tag{2.2}\]

where \(a_{v}\) has the values zero or unity. For \(x_{0}<1/2\), we have \(a_{1}=0\), and \(x_{0}>1/2\) implies \(a_{1}=1\). Therefore, the first iterate \(\sigma(x_{0})\) can be written as

\[\sigma(x_{0})=\left\{\begin{array}{ll}2x_{0}&\mbox{for}\quad a_{1}=0\\ 2x_{0}-1&\mbox{for}\quad a_{1}=1\end{array}\right.\,=\,(0,a_{2}\,a_{3}\,a_{4}\ldots) \tag{2.3}\]

i.e. the action of \(\sigma\) on the binary representation of \(x\) is to delete the first digit and shift the remaining sequence to the left. This is called the _Bernoulli shift_. The Bernoulli property of \(\sigma(x)\) demonstrates:

1. The sensitive dependence of the iterates of \(\sigma\) on the initial conditions. Even if two points \(x\) and \(x^{\prime}\) differ only from their \(n\)th digit \(a_{n}\) onwards, this difference becomes amplified under the action of \(\sigma\), and their \((n-1)\)th iterates \(\sigma^{n-1}(x)\) and \(\sigma^{n-1}(x^{\prime})\) already differ in the first digit, because \(\sigma^{n-1}(x)=(0,a_{n}\,a_{n+1}\ldots)\). Here \(\sigma^{2}(x)=\sigma[\sigma(x)]\), etc.

2. The sequence of iterates \(\sigma^{n}(x_{0})\) has the same random properties as successive tosses of a coin. To see this, we attach to \(\sigma^{n}(x_{0})\) the symbol \(R\) or \(L\) depending on whether the iterate is contained in the right or left part of the unit interval. If we now prescribe an arbitrary sequence \(R\,L\,L\,R\ldots\), e.g.
by tossing a coin, we can always find an \(x_{0}\) for which the series of iterates \(x_{0}\), \(\sigma^{1}(x_{0})\), \(\sigma^{2}(x_{0})\)\(\ldots\) generates this sequence. This mechanism of generating deterministic chaos is also quite universal. Its two basic ingredients are the stretching and backfolding property of the map. Initially, for \(x_{0}<1/2\) say, \(x_{0}\) becomes stretched after each iteration by a factor 2 (see Fig. 11). But for \(n>n_{0}\) with \(2^{n_{0}}\cdot x_{0}\geq 1\), the second branch of \(\sigma\left(x\right)\) becomes important, and \(x_{n}\) is folded back to the unit interval as shown in Fig. 11. For a general nonlinear map of the unit interval onto itself, the combination of stretching and backfolding (due to the restriction to [0, 1]) drives the iterates of an initial point repeatedly over the unit interval and leads to chaotic motion. Let us briefly comment on the possible physical consequences of this stretching property of nonlinear maps. The initial conditions (i. e. the \(x_{0}\)) of a physical system can only be determined with finite precision. This "arbitrarily" small but finite error becomes exponentially amplified (\(\sigma^{n}\left(x_{0}\right)=2^{n}x_{0}\bmod 1\)) via the nonlinear evolution equation. Such an equation thus acts like a microscope which makes the limits of our precision in physical measurements visible. Can we, therefore, anticipate that the concept of the continuum with its distinction of rational and irrational numbers is non-physical and all physical variables will be quantized? (The Heisenberg uncertainty relation, which limits the precision of our observations for conjugate variables, has also been found in a gedanken experiment in which one tries to measure the location and the momentum of an electron via a light microscope with arbitrary accuracy.) This and related questions, and speculations, are discussed in an interesting article by J. Ford in "Physics Today", April 1983. 
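The digit-shift mechanism is easy to check numerically. The following sketch (an illustration, not part of the original text) iterates \(\sigma\) exactly on rational numbers, so that no floating-point rounding interferes, and shows how an initial difference of \(2^{-20}\) is stretched to order unity:

```python
# Bernoulli shift sigma(x) = 2x mod 1, computed exactly with rationals.
from fractions import Fraction

def shift(x):
    return (2 * x) % 1

x  = Fraction(1, 3)            # binary 0.010101...
xp = x + Fraction(1, 2**20)    # agrees with x in the first 20 binary digits
for _ in range(19):
    x, xp = shift(x), shift(xp)

# the initial difference 2**-20 has been stretched by a factor 2**19 to 1/2
print(abs(x - xp))             # -> 1/2
```

With ordinary floating-point numbers the same experiment would be spoiled after about 53 iterations, since doubling merely shifts the finite binary mantissa; the `Fraction` type sidesteps this.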
Figure 10: Emergence of ergodicity by a Bernoulli shift in irrational numbers.

### 2.2 Characterization of Chaotic Motion

In this section, we introduce the Liapunov exponent as well as the invariant measure and the correlation function as quantitative measures to characterize the chaotic motion which is generated by one-dimensional Poincare maps.

### Liapunov Exponent

We have already seen in the previous section that adjacent points become separated under the action of a map

\[x_{n+1}\,=\,f(x_{n}) \tag{2.5}\]

which leads to chaotic motion. The Liapunov exponent \(\lambda(x_{0})\) measures this exponential separation as shown in Fig. 12. From Fig. 12 one obtains:

\[\varepsilon\,\mathrm{e}^{N\lambda(x_{0})}\,=\,|f^{N}(x_{0}\,+\,\varepsilon)\,-\,f^{N}(x_{0})| \tag{2.6}\]

which, in the limits \(\varepsilon\to 0\) and \(N\to\infty\), leads to the formal expression for \(\lambda(x_{0})\):

\[\lambda\,(x_{0})\,=\,\lim_{N\to\infty}\,\lim_{\varepsilon\to 0}\,\frac{1}{N}\,\log\,\left|\,\frac{f^{N}(x_{0}\,+\,\varepsilon)\,-\,f^{N}(x_{0})}{\varepsilon}\,\right|\,=\,\lim_{N\to\infty}\,\frac{1}{N}\,\log\,\left|\,\frac{\mathrm{d}f^{N}(x_{0})}{\mathrm{d}x_{0}}\,\right|\,. \tag{2.7}\]

This means that \(\mathrm{e}^{\lambda(x_{0})}\) is the average factor by which the distance between closely adjacent points becomes stretched after one iteration. The Liapunov exponent also measures the average loss of information (about the position of a point in [0, 1]) after one iteration. In order to see this, we use in (2.7) the chain rule

Figure 12: Definition of the Liapunov exponent.
\[\frac{\mathrm{d}}{\mathrm{d}x}\,f^{2}(x)\,\bigg{|}_{x_{0}}\,=\,\frac{\mathrm{d}}{\mathrm{d}x}\,f[f(x)]\,\bigg{|}_{x_{0}}\,=\,f^{\prime}\,[f(x_{0})]\,f^{\prime}(x_{0})\,=\,f^{\prime}\,(x_{1})\,f^{\prime}\,(x_{0})\,;\quad x_{1}\,\equiv\,f(x_{0}) \tag{2.8}\]

to write the Liapunov exponent as

\[\lambda\left(x_{0}\right)\,=\,\lim_{N\rightarrow\infty}\,\frac{1}{N}\,\log\,\left|\,\frac{\mathrm{d}}{\mathrm{d}x_{0}}\,f^{N}(x_{0})\,\right|\,=\,\lim_{N\rightarrow\infty}\,\frac{1}{N}\,\log\,\left|\,\prod_{i=0}^{N-1}f^{\prime}(x_{i})\,\right|\,=\,\lim_{N\rightarrow\infty}\,\frac{1}{N}\,\sum_{i=0}^{N-1}\,\log\,\left|\,f^{\prime}(x_{i})\,\right|\,. \tag{2.9}\]

As a next step, we discuss the loss of information after one iteration with a linear map. We separate [0, 1] into \(n\) equal intervals and assume that a point \(x_{0}\) can occur in each of them with equal probability \(1/n\). By learning which interval contains \(x_{0}\), we gain the information

\[I_{0}\,=\,-\,\sum_{i=1}^{n}\,\frac{1}{n}\,\mathrm{ld}\,\frac{1}{n}\,=\,\mathrm{ld}\,n \tag{2.10}\]

where \(\mathrm{ld}\) is the logarithm to the base 2 (see Appendix F). If we decrease \(n\), the information \(I_{0}\) is reduced, and it becomes zero for \(n\,=\,1\). It is shown in Fig. 13 that a linear map \(f(x)\) changes the length of an interval by a factor \(a\,=\,\left|\,f^{\prime}(0)\,\right|\).
The corresponding decrease of resolution leads to a loss of information after the mapping:

\[\Delta I\,=\,-\,\sum_{i=1}^{n/a}\,\frac{a}{n}\,\mathrm{ld}\,\frac{a}{n}\,+\,\sum_{i=1}^{n}\,\frac{1}{n}\,\mathrm{ld}\,\frac{1}{n}\,=\,-\,\mathrm{ld}\,a\,=\,-\,\mathrm{ld}\,\left|\,f^{\prime}\,(0)\,\right| \tag{2.11}\]

Generalizing this expression to a situation where \(\left|\,f^{\prime}\,(x)\,\right|\) varies from point to point and averaging over many iterations leads to the following expression for the mean loss of information:

\[\overline{\Delta I}\,=\,-\,\lim_{N\rightarrow\infty}\,\frac{1}{N}\,\sum_{i=0}^{N-1}\,\mathrm{ld}\,\left|\,f^{\prime}\,(x_{i})\,\right| \tag{2.12}\]

Fig. 13: Increase of an interval \(1/n\) by a linear map.

which is, via (2.9), proportional to the Liapunov exponent:

\[\lambda\left(x_{0}\right)\,=\,\left(\log\,2\right)\,\cdot\,\left|\,\overline{\Delta I}\,\right|\,. \tag{2.13}\]

This relation between the Liapunov exponent and the loss of information is a first step towards a characterization of chaos in a coordinate-invariant way, as will be explored on a deeper level in Chapter 5. By way of an example, we now calculate the Liapunov exponent for the triangular map,

\[\Delta\left(x\right)\,=\,r\left(1\,-\,2\,\left|\,\frac{1}{2}\,-\,x\,\right|\right) \tag{2.14}\]

shown in Fig. 14. The function \(\Delta\left(x\right)\) serves as a useful model because, for \(r\,>\,1/2\), it generates chaotic sequences \(x_{0}\), \(\Delta\left(x_{0}\right)\), \(\Delta\left[\Delta\left(x_{0}\right)\right]\ldots\), and due to its simple form, all quantities that characterize the chaotic state can be calculated explicitly. In order to get acquainted with this map, we first consider its fixed points and their stability for different values of \(r\). Generally, a point \(x^{*}\) is called a fixed point of a map \(f(x)\) if

\[x^{*}\,=\,f(x^{*}) \tag{2.15}\]

i.e. the fixed points are the intersections of \(f(x)\) with the bisector.
A fixed point is locally stable if all points \(x_{0}\) in the vicinity of \(x^{*}\) are attracted to it, i.e., if the sequence of iterates of \(x_{0}\),

\[x_{0},\,x_{1},\,x_{2}\ldots x_{n},\ldots\,\equiv\,x_{0},\,f(x_{0}),\,f[f(x_{0})]\ldots f[\underbrace{f\ldots f(x_{0})\,\ldots]}_{n},\ldots \tag{2.16}\]

_converges to \(x^{*}\)_. The analytical criterion for local stability is

\[\left|\,\frac{\mathrm{d}}{\mathrm{d}x}\,f(x)\,\bigg{|}_{x^{*}}\,\right|\,<\,1 \tag{2.17}\]

because the distance \(\delta_{n}\) to \(x^{*}\) shrinks as

\[\delta_{n+1}\,=\,|x_{n+1}\,-\,x^{*}|\,=\,|f(x_{n})\,-\,x^{*}|\,=\,|f(x^{*}\,+\,\delta_{n})\,-\,x^{*}|\,\approx\,\left|\,\frac{\mathrm{d}}{\mathrm{d}x}\,f(x)\,\bigg{|}_{x^{*}}\,\right|\,\cdot\,\delta_{n}\,. \tag{2.18}\]

Fig. 15a shows that for \(r\,<\,1/2\) the origin \(x\,=\,0\) is the only stable fixed point, to which all points of [0, 1] are attracted. For \(r\,>\,1/2\), two unstable fixed points emerge. Fig. 15b shows how, for \(r\,=\,1\), the iterates of \(x_{0}\) and \(x_{0}^{\prime}\) move away from the "fixed points" \(x_{1}\,=\,0\) and \(x_{2}\,=\,2/3\), respectively. In the following, we shall consider only the case \(r\,=\,1\), which is representative for \(r\,>\,1/2\). What can we say about a sequence of iterates if there are no stable fixed points? First of all, we notice that points which are close together become more and more separated during the first iterations, as shown in Fig. 16. If we plot the \(n\)th iterate \(\Delta^{n}(x)\), we see from Fig. 16 that it is again piecewise linear and has the slope \(\left|\,\frac{\mathrm{d}}{\mathrm{d}x}\,\Delta^{n}(x)\,\right|\,=\,2^{n}\), except for the countable set of points \(j\,\cdot\,2^{-n}\), where \(j\,=\,0,\,1,\ldots,\,2^{n}\).

Fig. 16: a) Separation of points by iteration with \(\Delta(x)\) and b) the \(n\)th iterate \(\Delta^{n}(x)\).

Fig. 15: a) Stable fixed point at \(x^{*}\,=\,0\) for \(r\,<\,1/2\); b) two unstable fixed points for \(r\,=\,1\).
Therefore, the separation of "almost all" pairs of points \(x_{0}\), \(x_{0}+\varepsilon\) grows exponentially with the number \(n\) of iterations, and the Liapunov exponent becomes (independent of \(x_{0}\)):

\[\lambda\,=\,\log 2\,. \tag{2.19}\]

For the general triangular map (2.14), the Liapunov exponent simply becomes \(\lambda\,=\,\log\,(2r)\), and for \(r>1/2\) we have \(\lambda>0\); i.e., we lose information about the position of a point in [0, 1] after an iteration, whereas \(r<1/2\) implies \(\lambda<0\), and we gain information because all points are attracted to \(x^{*}=0\). The Liapunov exponent changes sign at \(r=1/2\) and, therefore, acts like an "order parameter" which indicates the onset of chaos, as shown in Fig. 17. To make the analogy to critical phenomena even closer, we observe that \(\lambda=\log\,(2r)\) scales with a power law near the "critical point" \(r_{c}=1/2\):

\[\lambda\,\propto\,(r\,-\,r_{c})\,. \tag{2.20}\]

This shows that even the simple transition to chaos in the triangular map displays some features that are reminiscent of an equilibrium phase transition. As we have already mentioned, we will investigate this aspect more generally in Chapter 3. It should also be noted here that the definition of the Liapunov exponent can be extended to higher-dimensional maps. This will be treated in Chapter 5, where we will also discuss the relation between the Liapunov exponent and the Kolmogorov entropy and its possible connection to the Hausdorff dimension. But before we come to these problems, we will first investigate the question of how the iterates of a one-dimensional map are distributed over the unit interval.
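Equation (2.9) gives \(\lambda\) as the orbit average of \(\log|f^{\prime}(x_{i})|\), which is easy to evaluate numerically. The sketch below (an illustration, not part of the original text) applies it to the logistic map \(x_{n+1}=4x_{n}(1-x_{n})\) of Chapter 1, whose exponent turns out to equal \(\log 2\), the same value as for the triangular map at \(r=1\):

```python
import math

def liapunov_logistic(r=4.0, x0=0.3, n=200_000, transient=1_000):
    """Estimate lambda = lim (1/N) sum log|f'(x_i)| of eq. (2.9)
    for f(x) = r x (1 - x)."""
    x = x0
    for _ in range(transient):      # let the orbit settle
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        s += math.log(abs(r * (1 - 2 * x)))   # log|f'| along the orbit
    return s / n

print(liapunov_logistic())   # close to log 2 = 0.6931...
```

Note that for the triangular map itself the sum in (2.9) is trivial, since \(|f^{\prime}(x_{i})|=2r\) at almost every point of the orbit.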
### Invariant Measure

The invariant measure \(\rho\left(x\right)\) determines the density of the iterates of a one-dimensional map

\[x_{n\,+\,1}\,=\,f(x_{n}),\qquad x_{n}\,\in\,[0,\,1],\qquad n\,=\,0,\,1,\,2,\ldots \tag{2.21}\]

over the unit interval and is defined via

\[\rho\left(x\right)\,\equiv\,\lim_{N\,\to\,\infty}\,\frac{1}{N}\,\sum_{i=0}^{N-1}\,\delta\left[x\,-\,f^{i}\left(x_{0}\right)\right]. \tag{2.22}\]

Figure 17: The Liapunov exponent for the triangular map as a function of \(r\) in the vicinity of \(r_{c}\).

If \(\rho\left(x\right)\) does not depend on \(x_{0}\), the system is called ergodic (see also page 202). For this case, eq. (2.22) allows us to write "time averages" of a function \(g\left(x\right)\) as averages over the invariant measure:

\[\lim_{N\rightarrow\infty}\,\frac{1}{N}\,\sum_{i\,=\,0}^{N-1}\,g\left(x_{i}\right)\,\equiv\,\lim_{N\,\rightarrow\infty}\,\frac{1}{N}\,\sum_{i\,=\,0}^{N-1}\,g\left[f^{i}\left(x_{0}\right)\right]\,=\,\int_{0}^{1}\,\mathrm{d}x\,\rho\left(x\right)g\left(x\right)\,. \tag{2.23}\]

This is the one-dimensional analog of the thermodynamic average in classical statistical mechanics, which allows us (if the motion in phase space is ergodic) to replace the time average by an ensemble average over a stationary distribution \(\bar{\rho}\):

\[\lim_{T\rightarrow\infty}\,\frac{1}{T}\,\int_{0}^{\,T}\,\mathrm{d}t\,A\,\left[\vec{x}\left(t\right)\right]\,=\,\int\mathrm{d}\vec{x}\,\bar{\rho}\left(\vec{x}\right)A\left(\vec{x}\right)\,.
\tag{2.24}\]

Here \(A\) is a function of the time-dependent vector \(\vec{x}\,=\,\left[\vec{p}\left(t\right),\vec{q}\left(t\right)\right]\), which is composed of the coordinates \(\vec{q}\) and momenta \(\vec{p}\) that follow Hamilton's equations,

\[\dot{q}_{i}\,=\,\frac{\partial H}{\partial p_{i}}\,,\quad\dot{p}_{i}\,=\,-\,\frac{\partial H}{\partial q_{i}} \tag{2.25}\]

and \(\bar{\rho}\) is, for example, the microcanonical distribution \(\bar{\rho}=\delta\left[H\left(\vec{x}\right)\,-\,E\right]\) for an isolated system of energy \(E\). Note, however, that our one-dimensional example corresponds to a dissipative system (see e.g. Chapter 1, eq. (1.15)), whereas Hamilton's equations (2.25) describe a conservative model. For Hamiltonian systems, the dynamical behavior of a general density distribution \(\rho\left(\vec{x},\,t\right)\) in phase space is described by Liouville's equation:

\[\dot{\rho}\left(\vec{x},\,t\right)\,=\,-\,i\,\mathrm{L}\,\rho\left(\vec{x},\,t\right) \tag{2.26}\]

where

\[\mathrm{L}\,=\,i\left[\,\frac{\partial H}{\partial\vec{p}}\,\frac{\partial}{\partial\vec{q}}\,-\,\frac{\partial H}{\partial\vec{q}}\,\frac{\partial}{\partial\vec{p}}\,\right] \tag{2.27}\]

is the Liouville operator. The corresponding evolution equation for our one-dimensional model, whose time evolution is given by the map (2.21), can be derived as follows. If we have a point \(x_{0}\), it evolves to \(f(x_{0})\) after one iteration. This means that a delta-function distribution \(\delta\left(x\,-\,x_{0}\right)\) evolves after one time step into \(\delta\left[x\,-\,f(x_{0})\right]\), which can be written as

\[\delta\left[x\,-\,f(x_{0})\right]\,=\,\int_{0}^{1}\,\mathrm{d}y\,\delta\left[x\,-\,f(y)\right]\delta\left(y\,-\,x_{0}\right).
\tag{2.28}\]

Generalizing this to the evolution of an arbitrary density \(\rho_{n}\left(x\right)\) at time \(n\), we obtain the so-called Frobenius-Perron equation

\[\rho_{n\,+\,1}\left(x\right)\,=\,\int_{0}^{1}\mathrm{d}y\,\delta\left[x\,-\,f(y)\right]\rho_{n}(y) \tag{2.29}\]

which governs the time evolution of \(\rho_{n}\left(x\right)\). The invariant measure \(\rho\left(x\right)\) has to be stationary because eq. (2.23) makes sense only if \(\rho\left(x\right)\) is independent of the time \(n\); that is, \(\rho\left(x\right)\) is an eigenfunction of the Frobenius-Perron operator with eigenvalue 1:

\[\rho\left(x\right)\,=\,\int_{0}^{1}\mathrm{d}y\,\delta\left[x\,-\,f(y)\right]\rho\left(y\right). \tag{2.30}\]

Formally, this equation has many solutions (e.g. \(\delta\left(x\,-\,x^{*}\right)\), where \(x^{*}\,=\,f(x^{*})\) is an unstable fixed point). But fortunately, only one of the solutions is physically relevant, namely the one which is, for example, obtained by solving eq. (2.30) on a computer. In the presence of weak random noise (which is caused by rounding errors in the computer or physical fluctuations in real systems), the probability to hit an unstable repelling fixed point \(x^{*}\) is zero, and therefore such spurious solutions are automatically eliminated (Eckmann and Ruelle, 1985). In the following, the invariant measure \(\rho\left(x\right)\) always means the physically relevant invariant measure, which is stable if a small random noise is added to the system. Let us consider again, as an example, the triangular map at \(r\,=\,1\):

\[\Delta\left(x\right)\,=\,\left\{\begin{array}{lcl}2x&\text{for}&x\,\leq\,\frac{1}{2}\\ 2\left(1\,-\,x\right)&\text{for}&x\,>\,\frac{1}{2}\end{array}\right. \tag{2.31}\]

In this case, eq.
(2.30) becomes:

\[\rho\left(x\right)\,=\,\frac{1}{2}\left[\rho\left(\frac{x}{2}\right)\,+\,\rho\left(1\,-\,\frac{x}{2}\right)\right] \tag{2.32}\]

which has the obvious normalized solution \(\rho\left(x\right)\,=\,1\). We can also show that this solution is unique. Starting from an arbitrary normalized distribution \(\rho_{0}\left(x\right)\) and operating on it \(n\) times with (2.29) yields

\[\rho_{n}\left(x\right)\,=\,\frac{1}{2^{n}}\sum_{j\,=\,1}^{2^{n-1}}\left[\rho_{0}\left(\frac{j\,-\,1}{2^{n-1}}\,+\,\frac{x}{2^{n}}\right)\,+\,\rho_{0}\left(\frac{j}{2^{n-1}}\,-\,\frac{x}{2^{n}}\right)\right] \tag{2.33}\]

which converges towards

\[\rho\left(x\right)\,=\,\lim_{n\,\to\,\infty}\rho_{n}\left(x\right)\,=\,\frac{1}{2}\left[\int_{0}^{1}\mathrm{d}x\,\rho_{0}\left(x\right)\,+\,\int_{0}^{1}\mathrm{d}x\,\rho_{0}\left(x\right)\right]\,=\,1\,. \tag{2.34}\]

This means that, for the triangular map at \(r\,=\,1\), the chaotic sequence of iterates \(x_{0}\), \(f(x_{0})\), \(f(f(x_{0}))\)... uniformly covers the interval \([0,\,1]\), and the system is ergodic. As in the case of the Liapunov exponent, we will later study the invariant density for more complicated maps and show that it is not always a constant.

### Correlation Function

The correlation function \(C(m)\) for a map (2.21) is defined by

\[C(m)\,=\,\lim_{N\to\infty}\,\frac{1}{N}\sum_{i\,=\,0}^{N-1}\hat{x}_{i\,+\,m}\,\hat{x}_{i} \tag{2.35}\]

where

\[\hat{x}_{i}\,=\,f^{i}(x_{0})\,-\,\bar{x}\,;\quad\bar{x}\,=\,\lim_{N\,\to\,\infty}\,\frac{1}{N}\sum_{i\,=\,0}^{N-1}f^{i}(x_{0})\,. \tag{2.36}\]

From this definition it follows that \(C(m)\) yields another measure for the irregularity of the sequence of iterates \(x_{0}\), \(f(x_{0})\), \(f^{2}(x_{0})\ldots\) It tells us how much the deviations of the iterates from their average value,

\[\hat{x}_{i}\,=\,x_{i}\,-\,\bar{x}\,, \tag{2.37}\]

that are \(m\) steps apart (i.e. \(\hat{x}_{i\,+\,m}\) and \(\hat{x}_{i}\)) know about each other, on the average.
If the invariant measure \(\rho(x)\) for a given map \(f(x)\) is known, \(C(m)\) can be written in the following form:

\[C(m)\,=\,\int_{0}^{1}\,\mathrm{d}x\,\rho\,(x)\,x\,f^{m}\,(x)\,-\,\left[\int_{0}^{1}\mathrm{d}x\,\rho\,(x)\,x\right]^{2}\,. \tag{2.38}\]

Here, we used the commutative property of the iterates,

\[x_{i\,+\,m}\,=\,f^{i\,+\,m}(x_{0})\,=\,f^{i}f^{m}(x_{0})\,=\,f^{m}f^{i}(x_{0})\,. \tag{2.39}\]

We, therefore, find for the example of the triangular map:

\[C(m)\,=\,\int_{0}^{1}\,\mathrm{d}x\,x\,\Delta^{m}\,(x)\,-\,\left[\int_{0}^{1}\,\mathrm{d}x\,x\right]^{2} \tag{2.40a}\]
\[\,=\,\int_{-1/2}^{1/2}\,\mathrm{d}y\,y\,\Delta^{m}\left(y\,+\,\frac{1}{2}\right)\,+\,\frac{1}{2}\int_{-1/2}^{1/2}\mathrm{d}y\,\Delta^{m}\left(y\,+\,\frac{1}{2}\right)\,-\,\frac{1}{4}\,=\,\frac{1}{12}\,\delta_{m,0} \tag{2.40b}\]

i.e. the sequence of iterates is delta-correlated. This result follows because a) \(\Delta^{m}(y\,+\,1/2)\) is symmetric about \(y=0\); therefore, the first integral in (2.40b) vanishes for \(m>0\), and b) the second integral is independent of \(m\), as shown in Fig. 18. To summarize: We have found for a general one-dimensional map that a sequence \(x_{0}\), \(f(x_{0})\ldots f^{n}(x_{0})\ldots\) can be characterized a) by the Liapunov exponent, which tells us how adjacent points become separated under the action of \(f\); b) by the invariant density, which serves as a measure of how the iterates become distributed over the unit interval; and c) by the correlation function \(C(m)\), which measures the correlation between iterates that are \(m\) steps apart. For the triangular map, the Liapunov exponent is \(\lambda=\log\,(2r)\), which changes its sign at \(r=1/2\). It, therefore, serves as an order parameter for the onset of chaos. For \(r=1\), the chaotic state is characterized by a constant stationary density \(\rho(x)=1\) and delta-correlated iterates, i.e. \(C(m)=(1/12)\,\delta_{m,0}\).
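Both statements are easy to check numerically. The sketch below (an illustration, not part of the original text) iterates the triangular map at \(r=1\) and, following the remark on spurious solutions above, adds a weak random noise to the orbit; in exact binary floating-point arithmetic the noiseless map would collapse onto the unstable fixed point \(x^{*}=0\). The histogram of iterates approximates \(\rho(x)=1\), and the empirical correlation function reproduces \(C(m)\approx(1/12)\,\delta_{m,0}\):

```python
import random
random.seed(1)

def tent(x):
    """Triangular map (2.31) at r = 1."""
    return 2 * x if x <= 0.5 else 2 * (1 - x)

N = 200_000
xs, x = [], random.random()
for _ in range(N):
    x = tent(x) + random.uniform(-1e-10, 1e-10)  # weak noise, see text
    x = min(max(x, 0.0), 1.0)
    xs.append(x)

# invariant density: histogram over 10 bins, each should hold ~ 1/10
bins = [0] * 10
for v in xs:
    bins[min(int(10 * v), 9)] += 1
freqs = [b / N for b in bins]

# correlation function, eq. (2.35)
mean = sum(xs) / N
def C(m):
    return sum((xs[i + m] - mean) * (xs[i] - mean) for i in range(N - m)) / (N - m)

print(freqs)          # all close to 0.1, i.e. rho(x) = 1
print(C(0), C(1))     # C(0) close to 1/12 = 0.0833..., C(1) close to 0
```

The value \(C(0)=1/12\) is simply the variance of a uniformly distributed variable on \([0,1]\).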
### 2.3 Deterministic Diffusion

In this section, we show that the iterates of certain one-dimensional periodic maps diffuse. This diffusion indicates that the reduced map generates chaotic motion. One normally associates diffusion with the Brownian motion of a particle in a liquid. Its equation of motion, in the case of high friction where the acceleration term \(\propto\ddot{x}\) can be neglected, is

\[\dot{x}\,\propto\,\xi(t)\,. \tag{2.41}\]

The \(\xi(t)\) are random forces, which are generated by the thermal agitation of the molecules. If one assumes, as usual, that the \(\xi(t)\) are Gaussian distributed and delta-correlated,

\[\langle\xi(t)\rangle\,=\,0\,;\quad\langle\xi(t)\,\xi(t^{\prime})\rangle\,\propto\,\delta(t\,-\,t^{\prime}) \tag{2.42}\]

Figure 18: The first and second iterates \(\Delta^{1,2}(y\,+\,1/2)\) are symmetric about \(y=0\); the triangular areas are independent of \(m=1\), \(2\).

one obtains from eqns. (2.41) and (2.42):

\[\langle x(t)\rangle\,=\,0\quad\mbox{and}\quad\langle x^{2}(t)\rangle\,\propto\,t\,. \tag{2.43}\]

This means that the squared distance from the origin increases linearly with time if the particle is kicked by random forces (in contrast to \(x^{2}\propto t^{2}\) for a constant force \(K\), for which \(\dot{x}\propto K\)). One can show, with a little more effort, that (2.43) also remains valid (for \(t\to\infty\)) if the acceleration term is retained (see, for example, Haken's book on _Synergetics_ (1982)). Let us now have a look at the piecewise linear periodic map

\[x_{\tau+1}\,=\,F(x_{\tau})\,=\,x_{\tau}\,+\,f(x_{\tau})\,;\quad\tau\,=\,0,\,1,\,2,\ldots \tag{2.44}\]

where \(f(x_{\tau})\) is periodic in \(x_{\tau}\),

\[f(x_{\tau}\,+\,n)\,=\,f(x_{\tau})\,,\quad n\,=\,0,\,\pm 1,\,\pm 2\ldots\,, \tag{2.45}\]

shown in Fig. 19. One sees that the trajectory moves slowly away from the origin. Now we will show that this motion is in fact diffusive.
However, this diffusion is not generated by random forces (as in the case of Brownian motion discussed above), but rather arises because the trajectory loses its "memory" within one or several boxes due to chaotic motion. To substantiate this statement, we calculate \(\langle x^{2}\rangle\) explicitly for the map (2.44).

Figure 19: Piecewise linear periodic map with a diffusive trajectory (after Grossmann, 1982).

We decompose the coordinate of a trajectory into the box number \(N_{\tau}\) and the position \(y_{\tau}\in[0,\,1]\) within a box (Grossmann, 1982):

\[x_{\tau}\,=\,N_{\tau}\,+\,y_{\tau}\,. \tag{2.46}\]

The map (2.44) then becomes

\[N_{\tau+1}\,+\,y_{\tau+1}\,=\,F(N_{\tau}\,+\,y_{\tau})\,=\,N_{\tau}\,+\,y_{\tau}\,+\,f(y_{\tau}) \tag{2.47}\]

which is equivalent to the coupled dynamical laws:

\[N_{\tau+1}\,-\,N_{\tau}\,=\,[y_{\tau}\,+\,f(y_{\tau})]\,=\,\varDelta\,(y_{\tau}) \tag{2.48a}\]
\[y_{\tau+1}\,=\,y_{\tau}\,+\,f(y_{\tau})\,-\,[y_{\tau}\,+\,f(y_{\tau})]\,=\,g(y_{\tau}) \tag{2.48b}\]

where \([z]\) denotes the integer part of \(z\). Fig. 20 shows the function \(\varDelta\,(y_{\tau})\), which is an integer describing the magnitude of the jump, while \(g(y_{\tau})\) gives the remaining part of the coordinate at time \(\tau\,+\,1\). Using (2.48a), the distance from the origin can be written as

\[N_{t}\,=\,\sum_{\tau=0}^{t-1}(N_{\tau+1}\,-\,N_{\tau})\,=\,\sum_{\tau=0}^{t-1}\varDelta\,(y_{\tau})\quad\text{for}\quad N_{0}\,=\,0\,. \tag{2.49}\]

Fig. 20: Decomposition of a piecewise linear map.

This yields for the mean squared distance:

\[\langle N_{t}^{2}\rangle\,=\,\sum_{\tau,\,\lambda\,=\,0}^{t-1}\langle\varDelta\left(y_{\tau}\right)\varDelta\left(y_{\lambda}\right)\rangle \tag{2.50}\]

where the average \(\langle\ldots\rangle\) is over all initial conditions \(y_{0}\), and we assumed for simplicity \(\langle N_{t}\rangle=0\).
For the case that the motion generated by \(g\left(y\right)\) is so chaotic that there are no correlations among the \(y_{\tau}\), i.e.

\[\langle\varDelta\left(y_{\lambda}\right)\varDelta\left(y_{\tau}\right)\rangle\,\propto\,\delta_{\lambda,\tau} \tag{2.51}\]

one finds from (2.50):

\[\lim_{t\to\infty}\frac{\langle N_{t}^{2}\rangle}{t}\,=\,\lim_{t\to\infty}\frac{1}{t}\sum_{\tau\,=\,0}^{t-1}\langle\varDelta^{2}\left(y_{\tau}\right)\rangle \tag{2.52}\]
\[=\,\int\mathrm{d}y\,\rho\left(y\right)\varDelta^{2}\left(y\right)\,. \tag{2.53}\]

The step from (2.52) to (2.53) is only possible if \(g\left(y\right)\) has an invariant density that obeys

\[\rho\left(y\right)\,=\,\int\mathrm{d}x\,\delta\left[g\left(x\right)\,-\,y\right]\rho\left(x\right)\,. \tag{2.54}\]

Eq. (2.53) means that \(\langle N_{t}^{2}\rangle\) increases linearly with \(t\), i.e.

\[\langle N_{t}^{2}\rangle\,=\,2Dt\quad\mbox{for}\quad t\,\gg\,1 \tag{2.55}\]

with a diffusion coefficient

\[D\,\equiv\,\frac{1}{2}\int\mathrm{d}y\,\rho\left(y\right)\varDelta^{2}\left(y\right)\,. \tag{2.56}\]

It should be clear from the derivation that diffusion occurs as long as the \(y_{\tau}\)'s are sufficiently uncorrelated that the double sum in (2.50) contracts to a single sum. (For completely correlated motion of the \(y_{\tau}\)'s, \(\langle N_{t}^{2}\rangle\) becomes proportional to \(t^{2}\).) This means that the mere presence of diffusion for a periodic map indicates chaotic motion which destroys correlations within one box. We will generalize and use this characterization of chaos to some extent in Chapter 8, where we discuss area-preserving maps. Let us finally derive a simple scaling law for the diffusion coefficient that has a purely geometric origin. If the intervals \(\delta\), through which the trajectories can move from cell to cell, are small enough (such that one can neglect the variation of \(\rho\) in this region, i.e., \(\rho\left(x\in\delta\right)=\bar{\rho}\)), then eq.
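As a numerical illustration (a sketch with an example map assumed for this purpose, not taken from the original text), choose \(F(N+y)=N+3y-1\) within each box, so that the jump is \(\varDelta(y)=[3y-1]\in\{-1,0,+1\}\) and the reduced map \(g(y)=3y \bmod 1\) is chaotic with uniform invariant density. Since the first base-3 digits of successive iterates of \(g\) are uncorrelated, eq. (2.56) predicts \(D=\frac{1}{2}(\frac{1}{3}\cdot 1+\frac{1}{3}\cdot 0+\frac{1}{3}\cdot 1)=1/3\):

```python
import random
random.seed(3)

def step(y):
    """One iteration of F(N + y) = N + 3y - 1: returns the jump Delta(y)
    of eq. (2.48a) and the new cell coordinate g(y) of eq. (2.48b)."""
    z = 3 * y - 1
    jump = int(z // 1)            # integer part [z]
    return jump, z - jump

t, runs, msq = 200, 2000, 0.0
for _ in range(runs):             # average over initial conditions y_0
    y, N = random.random(), 0
    for _ in range(t):
        jump, y = step(y)
        N += jump
    msq += N * N
msq /= runs

print(msq / (2 * t))   # estimate of D; eq. (2.56) predicts 1/3
```

The printed estimate of \(D\) from \(\langle N_{t}^{2}\rangle/2t\) agrees with the predicted value \(1/3\) up to statistical fluctuations of the ensemble average.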
(2.56) can be written as

\[D\,=\,\frac{1}{2}\,\bar{\rho}\,\delta \tag{2.57}\]

because \(\varDelta^{2}\) has only the values zero or unity. Fig. 21 shows that \(D\) scales like

\[D\,\propto\,(a\,-\,1)^{1/z} \tag{2.58}\]

if the map \(f(x)\) has a maximum (and minimum) of order \(z\).

## Chapter 3 Universal Behavior of Quadratic Maps

In this chapter, we study the logistic map

\[x_{n+1}=f_{r}(x_{n})\equiv rx_{n}(1-x_{n})\,, \tag{3.1}\]

shown in Fig. 22. It has already been shown in Chapter 1 that (3.1) describes the angles \(x_{n}\) of a strongly damped kicked rotator. But the logistic map, which is arguably the simplest nonlinear difference equation, appears in many contexts. It was introduced as early as 1845 by P. F. Verhulst to simulate the growth of a population in a closed area. The number of individuals \(x_{n+1}\) in the year \(n+1\) is proportional to the number \(x_{n}\) in the previous year and to the remaining area, which is diminished proportionally to \(x_{n}\), i.e. \(x_{n+1}=rx_{n}(1-x_{n})\), where the parameter \(r\) depends on the fertility, the actual area of living, etc. Another example is a savings account with a self-limiting rate of interest (Peitgen and Richter, 1984). Consider a deposit \(z_{0}\) which grows with a rate of interest \(\varepsilon\) as \(z_{n+1}=(1+\varepsilon)z_{n}=\ldots=(1+\varepsilon)^{n+1}z_{0}\). To prohibit unlimited wealth, some politician could suggest that the rate of interest should be reduced proportionally to \(z_{n}\), i.e. \(\varepsilon\to\varepsilon_{0}(1-z_{n}/z_{\max})\). Then the account develops according to \(z_{n+1}=[1+\varepsilon_{0}(1-z_{n}/z_{\max})]\,z_{n}\), which becomes equal to eq. (3.1) for \(x_{n}=z_{n}\varepsilon_{0}/[z_{\max}(1+\varepsilon_{0})]\) and \(r=1+\varepsilon_{0}\). One could expect for both examples that, due to the feedback mechanism, the quantities of interest (population and bank account) develop towards mean values.
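The equivalence between the savings-account recursion and (3.1) is easy to verify numerically; in this sketch (not part of the original text) the values \(\varepsilon_{0}=1.5\) and \(z_{\max}=1000\) are arbitrary illustrative choices:

```python
# Check that z_{n+1} = [1 + eps0 (1 - z_n / z_max)] z_n is the logistic map
# (3.1) under the substitution x_n = z_n eps0 / (z_max (1 + eps0)), r = 1 + eps0.
eps0, z_max = 1.5, 1000.0        # arbitrary illustrative parameters
r = 1 + eps0

z = 100.0                        # initial deposit
x = z * eps0 / (z_max * (1 + eps0))
for _ in range(30):
    z = (1 + eps0 * (1 - z / z_max)) * z
    x = r * x * (1 - x)
    # the two recursions stay in lockstep under the substitution
    assert abs(x - z * eps0 / (z_max * (1 + eps0))) < 1e-9

print(z, x)
```

For this parameter choice \(r=2.5\), and both recursions settle onto the stable fixed point \(x^{*}=1-1/r=0.6\), i.e. the account saturates at \(z=z_{\max}\), in line with the naive expectation of a mean value; the next paragraph shows that this expectation fails at larger \(r\).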
But as found by Grossmann and Thomae (1977), by Feigenbaum (1978), and by Coullet and Tresser (1978), and many others (see May, 1976, for earlier references), the iterates \(x_{1}\), \(x_{2}\ldots\) of (3.1) display, as a function of the external parameter \(r\), a rather complicated behavior that becomes chaotic at large \(r\)'s (see Fig. 23). One can, therefore, understand the conclusion that May (1976) draws at the end of his article in _Nature_: "Perhaps we would all be better off, not only in research and teaching, but also in everyday political and economical life, if more people would take into consideration that simple dynamical systems do not necessarily lead to simple dynamical behavior." However, chaotic behavior is not tied to the special form of the logistic map. Feigenbaum has shown that the route to chaos that is found in the logistic map, the "Feigenbaum route", occurs (with certain restrictions which will be discussed below) in all first-order difference equations \(x_{n+1}=f(x_{n})\) in which \(f(x_{n})\) has (after a proper rescaling of \(x_{n}\)) only a single maximum in the unit interval \(0\leq x_{n}\leq 1\). It was found by Feigenbaum that the scaling behavior at the transition to chaos is governed by universal constants, the Feigenbaum constants \(\alpha\) and \(\delta\), whose values depend only on the order of the maximum (e.g. quadratic, i.e. \(f^{\prime}(x_{\max})=0\), \(f^{\prime\prime}(x_{\max})<0\), etc.). Because the conditions for the appearance of the Feigenbaum route are rather weak (it is practically sufficient that the Poincare map of a system is approximately one-dimensional and has a single maximum), this route has been observed experimentally in many nonlinear systems. The following sections of this chapter contain a rather detailed derivation of the universal properties of this route. We begin with a summary, which is intended to be a guide through the more mathematical parts.
Section 3.1 gives an overview of the numerical results for the iterates of the logistic map. It shows that the number of fixed points of \(f(x)\) (towards which the iterates converge) doubles at distinct, increasing values of the parameter \(r_{n}\). At \(r=r_{\infty}\), the number of fixed points becomes infinite; and beyond this (finite) \(r\)-value, the behavior of the iterates is chaotic for most \(r\)'s. In Section 3.2, we investigate the pitchfork bifurcation, which provides the mechanism for the successive doubling of fixed points. It is shown that the doubling can be understood by examining the even iterates (\(f[f(x)]\), \(f^{4}(x)\), \(\ldots\)) of the original map \(f(x)\). This relates the generation of new fixed points to a law of functional composition. We, therefore, introduce the doubling transformation T that describes functional composition together with simultaneous rescaling along the \(x\)- and \(y\)-axes (T\(f(x)\equiv-\alpha f[f(-x/\alpha)]\)) and show that the Feigenbaum constant \(\alpha\) (which is related to the scaling of the distance between iterates) can be calculated from the (functional) fixed point \(f^{\star}\) of T (T\(f^{\star}=f^{\star}\)). This establishes the universal character of \(\alpha\). The other Feigenbaum constant \(\delta\) (which measures the scaling behavior of the \(r_{n}\)-values) then appears as an eigenvalue of the linearized doubling transformation. After having provided a method of calculating universal properties of the iterates, we consider several applications in Section 3.3. As a first step, we determine the relative separations of the iterates and show that the iterates form (at the accumulation point \(r_{\infty}\)) a self-similar point set with a fractal dimensionality. We then Fourier-transform the distribution of iterates to obtain the experimentally measurable, and therefore important, power spectrum.
In any real dissipative nonlinear system there are, due to the coupling to other degrees of freedom, also fluctuating forces which, when they are incorporated explicitly into the difference equations, tend to wash out the fine structure of the distribution of iterates. We determine the influence of this effect on the power spectrum and show that the rate at which higher subharmonics become suppressed scales via a power law with the noise level. Up to this point, we have only considered the behavior of the iterates near the transition to chaos. It will be shown next that in the chaotic region (\(r_{\infty}\leq r\leq 4\)) periodic and chaotic \(r\)-values are densely interwoven and one finds a sensitive dependence on parameter values. We also discuss the concept of structural universality and calculate the invariant density of the logistic map at \(r=4\). Finally, in Section 3.5 we present a summary that explains the parallels between the Feigenbaum route to chaos and ordinary equilibrium second-order phase transitions. This chapter ends with a discussion of the measurable properties of the Feigenbaum route and a review of some experiments in which this route has been observed.

### 3.1 Parameter Dependence of the Iterates

To provide an overview, we present in this section several results for the logistic map obtained by computer iteration of eq. (3.1) for different values of the parameter \(r\). Fig. 23 shows the accumulation points of the iterates \(\{f_{r}^{n}(x_{0})\}\) for \(n>300\) as a function of \(r\), together with the Liapunov exponent \(\lambda\) obtained via eq. (2.9). We distinguish between a "bifurcation regime" for \(1<r<r_{\infty}\), where the Liapunov exponent is always negative (it vanishes only at the bifurcation points \(r_{n}\)), and a "chaotic region" for \(r_{\infty}<r\leq 4\), where \(\lambda\) is mostly positive, indicating chaotic behavior.
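The \(\lambda\)-curve of Fig. 23b can be reproduced with a few lines of code. A minimal sketch (in Python; the transient and averaging lengths are our own choices): for each \(r\) one discards a transient and then averages \(\ln|f_{r}^{\prime}(x_{i})|\) along the orbit, as in eq. (2.9):

```python
import math

# Liapunov exponent of the logistic map via eq. (2.9):
# lambda ~ (1/N) * sum_i ln |f_r'(x_i)|, with f_r'(x) = r*(1 - 2x).
def liapunov(r, x0=0.3, n_transient=300, n_avg=3000):
    x = x0
    for _ in range(n_transient):          # let the orbit reach the attractor
        x = r * x * (1.0 - x)
    s = 0.0
    for _ in range(n_avg):
        x = r * x * (1.0 - x)
        s += math.log(abs(r * (1.0 - 2.0 * x)) + 1e-300)  # guard against log(0)
    return s / n_avg

print(liapunov(2.8))   # stable fixed point: lambda < 0
print(liapunov(3.2))   # stable 2-cycle:     lambda < 0
print(liapunov(4.0))   # chaotic:            lambda close to ln 2
```

Scanning \(r\) over \([1,4]\) with this function and plotting the recorded iterates and \(\lambda\) reproduces the structure of Fig. 23.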
The "chaotic regime" is interrupted by \(r\)-windows with \(\lambda<0\) where the sequence \(\{f_{r}^{n}(x_{0})\}\) is again periodic. The numerical results can be summarized as follows:

1. Periodic regime

a) The values \(r_{n}\), where the number of fixed points changes from \(2^{n-1}\) to \(2^{n}\), scale like \[r_{n}=r_{\infty}-\mbox{const.}\,\delta^{-n}\quad\mbox{for}\quad n\gg 1\;.\] (3.2)

b) The distances \(d_{n}\) of the point in a \(2^{n}\)-cycle that is closest to \(x=1/2\) (see Fig. 24) have constant ratios: \[\frac{d_{n}}{d_{n+1}}=-\alpha\quad\mbox{for}\quad n\gg 1\;.\] (3.3)

Fig. 23: a) Iterates of the logistic map, b) Liapunov exponent \(\lambda\) (after W. Desnizza, priv. comm.).

c) The Feigenbaum constants \(\delta\) and \(\alpha\) have the values \[\delta\ =\ 4.6692016091\ldots\] (3.4a) \[\alpha\ =\ 2.5029078750\ldots\] (3.4b) Let us also note for later use that the \(R_{n}\) of Fig. 24 scale similarly to the \(r_{n}\): \[R_{n}\ -\ r_{\infty}\ =\ \mbox{const.}^{\prime}\,\delta^{-n}\;,\] (3.5) furthermore \[R_{\infty}\ =\ r_{\infty}\ =\ 3.5699456\ldots\]

2. Chaotic regime

a) The chaotic intervals merge by inverse bifurcations until the iterates become distributed over the whole interval [0, 1] at \(r=4\).

b) The \(r\)-windows are characterized by periodic \(p\)-cycles (\(p=3\), 5, 6 \(\ldots\)) with successive bifurcations \(p\), \(p\cdot 2^{1}\), \(p\cdot 2^{2}\), etc. The corresponding \(r\)-values scale like (3.2) with the same \(\delta\) but different constants.

c) Also, period triplings \(p\cdot 3^{n}\) and quadruplings \(p\cdot 4^{n}\), etc. occur at \(\bar{r}_{n}=\bar{r}_{\infty}-\overline{\mbox{const.}}\,\bar{\delta}^{-n}\) with different Feigenbaum constants \(\bar{\delta}\), which are again universal (e.g. \(\bar{\delta}=55.247\ldots\) for \(p\cdot 3^{n}\)).

### 3.2 Pitchfork Bifurcation and the Doubling Transformation

In this section, we show that the "Feigenbaum route" in Fig.
23 is generated by pitchfork bifurcations that relate the emergence of new branches to a universal law of functional composition. By introducing the doubling transformation T (which describes this law), we show that the Feigenbaum constants \(\alpha\) and \(\delta\) are indeed universal. They appear as the (negative inverse) value of the fixed-point function of T at \(x=1\) and as the only relevant eigenvalue of the linearized doubling operator, respectively.

### Pitchfork Bifurcations

As a first step, we investigate the stability of the fixed points of \(f_{r}\left(x\right)\) and \(f_{r}^{2}\left(x\right)=f_{r}\left[f_{r}\left(x\right)\right]\) as a function of \(r\). Fig. 25 shows that \(f_{r}\left(x\right)\) has, for \(r<1\), only one stable fixed point at zero, which becomes unstable for \(1<r<3\) in favor of \(x^{*}=1-1/r\). For \(r>3=r_{1}\) we have \(\left|f_{r}^{\prime}\left(x^{*}\right)\right|=\left|2-r\right|>1\); i.e., \(x^{*}\) also becomes unstable according to criterion (2.17). What happens then? Fig. 26 shows \(f_{r}\left(x\right)\) together with its second iterate \(f_{r}^{2}\left(x\right)\) for \(r>r_{1}\). We note four properties of \(f^{2}\) (the index \(r\) is dropped for convenience):

a) It has three extrema with \(f^{2\,\prime}(x)=f^{\prime}[f(x)]\,f^{\prime}(x)=0\) at \(x_{0}=1/2\), because \(f^{\prime}(1/2)=0\), and at \(x_{1,2}=f^{-1}\,(1/2)\), because \(f^{\prime}[f(x_{1,2})]=f^{\prime}(1/2)=0\).

b) A fixed point \(x^{\star}\) of \(f(x)\) is also a fixed point of \(f^{2}(x)\) (and all higher iterates).

c) If a fixed point \(x^{\star}\) becomes unstable with respect to \(f(x)\), it also becomes unstable with respect to \(f^{2}\) (and all higher iterates) because \(|f^{\prime}(x^{\star})|>1\) implies \(|f^{2\,\prime}(x^{\star})|=|f^{\prime}[f(x^{\star})]f^{\prime}(x^{\star})|=|f^{\prime}(x^{\star})|^{2}>1\).
d) For \(r>3\), the old fixed point \(x^{\star}\) in \(f^{2}\) becomes unstable, and two new stable fixed points \(\bar{x}_{1}\), \(\bar{x}_{2}\) are created by a pitchfork bifurcation (see Fig. 26b). The pair \(\bar{x}_{1}\), \(\bar{x}_{2}\) of stable fixed points of \(f^{2}\) is called an attractor of \(f(x)\) of period two because any sequence of iterates which starts in [0, 1] becomes attracted by \(\bar{x}_{1}\), \(\bar{x}_{2}\) in an oscillating fashion, as shown in Fig. 27. It is easy to see that \(f(x)\) maps these new fixed points of \(f^{2}\) onto each other, i.e., \[f(\bar{x}_{1})=\bar{x}_{2}\,\,\,\mbox{and}\,\,f(\bar{x}_{2})=\bar{x}_{1} \tag{3.6}\] because \(f^{2}\,(\bar{x}_{1})=\bar{x}_{1}\) implies \[f^{2}[f(\bar{x}_{1})]=f[f^{2}(\bar{x}_{1})]=f(\bar{x}_{1}) \tag{3.7}\] i.e. \(f(\bar{x}_{1})\) is also a fixed point of \(f^{2}\), and \(\bar{x}_{2}\) is the only possible choice. (\(f(\bar{x}_{1})=0\) or \(f(\bar{x}_{1})=x^{\star}\) would be at variance with \(f[f(\bar{x}_{1})]=\bar{x}_{1}\).) If we now increase \(r\) beyond a value \(r_{2}\), the fixed points of \(f^{2}\) also become unstable. Because the derivative is the same at \(\bar{x}_{1}\) and \(\bar{x}_{2}\), \[f^{2\,\prime}(\bar{x}_{1})=f^{\prime}[f(\bar{x}_{1})]f^{\prime}(\bar{x}_{1})=f^{\prime}(\bar{x}_{2})f^{\prime}(\bar{x}_{1})=f^{2\,\prime}(\bar{x}_{2}) \tag{3.8}\] they even become unstable simultaneously. Fig. 28 shows that after this instability the fourth iterate \(f^{4}=f^{2}\cdot f^{2}\) displays two more pitchfork bifurcations which lead to an attractor of period four; i.e., one observes _period doubling_. These two examples can be generalized as follows:

Figure 27: Iterates of \(x_{0}\) if \(f(x)\) has an attractor of period two (schematically).
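These statements are easy to verify numerically. The following sketch (in Python; our own illustration, with \(r=3.2\) chosen inside the period-two window) checks eqns. (3.6) and (3.8):

```python
# First pitchfork bifurcation of the logistic map: for 3 < r < r_2
# the iterates settle on a 2-cycle {xbar1, xbar2}.
r = 3.2
f = lambda x: r * x * (1.0 - x)
fp = lambda x: r * (1.0 - 2.0 * x)     # derivative f'(x)

x = 0.3
for _ in range(1000):                  # let transients die out
    x = f(x)
xbar1, xbar2 = x, f(x)

assert abs(f(xbar2) - xbar1) < 1e-10   # eq. (3.6): f maps the pair onto itself
slope1 = fp(f(xbar1)) * fp(xbar1)      # (f^2)'(xbar1) by the chain rule
slope2 = fp(f(xbar2)) * fp(xbar2)      # (f^2)'(xbar2)
assert abs(slope1 - slope2) < 1e-10    # eq. (3.8): equal slopes
assert abs(slope1) < 1.0               # the 2-cycle is stable
print(xbar1, xbar2, slope1)
```

For the logistic map the common slope can also be computed in closed form, \((f^{2})^{\prime}=-r^{2}+2r+4\), which equals \(0.16\) at \(r=3.2\).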
**a)**: For \(r_{n-1}<r<r_{n}\), there exists a stable \(2^{n-1}\)-cycle with elements \(x_{0}^{\bullet}\), \(x_{1}^{\bullet}\)\(\cdots\)\(x_{2^{n-1}-1}^{\bullet}\) that is characterized by \[f_{r}(x_{i}^{\bullet})\,=\,x_{i+1}^{\bullet},\hskip 14.226378ptf_{r}^{2^{n-1}}(x_{i}^{\bullet})\,=\,x_{i}^{\bullet}\,,\hskip 14.226378pt\left|\,\frac{\mathrm{d}}{\mathrm{d}x_{0}^{\bullet}}\,f_{r}^{2^{n-1}}(x_{0}^{\bullet})\,\right|\,=\,\left|\,\prod_{i}f_{r}^{\prime}(x_{i}^{\bullet})\,\right|\,<\,1 \tag{3.10}\]

**b)**: At \(r_{n}\), all points of the \(2^{n-1}\)-cycle become unstable simultaneously via pitchfork bifurcations in \[f_{r}^{2^{n}}\,=\,f_{r}^{2^{n-1}}\,\cdot\,f_{r}^{2^{n-1}} \tag{3.11}\] that, for \(r_{n}<r<r_{n+1}\), lead to a new stable \(2^{n}\)-cycle. Our last conclusion represents a first step towards universality because it connects the mechanism of subsequent bifurcations to a general law of functional composition. Let us add as a caveat that not all quadratic maps of the unit interval onto itself display an infinite sequence of pitchfork bifurcations, but only those which have a negative Schwarzian derivative (see Appendix C).

### Supercycles

To progress further, we now consider the so-called supercycles. A \(2^{n}\)-supercycle is simply a superstable \(2^{n}\)-cycle defined by \[\frac{\mathrm{d}}{\mathrm{d}x_{0}^{\bullet}}\,f_{R_{n}}^{2^{n}}(x_{0}^{\bullet})\,=\,\,\prod_{i}f_{R_{n}}^{\prime}(x_{i}^{\bullet})\,=\,0 \tag{3.12}\] which implies that it always contains \(x_{0}^{\bullet}\,=\,1/2\) as a cycle element because this is the only point where \(f_{r}^{\prime}\,=\,0\). Referring to Fig. 24, we can see that the distances \(d_{n}\) are just the distances between the cycle elements \(x^{\bullet}\,=\,1/2\) and \(x_{1}\,=\,f_{R_{n}}^{2^{n-1}}\,(1/2)\), i.e., \[d_{n}\,=\,f_{R_{n}}^{2^{n-1}}\left(\frac{1}{2}\right)\,-\,\frac{1}{2}.
\tag{3.13}\] In the following it is convenient to perform a coordinate transformation that displaces \(x\,=\,1/2\) to \(x\,=\,0\) such that (3.13) becomes \[d_{n}\,=\,f_{R_{n}}^{2^{n-1}}\left(0\right)\,. \tag{3.14}\]

Fig. 29: The rescaled iterates \(f_{R_{n+1}}^{2^{n}}\left(x\right)\) converge towards a universal function. a)–d) Superstable cycles at \(R_{1}\) and \(R_{2}\). Note the horizontal tangents in b) and d). e) The content of the dashed square of c) is rescaled (dashed line) and compared to the whole of a) (full line).

From the previous section, we see that eq. (3.3) implies \[\lim_{n\rightarrow\infty}\,(-\,\alpha)^{n}\,d_{n\,+\,1}\,=\,d_{1} \tag{3.15}\] i.e. the sequence of scaled iterates \(f_{R_{n\,+\,1}}^{2^{n}}\,(0)\) converges: \[\lim_{n\rightarrow\infty}\,(-\,\alpha)^{n}\,f_{R_{n\,+\,1}}^{2^{n}}\,(0)\,=\,d_{1}\,\,. \tag{3.16}\] Fig. 29 suggests that (3.16) can be generalized to the whole interval, and the rescaled functions \((-\,\alpha)^{n}\,f_{R_{n\,+\,1}}^{2^{n}}\,[x/(-\,\alpha)^{n}]\) converge to a limiting function \(g_{1}\,(x)\): \[\lim_{n\rightarrow\infty}\,(-\,\alpha)^{n}\,f_{R_{n\,+\,1}}^{2^{n}}\,\left[\,\frac{x}{(-\,\alpha)^{n}}\,\right]\,=\,g_{1}\,(x) \tag{3.17}\] Eq. (3.17) shows that \(g_{1}\,(x)\) is determined only by the behavior of \(f_{R_{n\,+\,1}}^{2^{n}}\) around \(x\,=\,0\) (see also Fig. 28) and should, therefore, be universal for all functions \(f\) with a quadratic maximum.

### Doubling Transformation and \(\alpha\)

As the next step, we introduce, by analogy to eq.
(3.17), a whole family of functions \[g_{i}\,(x)\,=\,\lim_{n\rightarrow\infty}\,(-\,\alpha)^{n}\,f_{R_{n\,+\,i}}^{2^{n}}\,\left[\,\frac{x}{(-\,\alpha)^{n}}\,\right]\,;\quad i\,=\,0,\,1\,\ldots \tag{3.18}\] We notice that all these functions are related by the _doubling transformation_ T: \[g_{i\,-\,1}\,(x)\,=\,(-\,\alpha)\,g_{i}\,\left[\,g_{i}\,\left(-\,\frac{x}{\alpha}\right)\right]\,\equiv\,\mbox{T}\,g_{i}\,(x) \tag{3.19}\] because \[g_{i\,-\,1}\,(x)\,=\,\lim_{n\rightarrow\infty}\,(-\,\alpha)^{n}\,f_{R_{n\,+\,i\,-\,1}}^{2^{n}}\,\left[\,\frac{x}{(-\,\alpha)^{n}}\,\right]\] \[=\,\lim_{m\,\rightarrow\,\infty}\,(-\,\alpha)\,(-\,\alpha)^{m}\,f_{R_{m\,+\,i}}^{2^{m\,+\,1}}\,\left[\,-\,\frac{1}{\,\alpha}\,\,\frac{x}{(-\,\alpha)^{m}}\,\right]\] \[=\,\lim_{m\,\rightarrow\,\infty}\,(-\,\alpha)\,(-\,\alpha)^{m}\,f_{R_{m\,+\,i}}^{2^{m}}\,\left\{\frac{1}{(-\,\alpha)^{m}}\,\,(-\,\alpha)^{m}\,f_{R_{m\,+\,i}}^{2^{m}}\,\left[\,-\,\frac{1}{\,\alpha}\,\,\frac{x}{(-\,\alpha)^{m}}\,\right]\right\}\] \[=\,-\,\alpha\,g_{i}\,\left[\,g_{i}\,\left(-\,\frac{x}{\alpha}\right)\right]\,. \tag{3.20}\] (In the second line we have set \(n\,=\,m\,+\,1\).) By taking the limit \(i\rightarrow\infty\) in (3.19), the function \[g\left(x\right)\,=\,\lim_{i\rightarrow\infty}g_{i}\left(x\right) \tag{3.21}\] becomes a fixed point of the doubling operator T: \[g\left(x\right)\,=\,\mathrm{T}\,g\left(x\right)\,=\,\,-\,\alpha\,g\left[g\left(-\,\frac{x}{\alpha}\right)\right]\,. \tag{3.22}\] This equation determines \(\alpha\) universally by \[g\left(0\right)\,=\,\,-\,\alpha\,g\left[g\left(0\right)\right]\,. \tag{3.23}\] It can easily be shown that \(\mu g(x/\mu)\) is also a solution of the fixed-point equation (3.22) with the same \(\alpha\). Thus, the theory has nothing to say about absolute scales, and we fix \(\mu\) by setting \[g\left(0\right)\,=\,1\,\,.
\tag{3.24}\] Although a general theory for the solution of the functional equation (3.22) is still lacking, we can obtain a unique solution if we specify the nature of the maximum of \(g\left(x\right)\) at \(x\,=\,0\) (for example quadratic) and require that \(g\left(x\right)\) is a smooth function. If we use for \(g\left(x\right)\) in the quadratic case the extremely short power-law expansion \[g\left(x\right)\,=\,1\,+\,bx^{2} \tag{3.25}\] the fixed-point equation (3.22) becomes \[1\,+\,bx^{2}\,=\,\,-\,\alpha\left(1\,+\,b\right)\,-\,\left(\frac{2b^{\,2}}{\alpha}\right)x^{2}\,+\,\mathrm{O}\left(x^{4}\right)\,. \tag{3.26}\] Comparing the coefficients of \(x^{0}\) and \(x^{2}\) yields \(1\,=\,-\,\alpha(1+b)\) and \(\alpha\,=\,-\,2b\), i.e. \(b\,=\,-\,(1+\sqrt{3})/2\,\simeq\,-\,1.37\) and \(\alpha\,=\,1+\sqrt{3}\,\simeq\,2.73\), which differs by only 9% from Feigenbaum's value (3.4b). What can we say about the scaling along the \(r\)-axis? The values \(r\,=\,R_{n}\), for which a \(2^{n}\)-cycle becomes superstable, are determined by the condition that \(x\,=\,1/2\) is an element of the supercycle (see eq. (3.12)), i.e., \(x\,=\,1/2\) is a fixed point of \(f_{R_{n}}^{2^{n}}\,(x)\): \[f_{R_{n}}^{2^{n}}\,\left(\frac{1}{2}\right)\,=\,\frac{1}{2} \tag{3.29}\] which after translation by \(1/2\) becomes (see eqns. (3.13\(-\)14)): \[f_{R_{n}}^{2^{n}}\,(0)\,=\,0\,\,. \tag{3.30}\] This equation has a large number of solutions because it also yields the supercycles that occur in the windows of the chaotic regime. In order to single out the \(R_{n}\)-values in the bifurcation region with \[r_{1}\,<\,R_{1}\,<\,r_{2}\,<\,R_{2}\,<\,r_{3}\,\ldots\,, \tag{3.31}\] (3.30) is solved starting from \(n\,=\,0\), and the \(R_{n}\) are ordered as in (3.31). The \(R_{n}\) tell us how \(r_{\infty}\) is approached. In order to prove the scaling relation (3.5), \[R_{n}\,-\,R_{\infty}\,\propto\,\delta^{-n}\,, \tag{3.32}\] we expand \(f_{R}\,(x)\) around \(f_{R_{\infty}}\,(x)\): \[f_{R}\,(x)\,=\,f_{R_{\infty}}\,(x)\,+\,(R\,-\,R_{\infty})\,\delta f(x)\,+\,\ldots\] where \[\delta f(x)\,=\,\frac{\,\partial f_{R}\,(x)\,}{\,\partial R\,}\,\bigg{|}_{R_{\infty}}\,. \tag{3.33}\] Let us now apply the doubling operator T to this equation.
A straightforward linearization in \(\delta f\) yields \[{\rm T}f_{R}\,=\,{\rm T}f_{R_{\infty}}\,+\,(R\,-\,R_{\infty})\,{\rm L}_{f_{R_{\infty}}}\,\delta f\,+\,{\rm O}\,[(\delta f)^{2}] \tag{3.34}\] where \({\rm L}_{f}\) is the linear operator \[{\rm L}_{f}\,\delta f\,=\,\,-\,\alpha\,\left\{f^{\prime}\left[f\left(-\,\frac{x}{\alpha}\right)\right]\,\delta f\left(-\,\frac{x}{\alpha}\right)\,+\,\,\delta f\left[f\left(-\,\frac{x}{\alpha}\right)\right]\right\}\,. \tag{3.35}\] Note that \({\rm L}_{f}\) is only defined with respect to a function \(f\). Repeated application of T yields \[{\rm T}^{n}f_{R}\,=\,{\rm T}^{n}f_{R_{\infty}}\,+\,(R\,-\,R_{\infty})\,{\rm L}_{{\rm T}^{n-1}f_{R_{\infty}}}\ldots{\rm L}_{f_{R_{\infty}}}\,\delta f\,+\,{\rm O}\,[(\delta f)^{2}]. \tag{3.36}\] We observe that, according to eqns. (3.18\(-\)21), \({\rm T}^{n}f_{R_{\infty}}\) converges to the fixed point, \[{\rm T}^{n}f_{R_{\infty}}\,(x)\,=\,(-\,\alpha)^{n}\,f_{R_{\infty}}^{2^{n}}\,\left[\,\frac{x}{(-\,\alpha)^{n}}\,\right]\,\approx\,g\,(x)\quad\mbox{for}\quad n\,\gg\,1\, \tag{3.37}\] and (3.36) becomes approximately: \[{\rm T}^{n}f_{R}\,(x)\,\approx\,g\,(x)\,+\,(R\,-\,R_{\infty})\,{\rm L}_{g}^{n}\,\delta f(x)\quad\mbox{for}\quad n\,\gg\,1. \tag{3.38}\] This equation can be further simplified if we expand \(\delta f(x)\) with respect to the eigenfunctions \(\varphi_{v}\) of \({\rm L}_{g}\), \[{\rm L}_{g}\,\varphi_{v}\,=\,\lambda_{v}\,\varphi_{v}\ ;\quad\delta f\,=\,\sum_{v}\,c_{v}\,\varphi_{v}\ ;\quad v\,=\,1,\,2\,\ldots \tag{3.39}\] \[\to\,{\rm L}_{g}^{n}\,\delta f\,=\,\sum_{v}\,c_{v}\,\lambda_{v}^{n}\,\varphi_{v} \tag{3.40}\] and assume that only one of the eigenvalues \(\lambda_{v}\) is larger than unity, i.e., \[\lambda_{1}\,>\,1\ ;\quad|\lambda_{v}\,|\,<\,1\quad\mbox{for}\quad v\,\neq\,1.
\tag{3.41}\] We then obtain only the contribution from \(\lambda_{1}\) in (3.40), \[{\rm L}_{g}^{n}\,\delta f\,\approx\,c_{1}\,\lambda_{1}^{n}\,\varphi_{1}\quad\mbox{for}\quad n\,\gg\,1\, \tag{3.42}\] and (3.38) reduces to \[{\rm T}^{n}f_{R}\,(x)\,\approx\,g\,(x)\,+\,(R\,-\,R_{\infty})\cdot\delta^{n}\cdot a\cdot\,h\,(x)\quad\mbox{for}\quad n\,\gg\,1 \tag{3.43}\] where we introduced \(c_{1}\,=\,a\), \(\varphi_{1}\,=\,h\), \(\lambda_{1}\,=\,\delta\). The eigenvalue \(\lambda_{1}\,=\,\delta\) is identical with Feigenbaum's constant because for \(R\,=\,R_{n}\) and \(x\,=\,0\), (3.43) yields \[{\rm T}^{n}f_{R_{n}}\,(0)\,=\,g\,(0)\,+\,(R_{n}\,-\,R_{\infty})\cdot\delta^{n}\cdot a\cdot\,h\,(0) \tag{3.44}\] and from (3.30) we have the condition \[{\rm T}^{n}f_{R_{n}}\,(0)\,=\,(-\,\alpha)^{n}\,f_{R_{n}}^{2^{n}}\,(0)\,=\,0. \tag{3.45}\] This leads to the desired result (note \(g\,(0)\,=\,1\)) \[\lim_{n\to\infty}\,(R_{n}\,-\,R_{\infty})\cdot\delta^{n}\,=\,\frac{-\,1}{a\cdot\,h\,(0)}\,=\,\mbox{const}. \tag{3.46}\] The last equation can be generalized if we introduce the slopes \[\mu\,=\,\frac{\mathrm{d}}{\mathrm{d}x_{0}^{\ast}}\,f_{r}^{2^{n}}\left(x_{0}^{\ast}\right)\,=\,\,\prod_{i}\,f_{r}^{\prime}\left(x_{i}^{\ast}\right) \tag{3.47}\] as a parameter and characterize \(r\) by the pair (\(n\), \(\mu\)), as shown in Fig. 30. Then we obtain from (3.44): \[\lim_{n\rightarrow\infty}\,\left(R_{n,\mu}\,-\,R_{\infty}\right)\,\cdot\,\delta^{n}\,=\,\frac{g_{0,\mu}\left(0\right)\,-\,g\left(0\right)}{a\,\cdot\,h\left(0\right)}\] (3.48a) where \[g_{0,\mu}\left(x\right)\,=\,\lim_{n\rightarrow\infty}\,(-\,\alpha)^{n}\,f_{R_{n,\mu}}^{2^{n}}\left[\frac{x}{(-\,\alpha)^{n}}\right]\] (3.48b) is again a universal function of \(\mu\). At the bifurcation points \(r_{n}\), the slopes always have the same value \(\mu\,=\,-\,1\) (see Fig. 30).
Therefore, the \(r_{n}\)'s scale according to (3.48) with the same \(\delta\) as the \(R_{n}\)'s of the superstable cycles (with \(\mu\,=\,0\)): \[r_{n}\,-\,r_{\infty}\,\propto\,\delta^{\,-\,n}\quad\mbox{for}\quad n\,\gg\,1\,\,.\] (3.49a) Note that the accumulation point is the same for all \(\mu\)'s: \[\lim_{n\rightarrow\infty}\,R_{n,\mu}\,=\,R_{\infty}\,=\,r_{\infty}\] (3.49b) because \(r_{n}\,\leq\,R_{n,\mu}\,\leq\,r_{n\,+\,1}\) and \(r_{n\,+\,1}\,-\,r_{n}\,\to\,0\) for \(n\,\rightarrow\,\infty\). The numerical value for \(\delta\) can be obtained (by combining (3.35\(-\)43)) from the universal eigenvalue equation \[\mathrm{L}_{g}\,h\left(x\right)\,=\,\,-\,\alpha\,\left\{g^{\prime}\left[g\left(-\frac{x}{\alpha}\right)\right]\,h\left(-\,\frac{x}{\alpha}\right)\,+\,h\left[g\left(-\,\frac{x}{\alpha}\right)\right]\right\}\,=\,\,\delta\,\cdot\,h\left(x\right)\,. \tag{3.50}\] To make things simple, we retain in the power-law expansion for \(h\left(x\right)\) only the first term \(h\left(0\right)\), such that (3.50) becomes an algebraic equation for \(\delta\): \[-\,\alpha\,\left\{g^{\prime}\left[g\left(0\right)\right]\,+\,1\right\}\,=\,\delta\;.\] (3.51a) The value \(g^{\prime}\left[g\left(0\right)\right]\,=\,g^{\prime}\left(1\right)\) follows for functions with a quadratic maximum (i.e. \(g^{\prime\prime}\left(0\right)\,\neq\,0\)) by differentiating the fixed-point equation (3.22) twice: \[g^{\prime\prime}\left(x\right)\,=\,-\,\left\{g^{\prime\prime}\left[g\left(-\,\frac{x}{\alpha}\right)\right]\,\left[g^{\prime}\left(-\,\frac{x}{\alpha}\right)\right]^{2}\,+\,g^{\prime}\left[g\left(-\,\frac{x}{\alpha}\right)\right]g^{\prime\prime}\left(-\,\frac{x}{\alpha}\right)\right\}/\alpha\;.\] (3.51b) Setting \(x\,=\,0\) and using \(g^{\prime}\left(0\right)\,=\,0\), the first term vanishes and \(g^{\prime\prime}\left(0\right)\,\neq\,0\) cancels, so that \[g^{\prime}\left(1\right)\,=\,-\,\alpha\;.\] Thus (3.51a) becomes \[\delta\,=\,\alpha^{\,2}\,-\,\alpha\;.\] (3.51c) (For functions with a maximum of order \(z\) one finds \(\delta\,=\,\alpha^{\,z}\,-\,\alpha\).)
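The one-term approximation can be carried out in a few lines. The following sketch (in Python; our own check, not part of the original calculation) solves the coefficient equations of (3.26) and then evaluates (3.51c):

```python
import math

# Coefficient matching in eq. (3.26): 1 = -alpha*(1+b) and b = -2b^2/alpha,
# i.e. alpha = -2b and 2b^2 + 2b - 1 = 0; take the root that makes alpha > 0.
b = (-1.0 - math.sqrt(3.0)) / 2.0     # ~ -1.366
alpha = -2.0 * b                      # = 1 + sqrt(3) ~ 2.732
delta = alpha**2 - alpha              # eq. (3.51c), ~ 4.73

print(alpha, delta)   # compare with the exact 2.5029... and 4.6692...
```

Longer truncations of the power-law expansion for \(g\) converge rapidly towards Feigenbaum's values (3.4a, b).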
Using our previously determined value \(\alpha\,=\,2.73\), we obtain \(\delta\,=\,4.72\) from (3.51), i.e., an accuracy of about 1% with respect to Feigenbaum's numerical result \(\delta\,=\,4.6692016\ldots\) This is not so bad if one considers the crudeness of our approximation. It is of course much more laborious to show that \(\delta\) is indeed the only eigenvalue of \({\rm L}_{g}\) which is larger than unity. Extensive computer calculations by Feigenbaum and the analytical results of Collet, Eckmann, and Lanford (1980) have proven this assumption. Summarizing, the two main results of this section are

* the fixed-point equation for the doubling operator (3.22) \[{\rm T}g\left(x\right)\,=\,\,-\,\alpha\,g\left[g\left(-\frac{x}{\alpha}\right)\right]\,=\,g\left(x\right)\] (3.52) which establishes the universality of \(\alpha\),

* the linearized doubling transformation (3.43) \[{\rm T}^{\,n}f_{R}\left(x\right)\,=\,g\left(x\right)\,+\,\left(R\,-\,R_{\infty}\right)\cdot\delta^{n}\cdot\,a\cdot\,h\left(x\right)\ \ \ \mbox{for}\ \ \ n\,\gg\,1\] (3.53) which shows that \(\delta\) is universal and determines the way in which a function is repelled from the fixed-point function \(g\left(x\right)\).

Universality emerges here because the linearized doubling operator \({\rm L}_{g}\) has only one _relevant_ eigenvalue \(\lambda_{1}>1\), such that all functions \(f(x)\) (with the exception of \(\varphi_{1}\left(x\right)\)) renormalize, after several applications of T, to the fixed-point function \(g\left(x\right)\), because the eigenvalues belonging to \(f\,-\,g\,=\,\sum\limits_{v\,\neq\,1}c_{v}\,\varphi_{v}\) are smaller than unity, i.e., _irrelevant_.

### 3.3 Self-Similarity, Universal Power Spectrum, and the Influence of External Noise

In this section, we calculate the distances between the elements of a \(2^{n}\)-cycle and determine its power spectrum. It is then shown that external noise changes the power spectrum drastically and destroys higher subharmonics.
Finally, we discuss the bifurcation diagram for \(r>r_{\infty}\) and show that the chaotic behavior of the iterates (of the logistic map) at \(r=4\) is related to the chaos of a triangular map. The power spectrum is an important tool for characterizing irregular motion. In order to calculate this quantity for a system that exhibits the Feigenbaum route to chaos, we identify the time variable with \(n\) and determine as a first step the relative positions of the cycle elements.

### Self-Similarity in the Positions of the Cycle Elements

All we know up to now about the positions of the cycle elements is that, according to eqns. (3.3) and (3.14), the distances \(d_{n}\left(0\right)\) of the supercycle elements closest to \(x=0\) scale with \(\alpha\), i.e. \[\frac{d_{n+1}\left(0\right)}{d_{n}\left(0\right)}\ =\ -\frac{1}{\alpha}\quad\text{for}\quad d_{n}\left(0\right)\ =f_{R_{n}}^{2^{n-1}}\left(0\right),\ n\ \gg\ 1. \tag{3.54}\] It is now our aim to generalize these equations. We will calculate for all \(m\) the distance \(d_{n}\left(m\right)\) of the \(m\)th element \(x_{m}\) of a \(2^{n}\)-supercycle to its nearest neighbor \(f_{R_{n}}^{2^{n-1}}\left(x_{m}\right)\), \[d_{n}\left(m\right)\ \equiv\ x_{m}\ -f_{R_{n}}^{2^{n-1}}\left(x_{m}\right) \tag{3.55}\] and the change of \(d_{n}\left(m\right)\) if one increases \(n\), \[\sigma_{n}\left(m\right)\ \equiv\ \frac{d_{n+1}\left(m\right)}{d_{n}\left(m\right)}. \tag{3.56}\] The function \(\sigma_{n}\left(m\right)\) changes sign after \(2^{n}\) cycle steps, \[\sigma_{n}\left(m\ +\ 2^{n}\right)\ =\ -\sigma_{n}\left(m\right) \tag{3.57}\] because \[d_{n+1}\left(m\ +\ 2^{n}\right)\ =\ f_{R_{n+1}}^{2^{n}}\left(x_{m}\right)\ -\ f_{R_{n+1}}^{2^{n}}\left[f_{R_{n+1}}^{2^{n}}\left(x_{m}\right)\right]\ =\ f_{R_{n+1}}^{2^{n}}\left(x_{m}\right)\ -\ x_{m}\ =\ -d_{n+1}\left(m\right) \tag{3.58}\] (where we used \(f_{R_{n+1}}^{2^{n+1}}\left(x_{m}\right)\ =\ x_{m}\)), and \(d_{n}\left(m\right)\) is left invariant (\(f_{R_{n}}^{2^{n}}\left(x_{m}\right)\ =\ x_{m}\)).
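The scaling (3.54) can be checked directly on the computer. A sketch (in Python; the bisection brackets for the superstable parameters are read off the bifurcation diagram by hand and are our own choices):

```python
# Locate superstable parameters R_n from f_{R_n}^{2^n}(1/2) = 1/2 and check
# that the distances d_n = f_{R_n}^{2^{n-1}}(1/2) - 1/2 shrink by -alpha.
def F(r, k):
    """k-fold logistic iterate started at x0 = 1/2."""
    x = 0.5
    for _ in range(k):
        x = r * x * (1.0 - x)
    return x

def superstable(n, lo, hi):
    """Bisect F(r, 2^n) - 1/2 = 0 in the bracket [lo, hi]."""
    g = lambda r: F(r, 2 ** n) - 0.5
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if g(lo) * g(mid) <= 0.0 else (mid, hi)
    return 0.5 * (lo + hi)

R = [superstable(1, 3.1, 3.3),        # R_1 ~ 3.2361 (2-cycle)
     superstable(2, 3.45, 3.53),      # R_2 ~ 3.4986 (4-cycle)
     superstable(3, 3.55, 3.56)]      # R_3 ~ 3.5546 (8-cycle)
d = [F(R[n - 1], 2 ** (n - 1)) - 0.5 for n in (1, 2, 3)]
print(d[0] / d[1], d[1] / d[2])              # ratios approach -alpha ~ -2.50
print((R[0] - R[1]) / (R[1] - R[2]))         # ratio approaches delta ~ 4.67
```

Already at these low orders the ratios are within a few percent of the universal constants (3.4a, b).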
Let us now consider the values \(m=2^{n-i}\), \(i=0\ldots n\), and evaluate \(\sigma_{n}(m)\) in the limit \(n\gg 1\). The definitions (3.55), (3.56) yield \[\sigma_{n}\left[2^{n-i}\right]\,=\,\frac{f_{R_{n+1}}^{2^{n-i}}\left(0\right)\,-\,f_{R_{n+1}}^{2^{n}}\left[f_{R_{n+1}}^{2^{n-i}}\left(0\right)\right]}{f_{R_{n}}^{2^{n-i}}\left(0\right)\,-\,f_{R_{n}}^{2^{n-1}}\left[f_{R_{n}}^{2^{n-i}}\left(0\right)\right]}\,=\,\frac{f_{R_{n+1}}^{2^{n-i}}\left(0\right)\,-\,f_{R_{n+1}}^{2^{n-i}}\left[f_{R_{n+1}}^{2^{n}}\left(0\right)\right]}{f_{R_{n}}^{2^{n-i}}\left(0\right)\,-\,f_{R_{n}}^{2^{n-i}}\left[f_{R_{n}}^{2^{n-1}}\left(0\right)\right]} \tag{3.59}\] and because \[f_{R_{l+j}}^{2^{l}}\left(x\right)\,\approx\,\left(-\,\alpha\right)^{-l}g_{j}\left[\left(-\,\alpha\right)^{l}x\right]\quad\text{for}\quad l\,=\,n\,-\,i\,\rightarrow\,\infty \tag{3.60}\] this becomes \[\sigma_{n}\left[2^{n-i}\right]\,=\,\frac{g_{i+1}\left(0\right)\,-\,g_{i+1}\left[\left(-\,\alpha\right)^{-i}g_{1}\left(0\right)\right]}{g_{i}\left(0\right)\,-\,g_{i}\,\left[\left(-\,\alpha\right)^{-i+1}g_{1}\left(0\right)\right]}\quad\text{ for }\quad n\,\gg\,1. \tag{3.61}\] We note that the functions \(g_{i}\left(x\right)\) can be obtained from (3.18) and (3.44) for \(i\gg 1\): \[g_{i}\left(x\right)\,=\,\lim_{n\rightarrow\infty}\,{\rm T}^{n}\,f_{R_{n+i}}\left(x\right)\,=\,g\left(x\right)\,-\,\delta^{-i}\cdot\,h\left(x\right)\,. \tag{3.62}\] For smaller \(i\) one uses the recursion (3.19), \[g_{i-1}\left(x\right)\,=\,{\rm T}\,g_{i}\left(x\right)\,. \tag{3.63}\] If we introduce, for convenience, the new variable \(x=\frac{m}{2^{n+1}}\) and drop the index \(n\), the symmetry relation (3.57) reads \[\sigma\left(x\,+\,\frac{1}{2}\right)\,=\,\,-\,\,\sigma\left(x\right)\,.
\tag{3.64}\] This generates from our familiar scaling relation (3.54) the value of \(\sigma\) at \(x\,=\,1/2\): \[\sigma\left(0\right)\,=\,\frac{-\,1}{\,\alpha}\,\,\rightarrow\,\,\sigma\left(\frac{1}{2}\right)\,=\,\,-\,\sigma\left(0\right)\,=\,\frac{1}{\,\alpha}. \tag{3.65}\] But starting instead from (3.61) we obtain a slightly different value on the other side of \(x\,=\,1/2\); i.e., \(\sigma\left(x\right)\) jumps there. More elaborate calculations show that \(\sigma\left(x\right)\) jumps at all rationals, as depicted in Fig. 31. Fortunately, the discontinuities decrease rapidly as the number of terms in the binary expansion of the rational increases, and it is therefore often sufficient to consider only the jumps at \(x=0\), 1/4, 1/2.

### Hausdorff Dimension

According to Fig. 31, the distances between nearby points in a supercycle change with universal ratios after each bifurcation. The self-similarity of this pattern can be characterized by the Hausdorff dimension of the attractor. If for a set of points in \(d\) dimensions the number \(N(l)\) of \(d\)-spheres of diameter \(l\) needed to cover the set increases like \[N(l)\propto l^{-D}\quad\mbox{for}\quad l\to 0 \tag{3.69}\] then \(D\) is called the _Hausdorff_ dimension of the set. (The quantity defined in eq. (3.69) is actually the capacity dimension, which agrees for our purposes with the Hausdorff dimension whose rigorous definition is, e.g., elaborated in Falconer's book on the _Geometry of Fractal Sets_ (1985).) For the self-similar sets shown in Fig. 32, \(D\) can be calculated from \[D\,=\,-\,\frac{\log\left[N(l)/N(l^{\prime})\right]}{\log\left(l/l^{\prime}\right)}\,. \tag{3.70}\]

Figure 32: Hausdorff dimension of a straight line and of some typical self-similar point sets, so-called _fractals_ (drawn after Mandelbrot, 1982). It is understood that the ramifications continue ad infinitum. Koch's curve is a line of infinite length that encloses a finite area.

We note that the length \(L\) of the Cantor set shown in Fig.
32 is indeed zero: \[L\,=\,1\,-\,\frac{1}{3}\,-\,\frac{2}{9}\,-\,\frac{4}{27}\,\cdots\,=\,1\,-\,\frac{1}{3}\,\sum_{v\,=\,0}^{\infty}\,\left(\frac{2}{3}\right)^{v}\,=\,0. \tag{3.71}\] The Hausdorff dimension \(D^{*}\) of a \(2^{\,n}\)-cycle can be calculated in the limit \(n\,\rightarrow\,\infty\) as follows. If for a \(2^{\,n}\)-supercycle we need \(N(l)\,=\,2^{\,n}\) segments of length \(l\) to cover all its points, then from Fig. 31 it is found that the mean minimum length \(l^{\prime}\) needed to cover all \(N(l^{\prime})\,=\,2^{\,n\,+\,1}\) points of the next supercycle is given approximately by \[l^{\prime}\,=\,\frac{1}{2^{\,n\,+\,1}}\,\left[\,2^{\,n}\,\frac{l}{\alpha}\,+\,2^{\,n}\,\frac{l}{\alpha^{\,2}}\,\right] \tag{3.72}\] which yields \[D^{*}\,=\,-\,\log\,2/\log\,\left[\,\frac{1}{2}\,\left(\frac{1}{\alpha}\,+\,\frac{1}{\alpha^{\,2}}\right)\right]\,\approx\,0.543. \tag{3.73}\] This value differs by only about 1% from Grassberger's (1981) analytical and numerical result \(D^{*}\,=\,0.5388\,\ldots\) (The numerical result was obtained by covering the attractor with successively smaller segments \(l\) and counting \(N(l)\).) Fig. 33 demonstrates the typical Cantor-set structure of the attractor. We will now show that this leads to a remarkably simple change in the measurable power spectrum after each bifurcation step.

### Power Spectrum

The power spectrum \(P(k)\) can be obtained by resolving the elements \(x^{\,n}(t)\,=\,f^{\,t}_{R_{n}}(0)\) of a \(2^{\,n}\)-cycle (\(t\,=\,1,\,2,\,\ldots,\,2^{\,n}\,\equiv\,T_{n}\)) into its Fourier components \(a^{\,n}_{k}\): \[x^{n}(t)\,=\,\sum_{k}\,a_{k}^{n}\,{\rm e}^{\,\frac{2\pi ikt}{T_{n}}}\,. \tag{3.74}\] The periodicity of the cycle implies \[x^{n}(t)\,=\,x^{n}(t\,+\,2^{n})\,\rightarrow\,\mbox{e}^{\,2\pi ik}\,=\,1\,\rightarrow\,k\,=\,0,\,1,\,\ldots,\,2^{n}\,-\,1 \tag{3.75}\] i.e. after each bifurcation step from \(n\,\rightarrow\,n\,+\,1\), \(2^{n}\) new subharmonics with frequencies \(k/2^{n\,+\,1}\,(k\,=\,1,\,3,\,5,\,\ldots)\) are obtained, as shown in Fig. 34.
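The growth of the subharmonic spectrum can be made visible numerically. A sketch (in Python/NumPy; the \(r\)-values are our own choices inside the stable 2-, 4-, and 8-cycle windows): sample one full period of a stable \(2^{n}\)-cycle and take its discrete Fourier transform; after each period doubling, new components at odd multiples of \(1/2^{n+1}\) appear:

```python
import numpy as np

# Fourier components a_k^n of a stable 2^n-cycle (cf. eqns. (3.74)-(3.76)).
def cycle(r, period, n_transient=4000):
    x = 0.5
    for _ in range(n_transient):          # relax onto the attractor
        x = r * x * (1.0 - x)
    pts = []
    for _ in range(period):
        x = r * x * (1.0 - x)
        pts.append(x)
    return np.array(pts)

for r, period in [(3.2, 2), (3.5, 4), (3.56, 8)]:   # 2-, 4-, 8-cycles
    a = np.fft.fft(cycle(r, period)) / period       # a_k^n, k = 0 ... 2^n - 1
    print(r, np.round(np.abs(a), 4))
```

The spectrum for the \(2^{n+1}\)-cycle contains the old lines (even \(k\), eq. (3.78)) plus new odd subharmonics, in the pattern sketched in Fig. 34.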
The corresponding change in the \(a_{k}^{n}\)'s can be calculated from \(\sigma\,(x)\). As a first step, we invert (3.74): \[a_{k}^{n}\,=\,\frac{1}{2^{\,n}}\,\sum\limits_{t\,=\,1}^{2^{n}}\,\mbox{e}^{\,-\, \frac{2\pi ikt}{2^{n}}}\,x^{n}(t)\,\approx\,\frac{1}{T_{n}}\,\int\limits_{0}^{ T_{n}}\,\mbox{d}t\,\mbox{e}^{\,-\,\frac{2\pi ikt}{T_{n}}}\,x^{n}(t) \tag{3.76}\] and by splitting the interval \([0,\,T_{n\,+\,1}]\) into two halves with \(T_{n}\,=\,\frac{1}{2}\,T_{n\,+\,1}\), we obtain: \[a_{k}^{n\,+\,1}\,=\,\int\limits_{0}^{T_{n}}\,\frac{\,\mbox{d}t}{2\,T_{n}}\, \left[x^{n\,+\,1}\,(t)\,+\,(-\,\,1)^{k}\,x^{n\,+\,1}\,(t\,+\,T_{n})\right]\, \mbox{e}^{\,-\,\frac{\pi ikt}{T_{n}}}\,. \tag{3.77}\] The new even harmonics \(a_{2k}^{n\,+\,1}\) are essentially given by the old spectrum at \(n\) (see Fig. 34), because \[a_{2k}^{n\,+\,1}\,=\,\int\limits_{0}^{T_{n}}\,\frac{\,\mbox{d}t}{ 2\,T_{n}}\,[x^{n\,+\,1}\,(t)\,+\,x^{n\,+\,1}\,(t\,+\,T_{n})]\,\mbox{e}^{\,- \,\frac{2\pi ikt}{T_{n}}}\] \[\,=\,\int\limits_{0}^{T_{n}}\,\frac{\,\mbox{d}t}{T_{n}}\,x^{n\,+\, 1}\,(t)\,\,\mbox{e}^{\,-\,\frac{2\pi ikt}{T_{n}}}\,\,\approx\,\int\limits_{0}^ {T_{n}}\,\frac{\,\mbox{d}t}{T_{n}}\,x^{n}\,(t)\,\,\mbox{e}^{\,-\,\frac{2\pi ikt }{T_{n}}}\,=\,a_{k}^{n}\,. \tag{3.78}\] The calculation of the odd components is somewhat more delicate and requires our previously calculated function \(\sigma\,(x)\). Figure 34: Change of the Fourier components after one bifurcation (schematically).
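The discrete version of (3.78) is in fact an exact identity: the even Fourier components of a signal of period \(2T\) equal the Fourier components of the average of its two halves. A short sketch (assuming NumPy is available; the signal is an arbitrary stand-in for \(x^{n+1}(t)\)):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 8                            # half-period T_n; the full cycle has length 2T
x = rng.standard_normal(2 * T)   # one period of a 2^{n+1}-cycle signal

# Fourier components a_m = (1/2T) * sum_t x(t) exp(-2 pi i m t / 2T)
a = np.fft.fft(x) / (2 * T)

# Eq. (3.78), discrete version: the even components a_{2k} equal the
# Fourier components of the averaged half-signals (x(t) + x(t + T)) / 2.
half_avg = 0.5 * (x[:T] + x[T:])
a_half = np.fft.fft(half_avg) / T

print(np.allclose(a[::2], a_half))   # True
```

Only the odd components, which carry the new subharmonics, require the scaling function \(\sigma(x)\), as shown in the text.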
From (3.77) we have \[a_{2k+1}^{n+1}\,=\,\int\limits_{0}^{T_{n}}\,\frac{{\rm d}t}{2\,T_{n}}\,\left[x^{n+1}\left(t\right)\,-\,x^{n+1}\left(t\,+\,T_{n}\right)\right]\,{\rm e}^{-\,\frac{(2k+1)\,\pi it}{T_{n}}} \tag{3.79}\] and \[x^{n+1}\left(t\right)\,-\,x^{n+1}\left(t\,+\,T_{n}\right)\,=\,x^{n+1}\left(t\right)\,-\,f_{R_{n+1}}^{2^{n}}\left[x^{n+1}\left(t\right)\right]\,=\,d^{n+1}\left(t\right)\,=\,\sigma\left(\frac{t}{2\,T_{n}}\right)\,d^{n}\left(t\right) \tag{3.80}\] with \[d^{n}\left(t\right)\,=\,x^{n}\left(t\right)\,-\,x^{n}\left(t\,+\,T_{n-1}\right)\,=\,\sum\limits_{k}\,a_{k}^{n}\left[1\,-\,(-1)^{k}\right]\,{\rm e}^{\frac{2\pi ikt}{T_{n}}}\,=\,2\,\sum\limits_{k}\,a_{2k+1}^{n}\,{\rm e}^{\frac{2\pi i(2k+1)t}{T_{n}}}\;. \tag{3.81}\] Thus, we obtain: \[a_{2k+1}^{n+1}\,=\,\sum\limits_{k^{\prime}}\,a_{2k^{\prime}+1}^{n}\,\int\limits_{0}^{T_{n}}\,\frac{{\rm d}t}{T_{n}}\,\sigma\left(\frac{t}{2\,T_{n}}\right)\,{\rm e}^{\frac{2\pi it}{T_{n}}\left[2k^{\prime}+1\,-\,\frac{1}{2}\left(2k+1\right)\right]}\,=\,\sum\limits_{k^{\prime}}\,a_{2k^{\prime}+1}^{n}\left[\frac{1}{\alpha^{2}}+\frac{1}{\alpha}+i\left(-1\right)^{k}\left(\frac{1}{\alpha}-\frac{1}{\alpha^{2}}\right)\right]\,\frac{1}{2\,\pi i}\,\frac{1}{2\,k^{\prime}+1\,-\,\frac{1}{2}\,\left(2\,k+1\right)} \tag{3.82}\] because \[\int\limits_{0}^{1}\,{\rm d}\xi\,\sigma\left(\frac{\xi}{2}\right)\,{\rm e}^{2\pi i\xi y}\,\approx\,\frac{1}{\alpha^{2}}\int\limits_{0}^{1/2}{\rm d}\xi\,\,{\rm e}^{2\pi i\xi y}\,+\,\frac{1}{\alpha}\int\limits_{1/2}^{1}{\rm d}\xi\,\,{\rm e}^{2\pi i\xi y}\,=\,\frac{1}{2\,\pi i}\,\frac{1}{y}\,\left[({\rm e}^{\pi iy}\,-\,1)/\alpha^{2}\,+\,({\rm e}^{2\pi iy}\,-\,{\rm e}^{\pi iy})/\alpha\right] \tag{3.83}\] where \(\sigma\left(x\right)\) is approximated by a simple piecewise constant function.
Replacing the sum over \(k^{\prime}\) in (3.82) by an integral and using \[\frac{1}{2\,\pi i}\,\int\,{\rm d}k^{\prime}\,a_{2k^{\prime}+1}^{n}\,\frac{1}{2\,k^{\prime}\,+\,1\,-\,\frac{1}{2}\,\left(2\,k\,+\,1\right)}\,=\,\frac{1}{4}\,a_{(1/2)(2k\,+\,1)}^{n} \tag{3.84}\] we eventually obtain: \[\left|\,a_{2k+1}^{n\,+\,1}\,\right|\,=\,\mu^{\,-\,1}\left|\,a_{(1/2)(2k+1)}^{n}\,\right|\,,\quad\mu^{\,-\,1}\,=\,\frac{1}{4\,\alpha}\,\sqrt{2\,\left(1\,+\,\frac{1}{\alpha^{\,2}}\right)} \tag{3.85}\] \[\mu^{\,-\,1}\,=\,0.1525\,,\quad\text{i.e.}\quad 10\,\log_{10}\mu\,=\,8.17\,\text{dB}\,.\] Therefore, the amplitudes of the odd subharmonics, which appear after each bifurcation step, are "in the mean" just the averaged amplitudes of the old odd components reduced by a constant factor \(\mu^{\,-\,1}\). (The many approximations which have been made in deriving (3.85) require this cautious restriction to averages.) The universal pattern \[\left|\,a_{2k}^{n\,+\,1}\,\right|\,\approx\,\left|\,a_{k}^{n}\,\right|\,,\quad\left|\,a_{2k\,+\,1}^{n\,+\,1}\,\right|\,\approx\,0.152\left|\,a_{(1/2)(2k\,+\,1)}^{n}\,\right| \tag{3.86}\] is shown schematically in Fig. 34 and is reasonably consistent, e.g., with the numerical result found for the quadratic map \(f(x)\,=\,1\,-\,1.401155\,x^{2}\) depicted in Fig. 35.

### Influence of External Noise

The full details of this power spectrum cannot be observed experimentally because there will always be some external noise due to the coupling to other degrees of freedom (see Fig. 36). In order to discuss this perturbation quantitatively, we add a noise term \(\xi_{n}\) to the logistic equation: \[x_{n\,+\,1}\,=\,f_{r}\left(x_{n}\right)\,+\,\xi_{n} \tag{3.87}\] and calculate its influence on the cascade of bifurcations. Figure 35: Numerically determined power spectrum for a quadratic map.
Subsequent odd subharmonics differ by a factor \(\mu^{\,-\,1}\) (after Collet and Eckmann, 1980). Here, the \(\xi_{n}\) are Gaussian-distributed variables with averages \[\langle\xi_{n}\,\xi_{m}\rangle\;=\;\sigma^{2}\,\delta_{n,m} \tag{3.88}\] (similarly their Fourier components \(\xi_{k}\) are Gaussian-distributed), and \(\sigma\) measures the intensity of the white noise. We recall that the new Fourier components \(|\,a_{k}^{n+1}|\) of a \(2^{n+1}\)-cycle are a factor of \(\mu^{-1}\) smaller than the old components \(|\,a_{k}^{n}|\). This means that any finite external noise eventually suppresses all subharmonics above a certain \(n\), as shown in Fig. 36c. In fact, the values \(R_{n}\) (above which all subharmonics become unobservable because they have merged into the chaos provided by the external noise) and the corresponding noise amplitude \(\sigma_{n}\) are related by a power law \[(R_{\infty}\ -\ R_{n})\ \propto\ \sigma_{n}^{\gamma} \tag{3.89}\] where \(\gamma\ =\ \log\delta/\log\mu\). This can be derived as follows: If at \(R_{1}\) a noise level \(\sigma_{1}\) is just sufficient to suppress the first subharmonic \(|a_{k}^{1}|\), then all \(|a_{k}^{n}|\ =\mu^{-n}|a_{k}^{1}|\) will disappear at \(R_{n}\) for \(\sigma_{n}\,=\,\mu^{-n}\,\sigma_{1}\). If the common \(n\) is eliminated, the corresponding scaling relations \[(R_{\infty}\ -\ R_{n}) \propto\ \delta^{-n} \tag{3.90a}\] \[\sigma_{n} \propto\ \mu^{-n} \tag{3.90b}\] yield (3.89). The decrease of \(R_{n}\) with increasing noise amplitude as in (3.89) has been verified numerically, as shown in Fig. 37. The external noise, which produces chaos for \(R_{n}<R_{\infty}\), plays a similar role as a magnetic field which causes a finite magnetization above the critical point of a magnet.
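The numbers quoted in (3.85) and the exponent \(\gamma\) of (3.89) follow from the universal constants alone. A quick numerical check (a sketch; the values of \(\alpha\) and \(\delta\) are the Feigenbaum constants used throughout this chapter):

```python
import math

alpha = 2.5029078750958928   # Feigenbaum's scaling constant
delta = 4.6692016091         # Feigenbaum's constant governing (3.90a)

# Eq. (3.85): reduction factor of the new odd subharmonics
mu_inv = math.sqrt(2.0 * (1.0 + 1.0 / alpha**2)) / (4.0 * alpha)
db = 10.0 * math.log10(1.0 / mu_inv)          # drop in decibels

# Eq. (3.89): eliminate n between (3.90a) and (3.90b)
gamma = math.log(delta) / math.log(1.0 / mu_inv)

print(mu_inv)   # ~0.152
print(db)       # ~8.2 dB
print(gamma)    # ~0.82
```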
This analogy has been worked out by Shraiman, Wayne, and Martin (1981) who have shown, for example, that the Liapunov exponent scales in the presence of external noise like \[\lambda\;=\;r^{\beta}\,\lambda_{0}\,[r^{-1/\gamma}\,\sigma]\;;\quad\beta\;=\;\log 2/\log\delta\;;\quad r\;=\;R_{\infty}\;-\;R \tag{3.91a}\] or equivalently \[\lambda\;=\;\sigma^{\theta}\,\lambda_{1}\,[r\,\sigma^{-\gamma}]\;;\quad\theta\;=\;\log 2/\log\mu \tag{3.91b}\] where \(\lambda_{0,1}\) are universal functions (see Fig. 38). Figure 37: Suppression of the periodic regime by the presence of external noise for the logistic map (after Crutchfield, Farmer and Huberman, 1982). These results have also been obtained by Feigenbaum and Hasslacher (1982) using a decimation of path integrals. Their method, which has a wide range of potential applications, is explained in Appendix E. Eq. (3.91a) is reminiscent of the scaling behavior of the magnetization \(M\) at a second-order phase transition: \[M\;=\;r^{\beta}\,f(r^{-1/\gamma}\,h) \tag{3.92}\] where \(r\,=\,|\,T\,-\,T_{c}|\) is the temperature distance to the critical point, and \(h\) is the magnetic field. For the onset of chaos, where \(\lambda\) changes sign, eq. (3.91a) yields \[0\;=\;\lambda_{0}\,[r^{-1/\gamma}\,\sigma]\;\rightarrow\;r^{-1/\gamma}\,\sigma\;=\;{\rm const.} \tag{3.93}\] i.e. our equation (3.89).

### 3.4 Behavior of the Logistic Map for \(r_{\infty}\leq r\leq 4\)

Let us now discuss the behavior of the logistic map for \(r\,\geq\,r_{\infty}\). We have already seen above that at \(r_{\infty}\) the sequence of bifurcations ends in a set of infinitely many points, the so-called Feigenbaum attractor, which has a Hausdorff dimension \(D^{*}\,=\,0.5388\,\ldots\) Fig. 23 shows that the Liapunov exponent \(\lambda\) of the logistic map at \(r_{\infty}\) is still zero, i.e. the Feigenbaum attractor is not a strange attractor (see Chapter 5 for the definition of this object). But according to Fig.
23, \(\lambda\) becomes mostly positive for \(r\,>\,r_{\infty}\), and it is therefore reasonable to say that chaos starts at the end of the bifurcation region. Although the detailed behavior of the iterates (of the logistic map) appears rather complicated in this region, it shows regularities which are again dictated by the doubling operator and are therefore universal. It will be shown in the first part of this section that for \(r_{\infty}<r\leq 4\) periodic and chaotic regions are densely interwoven, and one finds a sensitive dependence on the parameter values. Next we discuss the structural universality discovered by Metropolis, Stein and Stein (1973), which preceded the work of Feigenbaum (1978). Finally we calculate the invariant density at \(r=4\) and explain the scaling of the reverse band-splitting bifurcations.

### Sensitive Dependence on Parameters

Fig. 23 shows that for \(r_{\infty}<r\leqslant 4\) "chaotic parameter values" \(r\) with \(\lambda>0\) and nonchaotic \(r\)'s with \(\lambda<0\) are densely interwoven. Close to every parameter value where there is chaos, one can find another \(r\) value which corresponds to a stable periodic orbit; that is, the logistic map displays a sensitive dependence on the parameter \(r\). The practical implications of this behavior are worse than those of sensitive dependence on initial conditions. When chaos occurs, the only alternative is to resort to statistical predictions. But for sensitive dependence on parameters, statistical averages become unstable under variations in parameters, because the average behavior of the system may be completely different in the periodic and in the chaotic case. Although there is a rigorous proof (Jakobson, 1981) that the chaotic parameter values in \(r_{\infty}\leqslant r\leqslant 4\) have finite (nonzero) total length, i.e. positive measure, there remain the following questions: * Which fraction of parameter values is chaotic?
* What is the probability that a change in the parameter values will lead to a change in qualitative behavior? Since it is no longer possible to distinguish experimentally (i.e. with finite precision) between chaotic and nonchaotic parameter values, one can only make statistical predictions for the parameter dependence of the system. An answer to these questions has been given by D. Farmer (1985) who calculated numerically the coarse-grained measure (i.e. total length) \(\mu\left(l\right)\) of all chaotic parameter intervals for \(f(x)=rx(1\ -\ x)\) and \(g\left(x\right)=r\sin\left(\pi x\right)\). Coarse grained means that all nonchaotic holes on the \(r\) axis with a size larger than \(l\) were deleted (see Fig. 39). Fig. 40 shows that \(\mu\left(l\right)\) scales like \[\mu\left(l\right)\ =\ \mu\left(0\right)\ +\ A\,l^{\beta} \tag{3.94}\] Figure 39: a) Piece of a fat fractal with measure \(\mu\left(0\right)\); b) its coarse-grained measure \(\mu\left(l\right)\) is larger than \(\mu\left(0\right)\) because only those holes that are bigger than the resolution \(l\) are deleted. where \(\beta=0.45\pm 0.05\) is numerically the same for both maps, whereas \(\mu\left(0\right)=0.8979\) (0.8929) for \(f(x)\) and \(g\left(x\right)\), respectively. The set of chaotic parameter values with a scaling behavior described by eq. (3.94) is an example of a "fat" fractal. Fat fractals have, in contrast to the "thin" fractals considered on page 55, a finite measure (i.e. volume). A typical example of a fat Cantor set is shown in Fig. 41, which is obtained by deleting from the unit interval the central 1/3, 1/9, 1/27, ... of each piece. The remaining lengths scale like \(l_{n}=\frac{1}{2}\left[1-\left(1/3\right)^{n}\right]l_{n-1}\). Using \(N_{n}=2^{n}\), the Hausdorff dimension \(D\) of this fat Cantor set becomes \(D=1\) via eq. (3.70).
However, its volume scales according to: \[\mu\left(l_{n}\right)\,-\,\mu\left(0\right)\,=\,N_{n}\,l_{n}\,-\,N_{\infty}\,l_{\infty}\,=\,\prod\limits_{j=1}^{n}\,\left[1\,-\,\left(\frac{1}{3}\right)^{j}\right]\,-\,\prod\limits_{j=1}^{\infty}\,\left[1\,-\,\left(\frac{1}{3}\right)^{j}\right]\,\propto\,1\,-\,\prod\limits_{j=n+1}^{\infty}\,\left[1\,-\,\left(\frac{1}{3}\right)^{j}\right]\,\propto\,\left(\frac{1}{3}\right)^{n}\quad\mbox{for}\quad n\,\rightarrow\,\infty\,. \tag{3.95}\] Via \(l_{n}\,\propto\,(1/2)^{n}\) for \(n\,\rightarrow\,\infty\) it then follows that: \[\mu\left(l\right)\,-\,\mu\left(0\right)\,\propto\,l^{\beta}\quad\mbox{with}\quad\beta\,=\,\log 3/\log 2\,. \tag{3.96}\] Figure 41: Fat fractal which is constructed by deleting the central 1/3, then 1/9, ... of each remaining subinterval (compare this to the thin fractal in Fig. 32). Figure 40: The logarithm of the change \(\Delta\mu\left(l\right)=\mu\left(l\right)\,-\,\mu\left(0\right)\) in the coarse-grained measure of chaotic parameter intervals plotted against the logarithm of the resolution \(l\) (after Farmer, 1985). Let us now come back to the physical meaning of eq. (3.94). It answers both questions raised above. The measure \(\mu\left(0\right)\) gives the fraction of chaotic parameter values in \(r_{\infty}<r<4\). The exponent \(\beta\) determines the probability \(p\) that a variation in \(r\) will change the qualitative behavior of the iterates. If one is sitting on a chaotic parameter value, \(p\) is proportional to the probability of finding a nonchaotic hole of size \(l\), that is, \(p\propto\mu\left(l\right)\,-\,\mu\left(0\right)\propto l^{\beta}\). For numerical computations of the logistic map (which are usually done with a precision \(l\,\sim\,10^{\,-14}\)), this means that the odds of a mistake (i.e. that a trajectory believed to be chaotic is actually periodic) are, for \(\beta\cong 0.45\), of the order \(10^{\,-6}\), which is acceptable.
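The fat Cantor set of Fig. 41 can be followed numerically; a minimal sketch of (3.95)/(3.96), which checks that the total remaining length stays finite and that the coarse-grained measure scales with \(\beta=\log 3/\log 2\):

```python
import math

# Fat Cantor set: at stage n, N_n = 2**n pieces remain, and each piece keeps
# the fraction (1 - 3**-n) of its parent, so l_n = 0.5 * (1 - 3**-n) * l_{n-1}.
lengths, l = [], 1.0
for n in range(1, 31):
    l *= 0.5 * (1.0 - 3.0**-n)
    lengths.append(l)

# Total remaining length N_n * l_n stays finite: this is a "fat" fractal.
total = [2**(n + 1) * ln for n, ln in enumerate(lengths)]
print(total[-1])   # ~0.56 > 0

# Scaling exponent of eq. (3.96), estimated from consecutive stages
mu0 = total[-1]    # proxy for mu(0)
n = 10
beta = math.log((total[n] - mu0) / (total[n + 1] - mu0)) \
       / math.log(lengths[n] / lengths[n + 1])
print(beta)        # ~log 3 / log 2 = 1.585
```

Since \(\beta>1\) here, this particular fat fractal would not exhibit sensitive parameter dependence in the sense discussed below.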
According to Farmer (1985), one speaks of sensitive dependence on parameters only if \(\beta<1\) (i.e. if the odds of a mistake are larger than in the trivial case where one has \(p\,\sim\,l\)). It has been found (Farmer, 1986) that the set of parameter values where quasiperiodic behavior occurs in the subcritical circle map is also a fat fractal (see Chapter 6). This implies sensitive dependence on those parameters which distinguish between quasiperiodic and mode-locked behavior (i.e. sensitive parameter dependence is not necessarily tied to chaos). Let us finally note that the fact that the exponent \(\beta\) is numerically the same for the logistic map \(f(x)\) and the sine map \(g\left(x\right)\) indicates a sort of _global universality_ which is different from that originally found by Feigenbaum, since it _applies to a set of positive measure_ (volume) rather than just to special points such as the period-doubling transitions.

### Structural Universality

Structural universality in unimodal maps was discovered by Metropolis, Stein and Stein (1973). They considered the iterates of the logistic map \(f(x)\) in the periodic windows. Starting from \(x_{0}=1/2\), i.e. from the \(x\) value which corresponds to the maximum of \(f(x)\), the sequence of iterates \(f^{n}\left(x_{0}\right)\) on a periodic attractor can be characterized by a string \(RL\,\ldots\) where \(R\) or \(L\) indicates whether \(f^{n}\left(x_{0}\right)\) lies to the right or left of \(x_{0}\) (see e.g. Fig. 42). Table 3, which has been computed by Metropolis et al. (1973), shows that the sequence of strings is (up to cycles of length 7) the same for \(f(x)=rx\left(1\,-\,x\right)\) and \(g\left(x\right)=q\,\sin\,(\pi x)\). Figure 42: The map \(x_{n\,+\,1}=rx_{n}\left(1\,-\,x_{n}\right)\) with \(r=3.49856\) displays a 4-cycle of the type \(RLR\).
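The \(RL\)-coding described above is easy to reproduce; a minimal sketch for the superstable 4-cycle of Fig. 42 (parameter value from the text):

```python
r, x = 3.49856, 0.5            # start at the maximum x0 = 1/2
symbols = []
for _ in range(3):             # the next three iterates define the string
    x = r * x * (1.0 - x)
    symbols.append("R" if x > 0.5 else "L")
print("".join(symbols))        # RLR: the pattern of the 4-cycle
x = r * x * (1.0 - x)          # the fourth iterate returns close to 1/2
print(abs(x - 0.5))            # small, since the cycle is (nearly) superstable
```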
This numerical result (which has actually been calculated for cycles up to length 11 and for other unimodal maps) suggests that the ordering of the sequence of _RL_... strings is universal for all maps on the [0, 1] interval which have a differentiable maximum and fall off monotonically on both sides. This so-called _structural universality_ has been put on a rigorous footing by Guckenheimer (1980). It does not depend on the order of the maximum, in contrast to the _metric universality_ of Feigenbaum (1978) (a "metric" is needed there to measure the distances which scale). But it should be noted that structural universality seems to be restricted to one-dimensional maps, because in higher dimensions, up to now, no ordering has been found, and one can have coexisting cycles of different lengths with different basins of attraction (see also sect. 5.7). From the mathematical point of view, the sequence of cycles in a unimodal map \(f(x)\) is completely described by Sarkovskii's theorem (1964). It states that if \(f(x)\) has a point \(x\) which leads to a cycle of period \(p\), then it must also have a point \(x^{\prime}\) which leads to a \(q\)-cycle for every \(q\leftarrow p\), where \(q\) and \(p\) are elements in the following sequence: \[\begin{array}{l}1\leftarrow 2\leftarrow 4\leftarrow 8\leftarrow 16\ldots 2^{m} \ldots\leftarrow\\ \ldots 2^{m}\cdot 9\leftarrow 2^{m}\cdot 7\leftarrow 2^{m}\cdot 5\leftarrow 2^{m}\cdot 3 \ldots\leftarrow\\ \ldots 2^{2}\cdot 9\leftarrow 2^{2}\cdot 7\leftarrow 2^{2}\cdot 5\leftarrow 2^{2}\cdot 3\ldots\leftarrow\\ \ldots 2\cdot 9\leftarrow 2\cdot 7\leftarrow 2\cdot 5\leftarrow 2\cdot 3\ldots\leftarrow\\ \ldots 9\leftarrow 7\leftarrow 5\leftarrow 3\end{array} \tag{3.97}\] \begin{table} \begin{tabular}{l l l l} \hline \hline Period & U-sequence & Parameter value \(r\) & Parameter value \(q\) \\ & & in \(x_{n+1}=rx_{n}(1\ -\ x_{n})\) & in \(x_{n+1}=q\sin(\pi x_{n})\) \\ \hline \hline \end{tabular} \end{table} Table 3: Universal sequences for two unimodal maps.
where the symbol \(\leftarrow\) means "precedes" (for a proof, see the references of this chapter). It should be emphasized that Sarkovskii's theorem is only a statement concerning different \(x\) values at a fixed parameter value. It says nothing about the stability of the periods, nor about the range of parameter values for which they could be observed. It follows from the sequence in eq. (3.97) that if \(f(x)\) has period three, then it must also have all periods \(n\), where \(n\) is an arbitrary integer. This is the famous theorem of Li and Yorke (1975), "Period three implies chaos". But it should be noted that "chaos" in this theorem means only aperiodic behavior and does not automatically imply a positive Liapunov exponent.

### Chaotic Bands and Scaling

The logistic map at \(r\,=\,4\), \[x_{n\,+\,1}\,=\,4\,x_{n}\,(1\,-\,x_{n})\,=\,f_{4}\,(x_{n}), \tag{3.98}\] can actually be solved by the simple change of variables: \[x_{n}\,=\,\frac{1}{2}\,\left[1\,-\,\cos\,(2\,\pi\,y_{n})\right]\,\equiv\,h\,(y_{n}). \tag{3.99}\] Then eq. (3.98) can be converted into \[\frac{1}{2}\,\left[1\,-\,\cos\,(2\,\pi\,y_{n\,+\,1})\right]\,=\,\left[1\,-\,\cos\,(2\,\pi\,y_{n})\right]\,\left[1\,+\,\cos\,(2\,\pi\,y_{n})\right]\,=\,\frac{1}{2}\,\left[1\,-\,\cos\,(4\,\pi\,y_{n})\right] \tag{3.100}\] which has one solution: \[y_{n\,+\,1}\,=\,2\,y_{n}\,\,\mbox{mod}\,1\,\equiv\,g\,(y_{n})\quad\mbox{or}\quad y_{n}\,=\,2^{n}\,y_{0}\,\,\mbox{mod}\,1. \tag{3.101}\] This implies the following solution to eq. (3.98): \[x_{n}\,=\,\frac{1}{2}\,\left[1\,-\,\cos\,(2\,\pi\,2^{n}\,y_{0})\right] \tag{3.102}\] where \[y_{0}\,=\,\frac{1}{2\,\pi}\,\arccos\,(1\,-\,2\,x_{0}). \tag{3.103}\] Using eqns.
(3.98\(-\)3.101), the invariant density \(\rho_{4}\left(x\right)\) of \(f_{4}\left(x\right)\) can be calculated from its definition: \[\rho_{4}\left(x\right)\;=\;\lim_{N\rightarrow\infty}\;\frac{1}{N}\sum_{n\,=\,0}^{N\,-\,1}\delta\left(x\;-\;x_{n}\right)\;=\;\lim_{N\rightarrow\infty}\;\frac{1}{N}\sum_{n\,=\,0}^{N\,-\,1}\delta\left[x\;-\;h\left(y_{n}\right)\right]\;. \tag{3.104}\] Using \(\rho\left(y\right)\;=\;1\) (which holds for the map in eq. (3.101) in analogy to the triangular map on page 30), eq. (3.104) becomes: \[\rho_{4}\left(x\right)\;=\;\int\limits_{0}^{1}\mathrm{d}y\,\rho\left(y\right)\;\delta\left[x\;-\;h\left(y\right)\right]\;=\;\frac{2}{\left|\,h^{\prime}\left[y(x)\right]\,\right|} \tag{3.105}\] i.e. \[\rho_{4}\left(x\right)\;=\;\frac{1}{\pi}\;\frac{1}{\sqrt{x\left(1\;-\;x\right)}} \tag{3.106}\] as depicted in Fig. 43. These results show that the map \(f_{r}\left(x\right)\) becomes ergodic for \(r=4\) and that the invariant density of a chaotic map need not always be a constant. For the Liapunov exponent, eq. (3.106) yields at \(r\;=\;4\): \[\lambda\;=\;\int\limits_{0}^{1}\mathrm{d}x\,\rho_{4}\left(x\right)\,\log\left|f_{4}^{\prime}\left(x\right)\right|\;=\;\log\,2 \tag{3.107}\] i.e. the same value as for the map in eq. (3.101), which demonstrates that the Liapunov exponent is indeed invariant under a change of coordinates. Fig. 44 makes it plausible that the \(r\)-values for the inverse cascade (in which the chaotic regime at \(r\;=\;4\), which extends over \(0\;\leq\;x\;\leq\;1\), is decomposed into finer and finer subintervals \(I_{n}\) that merge into the Feigenbaum attractor) are again determined by the law of functional composition.

### 3.5 Parallels between Period Doubling and Phase Transitions

In the first part of this section we present a dictionary of the corresponding terms used in the bifurcation route to chaos and in the renormalization-group theory for second-order phase transitions.
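Both the closed-form solution (3.102)/(3.103) and the value \(\lambda=\log 2\) of (3.107) are easy to verify numerically. A sketch (the comparison with the closed form is limited to the first iterates because round-off errors grow like \(2^{n}\); the guard against the singular point of \(\log|f_{4}^{\prime}|\) is purely a numerical precaution):

```python
import math

# 1) Closed-form solution (3.102) vs. direct iteration of (3.98)
x0 = 0.2
y0 = math.acos(1.0 - 2.0 * x0) / (2.0 * math.pi)   # eq. (3.103)
x = x0
for n in range(1, 16):
    x = 4.0 * x * (1.0 - x)                        # eq. (3.98)
    x_exact = 0.5 * (1.0 - math.cos(2.0 * math.pi * 2.0**n * y0))  # eq. (3.102)
    assert abs(x - x_exact) < 1e-8

# 2) Liapunov exponent (3.107): orbit average of log|f_4'(x)| = log|4 - 8x|
x, total, count = 0.123456789, 0.0, 0
for _ in range(200000):
    x = 4.0 * x * (1.0 - x)
    d = abs(4.0 - 8.0 * x)
    if d > 0.0:                # skip the measure-zero singular point x = 1/2
        total += math.log(d)
        count += 1
lam = total / count
print(lam)   # ~0.693 = log 2
```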
In the second part we summarize the measurable properties that characterize the Feigenbaum route and discuss some representative experiments. We have already seen in Chapter 2 that the Liapunov exponent corresponds to the order parameter near a second-order phase transition. Table 4 shows that for the bifurcation route to chaos the analogy to a magnetic phase transition can be worked out in more detail. Figure 44: a) Bifurcations for \(r<r_{\infty}\) and the corresponding merging of chaotic regions for \(r>r_{\infty}\); the dark areas indicate the corresponding invariant densities (see Fig. 43). Note the nonlinear scale on the abscissa. (After Grossmann and Thomae, 1977.) b) The lengths \(I_{n}\) of the chaotic intervals are again related to functional composition, and the \(\tilde{r}_{n}\)'s therefore scale like \(\tilde{r}_{n}-r_{\infty}\sim\delta^{-n}\). Both phenomena show a certain self-similarity (in the bifurcation pattern and in the pattern of spin-up/spin-down clusters near a critical point) which forms the basis for a renormalization-group treatment. Universality emerges then because there are only a few relevant eigenvalues (see also Appendices D and E). We can also derive scaling laws for the Liapunov exponent \(\lambda\) and the correlation function \(C(m)\) which are similar to those for the magnetization and the spin-spin correlation near a magnetic phase transition. According to its definition in Chapter 2, the Liapunov exponent of a map \(f\) is given (for \(x_{0}=0\)) by \[\lambda(f)=\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}\log|f^{\prime}\left[f^{i}(0)\right]|.
\tag{3.108}\] Using \[\mathrm{T}^{n}f(x)\,=\,(-\alpha)^{n}\,f^{2^{n}}\!\left(\frac{x}{(-\alpha)^{n}}\right) \tag{3.109}\] and \[\frac{\mathrm{d}}{\mathrm{d}x}\ \mathrm{T}f\,=\,f^{\prime}\left[f\left(-\frac{x}{\alpha}\right)\right]f^{\prime}\left(-\frac{x}{\alpha}\right) \tag{3.110}\] we find \[\lambda\left[\mathrm{T}f\right]=2\lim_{n\to\infty}\frac{1}{2n}\sum_{i=0}^{2n-1}\log|f^{\prime}[f^{i}(0)]|=2\,\lambda\left[f\right] \tag{3.111}\] which can be iterated to \[\lambda\left[f\right]=2^{-n}\lambda\left[\mathrm{T}^{n}f\right]. \tag{3.112}\] By choosing \(f=f_{R}\), we can use \[\mathrm{T}^{n}f_{R}(x)=g(x)+(R-R_{\infty})\cdot\delta^{n}\cdot a\cdot h(x) \tag{3.113}\] in (3.112) which yields, by setting \((R-R_{\infty})\,\delta^{n}=1\), the scaling relation \[\lambda_{f_{R}}=(R-R_{\infty})^{\beta}\,\lambda\left[g(x)+a\cdot h(x)\right] \tag{3.114}\] with \(\beta=\log 2/\log\delta\) as a critical exponent. This equation describes the approach of the Liapunov exponent to zero if a sequence of \(R\)'s with the same \(\mu\) (see Fig. 30) approaches \(R_{\infty}\); i.e., the power law \(\lambda\propto(R-R_{\infty})^{\beta}\) holds for the envelope of \(\lambda\).
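The doubling property (3.111)/(3.112), i.e. that functional composition doubles the Liapunov exponent, can be checked numerically at \(r=4\), where \(\lambda[f_{4}]=\log 2\) and the once-composed map \(f_{4}\circ f_{4}\) should give \(2\log 2\). A sketch (the rescaling by \(\alpha\) in T is irrelevant here, since \(\lambda\) is invariant under smooth coordinate changes):

```python
import math

def f(x):                       # logistic map at r = 4, where lambda = log 2
    return 4.0 * x * (1.0 - x)

def fp(x):                      # its derivative
    return 4.0 - 8.0 * x

def lyap(step, deriv, x, n=200000):
    s = 0.0
    for _ in range(n):
        s += math.log(max(abs(deriv(x)), 1e-300))  # guard against log(0)
        x = step(x)
    return s / n

lam1 = lyap(f, fp, 0.123456789)
lam2 = lyap(lambda x: f(f(x)),                     # once-composed map f o f
            lambda x: fp(f(x)) * fp(x),            # chain rule, cf. (3.110)
            0.123456789)
print(lam1, lam2)   # ~log 2 and ~2 log 2
```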
\begin{table} \begin{tabular}{l l} Phase transitions & Period doubling \\ Ginzburg-Landau functional & One-dimensional map \\ \(H=\int{\rm d}^{\,d}x\,[c(\nabla\sigma)^{2}\,+\,t\sigma^{\,2}\,+\,u\sigma^{\,4}]\) & \(f_{R}(x)\) \\ with parameter vector \(\mu\,=\,(c,\,t,\,u)\) & \\ Distance to the critical point & Distance from \(R_{\infty}\) \\ \(t\,=\,T\,-\,T_{c}\) & \(R\,-\,R_{\infty}\) \\ Order parameter & Liapunov exponent \\ \(\langle\sigma\,(x)\rangle\) (magnetization) & \(\lambda_{R}\) (changes sign at \(R_{\infty}\)) \\ Formation of block spins \(\rightarrow\) & Functional composition \(\rightarrow\) \\ renormalization-group transformation R & doubling operator T \\ with fixed point \(H^{*}\) (\(\mu^{*}\)) & with fixed point \(g\) \\ R\,[\(\mu^{*}\)] = \(\mu^{*}\) & T\,[\(g\)] = \(g\) \\ Linearized renormalization-group transformation & Linearized doubling transformation \\ \({\rm R}^{n}\,[\mu]\,=\,\mu^{*}\,+\,(T\,-\,T_{c})\,\lambda_{1}^{n}\,\tilde{e}_{1}\) & T\({}^{n}f_{R}(x)\,=\,g(x)\,+\,(R\,-\,R_{\infty})\,\cdot\,\delta^{\,n}\cdot\,a\,\cdot\,h(x)\) \\ Parameter space: & Space of functions: \\ \(\tilde{e}_{1}\,=\,\) unstable direction & \(a\cdot\,h(x)\,=\,\) one-dimensional unstable manifold \\ \end{tabular} \end{table} Table 4: Parallels between phase transitions and period doubling. In a similar way, for the correlation function (2.35) \[C\left[m,f\right]\;=\;\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i\,=\,0}^{n\,-\,1}f^{i}\left(0\right)f^{i\,+\,m}\left(0\right) \tag{3.115}\] one finds the scaling relation \[C\left[m,\,\text{T}f\right]\;=\;\alpha^{\,2}\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i\,=\,0}^{n\,-\,1}f^{2i}\left(0\right)f^{2i\,+\,2m}\left(0\right)\;=\;\alpha^{\,2}\lim_{n\rightarrow\infty}\left\{2\;\frac{1}{2n}\sum_{i\,=\,0}^{2n\,-\,1}f^{i}\left(0\right)f^{i\,+\,2m}\left(0\right)\;-\;\right.
\tag{3.116}\] \[\left.\;-\;\frac{1}{n}\sum_{i\,=\,0}^{n\,-\,1}f^{2i}\left[f(0)\right]f^{2i\,+\,2m}\left[f(0)\right]\right\}\] i.e. \[C\left[m,\,\text{T}f\right]\;=\;\alpha^{\,2}\,C\left[2\,m,f\right] \tag{3.117}\] and by using again (3.109): \[C\left[m,f_{R}\right]\;=\;\alpha^{\,-2n}\,C\left[2^{\,-n}\,m,\,g\left(x\right)\;+\;\left(R\;-\;R_{\infty}\right)\cdot\delta^{\,n}\cdot\,a\,\cdot\,h\left(x\right)\right]\,. \tag{3.118}\] Eq. (3.118) leads to a variety of scaling laws, depending on which combination of variables we set equal to unity. We mention that at \(R_{\infty}\) the correlation function decays with a power law in \(m\): \[C\left[m,f_{R_{\infty}}\right]\;=\;\alpha^{\,-2n}\,C\left[2^{\,-n}\,m,\,g\left(x\right)\right]\;=\;m^{\,-\eta}\,C\left[1,g\left(x\right)\right] \tag{3.119}\] with \(\eta\,=\,\log\alpha^{\,2}/\log 2\). These power laws have the following counterparts in magnetic phase transitions: \[\lambda\;\propto\;\left|\,R\;-\;R_{\infty}\right|^{\,\beta}\;\triangleq\;M\;\propto\;|\,T\;-\;T_{c}\,|^{\,\beta} \tag{3.120a}\] \[C\left(m\right)\;\propto\;m^{\,-\eta}\quad\text{at}\quad R_{\infty}\;\triangleq\;C\left(\left|\,x\,\right|\right)\;\propto\;\left|\,x\,\right|^{\,-\eta}\quad\text{at}\quad T_{c} \tag{3.120b}\] where \(M\) is the magnetization and \(C\left(\left|\,x\,\right|\right)\) is the spin-spin correlation function.

### Experimental Support for the Bifurcation Route

After a preponderance of theory, let us now present some experimental support for the Feigenbaum route. First, we summarize its measurable fingerprints: Figure 45: a) Bénard cell with only a two-roll convection pattern of liquid helium; basic frequency \(f=0.5\) sec\({}^{-1}\). b)–d) Power spectrum of the temperature \(x(t)\) with increasing Rayleigh number, which is proportional to \(r\). e) The heights of the \(n\)th subharmonics are compared with Feigenbaum's theory (horizontal lines). (After Libchaber and Maurer, 1980.)
The situation is somewhat better for the nonlinear \(RCL\)-oscillator shown in Fig. 46. The nonlinear element in this circuit is, according to Linsay (1981), the varactor diode, which leads to the following nonlinear relation between charge \(q\) and voltage \(V\): \[V(q)\;=\;\left[\,1\;+\;\frac{V(q)}{0.6}\,\right]^{0.43}\;\frac{q}{C_{0}}\;. \tag{3.121}\] The differential equation for the time dependence of \(q\) is \[L\,\ddot{q}\;+\;R\,\dot{q}\;+\;V(q)\;=\;V_{0}\,\sin\,(2\,\pi f_{1}\,t) \tag{3.122}\] and the circuit acts like an analog computer for a driven nonlinear oscillator. Fig. 46 shows that for special values of \(V_{0}\) (which is proportional to the control parameter \(r\)) the sequence of current signals \(I_{n}\,=\,I(t_{0}\,+\,n\,T)\), where the time \(T\,=\,1/f_{1}\), can indeed be generated from a one-dimensional map with a quadratic maximum. (The current is related to the charge via \(I\,=\,\dot{q}\), and \(I_{n}\) corresponds to \(x_{n}\).) Figure 46: A) Circuit for the driven nonlinear RCL-oscillator. B) The observed current \(I(t\,+\,T)\) vs. \(I(t)\) yields a one-dimensional map with a single maximum. C) Determination of \(\delta\) from the values of the control parameter \(V_{0}\). D) a–c: Subharmonics in the power spectrum for increasing \(V_{0}\); d: comparison with Feigenbaum's theory (horizontal lines). (After Linsay, 1981.) Figure 47: Experimental setup for generation and detection of cavitation noise (a). Sequence of observed (b) and calculated (c) power spectra for different input pressures. The noise amplitude is encoded as grey scale, and the input pressure (which is measured experimentally by the voltage at the driving piezoelectric cylinder) is increased linearly in time. See also the colored version of (c) in Plate V at the beginning of the book. (After Lauterborn and Cramer, 1981.) The corresponding power
spectrum exhibits, as expected, all the features of the bifurcation route and yields an estimate for \(\delta\) which deviates by 10% from Feigenbaum's asymptotic value. See also the phase portraits (\(I(t)\) versus \(V(t)\)) for the nonlinear \(RCL\)-oscillator of Lauterborn et al. (1984) in Plate I at the beginning of the book. We note also that there exists theoretical (Rollins and Hunt, 1982) and experimental (S. Martin, priv. comm.) evidence that the chaotic behavior of \(RCL\)-oscillators with varactor diodes (which were used in the experiments above) is not caused by the nonlinearity of the diode but by its large recovery time. But this situation can again be described by a one-dimensional noninvertible map. To demonstrate that the Feigenbaum route indeed occurs in quite different systems, we finally describe an experiment by Lauterborn and Cramer (1981) in which this route has been observed in acoustics (Fig. 47a). They irradiated water with sound of high intensity and measured the sound output of the liquid. The nonlinear elements in this system are cavitation bubbles, i.e. bubbles filled with water vapor, which are created by the pressure gradients of the initial sound wave and whose wall oscillations are highly nonlinear. Fig. 47 shows a sequence of power spectra obtained experimentally (b) and from a numerical calculation (c) (in which only a single spherical bubble was considered). With increasing input pressure (which is the external control parameter), one observes a subharmonic route to chaos that, besides the sequence \(f_{0}\to f_{0}/2\to f_{0}/4\ldots\rightarrow\) chaos, also contains \(f_{0}/3\). Moreover, the system shows signs of reverse bifurcations where it returns from chaotic behavior to a line spectrum.
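The role of eq. (3.122) as an "analog computer" can be imitated digitally: integrate the driven oscillator with a Runge-Kutta step and sample \(I=\dot{q}\) stroboscopically at the drive period, which is how the one-dimensional return map of Fig. 46B is obtained. The sketch below uses illustrative placeholder values (not Linsay's circuit parameters), and the implicit varactor characteristic (3.121) is replaced by a simple cubic stand-in \(V(q)=q/C_{0}+a\,q^{3}\):

```python
import math

# Illustrative parameters (placeholders, not the actual circuit values)
L, R, C0, a = 1.0, 0.2, 1.0, 1.0
V0, f1 = 2.5, 0.25
T = 1.0 / f1

def V(q):                      # cubic stand-in for the varactor relation (3.121)
    return q / C0 + a * q**3

def rhs(t, q, i):              # q' = i,  L i' = V0 sin(2 pi f1 t) - R i - V(q)
    return i, (V0 * math.sin(2.0 * math.pi * f1 * t) - R * i - V(q)) / L

def rk4(t, q, i, h):           # classical fourth-order Runge-Kutta step
    k1q, k1i = rhs(t, q, i)
    k2q, k2i = rhs(t + h/2, q + h/2*k1q, i + h/2*k1i)
    k3q, k3i = rhs(t + h/2, q + h/2*k2q, i + h/2*k2i)
    k4q, k4i = rhs(t + h, q + h*k3q, i + h*k3i)
    return (q + h/6*(k1q + 2*k2q + 2*k3q + k4q),
            i + h/6*(k1i + 2*k2i + 2*k3i + k4i))

steps = 200                    # integration steps per drive period
q, i, t, h = 0.0, 0.0, 0.0, T / steps
I_n = []
for _ in range(60):
    for _ in range(steps):
        q, i = rk4(t, q, i, h)
        t += h
    I_n.append(i)              # stroboscopic sample I_n = I(t0 + nT)

# Pairs (I_n, I_{n+1}) trace out a return map like that of Fig. 46B.
print(I_n[-3:])
```

Sweeping \(V_{0}\) in such a simulation is the numerical analogue of turning up the drive amplitude in the experiment.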
## Chapter 4 The Intermittency Route to Chaos

By intermittency we mean the occurrence of a signal that alternates randomly between long regular (laminar) phases (so-called intermissions) and relatively short irregular bursts. Such signals have been detected in a large number of experiments. It has also been observed that the number of chaotic bursts increases with an external parameter, which means that intermittency offers a continuous route from regular to chaotic motion. In the first section of this chapter, we present mechanisms for this phenomenon proposed by Pomeau and Manneville (1979) and discuss type-I intermittency, which is generated by an inverse tangent bifurcation. It is shown in the second section that the transition to chaos via intermittency has in fact universal properties and represents one of the rare examples where the (linearized) renormalization-group equations can be solved exactly. These results will be used in Section 3 to demonstrate that intermittency provides a universal mechanism for \(1/f\)-noise in nonlinear systems. In the final section, we summarize typical properties of the intermittency route and discuss some experiments.

### 4.1 Mechanisms for Intermittency

The intermittency route to chaos has been investigated in a pioneering study by Pomeau and Manneville (1979). They solved numerically the differential equations of the Lorenz model, \[\dot{X}\,=\,\sigma\left(Y\,-\,X\right) \tag{4.1a}\] \[\dot{Y}\,=\,-XZ\,+\,rX\,-\,Y \tag{4.1b}\] \[\dot{Z}\,=\,XY\,-\,bZ \tag{4.1c}\] and for the \(Y\)-component they found the behavior shown in Fig. 48. For \(r<r_{c}\), \(Y(t)\) executes a stable periodic motion. Above the threshold \(r_{c}\), the oscillations are interrupted by chaotic bursts, which become more frequent as \(r\) is increased until the motion becomes truly chaotic.
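The behavior of Fig. 48 can be reproduced with a short integration of eqs. (4.1). The parameter values below (\(\sigma=10\), \(b=8/3\), and a threshold near \(r_{c}\approx 166.06\)) are the ones commonly quoted for this transition; they are supplied here as an assumption, since the text does not list the parameters of the figure.

```python
import numpy as np

# Integrate the Lorenz equations (4.1) slightly above the intermittency
# threshold.  sigma = 10, b = 8/3 and r just above r_c ~ 166.06 are
# assumed values (the commonly quoted ones for this transition).
sigma, b, r = 10.0, 8.0 / 3.0, 166.2

def F(v):
    X, Y, Z = v
    return np.array([sigma * (Y - X), -X * Z + r * X - Y, X * Y - b * Z])

def rk4(v, h):
    k1 = F(v)
    k2 = F(v + h / 2 * k1)
    k3 = F(v + h / 2 * k2)
    k4 = F(v + h * k3)
    return v + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

h = 0.002
v = np.array([10.0, 10.0, 160.0])
ys = np.empty(20000)                 # record the Y-component, as in Fig. 48
for n in range(20000):
    v = rk4(v, h)
    ys[n] = v[1]
# For r slightly above r_c, ys should show long stretches of nearly
# periodic oscillation interrupted by chaotic bursts (cf. Fig. 48).
```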
Pomeau and Manneville gave the following interpretation for this behavior: The stable oscillations for \(r<r_{c}\) correspond to a stable fixed point of the Poincaré map (see also Fig. 6). Above \(r_{c}\) this fixed point becomes unstable. Because there are essentially three ways in which a fixed point can lose its stability (in all of them the modulus of an eigenvalue of the linearized Poincaré map becomes larger than unity), Pomeau and Manneville distinguished the three types of intermittency shown in Table 5. (See also Table 7, p. 98, for the form of the signal.)

### Type-I Intermittency

Fig. 49 shows a Poincaré map for the Lorenz model, after Pomeau and Manneville, who plotted the values \(y_{n}\) where \(y\left(t\right)\) crossed the plane \(x=0\). If this figure is compared with Table 5, it is seen that the Lorenz model displays intermittency of type I. This transition to chaos is characterized by an _inverse tangent bifurcation_ in which two fixed points (a stable and an unstable one) merge as depicted in Fig. 50. \begin{table} \begin{tabular}{c p{170.0pt} p{142.3pt}} \hline \hline Type & Characteristic behavior of the eigenvalues & Typical map (\(\varepsilon<0\rightarrow\varepsilon>0\)) \\ \hline I & A real eigenvalue crosses the unit circle at \(+1\). & \(x_{n+1}=\varepsilon+x_{n}+ux_{n}^{2}\) \\ \hline II & Two complex-conjugate eigenvalues cross the unit circle simultaneously. & \(r_{n+1}=(1+\varepsilon)r_{n}+ur_{n}^{3}\), \(\theta_{n+1}=\theta_{n}+\Omega\) \\ \hline III & A real eigenvalue crosses the unit circle at \(-1\). & \(x_{n+1}=-(1+\varepsilon)x_{n}-ux_{n}^{3}\) \\ \hline \hline \end{tabular} \end{table} Table 5: Three types of intermittency. For \(r>r_{c}\), the maps have no stable fixed points.
However, a sort of "memory" of a fixed point is displayed since the motion of the trajectory slows down in the vicinity of \(x_{c}\), and numerous iterations are required to move through the narrow channel between the map and the bisector. This leads to the long laminar regions for values of \(r\) just above \(r_{c}\) in Fig. 48. After the trajectory has left the channel, the motion becomes chaotic until reinjection into the vicinity of \(x_{c}\) starts a new regular phase. The theory of Pomeau and Manneville explains only the laminar motion but gives no information about the mechanism which generates chaos. Another example of type-I intermittency appears in the logistic map \[x_{n+1}\,=\,f_{r}(x_{n})\,=\,rx_{n}(1-x_{n})\,. \tag{4.2}\] Fig. 50: Mechanism for type-I intermittency. a) Poincaré map for \(\varepsilon=r-r_{c}\leq 0\), b) Poincaré map for \(\varepsilon>0\) and motion of the trajectory (note that the “ghost of the fixed point” \(x_{c}\) attracts trajectories on the left hand side and repels them on the right hand side), c) inverse tangent bifurcation. Numerically, it is found that for \(r_{c}=1+\sqrt{8}\) this map exhibits a cycle of period three with subsequent bifurcations, i. e., there is a window in the chaotic regime as shown schematically in Fig. 51. The iterates for \(r\)-values larger and smaller than \(r_{c}\) are shown in Fig. 52. There is a regular cycle of period three slightly above \(r_{c}\); but below \(r_{c}\), laminar regions occur interrupted by chaos. An explanation of this peculiar behavior follows from Fig. 53, which shows the third iterate of \(f_{r}(x)\) at \(r=r_{c}\). There are three fixed points that become unstable for \(r<r_{c}\) and lead to intermittency of type I. It should be noted that inverse tangent bifurcations provide (in contrast to pitchfork bifurcations, in which the number of fixed points is doubled) the only mechanism by which an odd number of fixed points can be generated in the logistic map.
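The intermittent iterates of Fig. 52 are easy to reproduce. The sketch below iterates the logistic map slightly below \(r_{c}=1+\sqrt{8}\); the laminarity criterion (comparing \(x_{n+3}\) with \(x_{n}\) against an arbitrary tolerance) is an ad-hoc choice made here, not part of the original analysis.

```python
import numpy as np

# Iterate the logistic map just below the period-3 window at r_c = 1 + sqrt(8):
# long laminar stretches (near-period-3 motion) interrupted by chaotic bursts.
r_c = 1.0 + np.sqrt(8.0)
r = r_c - 0.001                  # slightly below the tangent bifurcation
x = 0.7
xs = np.empty(6000)
for n in range(6000):
    x = r * x * (1.0 - x)
    xs[n] = x

# In a laminar phase the third iterate barely moves: x_{n+3} stays close
# to x_n (tolerance 1e-3 is an arbitrary cutoff).
laminar = np.abs(xs[3:] - xs[:-3]) < 1e-3
print("laminar fraction:", laminar.mean())
```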
### Length of the Laminar Region

As the next step, we calculate the average length \(\langle l\rangle\) of a laminar region (as a function of the distance \(\varepsilon=r-r_{c}\) from the critical point) for the logistic map. It will become clear from our derivation that the result for \(\langle l\rangle(\varepsilon)\) is not confined to this special map but holds for any Poincaré map that leads to type-I intermittency. Figure 52: Iterates of the logistic map starting from \(x=0.7\); a) in the stable three-cycle region \(r_{c}-r=-0.02\); b) in the intermittent region \(r_{c}-r=0.002\). (After Hirsch et al., 1981.) Expanding \(f_{r}^{3}(x)\) around the points \(x_{c}\) and \(r_{c}\) that are determined by \[\frac{\mathrm{d}}{\mathrm{d}x}\,f_{r_{c}}^{3}(x_{c})\,=\,1,\,\,\,\,f_{r_{c}}^{3}(x_{c})\,=\,x_{c} \tag{4.3}\] we obtain \[f_{r}^{3}(x)\,=\,f_{r}^{3}[x_{c}\,+\,(x-x_{c})]\,=\,x_{c}\,+\,(x-x_{c})\,+\,a_{c}\,(x-x_{c})^{2}\,+\,b_{c}\,(r-r_{c})\,. \tag{4.4}\] In this region we can therefore safely replace the difference equation (4.7) by the differential equation \[\frac{{\rm d}\,y}{{\rm d}\,l}\,=\,a\,y^{\,2}\,+\,\varepsilon \tag{4.9}\] (\(l\) counts the iterations in the laminar region) which after integration yields \[l(y_{out}\,,\,y_{in})\,=\,\frac{1}{\sqrt{a\,\varepsilon}}\left[\arctan\left(\frac{y_{out}}{\sqrt{\varepsilon/a}}\right)\,-\,\arctan\left(\frac{y_{in}}{\sqrt{\varepsilon/a}}\right)\right]\,. \tag{4.10}\] To find the average length \(\langle l\rangle\) of a laminar region, we assume that after having left the laminar region at \(y_{out}=c\), the point becomes, after some irregular bursts, reinjected into \(|y|<c\) at \(y_{in}\) with a probability function \(P(y_{in})\), which is symmetric about \(x_{c}\), i. e. \(P(y_{in})\,=\,P(-y_{in})\).
This yields \[\langle l\rangle\,=\,\int\limits_{-c}^{c}{\rm d}\,y_{in}\,P\,(y_{in})\,l\,(c\,,\,y_{in})\,=\,\frac{1}{\sqrt{a\,\varepsilon}}\,\arctan\left(\frac{c}{\sqrt{\varepsilon/a}}\right)\,. \tag{4.11}\] For \(c/\sqrt{\varepsilon/a}\,\gg\,1\), the average length \(\langle l\rangle\) varies as \[\langle l\rangle\,\propto\,\varepsilon^{-1/2}\,. \tag{4.12}\] This characteristic variation was first derived by Pomeau and Manneville (1980) and has been verified numerically for the logistic map, as shown in Fig. 54.

### 4.2 Renormalization-Group Treatment of Intermittency

The intermittency phenomenon has also been investigated by the renormalization-group method using the doubling operator which we encountered previously for the Feigenbaum route. The idea is as follows: One considers a generalization \(f\left(x\right)\) of the map (4.7) for \(\varepsilon=0\) to arbitrary exponents \(z>1\) which for \(x\to 0\) has the form \[f(x\to 0)\,=\,x\,+\,u\,|x|^{z}\,. \tag{4.13}\] Its second iterate \(f^{2}\left(x\right)\) shows (because of the linear term in \(x\)), after proper rescaling, the same asymptotic behavior (see Fig. 55). This is reminiscent of Fig. 29 for the logistic map. It could, therefore, be asked whether repeated application of the doubling operator T to a function of type (4.13) could also lead to a fixed point \(f^{\star}\left(x\right)\) of T: \[\text{T}f^{\star}\left(x\right)\,=\,\alpha f^{\star}\left[f^{\star}\left(\frac{x}{\alpha}\right)\right]\,=\,f^{\star}\left(x\right) \tag{4.14}\] but with the boundary conditions (4.13), i. e. \(f^{\star}\left(0\right)=0\) and \(f^{\star\,\prime}\left(0\right)=1\) instead of \(f^{\star}\left(0\right)=1\) and \(f^{\star\,\prime}\left(0\right)=0\) for the Feigenbaum bifurcations. It has been shown by Hu and Rudnick (1982) that together with the new boundary condition (4.13) the fixed-point equation (4.14), which is characteristic for intermittency, can be solved exactly.
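The prediction (4.12) can be checked directly by counting the iterations needed to traverse the channel of the canonical type-I map of Table 5. The sketch below uses \(u=1\) and an arbitrarily chosen channel half-width \(c\):

```python
import numpy as np

# Passage time through the narrow channel of the type-I map
# x' = eps + x + x**2 (u = 1): the number of iterations needed to cross
# from x = -c to x = +c should scale as eps**(-1/2), eq. (4.12).
def channel_length(eps, c=0.1):
    x, n = -c, 0
    while x < c:
        x = eps + x + x * x
        n += 1
    return n

eps_values = np.array([1e-4, 1e-5, 1e-6])
lengths = np.array([channel_length(e) for e in eps_values])
slope = np.polyfit(np.log(eps_values), np.log(lengths), 1)[0]
print(slope)   # close to -1/2
```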
Here, the trick is to write the recursion relation \[x^{\prime}\,=\,f(x) \tag{4.15}\] in implicit form \[G(x^{\prime})\,=\,\,G\left(x\right)\,-\,a \tag{4.16}\] i. e. \[x^{\prime}\left(x\right)\,=\,G^{\,-\,1}\left[G\left(x\right)\,-\,a\right]\,=\,f(x) \tag{4.17}\] where \(a\) is a free parameter. The fixed-point equation \[\alpha\,f^{\ast}\left[f^{\ast}\left(x\right)\right]\,=\,f^{\ast}\left(\alpha\,x\right) \tag{4.18}\] then becomes \[\alpha\,x^{\prime\prime}\left(x\right)\,=\,x^{\prime}\left(\alpha\,x\right) \tag{4.19}\] or by operating on this with \(G\): \[G\left(\alpha\,x^{\prime\prime}\right)\,=\,G\left[x^{\prime}\left(\alpha\,x\right)\right]\,=\,G\left(\alpha\,x\right)\,-\,a\,. \tag{4.20}\] Next, eq. (4.16) is used to obtain \[G\left(x^{\prime\prime}\right)\,=\,G\left(x^{\prime}\right)\,-\,a\,=\,G\left(x\right)\,-\,2a \tag{4.21}\] i. e. \[\frac{1}{2}\,\,G\left(x^{\prime\prime}\right)\,=\,\frac{1}{2}\,\,G\left(x\right)\,-\,a\,. \tag{4.22}\] Comparison of (4.20) and (4.22) indicates that to solve the fixed-point equation, \(G\) must have the property \[\frac{1}{2}\,\,G^{\ast}\left(x\right)\,=\,G^{\ast}\left(\alpha\,x\right)\,. \tag{4.23}\] The simple choice \(G^{*}(x)\,=\,|\,x\,|^{-(z-1)}\) with \(\alpha\,=\,2^{\frac{1}{(z-1)}}\) yields the desired result. The fixed-point function therefore becomes \[f^{*}(x)\,=\,G^{*\,-1}\,[G^{*}(x)\,-\,a]\,=\,[\,|\,x\,|^{-(z-1)}\,-\,a]^{-1/(z-1)} \tag{4.24}\] which for \(a\,=\,(z\,-\,1)\,u\) fulfills boundary condition (4.13). This derivation shows that the fixed-point map for intermittency is mathematically related to a translation \(G(x^{\prime})\,=\,G(x)\,-\,a\); however, a simple physical explanation for this connection is not clear. It is, of course, not enough to find the fixed-point function \(f^{*}(x)\); one also wants to classify the perturbations to \(f^{*}\) according to their relevance (see, e. g., Table 4).
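The closed-form solution (4.24) can be verified numerically: for arbitrary \(z>1\) and \(a\) (the values below are chosen arbitrarily), \(f^{*}\) satisfies the fixed-point equation in the form (4.14), \(\alpha f^{*}[f^{*}(x/\alpha)]=f^{*}(x)\).

```python
import numpy as np

# Check the exact fixed-point function (4.24) of the doubling operator:
# f*(x) = [x**(-(z-1)) - a]**(-1/(z-1)), alpha = 2**(1/(z-1)),
# satisfies alpha * f*(f*(x/alpha)) = f*(x) for x > 0.
z, a = 2.5, 0.3                       # arbitrary test values
alpha = 2.0 ** (1.0 / (z - 1.0))

def f_star(x):
    return (x ** (-(z - 1.0)) - a) ** (-1.0 / (z - 1.0))

x = np.linspace(0.01, 0.2, 50)
lhs = alpha * f_star(f_star(x / alpha))
rhs = f_star(x)
print(np.max(np.abs(lhs - rhs)))      # numerically zero

# The small-x behavior reproduces the boundary condition (4.13),
# f*(x) ~ x + u*x**z with u = a/(z-1).
```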
We investigate, therefore, how the doubling transformation T acts (to linear order in \(\varepsilon\)) on a function \[f_{\varepsilon}(x)\,=\,f^{*}(x)\,+\,\varepsilon\,h_{\lambda}(x)\quad\mbox{ for }\quad\varepsilon\,\ll\,1. \tag{4.25}\] Using the definition (4.14) for T we find: \[{\rm T}f_{\varepsilon}\,=\,\alpha f_{\varepsilon}\left[f_{\varepsilon}\left(\frac{x}{\alpha}\right)\right] \tag{4.26}\] \[\,=\,\alpha f^{*}\left[f^{*}\left(\frac{x}{\alpha}\right)\,+\,\varepsilon\,h_{\lambda}\left(\frac{x}{\alpha}\right)\right]\,+\,\varepsilon\,\alpha h_{\lambda}\left[f^{*}\left(\frac{x}{\alpha}\right)\,+\,\varepsilon\,h_{\lambda}\left(\frac{x}{\alpha}\right)\right]\] \[\,=\,\alpha f^{*}\left[f^{*}\left(\frac{x}{\alpha}\right)\right]\,+\,\varepsilon\,\alpha\left\{f^{*\,\prime}\left[f^{*}\left(\frac{x}{\alpha}\right)\right]h_{\lambda}\left(\frac{x}{\alpha}\right)\,+\,h_{\lambda}\left[f^{*}\left(\frac{x}{\alpha}\right)\right]\right\}\,+\,{\rm O}\,(\varepsilon^{2})\] \[\,=\,f^{*}(x)\,+\,\lambda\,\varepsilon\,h_{\lambda}(x)\,+\,{\rm O}\,(\varepsilon^{2})\,.\] The last equation holds only if \(h_{\lambda}(x)\) is an eigenfunction, with the eigenvalue \(\lambda\), of the linearized doubling operator \({\rm L}_{f^{*}}\): \[{\rm L}_{f^{*}}\left[h_{\lambda}(x)\right]\,\equiv\,\alpha\,\left\{f^{*\,\prime}\left[f^{*}(x)\right]h_{\lambda}(x)\,+\,h_{\lambda}\left[f^{*}(x)\right]\right\}\,=\,\lambda\,h_{\lambda}(\alpha\,x) \tag{4.27}\] in analogy with eq. (3.50) for the Feigenbaum route. We now show that the method used above (to find the fixed-point function \(f^{*}\)) allows us also to find the spectrum of eigenvalues \(\lambda\) and the corresponding eigenfunctions \(h_{\lambda}\). First we write \(f_{\varepsilon}(x)\) in implicit form using eq. (4.17): \[f_{\varepsilon}(x)\,=\,f^{*}(x)\,+\,\varepsilon\,h_{\lambda}(x)\,=\,x^{\prime}\,=\,G_{\varepsilon}{}^{-1}\left[G_{\varepsilon}(x)\,-\,a\right].
\tag{4.28}\] If we expand \[G_{\varepsilon}\left(x\right)\;=\;G^{*}\left(x\right)\;+\;\varepsilon H_{\lambda}\left(x\right) \tag{4.29}\] then \(h_{\lambda}\left(x\right)\) can be expressed in terms of \(H_{\lambda}\left(x\right)\) (and vice versa) by comparing the terms linear in \(\varepsilon\) on both sides of (4.28). Next we consider the second iterate, \[x^{\prime\prime}\left(x\right)\;=\;f_{\varepsilon}\left[f_{\varepsilon}\left(x\right)\right] \tag{4.30}\] and apply \(G_{\varepsilon}\) to this. This yields \[G_{\varepsilon}\left(x^{\prime\prime}\right)\;=\;G_{\varepsilon}\left(x^{\prime}\right)\;-\;a\;=\;G_{\varepsilon}\left(x\right)\;-\;2\,a \tag{4.31}\] or more explicitly: \[G^{*}\left(x^{\prime\prime}\right)\;+\;\varepsilon H_{\lambda}\left(x^{\prime\prime}\right)\;=\;G^{*}\left(x\right)\;+\;\varepsilon H_{\lambda}\left(x\right)\;-\;2\,a\;. \tag{4.32}\] Because \(G^{*}\left(x\right)\) has the form of a simple power of \(x\), we try a similar ansatz for \(H_{\lambda}\left(x\right)\): \[H_{\lambda}\left(x\right)\;=\;\left|\;x\,\right|\;^{-p}\;. \tag{4.33}\] Using the property (4.23) of \(G^{*}\left(x\right)\), (4.32) then becomes \[G^{*}\left(\alpha\,x^{\prime\prime}\right)\;+\;\lambda\,\varepsilon H_{\lambda}\left(\alpha\,x^{\prime\prime}\right)\;=\;G^{*}\left(\alpha\,x\right)\;+\;\lambda\,\varepsilon H_{\lambda}\left(\alpha\,x\right)\;-a \tag{4.34}\] or \[G_{\lambda\varepsilon}\left(\alpha\,x^{\prime\prime}\right)\;=\;G_{\lambda\varepsilon}\left(\alpha\,x\right)\;-\;a \tag{4.35}\] \[\;\rightarrow\;\alpha\,x^{\prime\prime}\;=\;G_{\lambda\varepsilon}^{-1}\left[G_{\lambda\varepsilon}\left(\alpha\,x\right)\;-\;a\right] \tag{4.36}\] where \[\lambda\;=\;2^{\frac{p+1-z}{z-1}}\;. \tag{4.37}\] With (4.28) this translates into \[\alpha f_{\varepsilon}\left[f_{\varepsilon}\left(x\right)\right]\;=\;f_{\lambda\varepsilon}\left(\alpha\,x\right)\;=\;f^{*}\left(\alpha\,x\right)\;+\;\lambda\,\varepsilon\,h_{\lambda}\left(\alpha\,x\right)\;. \tag{4.38}\] By comparing this result with eq.
(4.26) we see that \(\lambda\) is indeed the eigenvalue of \(h_{\lambda}\). Since one application of the doubling operator T replaces \(f\) by its second iterate, the average laminar length of \({\rm T}^{n}f\) is smaller by a factor \(2^{n}\) than that of \(f\). Using (4.26), this yields after many iterations \[\langle l\rangle\left[f\right]\,=\,2^{n}\,\langle l\rangle\left[{\rm T}^{n}f\right]\,,\qquad{\rm T}^{n}f(x_{0})\,=\,f^{\star}(x_{0})\,+\,\varepsilon\,\lambda^{n}\,h_{\lambda}(x_{0})\] from which for \(\varepsilon\,\lambda^{n}\ =\ 1\) we obtain with (4.42): \[\langle l\rangle\ \propto\ \varepsilon^{\,-\,\nu}\quad\mbox{with}\quad\nu\ =\ \frac{z\ -\ 1}{z}\,. \tag{4.44}\] For \(z=2\) this agrees with our previous result (4.12). One can show with the same method that a perturbation which is linear in \(x\), i.e. \[f(x)\ =\ f^{\star}\,(x)\ +\ \varepsilon x \tag{4.45}\] leads to \[\langle l\rangle\ \propto\ \varepsilon^{\,-\,1} \tag{4.46}\] and perturbations \(\varepsilon x^{m}\) with \(m>z\) are irrelevant. Finally, we mention that the effect of external noise with amplitude \(\sigma\) on intermittency has been treated by Hirsch, Nauenberg, and Scalapino (1982) with the net result that \(\langle l\rangle\) scales like \[\langle l\rangle\ =\ \varepsilon^{\,-\,\nu}\,g\,(\sigma^{\,\mu}\,\varepsilon)\quad\mbox{with}\quad\mu\ =\ \frac{z\ -\ 1}{z\ +\ 1} \tag{4.47}\] where \(g\) is a universal function.

### 4.3 Intermittency and \(1/f\)-Noise

It has been observed experimentally that the power spectra \(S_{f}\) of a large variety of physical systems (see Table 6) diverge at low frequencies with a power law \(1/f^{\delta}\) \((0.8<\delta<1.4)\). This phenomenon is called \(1/f\)-noise. Despite considerable theoretical efforts, a general theory encompassing the \(1/f^{\delta}\)-divergences in these different experiments is still lacking. In the following, we show that a class of maps which generates intermittent signals also displays \(1/f^{\delta}\)-noise, and we link the exponent \(\delta\) to the universal properties of the map using the renormalization-group approach.
Although the intermittency mechanism for \(1/f\)-noise is \(-\) as we shall demonstrate below \(-\) well verified numerically for maps, it still remains unresolved whether it also provides an explanation for the experiments shown in Table 6. (We do not think that the intermittency mechanism, which is very sensitive to external perturbations, could explain the robust \(1/f\)-noise found in resistors. But there is a good chance of finding this mechanism in chemical reactions and in Bénard convection; see Manneville (1980) and Dubois et al. (1983).) \begin{table} \begin{tabular}{l l} System & Signal \\ Carbon film & Current \\ Metal film & Current \\ Semiconductor & Current \\ Metal contact & Current \\ Semiconductor contact & Current \\ Ionic solution contact & Current \\ Superconductor & Flux flow \\ Vacuum tube & Current \\ Junction diode & Current \\ Schottky diode & Current \\ Zener diode & Current \\ Bipolar transistor & Current \\ Field effect transistor & Current \\ Thermocell & Thermovoltage \\ Electrolytic concentration cell & Voltage \\ Quartz oscillator & Frequency \\ Earth (5 days mean of rotation) & Frequency \\ Sound and speech sources & Loudness \\ Nerve membrane & Potential \\ Highway traffic & Current \\ \end{tabular} \end{table} Table 6: Systems showing \(1/f\)-noise. We want to calculate the power spectrum \(S_{f}\) for the map \[x_{n+1}\,=\,f(x_{n}) \tag{4.48}\] in Fig. 58, where \(x_{n}\geq 0\). In other words, we only use that part of the map where the "ghost of the fixed point" is repulsive (compare Figs. 50 and 65). Therefore our mechanism for \(1/f\)-noise only works for type-III (and type-II) intermittency (Ben-Mizrachi et al., 1985).
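A concrete map with these properties is \(x_{n+1}=x_{n}+x_{n}^{z}\ {\rm mod}\ 1\) (the map used for Fig. 61): the part of the map beyond the laminar channel produces an effectively random reinjection. A minimal numerical check of the low-frequency divergence of \(S_{f}\):

```python
import numpy as np

# Iterate x' = x + x**z (mod 1) for z = 2 and compare the power spectrum
# at low and high frequencies; the low-frequency part should dominate
# strongly, reflecting the 1/f**delta divergence.
z = 2.0
x = 0.3
N = 2 ** 16
xs = np.empty(N)
for n in range(N):
    x = (x + x ** z) % 1.0
    xs[n] = x

spec = np.abs(np.fft.rfft(xs - xs.mean())) ** 2
# Band-averaged power at low and high frequencies.
low = spec[1:50].mean()
high = spec[-5000:].mean()
print(low / high)   # >> 1: the spectrum diverges at low frequencies
```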
It is useful to express \(S_{f}\) via the correlation function \(C\left(m\right)\): \[S_{f}\,\propto\,\lim_{N\rightarrow\infty}\,\frac{1}{N}\,\sum_{m=0}^{N}\,\cos\left(2\,\pi\,mf\right)C\left(m\right) \tag{4.49}\] where \[C\left(m\right)\,=\,\lim_{N\rightarrow\infty}\,\frac{1}{N}\,\sum_{n=0}^{N}\,x_{n+\,m}\,x_{n}. \tag{4.50}\] Fig. 58: The map \(f(x)\) has the limiting behavior \(f(x\to 0)\,=\,x\,+\,ux^{z}\) and is arbitrary beyond \(x\,=\,c\) with the only requirement that this part of the map produces random reinjection into the region \(0\,\leq\,x_{0}\,\leq\,c\) with a probability \(\tilde{P}\left(x_{0}\right)\). Fig. 59: a) The iterates \(x_{n}\,=\,f^{n}\left(x_{0}\right)\) as a function of time, showing laminar and chaotic behavior according to whether the trajectory is in [0, \(c\)] or in the chaotic region; b) the idealized signal. (This result follows by Fourier transformation from the definitions in eqns. (4.49) and (4.50).) To evaluate \(C\left(m\right)\), we idealize the signal as shown in Fig. 59b; i. e., we assume that \(x_{n}\) is practically zero in the laminar regions and replace the short burst regions by lines of height one. \(C\left(m\right)\) then becomes proportional to the conditional probability of finding a signal at time \(m\), given that there occurred a signal at time zero. Next, we express \(C\left(m\right)\) in terms of the probability \(P(l)\) of finding an intermission of length \(l\), which we shall calculate below in a universal way. Fig.
60 shows that \[C\left(1\right) = P\left(1\right)\] \[C\left(2\right) = P\left(2\right)\,+\,P\left(1\right)^{2}\,=\,P\left(2\right)\,+\,C\left(1\right)P\left(1\right)\] \[C\left(m\right) = P\left(m\right)\,+\,C\left(1\right)P\left(m\,-\,1\right)\,+\,\ldots\,+\,C\left(m\,-\,1\right)P\left(1\right) \tag{4.51}\] which can be written as \[C\left(m\right)\,=\,\sum\limits_{k\,=\,0}^{m}\,C\left(m\,-\,k\right)P\left(k\right)\,+\,\delta_{m,0} \tag{4.52}\] if we define \(P\left(0\right)\,=\,0\), \(C\left(0\right)\,=\,1\). We now use eq. (4.24) to calculate the probability \(P\left(l\right)\) of finding a laminar region of length \(l\) for (4.48). \(P(l)\) is related to the probability \(\tilde{P}\left(x_{0}\right)\) via \[\tilde{P}\left(x_{0}\right)\mathrm{d}x_{0}\,=\,\tilde{P}\left[x_{0}\left(l\right)\right]\,\left|\,\frac{\mathrm{d}\,x_{0}}{\mathrm{d}l}\,\right|\,\,\mathrm{d}l\,\equiv\,P\left(l\right)\mathrm{d}l \tag{4.53}\] Figure 60: The probability of finding a signal at \(m\), assuming there was a signal at zero, can be expressed by \(P(l)\). \[\to P\,\left(l\right)\,=\,\tilde{P}\left[x_{0}\left(l\right)\right]\,\left|\,\frac{\mathrm{d}x_{0}}{\mathrm{d}l}\,\right| \tag{4.54}\] since it follows from Fig. 58 that \[f^{\,l}\left(x_{0}\right)\,=\,c\,\,\,\rightarrow\,\,x_{0}\,=\,x_{0}\left(l\right)\,. \tag{4.55}\] \(x_{0}\left(l\right)\) can be calculated by using the doubling operator. In the absence of relevant perturbations (which will be discussed later), we have \[\mathrm{T}^{\,n}f\left(x_{0}\right)\,=\,\alpha^{\,n}f^{\,2^{n}}\left(x_{0}/\alpha^{\,n}\right)\,\approx\,f^{\ast}\left(x_{0}\right)\,,\quad\mathrm{for}\quad n\,\gg\,1 \tag{4.56}\] i. e. the function is driven to the fixed point. This yields \[f^{\,2^{n}}\left(x_{0}\right)\,=\,\alpha^{\,-n}f^{\ast}\left(\alpha^{\,n}x_{0}\right)\,. \tag{4.57}\] Fig. 61: Numerically determined power spectra for \(z=5/2\) and \(z=2\) compared to eq. (4.62) (after Procaccia and Schuster, 1983). Fig.
61 shows that this result agrees reasonably well with the numerically determined power spectra of the map \[x_{n+1}\,=\,x_{n}\,+\,x_{n}^{z}\,\,{\rm mod}\,\,1\,,\qquad z\,=\,\frac{5}{2}\quad{\rm and}\quad z\,=\,2\,. \tag{4.63}\] Let us now briefly discuss the effect of perturbations. The low frequency divergence of the power spectrum arises because arbitrarily long laminar regions (\(P(l)\,\propto\,l^{-z/(z-1)}\)) occur with finite probability in the (unperturbed) map in Fig. 58. But we also showed in Section 2 that in the presence of relevant perturbations (as e. g. a shift \(\varepsilon\) from tangency) the average duration of an intermission becomes finite: \[\langle l\rangle\,\sim\,\varepsilon^{-\nu}\,. \tag{4.64}\] This yields a cutoff \[f_{\varepsilon}\,\sim\,\langle l\rangle^{-1}\,\sim\,\varepsilon^{\nu} \tag{4.65}\] in the \(1/f^{\delta}\)-behavior of \(S_{f}\) as shown in Fig. 62.

### 4.4 Experimental Observation of the Intermittency Route

Table 7 summarizes some measurable characteristic properties of the intermittency route to chaos. The different types of intermittency can be distinguished by the form of the signal and by the distribution \(P(l)\) of the laminar lengths. \begin{table} \begin{tabular}{c c c} Type & Poincaré map & Laminar signal \\ \hline I & \(x_{n+1}\,=\,x_{n}\,+\,x_{n}^{2}\,+\,\varepsilon\) & increases monotonously \\ II & \(r_{n+1}\,=\,(1\,+\,\varepsilon)\,r_{n}\,+\,u\,r_{n}^{3}\), \(\theta_{n+1}\,=\,\theta_{n}\,+\,\Omega\) & spirals \\ III & \(x_{n+1}\,=\,-(1\,+\,\varepsilon)\,x_{n}\,-\,u\,x_{n}^{3}\) & alternates \\ \end{tabular} \end{table} Table 7: Characteristic properties of different types of intermittency. Below we present a derivation of \(P(l)\) and describe two representative experiments in which type-I intermittency has been detected. Type-II intermittency has (to the best of our knowledge) not yet been found in a real experiment.
This section closes with a brief report on the first experimental observation of type-III intermittency.

### Distribution of Laminar Lengths

We assume that the signal is randomly reinjected (with a probability \(\tilde{P}(x_{0})\)) into the laminar regime in such a way that we can use eq. (4.54): \[P(l)\,=\,\tilde{P}(x_{0})\,\left|\,\frac{\mathrm{d}x_{0}}{\mathrm{d}l}\,\right|\,\,. \tag{4.66}\] In order to obtain \(x_{0}(l)\), we approximate, as in (4.9), the Poincaré map for type-I intermittency (see Table 5) \[x_{n+1}\,=\,\varepsilon\,+\,x_{n}\,+\,ux_{n}^{2} \tag{4.67}\] in the laminar region by the differential equation \[\frac{\mathrm{d}x}{\mathrm{d}l}\,=\,\varepsilon\,+\,ux^{2}\,\,. \tag{4.68}\] This yields by integration \[l\,=\,\frac{1}{\sqrt{\varepsilon u}}\left[\arctan\left(\frac{c}{\sqrt{\varepsilon/u}}\right)\,-\,\arctan\left(\frac{x_{0}}{\sqrt{\varepsilon/u}}\right)\right] \tag{4.69}\] where \(c\) is the maximum value of \(x(l)\) in the laminar regime (see Fig. 58). \(P(l)\) follows from eqns. (4.66) and (4.69): \[P(l)\,=\,\frac{\varepsilon}{2c}\left\{1\,+\,\tan^{2}\left[\arctan\left(\frac{c}{\sqrt{\varepsilon/u}}\right)\,-\,l\,\sqrt{\varepsilon u}\right]\right\} \tag{4.70}\] and \[\langle l\rangle\,=\,\int\limits_{0}^{\infty}\,\mathrm{d}l\,P(l)\,l\,\propto\,\varepsilon^{-1/2}\quad\text{for}\quad\varepsilon\,\to\,0\,\,. \tag{4.71}\] The distributions \(P(l)\) for the two other types of intermittency are obtained in a similar way, with the net results \[P(l)\,\sim\,\frac{\varepsilon^{2}\,\mathrm{e}^{4\varepsilon l}}{(\mathrm{e}^{4\varepsilon l}-1)^{2}}\qquad\quad\text{for type II} \tag{4.72}\] and \[P(l)\,\sim\,\frac{\varepsilon^{3/2}\,\mathrm{e}^{4\varepsilon l}}{(\mathrm{e}^{4\varepsilon l}-1)^{3/2}}\qquad\quad\text{for type III}. \tag{4.73}\] For type-II intermittency eq. (4.66) has to be replaced by \(P(l)=\tilde{P}(r_{0})\,r_{0}\,|\,\mathrm{d}\,r_{0}/\mathrm{d}\,l\,|\) because the Poincaré map is two-dimensional.

### Type-I Intermittency

Fig.
63 shows the vertical velocity as a function of time for a Bénard experiment. The signal shows a behavior which is typical for type-I intermittency. The nonlinear \(RCL\)-oscillator described on page 63 also displays the intermittency route. Type-I intermittency is indicated in Fig. 64 by the Poincaré map, the scaling behavior of the lengths of the laminar regions, and the maximum in \(P(l)\) for \(l>0\). Fig. 63: Intermittency in a Bénard experiment: The vertical velocity component measured in the middle of a Bénard cell changes with increasing Rayleigh number from periodic motion (a) via intermittent motion (b) to chaos (c) (after Bergé et al., 1980).

### Type-III Intermittency

Type-III intermittency was first observed by M. Dubois, M. A. Rubio and P. Bergé (1983) in Bénard convection in a small rectangular cell. They measured the local horizontal temperature gradient via the modulation of a light beam that was sent through the cell. Fig. 65a shows the time dependence of the light intensity that is characteristic for type-III intermittency. The intermittency appears simultaneously with a period-doubling bifurcation. One observes the growth of a subharmonic amplitude together with a decrease of the fundamental amplitude. When the subharmonic amplitude reaches a high value, the signal loses its regularity, and turbulent bursts appear. Figure 64: Intermittency in the nonlinear \(RCL\)-oscillator: a) \(I(t+5T)\) versus \(I(t)\), which corresponds to the fifth iterate of the logistic map at tangency, shown in b). c) The measured average length of the laminar regions scales like \(\langle l\rangle\propto\varepsilon^{-0.43}\) (where \(\varepsilon\sim V_{0}-V_{c}\)), in reasonable agreement with the prediction of Manneville and Pomeau, \(\langle l\rangle\propto\varepsilon^{-0.5}\). d) \(P(l)\) vs. laminar lengths \(l\) (in units of \(5~{}T\)) for \(\varepsilon=2.5\cdot 10^{-4}\).
(After Jeffries and Pérez, 1982.) By plotting subsequent maxima \(I_{n}\) of both the subharmonic mode (even \(n\), crosses) and the fundamental mode (odd \(n\), squares), the Poincaré map shown in Fig. 65b is obtained. Its form can be described by \[I_{n+2}\ =\ (1\ +\ 2\varepsilon)\,I_{n}\ +\ bI_{n}^{3} \tag{4.74}\] where \(b\) is a constant and \(\varepsilon\ \propto\ (R\ -\ R_{c})\) measures the distance to the critical Rayleigh number \(R_{c}\) (which corresponds to the threshold of the intermittent behavior). Equation (4.74) can be derived from the map \[I_{n+1}\ =\ f(I_{n})\ \equiv\ -\,(1\ +\ \varepsilon)\,I_{n}\ -\ u\,I_{n}^{3} \tag{4.75}\] with \(b\ =\ u\,(2\ +\ 4\,\varepsilon)\). Its eigenvalue \[\lambda\ =\ f^{\prime}\,(0)\ =\ -\,(1\ +\ \varepsilon) \tag{4.76}\] crosses the unit circle at \(-1\), which again signals type-III intermittency according to Table 5. Fig. 65: a) Time dependence of the light intensity, which is roughly proportional to the local horizontal temperature gradient. b) Poincaré map \(I_{n+2}\) versus \(I_{n}\) constructed from the data in a) for \(\varepsilon\ =\ 0.098\). The amplitudes of the light modulation in the turbulent bursts have not been drawn. Note that the “ghost of the fixed point” \(\circ\) is purely repulsive. c) Number \(N\) of laminar lengths with \(l>T_{0}\), i. e. \(N=\int\limits_{T_{0}}^{\infty}P\,(l)\,\mathrm{d}l\), versus \(T_{0}\). The experimental points agree with the line obtained from (4.73) for \(\varepsilon\ =\ 0.098\). (After Dubois et al., 1983.)

## Chapter 5 Strange Attractors in Dissipative Dynamical Systems

In the first part of this chapter we show that nonlinear dissipative dynamical systems lead naturally to the concept of a strange attractor. In Section 2, the Kolmogorov entropy is introduced as the fundamental measure for chaotic motion. Section 3 deals with the problem of how much information about a strange attractor can be obtained from a measured random signal.
We discuss the reconstruction of the trajectory in phase space from the measured time series of a single variable and introduce generalized dimensions and entropies. It is demonstrated how these quantities can be obtained from a measurement, and how one can extract from them the distribution of singularities in the invariant measure (which characterizes the static structure of a strange attractor) and the fluctuation spectrum of the Kolmogorov entropy (which describes the dynamical evolution of the trajectory on the attractor). Finally, we present in the last section a collection of pictures of strange attractors and fractal boundaries.

### 5.1 Introduction and Definition of Strange Attractors

In this section, we consider dissipative systems that can be described either by flows or maps. Let us begin with dissipative flows. These are described by a set of autonomous first-order differential equations, \[\dot{\vec{x}}\,=\,\vec{F}(\vec{x}),\qquad\vec{x}\,=\,(x_{1},\,x_{2},\,\ldots\,x_{d}) \tag{5.1}\] and the term dissipative means that an arbitrary volume element \(V\) enclosed by some surface \(S\) in phase space \(\{\vec{x}\}\) contracts. The surface \(S\) evolves by having each point on it follow an orbit generated by (5.1). This yields, by the divergence theorem, \[\frac{\text{d}\,V}{\text{d}\,t}\,=\,\int\limits_{V}\,\text{d}^{\,d}x\,\biggl{(}\sum\limits_{i\,=\,1}^{d}\frac{\partial F_{i}}{\partial x_{i}}\biggr{)} \tag{5.2}\] and dissipative systems are defined by \(\text{d}\,V/\text{d}\,t<\,0\). An example of this kind of flow is given by the Lorenz model \[\begin{array}{l}\dot{X}\,=\,\,-\,\,\sigma X\,+\,\,\sigma\,Y\\ \dot{Y}\,=\,\,-\,\,XZ\,+\,\,rX\,-\,\,Y\\ \dot{Z}\,=\,\,\,\,\,\,\,XY\,-\,\,bZ\end{array} \tag{5.3}\] for which one finds via (5.2) \[\frac{\,{\rm d}\,V}{\,{\rm d}\,t}\,=\,\,-\,\,(\sigma\,+\,\,1\,+\,\,b)\,V\,<\,0;\,\,\,\,\,(\sigma\,>\,0,\,b\,>\,0) \tag{5.4}\] i. e.
the volume element contracts exponentially in time: \[V(t)\,=\,V(0)\,{\rm e}^{\,-(\sigma\,+\,1\,+\,b)\,t}\,. \tag{5.5}\] If, on the other hand, the trajectory generated by the equations of the Lorenz model for \(r\,=\,28,\ \sigma\,=\,10,\ b\,=\,8/3\) is considered (see Fig. 66), one finds that a) it is attracted to a bounded region in phase space; b) the motion is erratic, i.e., the trajectory makes one loop to the right, then a few loops to the left, then to the right, etc.; and c) there is a sensitive dependence of the trajectory on the initial conditions, i.e., if instead of (0, 0.01, 0) an adjacent initial condition is taken, the new solution soon deviates from the old, and the number of loops is different. Fig. 67 shows a plot of the \(n\)th maximum \(M_{n}\) of \(Z\) versus \(M_{n\,-\,1}\). The resulting map is approximately triangular, which corresponds, according to the material discussed in Chapter 2, to a chaotic sequence of \(M_{n}\)'s. Summarizing: The trajectory depends sensitively on the initial conditions; it is chaotic; it is attracted to a bounded region in phase space; and (according to eq. (5.4)) the volume of this region contracts to zero. This means that the flow of the three-dimensional Lorenz system generates a set of points whose dimension is less than three; i.e., its volume in three-dimensional space is zero. At first sight, one might think of the next lower integer dimension, two. However, this is forbidden by the _Poincaré–Bendixson_ theorem, which states that there is no chaotic flow in a bounded region in two-dimensional space. We refer, e.g., to the monograph by Hirsch and Smale (1965) for a rigorous proof of this theorem. However, Fig. 68 makes it plausible that both the continuity of the flow lines and the fact that a line divides a plane into two parts restrict the trajectories in two dimensions so strongly that the only possible attractors for a bounded region are limit cycles or fixed points.
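The behavior just described is easy to verify numerically. The following sketch (ours, not from the book) integrates the Lorenz equations (5.3) with a fourth-order Runge–Kutta step, using the parameter values and initial condition quoted above, and collects the successive maxima \(M_{n}\) of \(Z\) that make up the approximately triangular map of Fig. 67; the step size and iteration counts are arbitrary choices.

```python
# Minimal numerical check (ours) of the Lorenz behavior described above:
# integrate eqs. (5.3) with a hand-rolled RK4 step and collect the
# successive maxima M_n of Z, as in Fig. 67.

def lorenz(s, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = s
    return (-sigma * x + sigma * y, -x * z + r * x - y, x * y - b * z)

def rk4_step(f, s, dt):
    k1 = f(s)
    k2 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt / 6.0 * (a + 2 * b2 + 2 * c + d)
                 for si, a, b2, c, d in zip(s, k1, k2, k3, k4))

def z_maxima(n_steps=150_000, dt=0.001, transient=50_000):
    s = (0.0, 0.01, 0.0)          # the initial condition quoted in the text
    maxima, z_prev = [], None
    for i in range(n_steps):
        s_new = rk4_step(lorenz, s, dt)
        if i > transient and z_prev is not None:
            if s[2] > z_prev and s[2] > s_new[2]:   # local maximum of Z
                maxima.append(s[2])
        z_prev, s = s[2], s_new
    return maxima

maxima = z_maxima()
pairs = list(zip(maxima, maxima[1:]))   # the points (M_{n-1}, M_n) of Fig. 67
```

Plotting `pairs` reproduces the single-humped return map of Fig. 67; the trajectory itself stays confined to a bounded region, in accordance with property a).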
The solution to this problem is that the set of points to which the trajectory of the Lorenz system is attracted, the so-called Lorenz attractor, has a Hausdorff dimension which is noninteger and lies between two and three (the precise value is \(D=2.06\)). This leads, in a natural way, to the concept of a strange attractor, which appears in a large variety of physical, nonlinear systems. A _strange attractor_ has the following properties (a more formal definition can be found in the review article by Eckmann and Ruelle, 1985): 1. It is an attractor, i.e., a bounded region of phase space \(\{\vec{x}\}\) to which all sufficiently close trajectories from the so-called basin of attraction are attracted asymptotically for long enough times. We note that the basin of attraction can have a very complicated structure (see the pictures in Sect. 5.4). Furthermore, the attractor itself should be indecomposable; i.e., the trajectory should visit every point on the attractor in the course of time. A collection of isolated fixed points is therefore not a single attractor. 2. The property which makes the attractor strange is the sensitive dependence on the initial conditions; i.e., despite the contraction in volume, lengths need not shrink in all directions, and _points which are arbitrarily close initially become exponentially separated on the attractor for sufficiently long times_. This leads to a positive Kolmogorov entropy, as we shall see in the next section. All strange attractors that have been found up to now in dissipative systems have fractal Hausdorff dimensions. Since there exists no generally accepted formal definition of a strange attractor (Ruelle, 1980; Mandelbrot, 1982), it is not yet clear whether a fractal Hausdorff dimension follows already from properties 1 and 2 or should be additionally required for a strange attractor. A strange attractor arises typically when the flow contracts the volume element in some directions, but stretches it along the others.
To remain confined to a bounded domain, the volume element is folded at the same time. By analogy to the broken linear maps in Chapter 2, this stretching and backfolding process produces a chaotic motion of the trajectory on the strange attractor (see also Fig. 69). Because the definition given above describes the properties of a set of points, the concept of a strange attractor is not confined to flows, and dissipative maps can also generate strange attractors. A map \[\vec{x}\left( n + 1 \right) = \vec{G}\left[ \vec{x}\left( n \right) \right]\ ;\ \ \ \ \vec{x}\left( n \right) = \ \left[ x_{1}\left( n \right),\,\ldots\,,\, x_{d}\left( n \right) \right] \tag{5.6a}\] is called dissipative if it leads to a contraction of volume in phase space; i.e., if the absolute value of its Jacobian \(J\), by which a volume element is multiplied after each iteration, is smaller than unity: \[\mid J\mid = \ \left| \det\,\left( \frac{\partial G_{i}}{\partial x_{j}} \right) \right|\ < 1. \tag{5.6b}\]

Figure 69: a) Two strange attractors I and II with different basins of attraction separated by a boundary \(B\). b) Deformation of a volume element on a strange attractor with increasing time. This leads to the foliated fractal structure shown in Fig. 90c.

The Poincaré–Bendixson theorem, which restricts the dimension of strange attractors generated by flows to values larger than two, does not hold for maps. This is because maps generate discrete points, and the restrictions imposed by the continuity of the flow are lifted. Dissipative maps can therefore lead to strange attractors with dimensions smaller than two. Let us consider two illustrative examples which, because of their lower dimensionality, are easier to visualize than the Lorenz attractor.

### Baker's Transformation

Fig. 70 shows the usual baker's transformation, which is an area-preserving map (reminiscent of a baker kneading dough), and the non-area-preserving, dissipative baker's transformation.
The mathematical expression for the latter is \[x_{n+1} = 2x_{n}\bmod 1 \tag{5.7a}\] \[y_{n+1} = \begin{cases}ay_{n}&\text{for}\quad 0\ \leq x_{n}<\frac{1}{2}\\ \frac{1}{2}\ +\ ay_{n}&\text{for}\quad\frac{1}{2}\ \leq x_{n}\ \leq\ 1\end{cases} \tag{5.7b}\] where \(a<1/2\).

Fig. 70: a) Baker’s transformation; b) dissipative baker’s transformation.

The first equation (5.7a) is our old friend from Chapter 2: the transformation \(\sigma\) which leads to the Bernoulli shift. It has a Liapunov exponent (in the \(x\)-direction), \(\lambda_{x}=\log 2>0\), which leads to the sensitive dependence on the initial conditions and makes the object resulting from repeated applications of this map to the unit square a strange attractor. The attractor is an infinite sequence of horizontal lines, and its basin of attraction consists of all points within the unit square. The Liapunov exponent in the \(y\)-direction is \(\lambda_{y}=\log a<0\), and lengths are contracted in this direction such that the net result (of the stretching in the \(x\)- and shrinking in the \(y\)-direction) is a volume contraction, as required for a dissipative map. The Hausdorff dimension \(D_{B}\) of this strange attractor can be calculated as follows: In the \(x\)-direction the attractor is simply one-dimensional (as for the map \(\sigma(x)\) of Chapter 2). The Hausdorff dimension in the \(y\)-direction follows from its definition \[N(l)\,\propto\,l^{-D_{y}}\quad\text{for}\quad l\to 0 \tag{5.8}\] and from the self-similarity of the attractor in the vertical direction, shown in Fig. 70b. This yields \[\frac{N(a)}{N(a^{2})}\,=\,\frac{1}{2}\,=\,a^{D_{y}}\,\to\,D_{y}\,=\,\log\,\left(\frac{1}{2}\right)/\log a \tag{5.9}\] and finally \[D_{B}\,=\,1\,+\,D_{y}\,=\,1\,+\,\frac{\log\,2}{|\log a|}\,\,.
\tag{5.10}\]

### Dissipative Hénon Map

This is the two-dimensional analogue of the logistic map introduced by Hénon (1976), and we recall its recursion relation from Chapter 1 \[x_{n\,+\,1}\,=\,1\,-\,ax_{n}^{2}\,+\,y_{n} \tag{5.11a}\] \[y_{n\,+\,1}\,=\,b\,x_{n}\,. \tag{5.11b}\] This map is area contracting, i.e., dissipative, for \(\mid b\mid\,<\,1\) because its Jacobian is just \[\left|\,\det\,\left(\begin{matrix}-\,2\,ax_{n}&1\\ b&0\end{matrix}\right)\,\right|\,=\,\,\left|\,b\,\right|\,\,. \tag{5.12}\] The action of the map is shown in Fig. 71. Let us now examine its iterates for, e.g., \(b=0.3\), \(a=1.4\). Fig. 72a shows the result of an iteration with \(10^{4}\) steps, and we have indicated the dynamics by enumerating some successive points on the attractor, which looks like a very tangled curve. Figs. 72b–c show details of the regions inside the box of the previous figure and reveal the self-similar structure of the attractor. The Hausdorff dimension of the Hénon attractor is \(D(a=1.4\), \(b=0.3)=1.26\). This result was obtained by placing a square net of width \(l\) over the diagram, counting the number \(N(l)\) of squares occupied by points, and forming \(D=-\lim\limits_{l\to 0}\log N(l)/\log l\). If Fig. 72c is resolved into six “leaves”, the relative probability of each leaf can be estimated by simply counting its number of points. The height of each bar in Fig. 72d is the relative probability, and the width is the thickness of the corresponding leaf. The different heights of the bars in Fig. 72d show that the _Hénon attractor is inhomogeneous_. This inhomogeneity cannot be described by the Hausdorff dimension alone, and in the following we shall therefore introduce an infinite set of dimensions which characterize the static structure (i.e. the distribution of points) of the attractor.
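The box-counting procedure just described can be sketched in a few lines (our illustration, not from the text). With a finite orbit and only two box sizes instead of a careful extrapolation \(l\to 0\), the result is only a rough estimate of the published value \(D=1.26\); the orbit length and the two scales below are arbitrary choices.

```python
import math

# Rough numerical version (ours) of the box-counting estimate described in
# the text: iterate the Henon map (5.11), cover the attractor with squares
# of width l, and form the dimension from the counts N(l) at two scales.

def henon_orbit(n, a=1.4, b=0.3, transient=100):
    x, y, pts = 0.0, 0.0, []
    for i in range(n + transient):
        x, y = 1.0 - a * x * x + y, b * x
        if i >= transient:
            pts.append((x, y))
    return pts

def n_boxes(pts, l):
    # number of grid squares of width l occupied by at least one point
    return len({(math.floor(x / l), math.floor(y / l)) for x, y in pts})

pts = henon_orbit(100_000)
l1, l2 = 0.05, 0.01
D_est = math.log(n_boxes(pts, l2) / n_boxes(pts, l1)) / math.log(l1 / l2)
```

With these modest statistics `D_est` comes out in the vicinity of the quoted value 1.26, systematically a little low because thinly populated boxes are missed.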
However, before this step, it is useful to discuss the Kolmogorov entropy that describes the dynamical behavior on the strange attractor.

### 5.2 The Kolmogorov Entropy

The Kolmogorov entropy (Kolmogorov, 1959) is the most important measure by which chaotic motion in an (arbitrary-dimensional) phase space can be characterized. Before we introduce this quantity, it is useful to recall that the thermodynamic entropy \(S\) measures the disorder in a given system.

Figure 72: a) The Hénon attractor for \(10^{4}\) iterations. Some successive iterates have been numbered to illustrate their erratic movement on the attractor. b), c) Enlargements of the squares in the preceding figure. d) The height of each bar is the relative probability to find a point in one of the six leaves in c). (After Farmer, 1982a, b.)

A simple example for a system where \(S\) increases is that of gas molecules that are initially confined to one half of a box, but are then suddenly allowed to fill the whole container. The disorder in this system increases because the molecules are no longer separated from the other half of the box. This increase of disorder is coupled with an increase of our ignorance about the state of the system (before the confinement was lifted, we knew more about the positions of the molecules). More precisely, the entropy \(S\), which can be expressed as \[S\propto\ -\ \sum\limits_{i}\ P_{i}\ \log\ P_{i} \tag{5.13}\] where \(\{P_{i}\}\) are the probabilities of finding the system in states \(\{i\}\), measures, according to Shannon et al. (1949) (see Appendix F), the information needed to locate the system in a certain state \(i^{\star}\); i.e., \(S\) is a measure of our ignorance about the system. This example from statistical mechanics shows that disorder is essentially a concept from information theory.
It is therefore not too surprising that the Kolmogorov entropy \(K\), which measures "how chaotic a dynamical system is", can also be defined by Shannon's formula in such a way that \(K\) becomes proportional to the rate at which information about the state of the dynamical system is lost in the course of time.

### Definition of K

\(K\) can be calculated as follows (Farmer, 1982a, b): Consider the trajectory \(\vec{x}(t)=[x_{1}(t),\,\ldots\,x_{d}(t)]\) of a dynamical system on a strange attractor and suppose that the \(d\)-dimensional phase space is partitioned into boxes of size \(l^{d}\). The state of the system is now measured at intervals of time \(\tau\). Let \(P_{i_{0}\,\ldots\,i_{n}}\) be the joint probability that \(\vec{x}(t=0)\) is in box \(i_{0}\), \(\vec{x}(t=\tau)\) is in box \(i_{1}\), ..., and \(\vec{x}(t=n\,\tau)\) is in box \(i_{n}\). According to Shannon, the quantity \[K_{n}\ =\ -\ \sum\limits_{i_{0}\ldots\,i_{n}}P_{i_{0}\,\ldots\,i_{n}}\ \log P_{i_{0}\,\ldots\,i_{n}} \tag{5.14}\] is proportional to the information needed to locate the system on a special trajectory \(i_{0}^{\star}\ldots\,i_{n}^{\star}\) with precision \(l\) (if one knows a priori only the probabilities \(P_{i_{0}\,\ldots\,i_{n}}\)). Therefore, \(K_{n+1}-K_{n}\) is the additional information needed to predict in which cell \(i_{n+1}\) the system will be, if we know beforehand that it was in cells \(i_{0}\ldots i_{n}\). The \(K\)-entropy is then defined as the average rate at which this information is lost: \[K\ =\ \lim_{\tau\to 0}\ \lim_{l\to 0}\ \lim_{N\to\infty}\ \frac{1}{N\tau}\ \sum_{n\,=\,0}^{N-1}\left(K_{n+1}\,-\,K_{n}\right)\,. \tag{5.15}\] The limit \(l\to 0\) (which has to be taken _after_ \(N\to\infty\)) makes \(K\) independent of the particular partition. For maps with discrete time steps \(\tau=1\), the limit \(\tau\to 0\) is omitted. Table 8 shows that \(K\) is indeed a useful measure of chaos. \(K\) becomes zero for regular motion, it is infinite in random systems, but it is a constant larger than zero if the system displays deterministic chaos. Here we assumed for simplicity that a) \(P_{i_{0}\,\ldots\,i_{n}}\) factorizes into \(P_{i_{0}\,\ldots\,i_{n-1}}\cdot(1/N)\), where \(N\) is the number of possible new cells which evolve from \(i_{0}\,\ldots\,i_{n-1}\), and b) \(K_{n+1}\,-\,K_{n}\,=\,K_{1}\,-\,K_{0}\) for all \(n\).
\begin{table}
\begin{tabular}{l l}
_Regular motion_ & Initially adjacent points stay adjacent: \(P_{i_{0}}=\,l\), \(P_{i_{0}i_{1}}=\,l\cdot 1\), hence \(K=\,0\) \\
_Chaotic motion_ & Initially adjacent points become exponentially separated: \(K\) is a finite constant \(>0\) \\
_Random motion_ & Initially adjacent points are distributed with equal probability over all new cells: \(K\to\infty\) \\
\end{tabular}
\end{table}
Table 8: \(K\)-entropies for (one-dimensional) regular, chaotic and random motion.

### Connection of \(K\) to the Liapunov Exponents

For one-dimensional maps, \(K\) is just the positive Liapunov exponent (see Table 8 and eq. (2.12)). In higher-dimensional systems, we lose information about the system because the cell in which it was previously located spreads over new cells in phase space at a rate which is determined by the positive Liapunov exponents (see Fig. 73). It is therefore plausible that the rate \(K\) at which information about the system is lost is equal to the (averaged) sum of positive Liapunov exponents (Pesin, 1977): \[K = \int{\rm d}^{\,d}x\,\rho\left(\vec{x}\right)\,\sum_{i}\,\lambda_{i}^{\,+}\left(\vec{x}\right)\,. \tag{5.16}\] Here \(\rho\left(\vec{x}\right)\) is the invariant density of the attractor. In most cases, the \(\lambda\)'s are independent of \(\vec{x}\); the integral over \(\rho\) then becomes unity, and \(K\) reduces to a simple sum. The definition of the Liapunov exponent \(\lambda\) for a one-dimensional map \(G\left(x\right)\) (see eq. (2.9)), \[{\rm e}^{\,\lambda} = \lim_{N\rightarrow\infty}\left(\prod_{n=0}^{N-1}\,\left|\frac{{\rm d}G}{{\rm d}x_{n}}\right|\right)^{1/N} \tag{5.17}\] can be easily generalized to \(d\) dimensions, where we have \(d\) exponents for the different spatial directions, \[\left({\rm e}^{\,\lambda_{1}},{\rm e}^{\,\lambda_{2}}\ldots{\rm e}^{\,\lambda_{d}}\right) = \lim_{N\rightarrow\infty}\left({\rm magnitude\ of\ the\ eigenvalues\ of\ }\prod_{n=0}^{N-1}J(\vec{x}_{n})\right)^{1/N} \tag{5.18}\]

Figure 73: A two-dimensional map transforms a small circle into an ellipse with minor and major radii distorted according to the Liapunov exponents \(\lambda_{x}\) and \(\lambda_{y}\).
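Eq. (5.17) can be checked numerically on a concrete one-dimensional map. This sketch (ours) accumulates \(\frac{1}{N}\sum_n \log|{\rm d}G/{\rm d}x_n|\) for the logistic map \(G(x)=rx(1-x)\) of Chapter 2, whose Liapunov exponent at \(r=4\) is known analytically to be \(\log 2\); the initial point and iteration counts are arbitrary choices.

```python
import math

# Numerical evaluation (ours) of eq. (5.17) for the logistic map
# G(x) = r x (1 - x); at r = 4 the Liapunov exponent equals log 2.
def liapunov_1d(r=4.0, x0=0.3, n=100_000, transient=1000):
    x, acc = x0, 0.0
    for i in range(n + transient):
        if i >= transient:
            acc += math.log(abs(r * (1.0 - 2.0 * x)))   # log |dG/dx| at x_n
        x = r * x * (1.0 - x)
    return acc / n

lam = liapunov_1d()   # converges towards log 2 = 0.6931...
```

The positive value of `lam` is exactly what, via eq. (5.16), gives this map its positive \(K\)-entropy.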
Note that the negative exponents \({\rm e}^{\,\lambda^{-}}\) do not enter \(K\) because, due to such an exponent, _no_ new cells are covered after one time step. \[J(\vec{x})\,=\,\left(\frac{\partial G_{i}}{\partial x_{j}}\right) \tag{5.19}\] is the Jacobian matrix of the map \(\vec{x}_{n\,+\,1}\,=\,\vec{G}\,(\vec{x}_{n})\). Note that the Liapunov exponents \(\{\lambda_{i}\}\) obtained via (5.18) are invariant under coordinate transformations in phase space; i.e., from (5.16), \(K\) is also invariant, as one would expect for such an important physical quantity. Let us briefly comment on the computation of Liapunov exponents for flows. First, there is a difference in the calculation of Liapunov exponents for maps (\(\lambda_{\rm M}\)) and flows (\(\lambda_{\rm F}\)), which can be explained by the following trivial example. The Liapunov exponent \(\lambda_{\rm M}\) of the map \[x_{n\,+\,1}\,=\,\,ax_{n}\,\rightarrow\,x_{n}\,=\,\,{\rm e}^{\,n\,\log a}\,x_{0} \tag{5.20}\] is obviously \(\lambda_{\rm M}\,=\,\log a\), whereas one obtains for the flow \[\dot{x}\,=\,\,ax\,\rightarrow\,x(t)\,=\,\,{\rm e}^{\,at}\,x(0)\;, \tag{5.21}\] the result that nearby trajectories separate at a rate \(a\); i.e., the Liapunov exponent \(\lambda_{\rm F}\) is simply \(\lambda_{\rm F}\,=\,a\). (Both examples show no chaos, of course, because backfolding is missing.) For general flows described by an autonomous differential equation \[\dot{\vec{x}}\,=\,\vec{f}(\vec{x})\;, \tag{5.22}\] the difference \(\vec{\varepsilon}\,(t)\) between infinitesimally neighboring trajectories (see Fig. 74) develops according to \[\dot{\vec{\varepsilon}}\,=\,\,M\,(t)\,\vec{\varepsilon} \tag{5.23}\] where \[M_{ij}\,(t)\,=\,\frac{\partial f_{i}}{\partial x_{j}}\,\left[\vec{x}\,[t,\vec{x}\,(0)]\right] \tag{5.24}\] is the Jacobian matrix taken at the point \(\vec{x}\left(t\right)\). Therefore, in order to integrate eq. (5.23) one has to integrate eq. (5.22) first to know \(\vec{x}\left[t,\vec{x}\left(0\right)\right]\). However, eq.
(5.23) can be integrated formally, yielding \[\vec{\varepsilon}\left(t\right)\,=\,\left\{\hat{\mathbb{T}}\exp\,\left[\,\int\limits_{0}^{t}\mathrm{d}t^{\,\prime}\,M\left(t^{\,\prime}\right)\right]\right\}\,\vec{\varepsilon}\left(0\right)\,\equiv\,L\left(t\right)\,\vec{\varepsilon}\left(0\right) \tag{5.25}\] where the time-ordering operator \(\hat{\mathbb{T}}\) has to be introduced because the matrices \(M\left(t\right)\) and \(M\left(t^{\,\prime}\right)\) usually do not commute at different times \(t\) and \(t^{\,\prime}\). The Liapunov exponents \(\lambda_{1}\,\ldots\,\lambda_{d}\) of the flow are, in analogy to eq. (5.18), defined as \[\left(\mathrm{e}^{\,\lambda_{1}},\mathrm{e}^{\,\lambda_{2}},\ldots\,\mathrm{e}^{\,\lambda_{d}}\right)\,=\,\lim_{t\,\rightarrow\,\infty}\,\,\left(\mathrm{magnitude\,\,of\,\,the\,\,eigenvalues\,\,of\,\,}L\left(t\right)\right)^{1/t}\,. \tag{5.26}\] The Liapunov exponents in eq. (5.26) generally depend on the choice of the initial point \(\vec{x}\left(0\right)\). Even if \(\vec{x}\left(t\right)\) moves on a strange attractor, a change in \(\vec{x}\left(0\right)\) could place the system into the basin of attraction of another attractor with a different set of \(\lambda_{i}\)'s (see e.g. Fig. 69). We will not discuss all numerical methods which have been developed in order to extract the Liapunov exponents from eqns. (5.22–24) (see the References of this section for some examples), but only explain the simplest method, which yields the largest Liapunov exponent \(\lambda_{m}\). Expanding, in eq. (5.25), \(\vec{\varepsilon}\left(0\right)\) with respect to the eigenvectors \(\vec{\varepsilon}_{j}\) of \(L\left(t\right)\), i.e.
\[\vec{\varepsilon}\left(0\right)\,=\,\sum\limits_{j\,=\,1}^{d}a_{j}\,\vec{\varepsilon}_{j}\,\,\,;\,\,\,\,\,\,a_{j}\,=\,\vec{\varepsilon}_{j}\cdot\,\vec{\varepsilon}\left(0\right) \tag{5.27}\] we obtain by using \[L\left(t\right)\,\vec{\varepsilon}_{j}\,\propto\,\mathrm{e}^{\,\lambda_{j}t}\,\vec{\varepsilon}_{j}\,\,\,\,\,\,\mathrm{for}\,\,\,\,\,\,\,t\,\rightarrow\,\infty \tag{5.28}\] via eq. (5.25): \[\left|\,\vec{\varepsilon}\left(t\right)\right|\,=\,\,\left|\,\sum\limits_{j\,=\,1}^{d}a_{j}\,\vec{\varepsilon}_{j}\,\mathrm{e}^{\,\lambda_{j}t}\,\mathrm{e}^{\,\mathrm{i}\psi_{j}t}\,\right|\,\rightarrow\,\left|\,a_{m}\,\right|\,\mathrm{e}^{\,\lambda_{m}t}\,\,\,\,\,\,\mathrm{for}\,\,\,\,\,\,\,t\,\rightarrow\,\infty\,. \tag{5.29}\] Here, \(\psi_{j}\) denotes the phase angle of the \(j\)'th eigenvalue of \(L\left(t\right)\), which can be complex, and \(\mathrm{e}^{\,\lambda_{m}t}\) dominates the sum in eq. (5.29) because the remaining terms decay as \(\exp\,\left[-\,\left|\,\lambda_{m}\,-\,\lambda_{j}\right|t\right]\). In order to obtain \(\lambda_{m}\), one could therefore start with any randomly chosen value for \(\vec{\varepsilon}\left(0\right)\), calculate \(\vec{\varepsilon}\left(t\right)\) by numerical integration of eqns. (5.22–24), and extract \(\lambda_{m}\) via eq. (5.29). To avoid overflow in the computer, this is usually done in steps, as shown in Fig. 75: \[\vec{\varepsilon}\left(\tau\right)\,=\,\vec{\varepsilon}\left(0\right)\,\mathrm{e}^{\,\lambda_{1}\tau}\,;\quad\vec{\varepsilon}\left(2\,\tau\right)\,=\,\left[\vec{\varepsilon}\left(\tau\right)/\left|\,\vec{\varepsilon}\left(\tau\right)\right|\right]\mathrm{e}^{\,\lambda_{2}\tau}\,;\;\ldots\] \[\lambda_{m}\,=\,\lim_{n\to\infty}\,\frac{1}{n}\sum_{i=1}^{n}\lambda_{i}\,=\,\lim_{n\to\infty}\,\frac{1}{n\tau}\sum_{i=1}^{n}\log\left|\,\vec{\varepsilon}\left(i\,\tau\right)\right|\,.\] Plate XVII, at the beginning of this book, and Fig. 76 display the parameter dependence of \(\lambda_{m}\) for the driven pendulum with an additional torque and for the Lorenz model, respectively.
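The stepwise procedure of Fig. 75 can be sketched as follows (our implementation, in the common two-trajectory variant: a reference orbit and a companion orbit displaced by a small \(d_0\), whose separation is rescaled back to \(d_0\) after every interval \(\tau\)). Applied to the Lorenz model (5.3) at \(r=28\), \(\sigma=10\), \(b=8/3\), it yields a largest exponent near 0.9; all step sizes and counts are arbitrary choices.

```python
import math

# Sketch (ours) of the renormalization scheme of Fig. 75 for the Lorenz
# model (5.3): average the logarithmic growth factors of a small
# separation that is renormalized to d0 after every interval tau.

def lorenz(s, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = s
    return (-sigma * x + sigma * y, -x * z + r * x - y, x * y - b * z)

def rk4_step(f, s, dt):
    k1 = f(s)
    k2 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt / 6.0 * (a + 2 * b2 + 2 * c + d)
                 for si, a, b2, c, d in zip(s, k1, k2, k3, k4))

def largest_liapunov(n_intervals=2000, tau=0.1, dt=0.01, d0=1e-8):
    s = (1.0, 1.0, 20.0)
    for _ in range(5000):                      # discard the transient
        s = rk4_step(lorenz, s, dt)
    p = (s[0] + d0, s[1], s[2])                # displaced companion orbit
    total, steps = 0.0, int(round(tau / dt))
    for _ in range(n_intervals):
        for _ in range(steps):
            s = rk4_step(lorenz, s, dt)
            p = rk4_step(lorenz, p, dt)
        d = math.sqrt(sum((a - b2) ** 2 for a, b2 in zip(s, p)))
        total += math.log(d / d0)              # log of the growth factor
        p = tuple(a + d0 * (b2 - a) / d for a, b2 in zip(s, p))  # renormalize
    return total / (n_intervals * tau)

lam_max = largest_liapunov()
```

Renormalizing after every \(\tau\) keeps the separation infinitesimal and avoids the overflow mentioned in the text.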
In both cases, one observes a sensitive dependence of order (\(\lambda_{m}<0\)) and chaos (\(\lambda_{m}>0\)) on the parameter values.

### Average Time over which the State of a Chaotic System can be Predicted

The \(K\)-entropy also determines the average time over which the state of a system displaying deterministic chaos can be predicted. Consider, e.g., the simple one-dimensional triangular map in Fig. 15b, which is confined to the unit square. After \(n\) time steps, an interval \(l\) increases to \(L=l\,{\rm e}^{\,\lambda n}\). If \(L\) becomes larger than 1, we can no longer locate the trajectory in [0, 1], and all we can say is that the system has a probability \[\rho_{0}\left(x\right)\mathrm{d}x \tag{5.30}\] of being in an interval \([x,\,x\,+\,\,{\rm d}\,x]\in[0,\,1]\), where \(\rho_{0}\left(x\right)\) is the invariant density of the system. In other words, precise predictions about the state of this system are only possible for times \(n\) that are smaller than \(T_{m}\): \[l\,{\rm e}^{\,\lambda\,T_{m}}\,=\,1\,\to\,T_{m}\,=\,\frac{1}{\lambda}\,\,\log\,\left(\frac{1}{l}\right). \tag{5.31}\] Above \(T_{m}\), one can only make statistical predictions. Eq. (5.31) can be generalized to higher-dimensional dynamical systems by replacing \(\lambda\) by the \(K\)-entropy (Farmer, 1982a): \[T_{m}\,\propto\,\frac{1}{K}\,\,\log\,\left(\frac{1}{l}\right). \tag{5.32}\] Note that the precision \(l\), with which the initial state is located, only influences \(T_{m}\) logarithmically. Let us summarize our results about the \(K\)-entropy: * It measures the average rate at which information about the state of a dynamical system is lost with time. * For one-dimensional maps, it is equal to the Liapunov exponent. In higher-dimensional systems, \(K\) measures the average deformation of a cell in phase space and becomes equal to the integral over phase space of the sum of the positive Liapunov exponents.
* It is inversely proportional to the time interval over which the state of a chaotic system can be predicted. Furthermore, in the next section, we shall show that \(K\) can be directly obtained by measuring the time dependence of one component of a chaotic system. These results show that the \(K\)-entropy is _the_ fundamental quantity by which chaotic motion can be characterized, and we define a strange attractor as an attractor with a positive \(K\)-entropy.

### 5.3 Characterization of the Attractor by a Measured Signal

Having experimentally observed a seemingly chaotic signal, one wants to know what information it contains about the strange attractor. To provide an answer, we proceed in several steps. First, we will explain the result of Takens (1981), who has shown that, after the transients have died out, one can reconstruct the trajectory on the attractor (the whole time-dependent vector \(\vec{x}\left(t\right)=\left[x_{1}\left(t\right),x_{2}\left(t\right)\ldots\right]\) in phase space) from the measurement of a single component, say \(x_{1}\left(t\right)\). A knowledge of the time series of one variable is therefore sufficient to reconstruct the static and dynamical properties of the strange attractor. Since the whole trajectory contains too much information, we then follow a series of papers by Grassberger, Hentschel and Procaccia (1983), Halsey et al. (1986), and Eckmann and Procaccia (1986) and introduce a set of averaged, coordinate-invariant numbers (generalized dimensions, entropies, and scaling indices) by which different strange attractors can be distinguished. For this purpose, we divide the attractor into boxes of linear dimension \(l\) and denote by \(p_{i}\) the probability that the trajectory on the strange attractor visits box \(i\).
By averaging powers of the \(p_{i}\)'s over all boxes, we obtain the generalized dimensions \(D_{q}\), defined by \[D_{q}\;=\;\lim_{l\to 0}\;\frac{1}{q\!-\!1}\;\frac{\log\;\left(\sum_{i}\;p_{i}^{q}\right)}{\log l} \tag{5.33}\] which are formally similar to the free energy \(F_{\beta}\) of ordinary equilibrium thermodynamics: \[F_{\beta}\;=\;-\;\lim_{N\to\infty}\;\frac{1}{\beta}\;\frac{1}{N}\;\log\;\left[\;\sum_{i}\;(\mathrm{e}^{-E_{i}})^{\beta}\right] \tag{5.34}\] where \(E_{i}\) are the energy levels of the system, \(N\) is its particle number, and \(\beta\) is the inverse temperature. Since \(\sum_{i}\;p_{i}^{q}\), which appears in eq. (5.33), is for \(q>1\) the total probability that \(q\) points of the attractor are within one box, it is obvious that the \(D_{q}\)'s measure correlations between different points on the attractor and are therefore useful in characterizing its inhomogeneous static structure. But it will be shown below that the (negative) Legendre transform \(f(\alpha)\) of the \(D_{q}\)'s (more precisely, of \((q\,-\,1)\,D_{q}\)): \[f(\alpha)\;=\;-\;\left(q\;-\;1\right)D_{q}\;+\;q\alpha \tag{5.35a}\] \[\alpha\;=\;\frac{\partial}{\partial q}\;\left[\left(q\;-\;1\right)D_{q}\right] \tag{5.35b}\] is more appropriate to describe universal properties of strange point sets. Let us briefly explain the meaning of \(f(\alpha)\) (its connection to the \(D_{q}\)'s will be shown below). Assuming ergodicity, the probabilities \(p_{i}\) are, by construction, related to the invariant density \(\rho\left(\vec{x}\right)\) of the attractor: \[p_{i}\;=\;\int_{\left|\vec{x}_{i}-\vec{x}\;\right|\;\leq\;l}\;\mathrm{d}^{\,d}x\,\rho\left(\vec{x}\right) \tag{5.36}\] where \(\vec{x}_{i}\) denotes the center of box \(i\). If \(p_{i}(l)\) scales for \(l\to 0\) as \[p_{i}(l\to 0)\propto l^{\alpha_{i}} \tag{5.37}\] the invariant density has, according to eq.
(5.36), at \(\vec{x}_{i}\) a singularity whose strength is characterized by \(\alpha_{i}\). Since different points \(\vec{x}_{i}\) on the attractor can have different strengths \(\alpha_{i}\), it is useful to introduce a function \(f(\alpha)\) which measures the Hausdorff dimension of the set of points \(\{\vec{x}_{i}\}\) on the attractor which have the same strength of singularity \(\alpha\). \(f(\alpha)\) characterizes the static distribution of points on the attractor and can therefore also be used for point sets which are not generated dynamically (see Fig. 77). In order to describe the dynamical behavior of the trajectory on the attractor, we use the quantities \(P_{i_{0}\,\ldots\,i_{n}}\) which we introduced already to define the \(K\)-entropy via eq. (5.14). The \(P_{i_{0}\,\ldots\,i_{n}}\)'s measure the probability that the trajectory visits a certain sequence \(i_{0}\ldots i_{n}\) of boxes of size \(l\) in time \(n\). By playing the same game with these variables as with the \(p_{i}\)'s, we introduce generalized entropies \(K_{q}\) via \[K_{q}\,=\,-\,\lim_{l\to 0}\,\lim_{n\to\infty}\,\frac{1}{q-1}\,\frac{1}{n}\,\log\,\sum_{i_{0}\ldots i_{n}}\,P_{i_{0}\ldots i_{n}}^{q} \tag{5.38}\] and show that their Legendre transform \(g(\lambda)\) is connected to the fluctuations of the \(K\)-entropy around its mean value \(K_{1}\) given by eq. (5.15). It will also be demonstrated that both quantities, the \(D_{q}\)'s and the \(K_{q}\)'s (and therefore \(f(\alpha)\) and \(g(\lambda)\)), can be extracted from a time series of a single variable. Two further important quantities which can be obtained in this way are the embedding dimension of the attractor, that is, the lowest integer dimension of a space which contains the attractor, and the amplitude of white noise on the signal. Thus irregularities originating from deterministic motion on the attractor can be separated from disturbing white noise.
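As a worked illustration of eq. (5.33) (our example, not from the text), consider the binomial measure on [0, 1]: at refinement level \(n\) the \(2^{n}\) boxes have size \(l=2^{-n}\) and carry probabilities \(p^{k}(1-p)^{n-k}\). Here \(\sum_{i}p_{i}^{q}\) is a binomial sum that can be evaluated in closed form, giving \(D_{q}=\log[p^{q}+(1-p)^{q}]/[(1-q)\log 2]\); the values \(p=0.3\), \(n=12\) below are arbitrary choices.

```python
from math import comb, log

# The binomial (two-scale) measure on [0,1] as a worked example (ours) of
# eq. (5.33): at level n each box of size l = 2^(-n) carries a probability
# p^k (1-p)^(n-k), and sum_i p_i^q is a binomial sum.

def Dq_from_boxes(q, p=0.3, n=12):
    S = sum(comb(n, k) * (p ** k * (1.0 - p) ** (n - k)) ** q
            for k in range(n + 1))
    l = 2.0 ** -n
    return (1.0 / (q - 1.0)) * log(S) / log(l)

def Dq_closed_form(q, p=0.3):
    # exact result for this measure (binomial theorem):
    # D_q = log(p^q + (1-p)^q) / ((1-q) log 2)
    return log(p ** q + (1.0 - p) ** q) / ((1.0 - q) * log(2.0))
```

The box sum reproduces the closed form at every level \(n\); one finds \(D_{0}=1\) (the support is the whole interval), while \(D_{q}\) decreases with \(q\), reflecting the inhomogeneity of the measure, in accordance with the correlation interpretation given above.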
### Reconstruction of the Attractor from a Time Series

It is not always possible to measure all components of the vector \(\vec{x}(n)\) simultaneously. This clearly holds for an infinite-dimensional system. If we define the dimension of a system by the number of initial conditions, then the so-called Mackey–Glass equation (Mackey and Glass, 1977) \[\dot{x}\,=\,\frac{ax(t\,-\,\tau)}{1\,+\,\left[x(t\,-\,\tau)\right]^{10}}\,-\,bx(t) \tag{5.39}\] (which describes the regeneration of blood cells) obviously provides a simple example of an infinite-dimensional system, because all the \(x(t)\)-values in the interval \([t\,-\,\tau,\,t]\) have to be known (as initial conditions) to solve it.

Figure 77: The invariant measure has, on a strange attractor, different power-law singularities. \(f(\alpha)\) measures the Hausdorff dimension of the set of points with the same power \(\alpha\).

How do we proceed in this, or the less difficult case, where we have an attractor embedded in \(d\)-dimensional space, but measure only one component of the signal? It has been shown by Takens (1981) that one can _reconstruct certain properties of the attractor_ in phase space _from the time series of a single component._ Instead of the rather cumbersome proof, we present the following simplified argument. As an example, consider a two-dimensional flow generated by \[\frac{{\rm d}}{{\rm d}t}\,\,\vec{x}\,=\,\vec{F}(\vec{x})\,,\qquad\vec{x}\,=\,(x,\,y)\,. \tag{5.40}\] Every point \(\{x(t\,+\,\tau),\,y(t\,+\,\tau)\}\) then originates uniquely from a point \(\{x(t),\,y(t)\}\), and the relation between both points is one-to-one because the trajectories do not cross (otherwise the trajectory would not be determined uniquely by the initial conditions).
Next, we construct a sequence of vectors \[\vec{\xi}(t)\,=\,\{x(t),\,x(t\,+\,\tau)\} \tag{5.41}\] \[\vec{\xi}(t\,+\,\tau)\,=\,\{x(t\,+\,\tau),\,x(t\,+\,2\tau)\}\,,\;\ldots\] Since the components of \(\vec{\xi}\) are related to \(\{x(t),y(t)\}\) via the one-to-one relationships \[\xi_{1}(t)\,=\,x(t) \tag{5.42}\] \[\xi_{2}(t)\,=\,x(t\,+\,\tau)\,=\,\int\limits_{t}^{t\,+\,\tau}{\rm d}t^{\prime}\,F_{1}\,\{x(t^{\prime}),y(t^{\prime})\}\,+\,x(t)\,\approx\,\tau F_{1}\,\{x(t),y(t)\}\,+\,x(t) \tag{5.43}\] with a Jacobian \(\mid\tau\,(\partial F_{1}/\partial y)\mid\,\neq\,0\), it is plausible that the information contained in the time sequences \(\vec{x}(t_{i})\) and \(\vec{\xi}(t_{i})\;(t_{i}=i\tau)\) is the same, and both sequences should lead to the same characteristic dimensions. A simple example for which \(\vec{x}(t_{i})\) and \(\vec{\xi}(t_{i})\) are indeed completely equivalent is a circle: \[\vec{x}(t_{i})\,=\,\{x(t_{i}),\,y(t_{i})\}\,=\,\{\sin\,(2\,\pi\,t_{i}),\,\cos\,(2\,\pi\,t_{i})\}\,=\,\left\{\sin\,(2\,\pi\,t_{i}),\,\sin\,\left[\,2\,\pi\,\left(t_{i}+\frac{1}{4}\right)\right]\right\}=\left\{x(t_{i}),\,x\left(t_{i}+\frac{1}{4}\right)\right\}=\,\vec{\xi}\left(t_{i}\right)\,. \tag{5.44}\] But we should be aware that these arguments are only heuristic and can only be applied "cum grano salis" to situations where strange attractors appear. What Takens (1981) actually proved is the following: "If \(\dot{\vec{x}}=\vec{F}(\vec{x})\) generates a \(d\)-dimensional flow, then eq.
(5.44), \[\vec{\xi}(t)\,=\,\left\{x_{j}(t),\ \ x_{j}(t\,+\,\tau),\ \ \ldots\,,\,x_{j}\left[t\,+\,(2\,d\,+\,1)\,\tau\right]\right\} \tag{5.44}\] where \(x_{j}(t)\) is an arbitrary component of \(\vec{x}\), provides a smooth embedding for this flow, and the metric properties in both spaces (the \(d\)-dimensional \(\{\vec{x}(t)\}\) and the \((2\,d\,+\,1)\)-dimensional \(\{\vec{\xi}(t)\}\)) are the same in the sense that distances in \(\{\vec{x}(t)\}\) and \(\{\vec{\xi}(t)\}\) have a ratio which is uniformly bounded and bounded away from zero." Fig. 78 shows a reconstruction of (a projection of) the Rössler attractor (Rössler, 1976), which is generated by the system \[\dot{x}\,=\,\,-\,\,z\,-\,\,y \tag{5.45a}\] \[\dot{y}\,=\,\,x\,+\,\,ay \tag{5.45b}\] \[\dot{z}\,=\,\,b\,+\,\,z\,(x\,-\,c) \tag{5.45c}\] from about \(6\cdot 10^{5}\) points for different choices of the delay time \(\tau\). Although for an infinite amount of noise-free data \(\tau\) could be chosen almost arbitrarily (Takens, 1981), it can be seen from Fig. 78 that for a finite time series the quality of the reconstruction depends on \(\tau\). If \(\tau\) is too small, \(x(t)\) and \(x(t\,+\,\tau)\) become practically indistinguishable, and one obtains a linear dependence that is not present for the coordinates of the real trajectory. It is, therefore, reasonable to choose for \(\tau\) the decay time of the autocorrelation function \(C(t)\) of the signal \(x_{n}\),

Figure 78: Reconstruction of the Rössler attractor for \(a=0.15\), \(b=0.20\), \(c=10.0\); \(\vec{x}_{0}=(10.0,\,0,\,0)\) from a time series: a) \(x_{j}\) coordinates of the “true” attractor obtained by numerical integration of eqs. (5.45a–c); b) and c) reconstructions for \(\tau=0.23\), \(0.39\), \(3.26\), measured in units of the average orbital time, respectively. (After Fraser and Swinney, 1986.)
\[C(t)\,=\,\lim_{N\to\infty}\ \frac{1}{N}\sum\limits_{n\,=\,0}^{N-1}x_{n}x_{n\,+\,t}\,\equiv\,\langle\,x_{0}\,x_{t}\,\rangle \tag{5.46a}\] \[C(\tau)\,=\,\frac{1}{2}\ C(0) \tag{5.46b}\] which ensures that \(x(t)\) and \(x(t\,+\,\tau)\) become linearly independent, but other choices for \(\tau\) have also been proposed (Fraser and Swinney, 1986; Liebert, Kaspar and Schuster, 1987).

### Generalized Dimensions and Distribution of Singularities in the Invariant Density

In this section, we discuss the meaning of the generalized dimensions \(D_{q}\) for special values of \(q\) and demonstrate explicitly the connection of \(D_{q}\) to the distribution \(f(\alpha)\) of singularities in the invariant density of a strange attractor. Proceeding in a similar way as in Section 5.2, we chop the trajectory \(\vec{x}(t)=[x_{1}(t)\ \ldots\ x_{d}(t)]\) of a dynamical system on a strange attractor into a sequence of points \(\vec{x}(t=0)\), \(\vec{x}(t=\tau)\ \ldots\ \vec{x}(t=N\tau)\) and partition the \(d\)-dimensional phase space into cells of size \(l^{d}\). The probability \(p_{i}\) of finding a point of the attractor in cell number \(i\,(i=1,\,2\,\ldots\,M(l))\) is then given by \[p_{i}\ =\ \lim_{N\,\to\infty}\ \frac{N_{i}}{N} \tag{5.47}\] where \(N_{i}\) is the number of points \(\{\vec{x}(t=j\tau)\}\) in this cell. The generalized dimensions \(D_{q}\) are related to the \(q\)th powers of \(p_{i}\) via \[D_{q}\ =\ \lim_{l\to 0}\ \frac{1}{q\,-\,1}\ \frac{\log\left(\sum\limits_{i\,=\,1}^{M(l)}p_{i}^{q}\right)}{\log l}\ ;\ \ \ q\ =\ 0,\ 1,\,2\ \ldots \tag{5.48}\] For \(q\,\to\,0\) we obtain from (5.48) \[D_{0}\ =\ -\lim_{l\to 0}\ \Bigl(\log\sum\limits_{i\,=\,1}^{M(l)}1\Bigr)\Big/\log l\ =\ -\lim_{l\,\to\,0}\ \frac{\log M(l)}{\log l} \tag{5.49}\] which is just the usual definition (3.69) of the Hausdorff dimension of the attractor (i. e. \(D=D_{0}\)). As \(q\,\to\,1\), eq.
(5.48) becomes \[D_{1}\ =\ -\ \lim_{l\,\to\,0}\ \frac{S(l)}{\log l} \tag{5.50}\] where \[S(l)\;=\;-\;\sum_{i\,=\,1}^{M(l)}p_{i}\,\log p_{i}\;. \tag{5.51}\] Since \(S(l)\) is the information gained if we know \(\{p_{i}\}\) and learn that the trajectory is in a specific cell \(i\), \(D_{1}\) is called the information dimension. It tells us how this information gain increases as \(l\to 0\). For a homogeneous attractor where all \(p_{i}\) are the same, i. e. \(p_{i}\,=\,1/M(l)\), we have \[S(l)\;=\;-\;\sum_{i\,=\,1}^{M(l)}\,\frac{1}{M(l)}\,\log\,\frac{1}{M(l)}\,=\,\log M(l)\;. \tag{5.52}\] Furthermore, the information dimension is always less than or equal to the Hausdorff dimension, that is, \[D_{1}\;\leqslant\;D_{0}\;. \tag{5.53}\] This can be proven by maximizing \(S(l)\) under the constraint \(\sum\limits_{i}p_{i}\,=\,1\): \[\frac{\partial}{\partial p_{j}}\,\left[\,-\,\sum_{i\,=\,1}^{M(l)}p_{i}\,\log p_{i}\,+\,\lambda\,\sum\limits_{i}p_{i}\right]\,=\,0 \tag{5.54a}\] \[\to\;p_{j}\,=\,\mathrm{e}^{\,-\,1\,+\,\lambda}\;. \tag{5.54b}\] After eliminating the Lagrange multiplier \(\lambda\) via the constraint, eq. (5.54b) yields \[p_{j}\;=\;\frac{1}{M(l)} \tag{5.55}\] \[S(l)\;\leqslant\;\max\,[S(l)]\;=\;\log M(l) \tag{5.56}\] from which eq. (5.53) follows after division by \(\log l\). The inequality (5.53) has been generalized to (Hentschel and Procaccia, 1983): \[D_{q^{\prime}}\;\leqslant\;D_{q}\;\;\mbox{for}\;\;q^{\prime}\;>\;q \tag{5.57}\] where the equality sign holds if the attractor is uniform. In order to explain the connection between the \(D_{q}\)'s and the singularities in the invariant density of an attractor, we calculate \(D_{q}\) for a one-dimensional system which has a power-law singularity in its invariant density \(\rho(x)\) at \(x\,=\,0\).
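Before turning to this example, eqns. (5.47)-(5.53) can be illustrated numerically. The following sketch (illustrative Python, not from the text; the number of iterates and boxes are ad-hoc choices) estimates \(D_{0}\) and \(D_{1}\) for the logistic map at \(r=4\) at a fixed box size \(l=1/M\):

```python
import numpy as np

# iterate the logistic map x' = 4 x (1 - x), whose invariant density
# has power-law singularities at x = 0 and x = 1
N = 200000
x = np.empty(N)
x[0] = 0.3
for n in range(N - 1):
    x[n+1] = 4.0*x[n]*(1.0 - x[n])

M = 1000                                   # number of boxes, box size l = 1/M
counts, _ = np.histogram(x, bins=M, range=(0.0, 1.0))
p = counts[counts > 0]/N                   # occupation probabilities p_i, eq. (5.47)

D0 = np.log(len(p))/np.log(M)              # eq. (5.49) evaluated at finite l
S = -np.sum(p*np.log(p))                   # information S(l), eq. (5.51)
D1 = S/np.log(M)                           # eq. (5.50) evaluated at finite l
```

Because \(S(l)\leqslant\log M(l)\), the estimate automatically satisfies \(D_{1}\leqslant D_{0}\), in line with eq. (5.53); both come out close to 1 here since the invariant density of this map is absolutely continuous.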
This is just the behavior of \(\rho\left(x\right)\) near \(x=0\) for the logistic map at \(r=4\) (see eq. (3.106)), where we ignore the singularity at \(x=1\) to simplify our argument. (The \(D_{q}\)'s will be the same for both systems.) Fig. 79 shows that: \[p_{i}=\int\limits_{x_{i}}^{x_{i}+l}\rho\left(x\right)\mathrm{d}x\,\propto\,\begin{cases}l^{\frac{1}{2}}&\text{for}\ \ i=1\\ l^{1}&\text{for}\ \ i\neq 1\end{cases} \tag{5.59}\] Thus, \[\sum\limits_{i}p_{i}^{q}\,=\,\left[\int\limits_{0}^{l}\rho\left(x\right)\mathrm{d}x\right]^{q}\,+\,\sum\limits_{i\neq 1}\left[\rho\left(x_{i}\right)l\right]^{q}\,\equiv \tag{5.60a}\] \[\equiv\,\left[\int\limits_{0}^{l}\rho\left(x\right)\mathrm{d}x\right]^{q}\,+\,l^{q-1}\int\limits_{l}^{1}\mathrm{d}x\,\rho\left(x\right)^{q}\,= \tag{5.60b}\] \[=\,\left(1\,-\,a\right)l^{\frac{q}{2}}\,+\,al^{q-1} \tag{5.60c}\] where \(a=\left(\frac{1}{2}\right)^{q}\left(1\,-\,\frac{q}{2}\right)^{-1}\). Eq. (5.60c) can be written as \[\sum\limits_{i}p_{i}^{q}\,\equiv\,\int\mathrm{d}\alpha\,\rho\left(\alpha\right)l^{-f\left(\alpha\right)}\,l^{\alpha q} \tag{5.61}\] where \[\rho\left(\alpha\right)\,=\,\left(1\,-\,a\right)\,\delta\left(\alpha\,-\,\frac{1}{2}\right)\,+\,a\,\delta\left(\alpha\,-\,1\right) \tag{5.62}\] and \[f\left(\alpha\right)\,=\,\begin{cases}0&\text{for}\ \ \alpha\,=\,\frac{1}{2}\\ 1&\text{for}\ \ \alpha\,=\,1\end{cases}\,. \tag{5.63}\] The interpretation of eqns. (5.59-63) is as follows. And after differentiation \[\alpha\left(q\right)\;=\;\frac{\mathrm{d}}{\mathrm{d}q}\;\left[\left(q\;-\;1\right)D_{q}\right]\;. \tag{5.70}\] By eliminating, via eq. (5.70), the variable \(q\) in favor of \(\alpha\) and using eq. (5.69), one obtains \(f(\alpha)\) as (negative) Legendre transformation of \(\left(q\;-\;1\right)D_{q}\): \[f(\alpha)\;=\;q\left(\alpha\right)\alpha\;-\;\left[q\left(\alpha\right)\;-\;1\right]D_{q(\alpha)}\;. \tag{5.71}\] For our example with \(\rho\left(x\right)=x^{-\frac{1}{2}}\), we find from eqns. (5.69-71): i. e.
the \(\alpha\)-spectrum consists of two points, as calculated above (see eq. (5.63)). Numerical determination of the dimensions \(D_{q}\), by covering the phase space with a set of boxes of volume \(l^{d}\) and counting the number of iterates which lie in a certain cell, is rather cumbersome and in fact impossible for attractors of higher dimensions. However, we can replace the sum over the uniformly distributed boxes in \(\sum\limits_{i}p_{i}^{q}\) by a sum over nonuniformly distributed boxes around the points \(x_{j}\) of a time series which results, e. g., from a map \(x_{j\,+\,1}=f(x_{j})\): \[\sum\limits_{i}p_{i}^{q}\;=\;\sum\limits_{i}\left[\,\int\limits_{\mathrm{box}\ i}\rho\left(x\right)\mathrm{d}x\right]^{q}\;\equiv\] \[\;\equiv\;\sum\limits_{i}\;\left[\rho\left(x_{i}\right)l\right]^{q}\;=\;\sum\limits_{i}\;\rho\left(x_{i}\right)l\left[\rho\left(x_{i}\right)l\right]^{q-1}\;\equiv\] \[\;\equiv\;\int\rho\left(x\right)\mathrm{d}x\,\tilde{p}\left(x\right)^{q-1}\;\equiv\;\frac{1}{N}\;\sum\limits_{j}\left\{\tilde{p}\left[f^{j}\left(x_{0}\right)\right]\right\}^{q-1}\;=\] \[\;\;=\;\frac{1}{N}\;\sum\limits_{j}\tilde{p}_{j}^{\,q-1}\;. \tag{5.72}\] Here \(x_{i}\) is an element of box \(i\), and \(\tilde{p}\left[f^{j}\left(x_{0}\right)\right]=\tilde{p}_{j}\) is the probability for the trajectory to be in a box of size \(l\) around the iterate \(x_{j}=f^{j}\left(x_{0}\right)\). Eq. (5.72) should make it plausible (mathematical rigor is not attempted) that the change from \(p_{i}^{q}\) to \(\tilde{p}_{j}^{\,q-1}\) is due to the fact that the points \(x_{j}\) of the time series (and the boxes around them) are nonuniformly distributed. We next generalize eq.
(5.72) to higher-dimensional systems and write the probability \(\tilde{p}_{j}\) that an element of the time series falls into an interval \(l\) around the element \(\vec{x}_{j}\) as: \[\tilde{p}_{j}\,=\,\frac{1}{N}\,\sum_{i}\,\Theta\,(l\,-\,|\vec{x}_{i}\,-\,\vec{x}_{j}\,|) \tag{5.73}\] where \(\Theta\,(x)\) is the Heaviside step function. Using eqns. (5.72-73), \(\sum_{i}p_{i}^{\,q}\) becomes \[\sum_{i}\,p_{i}^{\,q}\,=\,\frac{1}{N}\,\sum_{j}\,\left[\,\frac{1}{N}\,\sum_{i}\,\Theta\,(l\,-\,|\,\vec{x}_{i}\,-\,\vec{x}_{j}\,|)\,\right]^{q-1}\,\equiv\,C^{\,q}\,(l)\;. \tag{5.74}\] For \(q\,=\,2\), this reduces to the correlation integral \(C\,(l)\) introduced by Grassberger and Procaccia (1983 a) which measures the probability of finding two points of an attractor in a cell of size \(l\): \[\sum_{i\,=\,1}^{M(l)}\,p_{i}^{\,2}\,=\,\mbox{the probability that two points of the attractor lie within a cell}\ l^{\,d}\] \[\,=\,\mbox{the probability that two points of the attractor are separated by a distance smaller than}\ l\] \[\,=\,\lim_{N\,\rightarrow\,\infty}\,\frac{1}{N^{\,2}}\,\left(\mbox{number of pairs}\ ij\ \mbox{whose distance}\ |\,\vec{x}_{i}\,-\,\vec{x}_{j}\,|\ \mbox{is less than}\ l\right)\] \[\,=\,\lim_{N\,\rightarrow\,\infty}\,\frac{1}{N^{\,2}}\,\sum_{i,j}\,\Theta\,(l\,-\,|\,\vec{x}_{i}\,-\,\vec{x}_{j}\,|)\] \[\,=\,C\,(l)\,=\,\mbox{correlation integral}\;. \tag{5.75}\] The correlation integral \(C\,(l)\) can be used to determine the following properties from a measured time series: * _The correlation dimension_ \(D_{2}\): \[D_{2}\,=\,\lim_{l\,\rightarrow\,0}\,\frac{1}{\log l}\,\log\,\sum_{i}\,p_{i}^{\,2}\] (5.76) which yields a lower bound to the Hausdorff dimension \(D_{0}\), i. e. \(D_{2}\,\leqslant\,D_{0}\). Fig. 80 shows how \(D_{2}\) is determined from \(C\,(l)\) for the Hénon map. * _The embedding dimension d:_ Fig. 82 shows the \(l\) dependence of the correlation integral for the Mackey-Glass system.
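The Grassberger-Procaccia procedure is easily sketched in code. The following illustration (Python, not from the text; the iterate count and the range of \(l\) values are ad-hoc choices) estimates \(D_{2}\) for the Hénon map from eqns. (5.75) and (5.76):

```python
import numpy as np

# iterate the Henon map x' = y + 1 - a x^2, y' = b x  (a = 1.4, b = 0.3)
N = 2000
pts = np.empty((N, 2))
x, y = 0.1, 0.1
for n in range(N):
    x, y = y + 1.0 - 1.4*x*x, 0.3*x
    pts[n] = (x, y)
pts = pts[100:]                                   # discard the transient

# correlation integral C(l), eq. (5.75): fraction of pairs closer than l
diff = pts[:, None, :] - pts[None, :, :]
dist = np.sqrt((diff**2).sum(-1))[np.triu_indices(len(pts), k=1)]

ls = np.logspace(-2.5, -0.5, 10)                  # range of cell sizes l
C = np.array([np.mean(dist < l) for l in ls])
D2 = np.polyfit(np.log(ls), np.log(C), 1)[0]      # slope of log C(l) vs log l
```

The slope should come out near the literature value \(D_{2}\approx 1.2\) for this map; with a short, noisy series the scaling region must of course be chosen by eye, as discussed below.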
Although this system has an infinite dimension, its correlation dimension is finite and smaller than 3. It is therefore sufficient to use a single time series with a three-dimensional vector \(\vec{\xi}(t_{i})=[x(t_{i}),x(t_{i}+\tau),x(t_{i}+2\tau)]\) to determine \(D_{2}\). The dimension \(d\) in \(\vec{\xi}(t)=\{x(t_{i})\ \ldots\ x(t_{i}+(d-1)\tau)\}\), above which \(D_{2}\) no longer changes, is the (minimal) _embedding dimension_ of the attractor. * _Separation of deterministic chaos and external white noise:_ The correlation integral can also be used as a tool to _distinguish between deterministic irregularities_, which arise from intrinsic properties of the strange attractor, _and external white noise_. Suppose we have a strange attractor embedded in \(d\)-dimensional space and we add an external white noise. Each point on the attractor then becomes surrounded by a uniform \(d\)-dimensional cloud of points. The radius of this cloud is given by the noise amplitude \(l_{0}\). For \(l\gg l_{0}\), eq. (5.74) counts these clouds as points, and the slope of a plot of \(\log\,C\,(l)\) versus \(\log\,l\) yields the correlation exponent of the attractor. For \(l\ll l_{0}\), most of the points counted lie within the uniformly filled \(d\)-dimensional cells, and the slope crosses over to \(d\), as shown in Fig. 83 for the noisy Hénon attractor. Finally, let us briefly comment on the intuitive meaning of the variable \(q\) and then present two examples of \(D_{q}\) and \(f(\alpha)\) curves. If one replaces, in the definition of \(D_{q}\) via eq. (5.33), the \(p_{i}\) by the probabilities \(\tilde{p}_{j}\) for a trajectory to fall in a box around an iterate (see eq.
5.72), then the resulting expression \[D_{q}\,=\,-\,\lim_{l\to 0}\,\frac{1}{q-1}\,\frac{1}{|\log l|}\,\log\,\frac{1}{N}\,\sum_{j}\,\tilde{p}_{j}(l)^{q-1} \tag{5.77}\] closely resembles the expression \(F_{\beta}\) for the free energy of an \(N\)-particle equilibrium system at a temperature \(T=\beta^{-1}\): \[F_{\beta}\,=\,-\lim_{N\,\rightarrow\infty}\,\frac{1}{\beta}\,\cdot\,\frac{1}{N}\,\log\,\sum_{i}\,(\mbox{e}^{-E_{i}})^{\beta}\,. \tag{5.78}\] The variable \(|\log l|\) corresponds to the number of particles, and \(q\,-\,1\) corresponds to the inverse temperature \(\beta\). It follows already from eq. (5.77) that, for \(q\,\rightarrow\,+\infty\), the most concentrated parts of the measure (large \(\tilde{p}_{j}\)'s) are being stressed; whereas for \(q\,\rightarrow\,-\infty\), the most rarefied parts (small \(\tilde{p}_{j}\)'s) become dominant. In this sense, \(q\) indeed serves as the (inverse) temperature in statistical mechanics, where at every temperature a different set of energy levels \(E_{i}\) (i.e. probabilities \(\exp\,(-\beta E_{i})\)) becomes dominant in the free energy. Fig. 84 shows the \(D_{q}\) and \(f(\alpha)\) curves for the Feigenbaum attractor that is generated by the iterates of the logistic map \(x_{n\,+\,1}\,=\,rx_{n}\,(1\,-\,x_{n})\) at \(r=r_{\infty}\,=\,3.5699\,\ldots\) (see sect. 3.4). The function \(f(\alpha)\) must be concave because eq. (5.68 b) requires \(f^{\prime\prime}(\alpha)<0\), and the maximum of \(f(\alpha)\) at \(\alpha\,=\,\alpha_{m}\) is equal to the Hausdorff dimension \(D_{0}\) because at the maximum \(f^{\prime}(\alpha_{m})\,=\,0\), which yields via eqns. (5.68 a - b) \[f^{\prime}(\alpha_{m})\,=\,q_{m}\,=\,0 \tag{5.79}\] and \[D_{0}\,=\,f(\alpha_{m})\,. \tag{5.80}\] Fig. 84: The functions \(D_{q}\) and \(f(\alpha)\), computed from eqns. (5.33), (5.71), and (5.74), for the Feigenbaum attractor (after K. Pawelzik, priv. comm.). Furthermore, we see from eq.
(5.69) that, as long as \(f(\alpha)\) remains bounded, the limiting dimensions \(D_{\pm\infty}\) become equal to the corresponding \(\alpha\) values, i. e. \(D_{\pm\infty}=\alpha\,(\pm\infty)\), which implies via eq. (5.69) \(f[\alpha\,(\pm\infty)]=0\). Thus, the zeros of \(f(\alpha)\) are equal to \(D_{\pm\infty}\), and the slope of \(f(\alpha)\) is infinite at these points because of eq. (5.68a). The dimension \(D_{-\,\infty}\), which is associated with the most rarefied regions of the Feigenbaum attractor, can be calculated as follows. The size \(l_{n}\) of the most rarefied region on the \(2^{n}\)-cycle attractor, which approaches the Feigenbaum attractor for \(n\to\infty\), decreases as \(\alpha^{-n}\) where \(\alpha\) is the Feigenbaum constant. This is due to the fact that the function \(\sigma\,(x)\) from sect. 3.3, which measures the ratio of the distances between the elements of subsequent supercycles, has its maximum at \(\alpha^{-1}\) (see Fig. 31); i. e., the largest distance decreases like \(\alpha^{-n}\). The probability \(p_{n}\) of a point on the \(2^{n}\) cycle to lie within the interval \(l_{n}\) is just \(p_{n}\,=\,2^{-n}\) because only one point of the cycle is contained in \(l_{n}\). Putting everything together, \(D_{-\,\infty}\) becomes: \[D_{-\,\infty}\,=\,\lim_{q\,\to\,-\infty}\,\lim_{n\,\to\,\infty}\,\frac{1}{q-1}\,\,\frac{\log p_{n}^{q-1}}{\log l_{n}}\,=\,\frac{\log 2}{\log\alpha}\,=\,0.755\,51\,\ldots \tag{5.81}\] which is in excellent agreement with the numerical result in Fig. 84 obtained from the time series of the logistic map. Fig. 84 shows that \(D_{q}\) converges very slowly towards its limits \(D_{\pm\infty}\), but \(\alpha\,(q=\pm\infty)=D_{\pm\infty}\) can be easily extrapolated from the corresponding \(f(\alpha)\) curves. Thus, the transformation to \(f(\alpha)\) leads to better estimates of \(D_{\pm\infty}\) than the direct calculation of the \(D_{q}\)'s.
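The value in eq. (5.81) is easy to check. In the sketch below (Python; the numerical value of the Feigenbaum constant \(\alpha\approx 2.502\,907\,875\ldots\) is quoted as a known constant, not computed here), the finite-\((q,n)\) expression of eq. (5.81) already collapses to \(\log 2/\log\alpha\), exactly as the cancellation in the derivation suggests:

```python
import math

ALPHA = 2.5029078750958928   # Feigenbaum's alpha (known constant, quoted)

def d_minus_inf(q, n):
    # finite-(q, n) version of eq. (5.81) with p_n = 2**-n, l_n = alpha**-n:
    # (1/(q-1)) * log(p_n**(q-1)) / log(l_n) -- the q and n dependence cancels
    log_p, log_l = -n*math.log(2.0), -n*math.log(ALPHA)
    return ((q - 1)*log_p)/((q - 1)*log_l)

value = math.log(2.0)/math.log(ALPHA)    # should be 0.75551...
```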
Another advantage of the \(f(\alpha)\) spectrum is the fact that it represents (e. g. for the Feigenbaum attractor) a smooth universal curve which yields the global density of scaling indices. The universal function \(\sigma\,(x)\) of Feigenbaum, which describes the local scaling everywhere (see section 3.3), contains in principle the same (and even more) information as \(f(\alpha)\), but it is nowhere differentiable and is, therefore, difficult to use. A further example where the merits of the \(f(\alpha)\) representation of experimental data become obvious is given in chapter 6, where we investigate the question whether an experimental orbit obtained from a forced Rayleigh-Bénard experiment is in the same universality class as the orbit generated from a circle map.

### Generalized Entropies and Fluctuations around the \(K\)-Entropy

We generalize in this section the expression \[K\,=\,-\,\lim_{l\to 0}\,\lim_{n\to\infty}\,\frac{1}{n}\,\sum_{i_{0}\ldots i_{n-1}}\,P_{i_{0}\ldots i_{n-1}}\,\log\,P_{i_{0}\ldots i_{n-1}} \tag{5.82}\] for the Kolmogorov entropy of a map (see eq. 5.15) by introducing, in analogy to the \(D_{q}\)'s, a whole set of entropies \(K_{q}\): \[K_{q}\,=\,-\,\lim_{l\to 0}\,\lim_{n\to\infty}\,\frac{1}{n}\,\frac{1}{q\,-\,1}\,\log\,\sum_{i_{0}\ldots i_{n-1}}P^{q}_{i_{0}\ldots i_{n-1}} \tag{5.83}\] and we show, by way of an example, that their Legendre transformation is related to the spectrum of fluctuations \(g\,(\lambda)\) around the \(K\)-entropy. If we introduce a variable \(T\,=\,\mathrm{e}^{-n}\), eq. (5.83) can be rewritten as \[K_{q}\,=\,\lim_{l\to 0}\,\lim_{T\to 0}\,\frac{1}{\log T}\,\frac{1}{q\,-\,1}\,\log\,\sum_{i_{0}\ldots i_{n-1}}P^{q}_{i_{0}\ldots i_{n-1}} \tag{5.84}\] which looks (apart from the fact that we have a whole series of indices instead of just one) similar to eq.
(5.23) for the \(D_{q}\)'s, with \(l\) replaced by \(T\). It is, therefore, reasonable to try, in analogy to eq. (5.64), the scaling ansatz shown in Fig. 85, and to compute \(K_{q}\) and \(g(\lambda)\) explicitly for the dynamical system defined by \[x_{n+1}\,=\,f(x_{n})\,. \tag{5.90}\] The probabilities \(P_{i_{0}\ldots i_{n-1}}\) and the sums \(\sum\limits_{i_{0}\ldots i_{n-1}}P_{i_{0}\ldots i_{n-1}}^{q}\,\equiv\,S_{q}^{n}\) appearing in eq. (5.84) then become: \[P_{i_{0}}\,=\,\begin{cases}p&\text{if}\ \ x_{1}\in[0,\,p]\\ 1\,-\,p&\text{if}\ \ x_{1}\in[p,\,1]\end{cases}\ \ \to\ S_{q}^{1}\,=\,p^{q}\,+\,(1\,-\,p)^{q} \tag{5.91}\] \[P_{i_{0}i_{1}}\,=\,\begin{cases}p^{2}&\text{if}\ \ x_{1},\,x_{2}\in[0,p]\\ p\,(1\,-\,p)&\text{etc.}\\ (1\,-\,p)^{2}\end{cases}\ \ \to\ S_{q}^{2}\,=\,[p^{q}\,+\,(1\,-\,p)^{q}]^{2}\] i.e. \(S_{q}^{n}\,=\,[p^{q}\,+\,(1\,-\,p)^{q}]^{n}\), which yields for \(K_{q}\): \[K_{q}\,=\,\frac{-1}{q\,-\,1}\,\log\,[p^{q}\,+\,(1\,-\,p)^{q}]\,. \tag{5.92}\] For the limit \(q\,\to\,1\), we obtain from eq. (5.92) the Kolmogorov entropy \[K_{1}\,=\,p\,\log\left(\frac{1}{p}\right)\,+\,(1\,-\,p)\,\log\left(\frac{1}{1\,-\,p}\right)\,. \tag{5.93}\] \(K_{1}\) is, as expected, equal to the positive Liapunov exponent \(\lambda_{m}\) of the system, which can also be obtained directly from \[\lambda_{m}\,=\,\int{\rm d}x\,\rho\,(x)\,\log|f^{\prime}\,(x)|\,=\,p\,\log\left(\frac{1}{p}\right)\,+\,(1\,-\,p)\,\log\left(\frac{1}{1\,-\,p}\right) \tag{5.94}\] where we used the fact that the invariant density is \(\rho\,(x)\,=\,1\) for this map. (This can be checked by using eq. (5.89) in the Frobenius-Perron equation (2.30).) Next, we compute \(\lambda\,(q)\) and \(g\,(\lambda)\) via eqns.
(5.87, 5.92): \[\lambda\,(q)\,=\,\frac{\mathrm{d}}{\mathrm{d}q}\,(q\,-\,1)\,K_{q}\,=\] \[\,=\,-\,[p^{q}\,\log p\,+\,(1\,-\,p)^{q}\,\log\,(1\,-\,p)]/[p^{q}\,+\,(1\,-\,p)^{q}] \tag{5.95}\] which becomes for \[x\,=\,p^{q}/[p^{q}\,+\,(1\,-\,p)^{q}] \tag{5.96}\] equal to \[\lambda\,=\,\lambda\,(x)\,=\,-\,[x\,\log p\,+\,(1\,-\,x)\,\log\,(1\,-\,p)]\,. \tag{5.97}\] \[\hat{P}\,=\,\binom{n}{r}\,p^{r}\,(1\,-\,p)^{n\,-\,r}\,. \tag{5.101}\] Using Stirling's formula \(n!\,\approx\,n^{n}\,\mathrm{e}^{-n}\), this becomes (with \(y=r/n\)): \[\log\,\hat{P}\,=\,-\,n\left[y\,\log\left(\frac{y}{p}\right)\,+\,(1\,-\,y)\,\log\left(\frac{1\,-\,y}{1\,-\,p}\right)\right]\,= \tag{5.102}\] \[\,=\,-\,n\left[y\,\log y\,+\,(1\,-\,y)\,\log\,(1\,-\,y)\,+\,\lambda\,(y)\right]\] where we can again replace \(y\) by \(\lambda\) via eq. (5.97) to obtain \(\hat{P}(\lambda)\). By comparing eqns. (5.97-98) and (5.100, 5.102), we see that \(\mathrm{e}^{\,n g(\lambda)}\) is (apart from a factor \(\mathrm{e}^{\lambda}\)) equal to the probability \(\hat{P}(\lambda)\) of seeing the "Liapunov exponent" \(\lambda\) in a finite series of iterates. The Legendre transformation from the variable \(q\) in \(K_{q}\) to the variable \(\lambda\) yields, therefore, the distribution \(g(\lambda)\) which describes the fluctuations of the Liapunov exponent for a time series of length \(n\). Note that we used for our interpretation a map which is piecewise expanding (i.e. \(|f^{\prime}(x)|\geqslant 1\) for all \(x\in[0,1]\)) and which yields, therefore, only positive expansion rates \(\lambda\). For general systems (which can also be higher dimensional), our results generalize to the statement that \(\mathrm{e}^{\,n g(\lambda)}\) describes the fluctuation spectrum of the (sum of the) positive Liapunov exponents, i. e. of the Kolmogorov entropy for finite time series (see Fig. 86). The numerical computation of \(K_{q}\) from a measured time series proceeds in a fashion which is closely analogous to that of the \(D_{q}\)'s.
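Eqns. (5.92)-(5.95) are simple closed formulas and can be checked directly. The sketch below (illustrative Python, not from the text; the value \(p=0.3\) is an arbitrary choice) evaluates \(K_{q}\) and \(\lambda(q)\) for the two-branch map with probabilities \(p\) and \(1-p\):

```python
import math

def K_q(q, p):
    # generalized entropies, eq. (5.92); q = 1 is treated as the limit (5.93)
    if q == 1:
        return -p*math.log(p) - (1 - p)*math.log(1 - p)
    return -math.log(p**q + (1 - p)**q)/(q - 1)

def lam(q, p):
    # lambda(q) = d/dq [(q - 1) K_q], eq. (5.95)
    w = p**q + (1 - p)**q
    return -(p**q*math.log(p) + (1 - p)**q*math.log(1 - p))/w

p = 0.3
k1_limit = K_q(1 + 1e-8, p)   # numerical q -> 1 limit, should reproduce eq. (5.93)
```

One can verify in this way that \(K_{1}\) equals the Liapunov exponent of eq. (5.94), that \(\lambda(1)=K_{1}\), and that \(K_{q}\) decreases monotonically with \(q\), in analogy to eq. (5.57) for the \(D_{q}\)'s.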
to a whole trajectory of length \(n\) we obtain: \[\sum_{i_{0}\ldots i_{n-1}}P_{i_{0}\ldots i_{n-1}}^{q}\,=\,\frac{1}{N}\,\sum_{i}\left\{\frac{1}{N}\,\sum_{j}\,\Theta\left[l\,-\,\sqrt{\sum\limits_{m\,=\,0}^{n-1}\,\left(\vec{x}_{i+m}\,-\,\vec{x}_{j+m}\right)^{2}}\,\right]\right\}^{q-1}\,\equiv\,C_{n}^{q}\left(l\right)\,. \tag{5.104}\] This is again a generalization of a correlation integral \(C_{n}\left(l\right)\) which has been introduced by Grassberger and Procaccia (1983 b). Fig. 87 shows the results for \(C_{n}(l)\) and \(K_{2}\) for the Hénon map with \(a=1.4\), \(b=0.3\). However, eqns. (5.83, 5.104) yield, for \(q\to 1\), an explicit expression for the \(K\)-entropy itself: \[K=K_{1}=-\lim_{l\to 0}\lim_{n\to\infty}\frac{1}{n}\ \frac{1}{N}\ \sum_{i}\ \log\left\{\frac{1}{N}\sum_{j}\ \Theta\left[l-\sqrt{\sum_{m=0}^{n-1}(\vec{x}_{i+m}-\vec{x}_{j+m})^{2}}\ \right]\right\} \tag{5.107}\] which can be calculated from a measured signal. The condition \(K>0\) provides, of course, a sharper criterion for chaos than \(K_{2}>0\) (see also Cohen and Procaccia, 1985). Let us finally summarize our results by a single formula which demonstrates that all generalized dimensions \(D_{q}\) and entropies \(K_{q}\) can be extracted from experimental data. Eqns. (5.83) and (5.104) yield for \(n\to\infty\) and \(l\to 0\): \[\log C_{n}^{q}(l)\,\propto\,-\,n\,(q-1)\,K_{q}\,. \tag{5.108}\] If we respect the sequence of limits (first \(n\to\infty\), then \(l\to 0\)), we can combine eqns. (5.33, 5.54, 5.74, 5.83, 5.104) and obtain the compact expression \[\lim_{l\to 0}\lim_{n\to\infty}\ \log C_{n}^{q}(l)\,=\,(q-1)\,D_{q}\,\log l\,-\,n\,(q-1)\,K_{q}\,. \tag{5.109}\]

### Characterization of the Attractor by a Measured Signal

Figure 88: a) \(K_{q}\)- and b) \(g(\lambda)\)-spectrum of the tent map (eq. (5.89)). Full lines: theoretical curves obtained from eqns. (5.87, 5.92). Dots: numerical results obtained via eqns. (5.87, 5.109) for 2000 iterates.
(After Pawelzik and Schuster, 1987.) Therefore, a plot of \(\log C_{n}^{q}(l)\) -- which can be determined from an observed time series via eq. (5.104) -- versus \(\log l\) yields, for fixed \(q\) and different values of \(n\), straight lines with slopes \((q-1)\,D_{q}\), whose separations along the \(y\)-axis converge for \(n\to\infty\) to \((q\,-\,1)\,K_{q}\) (see, e. g., Fig. 87 for \(q\,=\,2\)). The spectra \(f(\alpha)\) and \(g(\lambda)\) can be obtained by Legendre transformation from these quantities. Fig. 88 shows examples of \(K_{q}\) and \(g(\lambda)\) curves that have been obtained by this method from a numerically generated time series of the tent map (5.89). Let us finally add a word of caution. It is by no means completely straightforward to obtain the \(D_{q}\) and \(K_{q}\) curves from an experimentally measured time series, because the signal is noisy, the length of the series is finite, and the delay time which is needed to reconstruct the attractor (see eq. 5.44) is generally unknown. All this adds a good deal of ambiguity to the application of the procedures described by eq. (5.109). We would like to call attention to the Proceedings of a conference on "Dimensions and Entropies in Chaotic Systems" (edited by Mayer-Kress, 1986), where the merits and limits of different numerical procedures to extract dimensions, entropies, and Liapunov exponents from a time series are discussed.

### Kaplan-Yorke Conjecture

Although we made above a distinction between dynamic properties of a strange attractor, such as the Liapunov exponents, and static properties measured by the \(D_{q}\)'s, both quantities are in fact connected. For example, if we have a flow in three-dimensional phase space with two negative Liapunov exponents, we know that the attractor contracts to a line with \(D_{q}\,=\,1\) for all \(q\) (see Fig. 89). Another example is the attractor which belongs to the non-area-preserving baker's transformation (5.7a, b).
Its Hausdorff dimension \(D_{B}\) (see eq. (5.10)) can be expressed in terms of the Liapunov exponents \(\lambda_{1}\ =\ \log\ 2\), \(\lambda_{2}\ =\ \log\ a\): \[D_{B}\ =\ 1\ +\ \frac{\lambda_{1}}{|\lambda_{2}|}. \tag{5.110}\] Kaplan and Yorke (1979) conjectured the following more general formula for arbitrary strange attractors: \[D_{KY}\ =\ j\ +\ \frac{\sum\limits_{i\ =\ 1}^{j}\lambda_{i}}{|\lambda_{j\ +\ 1}|}. \tag{5.111}\] Here \(D_{KY}\) is the Hausdorff dimension according to Kaplan and Yorke, and the Liapunov exponents are ordered \(\lambda_{1}\ >\ \lambda_{2}\ >\ \ldots\ >\ \lambda_{d}\), such that \(j\) is the largest integer for which \(\sum\limits_{i\ =\ 1}^{j}\lambda_{i}\ >\ 0\). Although this formula has been checked numerically and shown to hold for some cases by Russell et al. (1980) (see Table 9), it seems to be rigorously valid only for homogeneous attractors, and its range of applicability is still an active field of research. \begin{table} \begin{tabular}{l l l} \hline \hline System & \(D\) (numerically) & \(D_{KY}\) \\ \hline Hénon map & & \\ \(a=1.2\), \(b=0.3\) & \(1.202\,\pm\,0.003\) & \(1.200\,\pm\,0.001\) \\ \(a=1.4\), \(b=0.3\) & \(1.261\,\pm\,0.003\) & \(1.264\,\pm\,0.002\) \\ Zaslavsky map & & \\ eq. (1.12a, b) & & \\ for \(f(x)=\cos x\) & \(1.380\,\pm\,0.007\) & \(1.387\,\pm\,0.001\) \\ \hline \hline \end{tabular} \end{table} Table 9: Test of the Kaplan-Yorke Conjecture. Figure 89: Connection between the dimensions of simple attractors embedded in three-dimensional phase space and the signs of their three Liapunov exponents given in the brackets. (Zero means that the Liapunov exponent has this value.) (After Shaw, 1981.)

### 5.4 Pictures of Strange Attractors and Fractal Boundaries

D. Ruelle writes at the end of his article on strange attractors in "The Mathematical Intelligencer" (1980): "I have not (yet) spoken of the esthetic appeal of strange attractors.
These systems of curves, these clouds of points suggest sometimes fireworks or galaxies, sometimes strange and disquieting vegetal proliferations. A realm lies here to explore and harmonies to discover". Fig. 90 shows several examples of strange attractors that support this statement. But we will see in the following that _already the boundaries of attraction_ of simple rational maps of the complex plane onto itself _can have very complicated structures_. If these objects are plotted in color, they show striking parallels to some of the self-similar pictures of M. C. Escher. Let us begin with a study of the basins of attraction for the fixed points \(z^{*}\,=\,(1,\,{\rm e}^{\,2\pi i/3},\,{\rm e}^{\,4\pi i/3})\) of the map \[z_{n\,+\,1}\,=\,z_{n}\,-\,(z_{n}^{3}\,-\,1)/(3\,z_{n}^{2}) \tag{5.112}\] in the complex plane. (Eq. (5.112) is just Newton's algorithm for the solution of \(f(z)\,=\,z^{3}\,-\,1\,=\,0\): \(0\,=\,f(z)\,\approx\,f(z_{0})\,+\,f^{\prime}(z_{0})\,(z\,-\,z_{0})\,\to\,z_{1}\,=\,z_{0}\,-\,f(z_{0})/f^{\prime}\,(z_{0})\), etc.) One could think that the different basins of attraction for the roots \(z^{*}\) on the unit circle would be separated by straight lines. But, if one runs eq. (5.112) on a computer and colors starting points which move to \(1,\,{\rm e}^{\,2\pi i/3},\,{\rm e}^{\,4\pi i/3}\) in red, green and blue, respectively (and black if the starting point does not converge), one sees from the results in Plate VIII (the color Plates I-XV are shown at the beginning of the book) that the boundary of the different basins forms highly interlaced self-similar structures (see also Fig. 91). This fractal boundary solves the nontrivial problem of how to paint a plane with three colors in such a way that each boundary point of a colored region (e. g. red) is also a boundary point of the other regions (green, blue).
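The coloring experiment behind Plate VIII takes only a few lines. The sketch below (illustrative Python/NumPy, not from the text; the grid size, iteration count, and the small guard against the singularity of the map at \(z=0\) are ad-hoc choices) iterates eq. (5.112) on a grid of starting points and records which root each point approaches:

```python
import numpy as np

roots = np.exp(2j*np.pi*np.arange(3)/3)        # the three roots of z^3 = 1

def newton_basin(z0, n_iter=40):
    # iterate eq. (5.112) and return, for each starting point, the index of
    # the nearest root after n_iter steps (0 -> 1, 1 -> e^{2pi i/3}, 2 -> e^{4pi i/3})
    z = np.array(z0, dtype=complex)
    z[z == 0] = 1e-9                           # guard: the map is singular at z = 0
    for _ in range(n_iter):
        z = z - (z**3 - 1.0)/(3.0*z**2)
        z[z == 0] = 1e-9
    return np.argmin(np.abs(z[..., None] - roots), axis=-1)

xs = np.linspace(-1.5, 1.5, 201)
grid = xs[None, :] + 1j*xs[:, None]            # starting points in the complex plane
basin = newton_basin(grid)                     # 0/1/2 correspond to red/green/blue
```

Rendering `basin` as an image (one color per index) reproduces the interlaced fractal boundary of Fig. 91; zooming into the boundary shows its self-similarity.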
The boundary of a basin of attraction of a rational map is nowadays called the _Julia set_ (Julia, 1918) (for a more precise definition see, e. g., Brolin, 1965). "Usually" Julia sets are fractals (for \(f(z)\,=\,z^{2}\) the Julia set is the unit circle), and the motion of iterates on these sets is chaotic. Next we consider the map \[z_{n\,+\,1}\,=\,f_{c}\,(z_{n})\,\equiv\,z_{n}^{\,2}\,+\,c \tag{5.113}\] in the complex plane for complex parameter values \(c\). (Eq. (5.113) is the logistic map \(x_{n\,+\,1}\,=\,rx_{n}\,(1\,-\,x_{n})\) in new variables: \(x\,=\,1/2\,-\,z/r\); \(c\,=\,(2\,r\,-\,r^{2})/4\).) Figure 90: a), b) Both pictures are composed of different parts of strange attractors which arise if one iterates discretized versions of \(\dot{y}=y\,(1\,-\,y)\) and the pendulum equation, respectively (after Prüfer, 1984; Peitgen and Richter, 1984). c) Poincaré plot (\(\vec{x}_{n}=\vec{x}(t=n\,T)\)) of trajectories of the driven Duffing oscillator (\(\ddot{x}\,+\,\gamma\dot{x}\,+\,ax\,+\,bx^{3}=A\,+\,B\,\cos\,(2\,\pi\,t/T)\)) in the chaotic regime (after Kawakami, 1984). Figure 91: Self-similarity of the Julia set for eq. (5.112) (see also Plate VIII) (after Peitgen and Richter, 1984). Figure 92: Two typical Julia sets of \(f_{c}\,(z)\) in eq. (5.113): a) \(c=0.32+0.043i\), b) \(c=-0.194+0.6557i\). (After Peitgen and Richter, 1984.) The boundary of the basin of attraction of \(z^{*}=\infty\) forms a Julia set \(J_{c}\) of \(f_{c}(z)\), which depends on \(c\): \[J_{c}\ =\ \mbox{boundary of}\ \{z\,|\,\lim_{n\,\to\,\infty}f_{c}^{n}(z)\ \to\ \infty\,\}. \tag{5.114}\] Fig. 92 shows several examples of these sets. An important theorem by Julia (1918) and Fatou (1919) states that \(J_{c}\) is connected if, and only if, \(f_{c}^{n}(0)\) does _not_ tend to infinity.
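This connectivity criterion translates directly into a numerical membership test. The sketch below (illustrative Python, not from the text; the iteration cutoff is an ad-hoc choice, while the escape radius \(R=2\) is the standard practical bound for \(z^{2}+c\)) checks whether the critical orbit \(f_{c}^{n}(0)\) escapes:

```python
def escapes(c, n_iter=100, R=2.0):
    # test whether the critical orbit f_c^n(0) leaves the disk |z| <= R
    z = 0.0 + 0.0j
    for _ in range(n_iter):
        z = z*z + c
        if abs(z) > R:
            return True
    return False

# c = 0 (Julia set: the unit circle), c = -1, and c = 0.25 give bounded
# critical orbits, hence connected Julia sets; c = 1 does not
inside = [not escapes(c) for c in (0.0, -1.0, 0.25)]
outside = escapes(1.0)
```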
Since this limit depends only on \(c\), one is led to consider the set \(M\) of _parameter values_ \(c\) in the complex plane for which \(J_{c}\) is connected, i. e. \[M\ =\ \{c\,|\,J_{c}\ \mbox{is connected}\}\ =\ \{c\,|\,\lim_{n\,\to\,\infty}f_{c}^{n}(0)\,\not\to\,\infty\}\,. \tag{5.115}\] Figure 93: Correspondence between the structure of "Mandelbrot's set" \(M\) in the \(c\)-plane and the structure of bifurcations of the (transformed) logistic map \(x_{n\,+\,1}=x_{n}^{2}+c\) along the real \(c\)-axis (after Peitgen and Richter, 1984). The set \(M\) is called "Mandelbrot's set" after B. B. Mandelbrot, who first published (1980) a picture of \(M\) (see Fig. 93). It shows that \(M\) also has a fractal structure (but it is not a Julia set). This study was extended by Peitgen and Richter (1984). If \(c\) does not belong to \(M\), then \(\lim\limits_{n\to\infty}f_{c}^{n}(0)\to\infty\). Therefore, they define "level curves" in the following way: color a starting point according to the number of iterations it needs to leave a disk with a given radius \(R\). As shown by Douady and Hubbard (1982), lines of equal color can be interpreted as equipotential lines if the set \(M\) is considered to be a charged conductor. Plates VIII-XV show the fascinating results of this procedure, which brings us back to Ruelle's remark at the beginning of this section.

## Chapter 6 The Transition from Quasiperiodicity to Chaos

In the first section of this chapter, we shall discuss the emergence of a strange attractor in the Ruelle-Takens-Newhouse route to turbulence (in time) and present some experimental support for this route. The subsequent section contains a study of the universal properties of the transition from quasiperiodicity to chaos via circle maps, and we introduce two renormalization schemes which are appropriate to describe local and global universality.
In section 6.3, we present experimental evidence that circle maps indeed provide a useful description of the transition from quasiperiodicity to chaos in real systems. The chapter ends with a critical review of different transition scenarios that lead to chaotic behavior.

### 6.1 Strange Attractors and the Onset of Turbulence

We come now to one of the most fascinating and difficult questions; namely, how the onset of fluid turbulence in time (we will not consider the distribution of spatial inhomogeneities) is related to the emergence of a strange attractor. To understand what has been undertaken in this area, we first introduce the Hopf bifurcation (Hopf, 1942).

#### Hopf Bifurcation

A simple Hopf bifurcation generates a limit cycle starting from a fixed point. For example, consider the following differential equations in polar coordinates: \[\frac{\mathrm{d}r}{\mathrm{d}t} = -(\Gamma r + r^3)\,; \quad \Gamma = a - a_c \tag{6.1a}\] \[\frac{\mathrm{d}\theta}{\mathrm{d}t} = \omega\,. \tag{6.1b}\] Their solutions are \[r^2(t) = \frac{\Gamma r_0^2\,\mathrm{e}^{-2\Gamma t}}{r_0^2\,(1 - \mathrm{e}^{-2\Gamma t}) + \Gamma} \qquad\mbox{with}\quad r_0 = r(t = 0) \tag{6.2a}\] \[\theta(t) = \omega t \qquad\mbox{with}\quad \theta(t = 0) = 0. \tag{6.2b}\] For \(\Gamma \geq 0\) the trajectory approaches the origin (fixed point), whereas for \(\Gamma < 0\) it spirals towards a limit cycle with radius \(r_\infty = |a - a_c|^{1/2}\), as shown in Fig. 94.
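The approach to the limit cycle described by eqs. (6.1-2) can be verified directly; the following sketch (illustrative parameter values, simple Euler integration, not part of the original text) checks that for \(\Gamma < 0\) the radius settles at \(r_\infty = |\Gamma|^{1/2}\), while for \(\Gamma > 0\) it decays to the fixed point at the origin.

```python
import math

# Numerical check of the Hopf normal form (6.1a): dr/dt = -(Gamma*r + r^3).
# For Gamma < 0 the radius approaches the limit cycle r_inf = |Gamma|^(1/2);
# for Gamma > 0 it decays to the fixed point r = 0.
def integrate_r(r0, gamma, t_end, dt=1e-3):
    r, t = r0, 0.0
    while t < t_end:
        r += dt * (-(gamma * r + r**3))   # explicit Euler step
        t += dt
    return r

gamma = -0.25                  # supercritical side: a > a_c
r_inf = math.sqrt(-gamma)      # predicted limit-cycle radius 0.5
r_num = integrate_r(0.05, gamma, t_end=60.0)
print(abs(r_num - r_inf) < 1e-3)          # True: spirals onto the cycle
print(integrate_r(0.5, +0.25, 60.0) < 1e-3)  # True: decays to the origin
```

The closed-form solution (6.2a) could equally well be evaluated; the Euler integration simply mimics what one would do for a system without an analytic solution.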
If (6.1a, b) is transformed into rectangular coordinates \[\frac{\mathrm{d}x}{\mathrm{d}t} = -[\Gamma + (x^2 + y^2)]\,x - \omega y \tag{6.3a}\] \[\frac{\mathrm{d}y}{\mathrm{d}t} = -[\Gamma + (x^2 + y^2)]\,y + \omega x \tag{6.3b}\] and linearized about the origin, we obtain \[\frac{\mathrm{d}\vec{f}}{\mathrm{d}t} = \mathbf{A}\vec{f} \tag{6.4}\]

Figure 94: Hopf bifurcation from a fixed point (a) to a limit cycle (b), and behavior of the eigenvalues \(\lambda\) (c).

where \(\vec{f} = (\Delta x, \Delta y)\), and \(\mathbf{A}\) is the matrix \[\mathbf{A} = \begin{pmatrix} -\Gamma & -\omega \\ \omega & -\Gamma \end{pmatrix} \tag{6.5}\] with eigenvalues \(\lambda_\pm = -\Gamma \pm i\omega\). This means that at a Hopf bifurcation a pair of conjugate eigenvalues crosses the imaginary axis, as indicated in Fig. 94c.

### Landau's Route to Turbulence

A Hopf bifurcation introduces a new fundamental frequency \(\omega\) into the system. As early as 1944, Landau therefore suggested a route to turbulence (in time) in which the chaotic state is approached by an infinite sequence of Hopf instabilities, as shown in Fig. 95. Although this route leads to a time dependence which becomes more and more complicated as more and more frequencies appear, the power spectrum always remains discrete and approaches the continuum limit only after an infinite sequence of Hopf bifurcations.

### Ruelle-Takens-Newhouse Route to Chaos

Fig. 96 shows that this is not the case for the Bénard experiment. After the appearance of two fundamental frequencies, the power spectrum becomes continuous. This experiment was in fact performed after the theoretical work of Ruelle, Takens, and Newhouse (1978), who had suggested a route to chaos which is much shorter than that proposed by Landau (1944). They showed that, after three Hopf bifurcations, regular motion becomes highly unstable in favor of motion on a strange attractor (see Fig. 97).

Figure 95: Landau's route to chaos.
As the parameter \(R\) increases, more and more fundamental (i.e. incommensurate) frequencies are generated by Hopf bifurcations. To be precise, we quote their theorem verbatim (Ruelle, Takens, Newhouse, 1978): "Let \(v\) be a constant vector field on the \(n\)-torus \(T^n = R^n/Z^n\). If \(n \geq 3\), every \(C^2\) neighborhood of \(v\) contains a vector field \(v'\) with a strange Axiom \(A\) attractor. If \(n \geq 4\), we may take \(C^\infty\) instead of \(C^2\)." (Here \(C^2\) means that the neighborhood of the vector field is twice continuously differentiable; an Axiom \(A\) attractor (see Smale, 1967) is essentially our strange attractor; and we finally mention that the original work of Ruelle and Takens (1971) described the decay of a four-torus instead of a three-torus as in the theorem above.)

Fig. 97: The Ruelle-Takens-Newhouse route to chaos.

Fig. 96: Power spectrum of the convection current for a Bénard experiment (after Swinney and Gollub, 1978). With increasing (relative) Rayleigh number \(R^* = R/R_c\), the following states are observed: a) periodic motion with one frequency and its harmonics, b) quasiperiodic motion with two incommensurate frequencies and their linear combinations, c) nonperiodic chaotic motion with some sharp lines, d) chaos.

This means practically that if a system undergoes three Hopf bifurcations, starting from a stationary solution as a parameter is varied, then it is "likely" that the system possesses a strange attractor after the third bifurcation. The power spectrum of such a system will exhibit one, then two, and then possibly three independent frequencies. When the third frequency is about to appear, some broad-band noise will simultaneously appear if there is a strange attractor.
Practically, the three torus can decay (into a strange attractor) immediately after the third frequency has appeared.

### Possibility of Three-Frequency Quasiperiodic Orbits

Newhouse, Ruelle and Takens (1978) showed that, in a system with a phase-space flow consisting of three incommensurate frequencies, arbitrarily small changes to the system convert the flow from a quasiperiodic three-frequency flow to chaotic flow. One might naively conclude that three-frequency flow is improbable, since it can be destroyed by small perturbations. However, it has been shown numerically by Grebogi, Ott and Yorke (1983) that the _addition of smooth nonlinear perturbations does not typically destroy three-frequency quasiperiodicity_. (In the proof by Newhouse et al., the small perturbations required to create chaotic attractors have small first and second derivatives, but do not necessarily have small third- and higher-order derivatives, as would be expected for physical perturbations.) The calculation by Grebogi et al. (1983) can be summarized as follows: According to Section 6.2, the Poincaré map associated with a flow having two incommensurate frequencies (perturbed by \(\varepsilon f(\theta)\)) can be described by the map (6.13): \[\theta_{n+1} = \theta_n + \Omega + \varepsilon f(\theta_n) \tag{6.6}\] where \(f(\theta)\) is periodic in \(\theta\), and \(\theta_n\) is taken modulo 1. By analogy, a flow with three incommensurate frequencies corresponds to a map: \[\theta_{n+1} = \theta_n + \omega_1 + \varepsilon P_1(\theta_n, \varphi_n) \tag{6.7a}\] \[\varphi_{n+1} = \varphi_n + \omega_2 + \varepsilon P_2(\theta_n, \varphi_n) \tag{6.7b}\] where \(\theta_n\) and \(\varphi_n\) are again taken modulo 1, and \(P_{1,2}\) are periodic in \(\theta_n\) and \(\varphi_n\).
The parameters \(\omega_1\) and \(\omega_2\) are incommensurate with each other and with unity; that is, there exist no integers \(p\), \(q\), \(r\) (not all zero) for which \(p\,\omega_1 + q\,\omega_2 + r = 0\). By expressing \(P_{1,2}\) as a Fourier sum of terms \[A_{r,s}\,\sin\,[2\pi(r\theta + s\varphi + B_{r,s})] \tag{6.8}\] and retaining (somewhat arbitrarily) only the terms \((r, s) = (0, 1), (1, 0), (1, 1), (1, -1)\), Grebogi et al. calculated the Liapunov exponents \(\lambda_1\), \(\lambda_2\) of the map (6.7) for random values of \(\omega_1\), \(\omega_2\), \(A_{r,s}\), and \(B_{r,s}\). Their results are summarized in Table 11, which shows that for a fixed typical choice of \(P_{1,2}\), the measure of \((\omega_1, \omega_2)\) yielding chaos approaches zero as \(\varepsilon \to 0\). Three-frequency quasiperiodicity is possible only for \(\varepsilon < \varepsilon_c\), where the map is invertible.

Table 11: Types of attractors of the map (6.7), classified by their Liapunov exponents \(\lambda_1 \geq \lambda_2\), together with the frequencies of their occurrence for several values of \(\varepsilon/\varepsilon_c\). The data in this table were computed using 256 random values of \((\omega_1, \omega_2)\); the Liapunov exponents have been determined to the order \(10^{-4}\) (Grebogi et al., 1983a).

Figure 98: Log-linear plot of the power spectrum (of the local temperature) in a Bénard experiment with mercury in a magnetic field. a) Quasiperiodic region with two incommensurate frequencies \(f_1\) and \(f_2\); b) three-frequency quasiperiodicity, i.e. \(f_1\), \(f_2\) and \(f_3\) are present together with self-generated noise which decays exponentially. (Libchaber et al., 1983.)

Figure 99: Power spectrum of the voltage across a BSN crystal through which a constant dc-current is maintained. With decreasing temperature, one observes a transition from one \(\rightarrow\) two \(\rightarrow\) three fundamental frequencies to chaos (after Martin, Leber and Martienssen, 1984).
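The structure of this calculation can be sketched in a few lines. The code below is an illustrative reimplementation, not the authors' program: amplitudes, phases, iteration counts and the starting point are arbitrary choices. It iterates the map (6.7) together with two tangent vectors, re-orthonormalizing at every step to extract the two Liapunov exponents \(\lambda_1 \geq \lambda_2\).

```python
import math, random

# Sketch of the Grebogi-Ott-Yorke calculation: iterate the two-angle
# map (6.7), carry two tangent vectors through the Jacobian, and obtain
# the Liapunov exponents via repeated Gram-Schmidt orthonormalization.
# P_1, P_2 retain only the Fourier terms (r, s) = (0,1), (1,0), (1,1),
# (1,-1), as in the text; amplitudes and phases are random.
TERMS = [(0, 1), (1, 0), (1, 1), (1, -1)]

def liapunov_pair(eps, w1, w2, n=4000, seed=7):
    rng = random.Random(seed)
    coefs = [[(rng.random(), rng.random()) for _ in TERMS] for _ in range(2)]

    def P(c, th, ph):
        return sum(A * math.sin(2*math.pi*(r*th + s*ph + B))
                   for (r, s), (A, B) in zip(TERMS, c))

    def dP(c, th, ph):                     # (dP/d theta, dP/d phi)
        g = [A * 2*math.pi * math.cos(2*math.pi*(r*th + s*ph + B))
             for (r, s), (A, B) in zip(TERMS, c)]
        return (sum(gi*r for gi, (r, _) in zip(g, TERMS)),
                sum(gi*s for gi, (_, s) in zip(g, TERMS)))

    th, ph = 0.1, 0.2
    u, v = [1.0, 0.0], [0.0, 1.0]          # tangent vectors
    s1 = s2 = 0.0
    for _ in range(n):
        (a11, a12), (a21, a22) = dP(coefs[0], th, ph), dP(coefs[1], th, ph)
        J = ((1 + eps*a11, eps*a12), (eps*a21, 1 + eps*a22))
        th, ph = ((th + w1 + eps*P(coefs[0], th, ph)) % 1.0,
                  (ph + w2 + eps*P(coefs[1], th, ph)) % 1.0)
        u = [J[0][0]*u[0] + J[0][1]*u[1], J[1][0]*u[0] + J[1][1]*u[1]]
        v = [J[0][0]*v[0] + J[0][1]*v[1], J[1][0]*v[0] + J[1][1]*v[1]]
        n1 = math.hypot(*u); u = [u[0]/n1, u[1]/n1]
        proj = u[0]*v[0] + u[1]*v[1]
        v = [v[0] - proj*u[0], v[1] - proj*u[1]]
        n2 = math.hypot(*v); v = [v[0]/n2, v[1]/n2]
        s1 += math.log(n1); s2 += math.log(n2)
    return s1/n, s2/n

# eps = 0: unperturbed three-frequency quasiperiodicity, lambda_1 = lambda_2 = 0.
l1, l2 = liapunov_pair(0.0, (5**0.5 - 1)/2, 2**0.5 - 1)
print(l1, l2)   # 0.0 0.0
```

Repeating this for many random \((\omega_1, \omega_2)\) at fixed \(\varepsilon\) and classifying the resulting exponent pairs mirrors how the entries of Table 11 were obtained.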
A transition from quasiperiodicity to chaos which still exhibits three-frequency quasiperiodicity (i.e. the decay of this state to a strange attractor is not complete) has been observed by Libchaber, Faure, and Laroche (1983) in a Bénard experiment with mercury in a magnetic field (see Fig. 98) and by Martin, Leber and Martienssen (1984) in the voltage spectrum of a ferroelectric Barium-Sodium-Niobate (BSN) crystal (see Fig. 99). In the first case, the horizontal field serves as a second control parameter and additionally increases the viscosity of the electrically conducting fluid. In the second case, the Ba2NaNb5O15 crystal, which displays a nonlinear current-voltage characteristic, is placed into a heating oven through which a constant flow of humidified oxygen is maintained (part of the conduction mechanism is due to oxygen vacancies). A stabilized dc-current is applied along the _c_-axis of the sample, and one measures the voltage across the crystal together with the birefringence pattern. With increasing voltage, "domains" emerge from the cathode and disperse gradually through the crystal (see Plate IV at the beginning of the book). Since there are _three_ control parameters (temperature, current density and oxygen flow), BSN provides an interesting system for experimental studies of chaos.

### Break up of a Two Torus

It has been mentioned above that the conversion of quasiperiodic motion into chaotic motion on a strange attractor can apparently occur directly from a two torus if the three torus is so unstable that the third incommensurate frequency cannot be observed. Such transitions also belong in principle to the Ruelle-Takens-Newhouse scenario (see, however, Curry and Yorke, 1978) and have been seen in two hydrodynamic experiments. Dubois and Bergé (1982) observed experimentally the emergence of a strange attractor in a _Bénard experiment_.
They measured the time series of the temperature \(T(t)\) and reconstructed a two-dimensional Poincaré section by plotting \([T(t), \dot{T}(t)]\) at intervals \(t = n\tau\), where \(\omega_0 = 2\pi/\tau\) was determined from an independent measurement of the velocity. (This is another method of reconstructing an attractor from the measurement of one variable; note that in our example from chapter 5.3, \(\vec{x}(t) = [\sin(2\pi t), \cos(2\pi t)]\) (eq. 5.43), the \(y\) component \(y = \cos(2\pi t)\) could be obtained by differentiation, i.e. \(y \propto \dot{x}\).) Fig. 100 shows how the Poincaré section, which consists of a closed loop (as expected for a section of a torus), develops into a strange attractor. Another example of the emergence of chaos after two Hopf bifurcations has been observed after a Taylor instability by Swinney and Gollub (1978). The Taylor instability occurs in a fluid layer between an inner cylinder rotating with an angular velocity \(\Omega\) and a stationary outer cylinder (see Fig. 101 and Plate III at the beginning of the book). For small \(\Omega\), angular momentum fed to the inner cylinder is transported outside by viscosity (a). Above a critical angular velocity \(\Omega_c\), this state becomes unstable, and momentum is transported by annular convection cells (b). At still higher \(\Omega\)'s, periodic and multiply periodic oscillations of these cells occur, which merge into chaos after two Hopf bifurcations. The following results in Fig. 102 have been obtained by reconstructing the phase space for a Taylor experiment from a time series of the radial velocity \(\{v(t_k), \ldots, v(t_k + m\tau)\}\) with \(t_k = k\cdot\tau_0\), \(k = 0, 1, 2, \ldots\) (\(\tau_0 < \tau\)): a) The Poincaré section shows the break up of a torus, similar to Fig. 100.
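The reconstruction idea from the paragraph above can be illustrated on the chapter 5.3 example itself. The sketch below (synthetic data, illustrative step sizes of my own choosing) recovers the missing component of \(x(t) = \sin(2\pi t)\) in two ways: by a finite-difference derivative, and by a quarter-period delay.

```python
import math

# Attractor reconstruction from a single measured variable, mirroring the
# remark about eq. (5.43): for x(t) = sin(2*pi*t), the missing component
# y = cos(2*pi*t) is proportional to dx/dt, so either a finite-difference
# derivative or a quarter-period delay recovers the circle.
dt = 1e-3
x = [math.sin(2*math.pi*k*dt) for k in range(2000)]

# (a) derivative reconstruction: [x(t), x'(t)/(2*pi)]
deriv = [(x[k+1] - x[k]) / dt / (2*math.pi) for k in range(len(x) - 1)]

# (b) delay reconstruction: [x(t), x(t + tau)] with tau = quarter period
lag = 250                      # 0.25 time units = quarter period
delay = [x[k+lag] for k in range(len(x) - lag)]

# both reconstructed points lie on the unit circle x^2 + y^2 = 1
k = 400
print(abs(x[k]**2 + deriv[k]**2 - 1.0) < 1e-2)   # True
print(abs(x[k]**2 + delay[k]**2 - 1.0) < 1e-2)   # True
```

For experimental data the delay variant (b) is usually preferred, since numerical differentiation amplifies measurement noise; this is exactly the delay-coordinate construction \(\{v(t_k), \ldots, v(t_k + m\tau)\}\) used for the Taylor experiment.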
Figure 100: Poincaré sections for the Bénard experiment: a) Schematic section through the torus; b)-d) experiments showing, with increasing Rayleigh number, a transition from quasiperiodic motion (b) to substructures indicating the destruction of the torus (c) and then to a strange attractor (d). (After Dubois, Bergé and Croquett, 1982.)

Figure 101: The Taylor instability and power spectrum of the velocity (after Swinney and Gollub, 1978).

Figure 102: Experimental properties of a strange attractor which occurs in a Taylor experiment: a) Plane of the Poincaré section and break up of the torus with increasing \(\Omega\). b) \(K\)-entropy (▲) and largest Liapunov exponent (●) vs. \(\Omega/\Omega_c\). c) Hausdorff dimension \(D\) (●) and correlation dimension \(D_2\) (▲) vs. \(\Omega/\Omega_c\). (After Brandstäter et al., 1983.)

b) The \(K\)-entropy (obtained via eq. (5.109)) and the largest Liapunov exponent \(\lambda\) (obtained from the separation of nearby orbits in five-dimensional phase space) become positive for \(\Omega > \Omega^*\). This _proves experimentally_ the existence of a _strange_ attractor. c) The Hausdorff dimension \(D\) (obtained via eq. (5.49)) and \(D_2\) (obtained via eq. (5.76)) increase slowly with \(\Omega/\Omega_c\). This shows that there are only a _few relevant degrees of freedom_, even at \(\Omega\)-values that are 30% above the critical value \(\Omega^* = 12\,\Omega_c\) at the onset of chaos.

### 6.2 Universal Properties of the Transition from Quasiperiodicity to Chaos

The transition from quasiperiodic motion on a two torus to chaotic motion has also been investigated by studying simple maps (Feigenbaum and Kadanoff, 1982; Rand et al., 1982, 1983; Jensen et al., 1984). The simplest example is the unperturbed circle map \[\theta_{n+1}\,=\,f(\theta_{n})\,\equiv\,\theta_{n}\,+\,\Omega\mod 1\,\,.
\tag{6.9}\] The parameter \(\Omega = \omega_1/\omega_2\) determines the winding number \[w = \lim_{n\to\infty} \frac{f^n(\theta_0) - \theta_0}{n} \tag{6.10}\]

Figure 103: Motion on a unit torus. For rational \(\omega_1/\omega_2 = p/q\), the trajectory closes after \(q\) cycles. This is called a _mode-locked_ state. For irrational \(\omega_1/\omega_2\), the motion is called _quasiperiodic_; the trajectory never closes and covers the whole torus.

which measures the average shift of the angle \(\theta\) per iteration (in eq. (6.10) the modulo in \(f\) has to be omitted). We find from eqns. (6.9-10) \(w = \Omega\). But it should be noted that the definition of the winding number \(w\) given in eq. (6.10) holds for all maps of the unit circle onto itself. In order to obtain an idea of how eq. (6.9) should be modified to describe the break up of a torus in a physical system, we reconsider our kicked rotator from chapter 1, eq. (1.18), for the case that a constant torque \(\Gamma\Omega\) has been added to the driving force. If we make, in eqns. (1.18a, b), the following simplifying substitutions for \(T = 1\): \[x_n \to \theta_n\,; \qquad \frac{\mathrm{e}^{\Gamma} - 1}{\Gamma}\,y_n - \Omega \to r_n\,; \qquad \mathrm{e}^{-\Gamma} = b \tag{6.11a}\] \[Kf(\theta_n) \to \frac{\Gamma}{1 - \mathrm{e}^{-\Gamma}}\,\frac{K}{2\pi}\,\sin(2\pi\theta_n) + \Gamma\Omega \tag{6.11b}\] we obtain \[\theta_{n+1} = \theta_n + \Omega - \frac{K}{2\pi}\,\sin(2\pi\theta_n) + br_n \quad\mathrm{mod}\;1 \tag{6.12a}\] \[r_{n+1} = br_n - \frac{K}{2\pi}\,\sin(2\pi\theta_n) \tag{6.12b}\] where \(\theta_n\) is the angle of the kicked rotator at time \(n\), and \(r_n = y_n(\mathrm{e}^{\Gamma} - 1)/\Gamma - \Omega\) is, apart from a constant shift, proportional to the angular velocity \(y_n = \dot\theta(t = n)\). Eqns.
(6.12a-b) define the so-called _dissipative circle map_. For vanishing nonlinearity (\(K = 0\)) and finite damping rate \(b = \mathrm{e}^{-\Gamma} < 1\), eqns. (6.12a-b) reduce to the unperturbed map eq. (6.9), where \(\Omega\) sets the rate of rotation. Fig. 104 shows that the dissipative circle map indeed describes the break up of a torus if the parameter \(K\), which measures the strength of the nonlinearity \(\sin(2\pi\theta_n)\), is increased from \(K = 0.814\) to \(K = 1.2\). In both pictures, we plotted \(y_n = (1 + 4r_n)\sin\theta_n\) versus \(x_n = (1 + 4r_n)\cos\theta_n\) with \(\theta_n\) and \(r_n\) from eq. (6.12) and \(\Omega = 0.612\), \(b = 0.5\). These pictures should be compared to Figs. 100 and 102, which show the destruction of a torus in experimentally measured Poincaré maps. For strongly dissipative systems (\(b \to 0\)), the radial motion of the trajectory disappears in eqns. (6.12a, b), and they reduce to the _one-dimensional circle map:_ \[\theta_{n+1} = f(\theta_n) \equiv \theta_n + \Omega - \frac{K}{2\pi}\,\sin(2\pi\theta_n) \quad\mathrm{mod}\,1\, \tag{6.13}\] which describes the transition from quasiperiodicity to chaos only by the motion of the angles \(\theta_n\). Here \(\theta_n\) is again understood modulo 1; \(K\) provides, in analogy to the Reynolds number, a measure for the nonlinearity \(\sin(2\pi\theta_n)\) (which must be added to obtain a transition to chaos), and \(\Omega\) again sets the rate of rotation (see eq. (6.10)). In the following section, we study the break up of the torus into a strange attractor via this map. It will be shown below that (by analogy to the logistic map for the period-doubling route) the special form of \(f(\theta)\) is rather unimportant; of more importance are the following general features of \(f(\theta)\): * \(f(\theta)\) has the property \(f(\theta + 1) = 1 + f(\theta)\).
* For \(|K| < 1\), \(f(\theta)\) (and its inverse) exists and is differentiable (i.e. \(f(\theta)\) is a diffeomorphism). * At \(K = 1\), \(f^{-1}(\theta)\) becomes nondifferentiable, and for \(|K| > 1\), no unique inverse to \(f(\theta)\) exists. To obtain an overview of the behavior of the circle map (6.13), we show in Plate XVI (at the beginning of this book) its Liapunov exponent \(\lambda\) depicted in colors as a function of the two control parameters \(K\) and \(\Omega\). We distinguish three regimes: * For \(|K| < 1\), one finds the so-called Arnold tongues (Arnold, 1965) where the motion is mode locked; that is, the winding number \(w\) (see eq. (6.10)) is rational.

Fig. 105: Variation of the map \(f(\theta)\) with the parameter \(K\). Note that for \(K > 1\), the map becomes noninvertible.

Between these tongues, the winding number is irrational. Both areas in the \(K-\Omega\) plane, the mode-locked and the non-mode-locked one, have finite measure (see Fig. 106). * At \(K = 1\), the Arnold tongues have moved together in such a way that the non-mode-locked \(\Omega\) intervals form a self-similar Cantor set with zero measure. * For \(|K| > 1\), the map becomes noninvertible, chaotic behavior becomes possible, but chaotic and nonchaotic regions are _densely_ interwoven in parameter space (i.e. the \(K-\Omega\) plane). In the following section, we will investigate these different regimes in more detail. In the first part of this section, we study the nonchaotic mode-locking behavior. Mode locking means, according to eq. (6.10), that the ratio between the number of cycles which the system executes, divided by the number of oscillations of the driving force (think of a kicked rotator), is a rational number. Thus, mode locking with winding number \(w = 1\) corresponds to complete synchronisation between the external force and the system. Since this phenomenon occurs very often in nature \(-\) already in the 17th century the Dutch physicist Ch.
Huyghens observed synchronisation between two clocks hanging back-to-back on a wall \(-\) the understanding of mode locking in nonlinear systems is of considerable interest. In the second part of this section, we investigate universal properties at the transition from quasiperiodicity to chaos using different renormalization-group formalisms. Since one has two control parameters \(K\) and \(\Omega\), one has to distinguish between _local_ scaling behavior, which occurs _near a point_ in the \(K-\Omega\) plane, and _global_ scaling behavior, which occurs _for a whole set of \(\Omega\) values_ and describes the merging together of the Arnold tongues as the line \(K = 1\) is approached in Fig. 106. It will be shown that the local transition from quasiperiodicity to chaos near an irrational winding number displays, as a function of the control parameter \(\Omega\), some formal analogies to the period-doubling route in its renormalization-group description. In contrast, the numerically found global scaling requires a different renormalization-group approach, and we will only calculate the universal Hausdorff dimension of the Cantor set which is formed by the non-mode-locked \(\Omega\) intervals at \(K = 1\) (see Fig. 106).

Figure 106: Phase diagram of the circle map (schematically). \(K < 1\): Within the Arnold tongues (hatched) the winding number \(w\) is rational, and one has mode locking. \(K = 1\): the Arnold tongues have moved together; the remaining non-mode-locked "holes" form a Cantor set. \(K > 1\): Chaos becomes possible, but coexists with order. The lines correspond to the parameter values for superstable, that is nonchaotic, cycles which are associated with the mode-locking regions.

### Mode Locking and the Farey Tree

In this subsection, we investigate the mode locking which occurs in the iterates of the circle map.
It will be shown that for fixed \(K\) the width of an Arnold tongue decreases if the denominator \(q\) in the corresponding rational winding number \(w = p/q\) increases. The resulting hierarchy of tongues at \(K = 1\) can be conveniently represented by a Farey tree which orders all rationals in [0, 1] according to their increasing denominators (see Hardy and Wright, 1938). For a general mode-locked state with \(w = p/q\), the corresponding \(\Omega\) interval \(\Omega = \Omega(K)\) can be calculated from the condition that a \(q\)-cycle with elements \(\theta_1^* \ldots \theta_q^*\) occurs in the circle map (6.13): \[f_{\Omega,K}^{q}(\theta_i^*) = p + \theta_i^* \tag{6.14}\] which is stable, i.e. \[\mid [f_{\Omega,K}^{q}]^{\prime}(\theta_i^*)\mid \,=\, \mid \prod_{i=1}^{q} f_{\Omega,K}^{\prime}(\theta_i^*)\mid \,=\, \mid \prod_{i=1}^{q}\,[1 - K\cos(2\pi\theta_i^*)]\mid \,<\, 1\,. \tag{6.15}\] (Here the indices \(K\) and \(\Omega\) indicate that the left-hand side in eqns. (6.14, 6.15) is still a function of both variables.) Eqns. (6.13-15) yield, e.g., for \(w = 0\): \[f_{\Omega,K}(\theta_0) = \theta_0 \;\to\; \Omega = \frac{K}{2\pi}\,\sin(2\pi\theta_0) \tag{6.16}\] and \[\mid f_{\Omega,K}^{\prime}(\theta_0)\mid \,=\, \mid 1 - K\cos(2\pi\theta_0)\mid \,<\, 1\,. \tag{6.17}\] For \(\mid K\mid < 1\), the boundaries \(\mid f_{\Omega,K}^{\prime}(\theta_0)\mid = 1\) are reached for \(\theta_0 = \pm 1/4\), which implies, via eq. (6.16), that the first Arnold tongue is a triangle with boundaries \[\Omega = \pm\,\frac{K}{2\pi}\] as shown in Fig. 106. The general eqns. (6.13-15) have been solved numerically by P. Bak and T. Bohr (1984), who found that for \(0 < K < 1\) a whole interval \(\Delta\Omega(p/q, K)\) of \(\Omega\) values is associated with every rational winding number.
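Mode locking is easy to see directly in the iterates of eq. (6.13). The sketch below (illustrative parameter values) computes the winding number (6.10) with the modulo omitted: for \(K = 0\) one finds \(w = \Omega\) exactly, while for \(K = 0.9\) a whole \(\Omega\) interval around 0 is locked to \(w = 0\) (the first Arnold tongue, of half-width \(K/2\pi \approx 0.14\)).

```python
import math

# Winding number (6.10) for the sine circle map (6.13), iterated
# WITHOUT the modulo.  K = 0 gives w = Omega (bare rotation); K = 0.9
# locks every Omega with |Omega| < ~K/(2*pi) onto the w = 0 plateau.
def winding_number(Omega, K, n=20000, theta0=0.1):
    theta = theta0
    for _ in range(n):
        theta = theta + Omega - K/(2*math.pi) * math.sin(2*math.pi*theta)
    return (theta - theta0) / n

print(winding_number(0.05, 0.0))   # ~0.05 (bare rotation)
print(winding_number(0.05, 0.9))   # ~0.0  (mode locked)
print(winding_number(0.10, 0.9))   # ~0.0  (still inside the tongue)
```

Scanning \(\Omega\) at fixed \(K\) with this routine reproduces the plateaus of the devil's staircase discussed below.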
For \(K = 1\), these intervals form a complete self-similar devil's staircase, as shown in Fig. 107. The staircase for \(K = 1\) is termed complete because the sum \(S\) of all \(\Omega\) intervals is equal to 1, i.e. \[S = \sum_{p,q} \Delta\Omega(p/q, 1) = 1\,. \tag{6.18}\] For \(0 < K < 1\), the staircase becomes incomplete, i.e. \(S < 1\). Fig. 107 shows that the widths of the steps become smaller if the denominator in the corresponding winding number increases. Furthermore, if we have two steps with winding numbers \(p/q\) and \(p'/q'\), then the largest step in between has the winding number \((p + p')/(q + q')\). If we list a few examples: \(0/1 < 1/2 < 1/1\); \(1/2 < 2/3 < 1/1\); \(1/2 < 3/5 < 2/3\), etc., we see that \((p + p')/(q + q')\) is the rational number with the smallest denominator which lies between \(p/q\) and \(p'/q'\). Thus the Farey tree, shown in Fig. 108, which orders all rationals \(p/q\) in [0, 1] with increasing denominators \(q\), orders

Fig. 107: The mode-locking structure of the circle map, eq. (6.13), at \(K = 1\). The devil's staircase is complete; the numbers denote the rational winding numbers (after Jensen et al., 1984).

Fig. 108: The Farey tree orders all rationals in [0, 1] with increasing denominators according to the rule that the rational with the smallest denominator between \(p/q\) and \(p'/q'\) is \((p + p')/(q + q')\) (after Cvitanovic and Soderberg, 1985a).

also all mode-locking steps with \(w = p/q\) in the circle map according to their decreasing widths. Up to now, our observations were only based on the numerical evidence shown in Fig. 107; however, there exists also a simple analytical result that establishes the relation between the devil's staircase and the Farey tree.
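The mediant rule generating the Farey tree can be sketched in a few lines (an illustration of the construction, with function names of my own choosing): starting from \(0/1\) and \(1/1\), each level inserts \((p + p')/(q + q')\) between every pair of neighbours.

```python
from fractions import Fraction

# Farey-tree construction by repeated mediant insertion: between
# neighbours p/q and p'/q' the next level inserts (p + p')/(q + q'),
# the rational with the smallest denominator lying between them.
def farey_level(level):
    seq = [Fraction(0, 1), Fraction(1, 1)]
    for _ in range(level):
        nxt = []
        for a, b in zip(seq, seq[1:]):
            med = Fraction(a.numerator + b.numerator,
                           a.denominator + b.denominator)
            nxt += [a, med]
        seq = nxt + [seq[-1]]
    return seq

print([str(f) for f in farey_level(2)])
# ['0', '1/3', '1/2', '2/3', '1']
```

For Farey neighbours the mediant is automatically in lowest terms (since \(|pq' - p'q| = 1\)), so the `Fraction` reduction never changes the denominators; level 3 already contains the example \(1/2 < 3/5 < 2/3\) quoted in the text.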
The monotonicity of the circle map (and of its iterates) in \(\Omega\) implies that to every winding number in the Farey tree there belongs exactly one mode-locking step in the devil's staircase. Suppose one has a superstable \(q\)-cycle \(f^{q}_{\Omega(p,q)}(\theta) = p + \theta\) and a \(q'\)-cycle \(f^{q'}_{\Omega(p',q')}(\theta) = p' + \theta\). If we combine both iterations, we obtain: \[f^{q}_{\Omega(p,q)}\,[f^{q'}_{\Omega(p',q')}(\theta)] = p + p' + \theta \tag{6.19}\] that is, a cycle with the winding number \((p + p')/(q + q')\). Increasing \(\Omega(p, q)\) in \(f^{q}_{\Omega(p,q)}\) overshoots this cycle. This can be compensated by reducing \(\Omega(p', q')\) in \(f^{q'}_{\Omega(p',q')}\). Due to the fact that both iterates are monotonic in \(\Omega\), one can repeat this procedure until both \(\Omega\) values coincide. Hence, the \(\Omega\) interval between the \(p/q\) and \(p'/q'\) cycles always contains an \(\Omega\) value which corresponds to a \((p + p')/(q + q')\) cycle, as claimed above. The Farey-tree construction has a universal importance because it orders the mode-locking regions not only for the circle map but also for real systems such as a driven pendulum, Josephson junctions, and sliding charge-density waves. This of course means that the dynamics of these systems can be reduced to circle maps, as will be shown in Sect. 6.3.

### Local Universality

The transition from quasiperiodicity to chaos is characterized by two types of universality. One is associated with the transition from quasiperiodicity to chaos for a special, that is, local winding number, and it shows close parallels to the period-doubling route. Its experimental verification is difficult because minute changes in winding numbers lead to large changes in scaling behavior.
The second type is called global universality and pertains to a whole range of winding numbers. It describes the scaling behavior of the set of \(\Omega\) values complementary to the Arnold tongues on which the dynamical system is mode locked, and it has been observed experimentally in several systems. We begin with an investigation of the transition from quasiperiodicity to chaos for the golden-mean winding number, because it also forms the basis for the investigation of the global universality of the circle map. In order to observe a transition from quasiperiodicity to chaos in the iterates of (6.13), _two_ parameters have to be adjusted. If we increase, for example, the nonlinearity via \(K\), then \(\Omega\) must always be balanced to keep the winding number \(w\) fixed to a given irrational value (this guarantees quasiperiodicity). But how can this be done for the winding number, which still gives the average shift of \(\theta\) per iteration, but which for general maps has to be defined as the limit (see eq. (6.10)): \[w = \lim_{n\to\infty} \frac{f^n(\theta_0) - \theta_0}{n} \tag{6.20}\] (where the modulo in \(f\) has to be omitted)? We use the following method, which has been suggested by Greene (1979) (in a similar context for Hamiltonian systems). One calculates for fixed \(K\) the value \(\Omega_{p,q}(K)\) which a) belongs to a \(q\)-cycle of the map \(f(\theta)\), b) contains \(\theta = 0\) as an element, and c) provides a shift by \(p\). Thus \(\Omega_{p,q}\), which generates a rational winding number \(w = p/q\), is defined by \[f^{q}_{K,\Omega_{p,q}}(0) = p. \tag{6.21}\] Next, the irrational winding number is approximated by a sequence of truncated continued fractions, i.e. rationals.
If we consider, for example, the winding number \(w^* = (\sqrt{5} - 1)/2\), which has a continued fraction of the simple form \[w^* = \cfrac{1}{1 + \cfrac{1}{1 + \cdots}} \tag{6.22}\] then the so-called _Fibonacci numbers_ \(F_n\), which are defined by \[F_{n+1} = F_n + F_{n-1}\,; \quad F_0 = 0\,, \quad F_1 = 1\,; \quad n = 1, 2, \ldots \tag{6.23}\] yield via \[w_n = \frac{F_n}{F_{n+1}} = \frac{F_n}{F_n + F_{n-1}} \tag{6.24a}\] \[= \frac{1}{1 + \dfrac{F_{n-1}}{F_n}} = \cfrac{1}{1 + \cfrac{1}{1 + \cdots}} \tag{6.24b}\] a sequence of rationals \(w_n\) which converges towards \[w^* = \lim_{n\to\infty} w_n. \tag{6.25}\] For \(n \to \infty\), eqs. (6.24a, b) yield \[w^* = \frac{1}{1 + w^*} \;\to\; w^{*2} + w^* - 1 = 0 \;\to\; w^* = (\sqrt{5} - 1)/2. \tag{6.27}\] This number is the so-called _golden mean_, which is defined in geometry by sectioning a straight line segment in such a way that the ratio of the longer segment \(l\) to the total length \(L\) equals the ratio of the shorter segment to the longer segment, i.e. \(w^* = l/L = (L - l)/l\). In the following, we confine ourselves to this special winding number \(w^* = (\sqrt{5} - 1)/2 = 0.6180339\ldots\), which is the "worst" irrational number in the sense that it is least well approximated by rationals (see eqs. (6.22-24)). Although any given irrational number has a unique representation by continued fractions, the renormalization scheme has, up to now, only been applied to the so-called quadratic irrationals, which are the solutions of a quadratic equation with integer coefficients, and for which the continued-fraction representation is periodic. Using the procedure described above, Shenker (1982) obtained the following _numerical results_ for the circle map (6.13): a) The values \(\Omega_n(K)\) of the parameter \(\Omega\) in (6.13) which via (6.21) generate the winding numbers \(w_n\) in (6.24) tend geometrically to a constant, i.e.
\[\Omega_n(K) = \Omega_\infty(K) - \mathrm{const}\cdot\bar{\delta}^{\,-n} \tag{6.28a}\] where \[\bar{\delta} = \begin{cases} -2.6180339\ldots = -w^{*\,-2} & \mathrm{for}\quad |K| < 1 \\ -2.83362\ldots & \mathrm{for}\quad |K| = 1 \end{cases} \tag{6.28b}\] is a universal constant that, however, depends on \(w^*\). b) The distances \(d_n\) from \(\theta = 0\) to the nearest element of a cycle which belongs to \(w_n\), \[d_n = f_{\Omega_n}^{F_n}(0) - F_{n-1}\,, \tag{6.29a}\] scale like \[\lim_{n\to\infty} \frac{d_n}{d_{n+1}} = \bar{\alpha} \tag{6.29b}\] where \(\bar{\alpha}\) is again a universal constant with values \[\bar{\alpha} = \begin{cases} -1.618\ldots = -w^{*\,-1} & \mathrm{for}\quad |K| < 1 \\ -1.28857\ldots & \mathrm{for}\quad |K| = 1 \end{cases} \tag{6.29c}\] (Note that \(f_{\Omega_n}^{F_{n+1}}(0) - F_n = 0\).) c) Fig. 109 shows the periodic function \[u(t_j) = \theta^n(t_j) - t_j\,; \qquad j = 0, 1, 2, \ldots \tag{6.30}\] that measures the time dependence of the cycle elements \[\theta^n(t_j) \equiv \theta(j\cdot w_n) \equiv f^j(0) \tag{6.31}\] for times \(t_j \equiv j\cdot w_n\) in the limit \(n \to \infty\). (Here, \(f^j(0)\) is taken at \(\Omega_n(K)\), and \(u(t_j)\) is periodic since the property \(f(\theta + 1) = f(\theta) + 1\) leads to \(\theta(t_j + 1) = \theta(t_j) + 1\).) For \(|K| < 1\) and \(\Omega_n \to \Omega_\infty\), the variable \(u(t)\) varies smoothly with \(t\), but its behavior becomes "bumpy" for \(|K| = 1\), which signals the transition from quasiperiodicity to chaos. d) The power spectrum \[A(\omega) = \frac{1}{F_{n+1}} \sum_{j=0}^{F_{n+1}-1} u(t_j)\,\mathrm{e}^{2\pi i\omega t_j} \tag{6.32}\] for \(\omega = 0, \ldots, F_{n+1}\) is shown for \(n \to \infty\) in Fig. 109e.
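The procedure behind result a) can be sketched numerically (an illustrative reimplementation; the value \(K = 0.5\) and the index range are my own choices): for fixed \(K\), the condition (6.21), \(f^{F_{n+1}}(0) = F_n\) with \(f\) iterated without the modulo, is solved for \(\Omega_n\) by bisection, which is legitimate because \(f^q(0)\) grows monotonically in \(\Omega\) for \(|K| < 1\). Ratios of successive differences of the \(\Omega_n\) then estimate \(\bar{\delta}\).

```python
import math

# Shenker-type computation: find Omega_n such that theta = 0 lies on a
# cycle with winding number w_n = F_n / F_(n+1)  (condition (6.21)),
# then estimate delta-bar from (Omega_n - Omega_(n-1))/(Omega_(n+1) - Omega_n).
def iterate(theta, Omega, K, q):
    for _ in range(q):       # circle map (6.13) WITHOUT the modulo
        theta = theta + Omega - K/(2*math.pi)*math.sin(2*math.pi*theta)
    return theta

def omega_n(K, p, q):
    lo, hi = 0.0, 1.0        # f^q(0) is monotonically increasing in Omega
    for _ in range(60):      # bisection down to machine precision
        mid = 0.5*(lo + hi)
        if iterate(0.0, mid, K, q) - p < 0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

F = [0, 1]
while len(F) < 16:
    F.append(F[-1] + F[-2])  # Fibonacci numbers (6.23)

K = 0.5                      # |K| < 1: expect delta-bar = -w*^(-2)
Om = [omega_n(K, F[n], F[n+1]) for n in range(8, 13)]
deltas = [(Om[i+1] - Om[i]) / (Om[i+2] - Om[i+1]) for i in range(3)]
print(deltas)                # each ratio close to -2.618... = -w*^(-2)
```

For \(K = 0\) the routine returns \(\Omega_n = F_n/F_{n+1}\) exactly, and the ratios equal \(-F_{n+2}/F_n \to -w^{*-2}\); the point of Shenker's result is that the same limit survives for all \(|K| < 1\).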
It displays self-similarity (the major structure between any two adjacent peaks is essentially the same), and the main peaks occur at the Fibonacci numbers, reflecting the fact that the motion is almost periodic after \(F_{n}\) iterations.

### 6.2 Universal Properties of the Transition from Quasiperiodicity to Chaos

\begin{table}
\begin{tabular}{l l}
\hline
Period doubling & Quasiperiodicity \\
\hline
Logistic map & Circle map \\
\(x_{n+1}=f_{r}(x_{n})=r\,x_{n}(1-x_{n})\) & \(\theta_{n+1}=f_{K,\Omega}(\theta_{n})=\theta_{n}+\Omega-\frac{K}{2\pi}\sin(2\pi\theta_{n})\bmod 1\) \\
One control parameter \(r\) & Two control parameters \(K\), \(\Omega\) \\
At \(r=R_{n}\): superstable cycle of length \(2^{n}\) & At \(\Omega=\Omega_{n}\): superstable cycle of length \(F_{n+1}\) \\
\(R_{n}\) is calculated from \(f_{R_{n}}^{2^{n}}(0)=0\) (cycle closes) & \(\Omega_{n}\) is calculated from \(f_{K,\Omega_{n}}^{F_{n+1}}(0)-F_{n}=0\) \(\left(\text{ensures }w_{n}=\frac{F_{n}}{F_{n+1}}\right)\) \\
Parameter scaling: \(R_{n+1}-R_{n}\sim\delta^{-n}\) for \(n\gg 1\) & \(\Omega_{n+1}-\Omega_{n}\sim\bar{\delta}^{\,-n}\) for \(n\gg 1\) \\
Scaling of distances between cycle elements: \(d_{n}=f_{R_{n}}^{2^{n-1}}(0)\) (compare to \(f_{R_{n}}^{2^{n}}(0)=0\)) & \(d_{n}=f_{K,\Omega_{n}}^{F_{n}}(0)-F_{n-1}\) (compare to \(f_{K,\Omega_{n}}^{F_{n+1}}(0)-F_{n}=0\)) \\
\(d_{n}/d_{n+1}\to-a\) for \(n\gg 1\) & \(d_{n}/d_{n+1}\to\bar{a}\) for \(n\gg 1\) \\
\hline
\end{tabular}
\end{table} Table 11: Parallels between the transitions to chaos via period doubling and quasiperiodicity.

These results (especially a) and b)) appear very similar to those found for the period-doubling route, and it is therefore natural to attempt a _renormalization-group treatment_ of this transition which establishes its universal features. The formal parallels between the transitions to chaos via period doubling and quasiperiodicity are summarized in Table 11.
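The defining condition \(f_{K,\Omega_{n}}^{F_{n+1}}(0)-F_{n}=0\) of Table 11 can be checked directly on a computer. The following sketch (our own Python code, not Shenker's program; it uses plain bisection, which suffices because \(f_{\Omega}^{F_{n+1}}(0)\) grows monotonically with \(\Omega\)) determines the \(\Omega_{n}\) for the sine circle map and estimates \(\bar{\delta}\) from the ratios of successive differences, cf. eq. (6.28b):

```python
import math

def lift(theta, Omega, K):
    # lift of the sine circle map (6.13): f(theta) = theta + Omega - (K/2pi) sin(2pi theta)
    return theta + Omega - (K / (2.0 * math.pi)) * math.sin(2.0 * math.pi * theta)

def iterate(Omega, K, m):
    # m-fold iterate f^m(0) of the lift
    th = 0.0
    for _ in range(m):
        th = lift(th, Omega, K)
    return th

def omega_n(K, F, n):
    # solve f^{F_{n+1}}(0) - F_n = 0 for Omega by bisection;
    # the left-hand side increases monotonically with Omega
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if iterate(mid, K, F[n + 1]) - F[n] > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Fibonacci numbers, eq. (6.23)
F = [0, 1]
while len(F) < 16:
    F.append(F[-1] + F[-2])

K = 0.5                                   # |K| < 1
Om = [omega_n(K, F, n) for n in range(6, 11)]
ratios = [(Om[i + 1] - Om[i]) / (Om[i + 2] - Om[i + 1]) for i in range(3)]
print(ratios)   # the ratios approach delta-bar = -w*^(-2) ~ -2.618 of eq. (6.28b)
```

The alternating sign of the differences \(\Omega_{n+1}-\Omega_{n}\) reflects the negative value of \(\bar{\delta}\); for \(K=0\) the \(\Omega_{n}\) are exactly the Fibonacci ratios \(w_{n}\).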
(Note that \(\bar{a}\) and \(\bar{\delta}\) in eqs. (6.28–29) are different from the Feigenbaum constants.) To derive the corresponding functional equations, we define (see eq. (6.29b)) the functions \[f_{n}(x)\,\equiv\,\bar{a}^{\,n}f^{\,n}(\bar{a}^{\,-n}x)\qquad\mbox{where} \tag{6.33}\] \[f^{\,n}(x)\,\equiv\,f^{F_{n+1}}(x)\,-\,F_{n} \tag{6.34}\] such that eq. (6.29b) becomes \[\lim_{n\to\infty}\bar{a}^{\,n}d_{n}\,\propto\,\lim_{n\to\infty}\bar{a}^{\,n}f^{\,n}(0)\,=\,\lim_{n\to\infty}f_{n}(0)\,=\,{\rm const}. \tag{6.35}\] As in the case of period doubling (see eq. (3.15)), this relation indicates that the sequence \(\{f_{n}(x)\}\) converges towards a universal function \[\lim_{n\to\infty}f_{n}(x)\,=\,f^{\,*}(x) \tag{6.36}\] where \(f^{\,*}(x)\) is again the solution of a fixed-point equation which we shall now derive. More precisely, we consider \(f_{n}\) at \(\Omega=\Omega_{\infty}\), which corresponds to \(r=r_{\infty}\) in eq. (3.21). The function \(f^{\,n+1}\) can be obtained from \(f^{\,n}\) and \(f^{\,n-1}\) by a rule which is dictated by the recursion (6.23) of the Fibonacci numbers and the property \(f(x+1)=f(x)+1\): \[f^{\,n+1}(x)\,=\,f^{F_{n+2}}(x)\,-\,F_{n+1}\,=\,f^{F_{n+1}}[f^{F_{n}}(x)]\,-\,(F_{n+1}+F_{n})\,=\,f^{\,n}[f^{\,n-1}(x)]\,. \tag{6.37}\] Because the operation of iteration is commutative, we also have \[f^{\,n+1}(x)\,=\,f^{\,n-1}[f^{\,n}(x)]. \tag{6.38}\] According to eqs. (6.37–38), there are now _two_ ways of calculating \(f_{n+1}(x)\): \[f_{n+1}(x)\,=\,\bar{a}f_{n}[\bar{a}f_{n-1}(\bar{a}^{\,-2}x)] \tag{6.39a}\] and \[f_{n+1}(x)\,=\,\bar{a}^{\,2}f_{n-1}[\bar{a}^{\,-1}f_{n}(\bar{a}^{\,-1}x)]. \tag{6.39b}\] Both equations become equivalent for the initial condition \[f_{0}[\bar{a}^{\,-1}f_{1}(\bar{a}x)]\,=\,\bar{a}^{\,-1}f_{1}[\bar{a}f_{0}(x)]\,. \tag{6.40}\] Taking the limit \(n\to\infty\) in (6.39a), we obtain for the fixed-point function \[f^{\,*}(x)\,=\,\bar{a}f^{\,*}[\bar{a}f^{\,*}(\bar{a}^{\,-2}x)]\,. \tag{6.41}\] One can immediately verify that \[f^{\,*}(x)\,=\,-1\,+\,x \tag{6.42}\] is a rigorous solution to this equation. If we substitute (6.42) into (6.41), we obtain \[-1\,+\,x\,=\,-\bar{a}^{\,2}\,-\,\bar{a}\,+\,x\ \rightarrow\ \bar{a}\,=\,-\,w^{*\,-1}\,. \tag{6.43}\] This value for \(\bar{a}\) (which is equal to the second solution of eq. (6.27)) agrees with the numerical result for \(|K|<1\) (see eq. (6.29c)). For \(|K|=1\), we expect that (6.41) has a different solution because the linear term is then absent in our model equation (6.13): \[f(\theta)\,=\,\Omega\,+\,\theta^{3}\cdot{\rm const.}\quad\mbox{for}\quad\theta\to 0\,;\ \ |K|\,=\,1\,. \tag{6.44}\] If for \(|K|=1\) we try the ansatz \[f^{\,*}(x)\,=\,1\,+\,a\,x^{3}\,+\,b\,x^{6}\,+\ldots \tag{6.45}\] a value for \(\bar{a}\) is found which is consistent with eq. (6.29c). This establishes the universality of the \(\bar{a}\)'s for \(|K|\leq 1\). By analogy to period doubling, the \(\bar{\delta}\)'s appear as eigenvalues of the linearized fixed-point equation. These equations are somewhat more complicated than in the Feigenbaum route because the recursion relations are of _second_ order; that is, \(f_{n}\) _and_ \(f_{n-1}\) are required to produce \(f_{n+1}\) (for more details see, e.g., the article of Feigenbaum, Kadanoff and Shenker, 1982).

### Global Universality

Let us now consider the globally universal properties of the set of \(\Omega\) values that is complementary to the Arnold tongues and corresponds to irrational winding numbers. The following numerical results have been obtained by Jensen, P. Bak and T.
Bohr (1984):

* For \(K\to 1\) (from below), the complement \(C\) of the total length of the steps in the (incomplete) devil's staircase, i.e. \(C=1-\sum_{p/q}\Delta\Omega(p/q,K)\), decreases to zero with a power law \[C\propto(1-K)^{\beta}\] where the exponent \(\beta\cong 0.34\) is the same for all \(f(\theta)\) in eq. (6.13) which have a cubic inflection point at \(K=1\).
* At \(K=1\), the \(\Omega\) values belonging to irrational winding numbers form a self-similar thin Cantor set (of zero measure) whose Hausdorff dimension \(D^{*}=0.87\) is again universal.

Whereas there exists up to now no theoretical explanation for the value of \(\beta\), we will follow the work of Cvitanovic et al. (1985b) and calculate \(D^{*}\) by introducing a whole family of universal functions that maintains a dependence on \(\Omega\) (which was lost in the previous R. G. formulation where we put \(\Omega=\Omega_{\infty}\) in eq. (6.36)). For simplicity, we explain this method first for the period-doubling route and then transfer it to the circle map. Fig. 110 shows again the self-similar structure of the bifurcation tree from section 3.1. In order to capture the change in \(x\) _and_ \(r\), we follow the procedure of Cvitanovic (1984) and introduce the modified doubling operator \(\hat{T}\). It denotes the operation of iterating twice, rescaling \(x\) by \(a\), shifting \(r\) to the corresponding values (with the same slope at the cycle points) at the next bifurcation, and rescaling it by \(\delta\): \[\hat{T}f^{(n)}_{R_{n}+p\Delta_{n}}(x)\,\equiv\,-a\,f^{(n+1)}_{R_{n+1}+p\Delta_{n+1}}(-x/a)\,=\,-a\,f^{(n)}_{R_{n+1}+p\Delta_{n+1}}[f^{(n)}_{R_{n+1}+p\Delta_{n+1}}(-x/a)] \tag{6.46}\]

Fig. 110: a) Self-similarity of the bifurcation tree; b) stability intervals of \(2^{n}\), \(2^{n-1}\), and \(2^{n-2}\) cycles of \(g_{p}(x)\) with \(n\) "arbitrarily high". (After Cvitanovic, 1984.)

where \[f^{(n)}(x)\,=\,f^{2^{n}}(x)\,,\quad\Delta_{n}\,=\,R_{n+1}\,-\,R_{n}\,,\quad\delta_{n}\,=\,\Delta_{n}/\Delta_{n+1} \tag{6.47}\] and \(0\leq p\leq 1\) is a parameter which interpolates the \(r\)'s between a \(2^{n}\) and a \(2^{n+1}\) cycle. If we call \(\lim_{n\to\infty}\delta_{n}=\delta\) and \[\lim_{n\to\infty}\hat{T}^{\,n}f_{R_{0}+p\Delta_{0}}(x)\,\equiv\,g_{p}(x)\,, \tag{6.48}\] we obtain from the definitions (6.46–48) an equation for the universal family of functions \(g_{p}(x)\): \[g_{p}(x)\,=\,\hat{T}g_{p}(x)\,=\,-a\,g_{1+p/\delta}\,[g_{1+p/\delta}\,(-x/a)] \tag{6.49}\] with boundary conditions: \[g_{0}(0)\,=\,0\quad\text{and}\quad g_{1}(0)\,=\,1. \tag{6.50}\] The first condition means that the origin of \(p\) corresponds to the superstable fixed point. The second condition sets the scale of \(x\) and \(p\) by the superstable two-cycle (see Fig. 111). Note that our fixed-point equation (3.22) for the usual doubling operator can be obtained from (6.49) by choosing \(p=p^{*}\) such that \(g_{p^{*}}=g_{1+p^{*}/\delta}\), i.e. \(p^{*}=\delta/(\delta-1)\). The family of universal functions \(g_{p}(x)\) in eq. (6.49) is called the unstable manifold (Vul and Khanin, 1982) because operation with \(\hat{T}\) drives \(g_{p}(x)\) away from the fixed-point function \(g_{p^{*}}(x)\). The advantage of eq. (6.49) with respect to the usual fixed-point equation (3.22) is twofold. First, one could obtain _both_ Feigenbaum constants \(a\) and \(\delta\) from eq. (6.49) by expanding \(g_{p}(x)\) into a double power series in \(p\) and \(x\) and comparing the coefficients of equal powers; that is, one needs no linearization around the fixed point.

Figure 111: Boundary conditions for the functions \(g_{0}(x)\) and \(g_{1}(x)\) defined in the text (after Cvitanovic, 1984).
Since we have calculated their values already in section 3.2, we now concentrate on the second aspect of eq. (6.49), namely its stability interpretation. It follows from Fig. 110b and eq. (6.49) that if \(g_{p}(x)\) has a stable \(2^{n}\) cycle (with \(n\) "arbitrarily high") in the interval \(-p_{0}<p<p_{0}\), then \(g_{1+p/\delta}(x)\) has a stable \(2^{n+1}\) cycle in the \(p\)-interval around \(1\) whose width is reduced by a factor of \(1/\delta\). This somewhat trivial-looking statement becomes rather powerful if we translate eqs. (6.46–49) to the circle map, where they will allow us to obtain some insight into the self-similarity of the widths of the mode-locking regions. The doubling operation (eq. (3.22)) translates according to eqs. (6.37–39) into: \[T[f^{\,n-1},f^{\,n-2}](x)\,=\,\bar{a}\,f^{\,n-1}[\bar{a}\,f^{\,n-2}(x/\bar{a}^{\,2})] \tag{6.51}\] where \(f^{\,n}(x)\equiv f^{F_{n+1}}(x)-F_{n}\). Accordingly, the doubling plus \(x\), \(r\) rescaling transformation \(\hat{T}\) from eq. (6.46) changes into: \[\hat{T}\,[f^{\,n-1}_{\Omega_{n-1}+p\Delta_{n-1}}(x),\,f^{\,n-2}_{\Omega_{n-2}+p\Delta_{n-2}}(x)]\,\equiv\,\bar{a}\,f^{\,n-1}_{\Omega_{n-2}+p\Delta_{n-2}}[\bar{a}\,f^{\,n-2}_{\Omega_{n-2}+p\Delta_{n-2}}(x/\bar{a}^{\,2})] \tag{6.52}\] where \[\Delta_{n}\,=\,\Omega_{n-1}\,-\,\Omega_{n}\,;\qquad\bar{\delta}_{n}\,=\,\Delta_{n}/\Delta_{n+1}\] and \(0\leq p\leq 1\) interpolates the \(\Omega\)'s between an \(F_{n}\) and an \(F_{n-1}\) cycle. Calling \[\lim_{n\to\infty}\hat{T}^{\,n}\,[f^{\,n-1}_{\Omega_{0}+p\Delta_{0}},\,f^{\,n-2}_{\Omega_{-1}+p\Delta_{-1}}]\,\equiv\,\bar{g}_{p}(x)\,, \tag{6.53}\] eqs. (6.52–53) yield, in analogy to eq. (6.49): \[\bar{g}_{p}(x)\,=\,\bar{a}\,\bar{g}_{1+p/\bar{\delta}}\,[\bar{a}\,\bar{g}_{1+p/\bar{\delta}}\,(x/\bar{a}^{\,2})] \tag{6.54}\] where the normalization conditions are again \(\bar{g}_{0}(0)=0\), \(\bar{g}_{1}(0)=1\). One could again determine from eq. (6.54) the parameters \(\bar{a}\) and \(\bar{\delta}\) for the route from quasiperiodicity to chaos. But we will use here the universal object \(\bar{g}_{p}(x)\) to investigate the structure of the mode lockings. For \(p=0\), \(\bar{g}_{p}(x)\) has, by construction, a superstable fixed point at \(x=0\). Our arguments are now closely parallel to those which we used to interpret eq. (6.49) in connection with Fig. 110. The range of \(p\) around zero for which \(\bar{g}_{p}(x)\) still has a fixed point is the range of parameters for which the original map is locked into some winding number \(w_{n}\) with "infinitely large" \(n\) (see Fig. 112). However, around \(p=1\), there is another locked state which corresponds to the next locked region in the sequence, and the width of this region is scaled down by a factor of \(1/\bar{\delta}\) compared to the first (note that \(\bar{\delta}<0\) for the transition from quasiperiodicity to chaos). Around \(p=1+1/\bar{\delta}\), there is another mode-locked region scaled down by \(1/\bar{\delta}^{\,2}\) compared to the first, etc. Thus, by studying the stability of the fixed point of \(\bar{g}_{p}(x)\), one can find an infinity of mode-locked states which are universally located. Although these are not all mode-locked regions (see Cvitanovic et al., 1985b), they are sufficient to yield an estimate for the Hausdorff dimension \(D^{*}\) of the "holes" in the devil's staircase at \(K=1\).
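The mode-locking steps whose universal arrangement we have just discussed can also be observed directly by measuring winding numbers of the circle map (6.13). A minimal sketch (our own parameter choices; the quoted boundary of the \(w=0\) tongue follows from solving \(\Omega=(K/2\pi)\sin(2\pi\theta)\) for a fixed point):

```python
import math

def winding_number(Omega, K, n_transient=500, n_iter=2000):
    # w = lim (theta_N - theta_0)/N for the lift of the sine circle map (6.13)
    th = 0.0
    for _ in range(n_transient):
        th = th + Omega - (K / (2.0 * math.pi)) * math.sin(2.0 * math.pi * th)
    th0 = th
    for _ in range(n_iter):
        th = th + Omega - (K / (2.0 * math.pi)) * math.sin(2.0 * math.pi * th)
    return (th - th0) / n_iter

K = 0.9
# inside the w = 0 tongue (|Omega| <= K/2pi ~ 0.143) the lift has a stable fixed point
print(winding_number(0.10, K))   # ~ 0.0
# by symmetry, Omega = 1/2 lies at the center of the w = 1/2 step
print(winding_number(0.50, K))   # ~ 0.5
```

Scanning \(\Omega\) from 0 to 1 at fixed \(K\) and plotting \(w(\Omega)\) produces the devil's staircase of locked plateaus discussed above.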
Generally, the Hausdorff dimension of a self-similar fractal can be computed from the equation (Hentschel and Procaccia, 1983): \[\sum_{i}\left(\frac{s_{i}}{\bar{s}}\right)^{D}\,=\,1 \tag{6.55}\] where \(\bar{s}\) is the length of a box that covers the whole set of points and the \(s_{i}\) are the linear dimensions of smaller boxes that also provide a complete coverage. Eq. (6.55) can be derived by noting that the number \(N\) of points of a fractal, which can be partitioned into boxes of size \(s_{i}\) containing \(N_{i}\) points, can be written as \[N\,=\,\sum_{i}N_{i}. \tag{6.57}\] Dividing this by \(N\) and using \((N_{i}/N)=(s_{i}/\bar{s})^{D}\) (which generalizes the formula that the number of points in an ordinary cube of linear dimension \(l\) grows like \(l^{3}\)), one obtains eq. (6.55). The values \(s_{i}\) and \(\bar{s}\) which are needed to compute \(D^{*}\) can be read off from Fig. 112. The range of parameter values for which \(\bar{g}_{p}(x)\) has a stable fixed point follows from \(\bar{g}_{p^{*}}(x^{*})=x^{*}\) and \(|\bar{g}^{\prime}_{p^{*}}(x^{*})|=1\). Since \(\bar{g}_{p}(x)\) is universal, so is \(p^{*}\) and, therefore, \(D^{*}\), which is computed from the universal quantities \(p^{*}\) and \(\bar{\delta}\). If we estimate \(p^{*}\) crudely by \(p^{*}=1/2\pi\) (which is just the width of the first \(\Omega\) step in the circle map, see eq. (6.17)), we obtain from eq. (6.55) and Fig. 112 the value \(D^{*}\cong 0.92\), which is less than 10% off the numerical result \(D^{*}=0.87\) found by Jensen et al. (1983b). More accurate theoretical values for \(D^{*}\) can be obtained by considering better approximations to \(\bar{g}_{p}(x)\) and \(p^{*}\) (see Cvitanovic et al., 1985b).

Figure 112: Three steps in the universal devil's staircase as generated by the stability intervals of \(\bar{g}_{p}(x)\) and eq. (6.54). The widths of the steps are \(p^{*}\), \(p^{*}/\bar{\delta}\) and \(p^{*}/\bar{\delta}^{\,2}\) for \(F_{n}/F_{n+1}\), \(F_{n+1}/F_{n+2}\), \(F_{n+2}/F_{n+3}\), respectively, where \(n\) is "arbitrarily high". Also indicated are the widths of the "holes" used in eq. (6.55) (after Cvitanovic et al., 1985).

### 6.3 Experiments and Circle Maps

There exists a large variety of real systems (see below) whose dynamical behavior can be modeled by circle maps. Usually, an analysis to detect circle-map behavior proceeds as follows:

* The power spectrum of the (Fourier-transformed) measured signal shows two or three incommensurate frequencies before the onset of broadband noise. This indicates a transition from quasiperiodicity to chaos.
* A reconstruction of the trajectory in phase space, from the measurement of a single variable, shows the destruction of a torus in favour of a strange attractor. By choosing a proper plane in phase space, the torus section appears (before the transition to chaos) as a closed curve which can be parametrized by \(\theta_{n}\). A plot of \(\theta_{n+1}\) versus \(\theta_{n}\) reveals the existence (or nonexistence) of a circle map \(\theta_{n+1}=f(\theta_{n})\) with \(f(\theta+1)=f(\theta)+1\).
* An analysis of the time series of the measured angles \(\theta_{n}\) as a function of the experimental control parameters reveals universal properties near the transition, such as:
* a devil's staircase for the mode-locking intervals which is ordered by the Farey tree;
* the Hausdorff dimension \(D_{0}=0.87\) for the unlocked intervals at the critical line which corresponds to \(K=1\) in the circle map;
* nontrivial scaling (\(\bar{a}=-1.289\)) near the golden-mean winding number.

In the following examples, we will show how this program actually works.
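The covering relation (6.55) behind the Hausdorff-dimension measurements in this program can be implemented in a few lines. The sketch below (our own) solves eq. (6.55) by bisection — the sum decreases monotonically in \(D\) — and is checked on the middle-third Cantor set, where two covering boxes with \(s_{i}/\bar{s}=1/3\) give the known value \(D=\log 2/\log 3\approx 0.6309\); feeding in measured hole widths of a devil's staircase instead yields estimates such as \(D_{0}=0.87\):

```python
import math

def hausdorff_dimension(ratios, tol=1e-12):
    # solve sum_i (s_i / s_bar)^D = 1 (eq. 6.55) for D by bisection;
    # each ratio is < 1, so the left-hand side decreases monotonically in D
    lo, hi = 0.0, 1.0          # a fractal dust on a line has 0 <= D <= 1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sum(r ** mid for r in ratios) > 1.0:
            lo = mid           # sum still too large -> D must be larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

# middle-third Cantor set: two sub-boxes, each one third of the covering box
D = hausdorff_dimension([1.0 / 3.0, 1.0 / 3.0])
print(D, math.log(2.0) / math.log(3.0))   # both ~ 0.6309
```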
### Driven Pendulum

One of the simplest physical systems whose dynamical description has been reduced to a circle map (Jensen et al., 1983a, 1984) is a periodically driven pendulum with an additional constant external torque \(B\), which is described by the differential equation: \[\ddot{\theta}\,+\,\gamma\,\dot{\theta}\,+\,\sin\theta\,=\,A\,\cos(\omega t)\,+\,B\,. \tag{6.58}\] Naive discretization of the time derivatives in eq. (6.58) yields for \(\theta_{n}=\theta(t=(2\pi/\omega)\cdot n)\): \[\theta_{n+1}\,-\,2\,\theta_{n}\,+\,\theta_{n-1}\,+\,(1-b)\,(\theta_{n}-\theta_{n-1})\,+\,K\,\sin\theta_{n}\,=\,(1-b)\,\Omega \tag{6.59}\] \[\mbox{where}\quad(1-b)\,=\,\gamma\,\frac{2\pi}{\omega}\,;\qquad\Omega\,=\,\frac{2\pi}{\omega}\,(A+B)/\gamma\,;\qquad K\,=\,\left(\frac{2\pi}{\omega}\right)^{2}.\] This is, for \(r_{n}=\theta_{n}-\theta_{n-1}-\Omega\), equivalent to the dissipative circle map: \[\theta_{n+1}\,=\,\theta_{n}\,+\,\Omega\,-\,K\,\sin\theta_{n}\,+\,b\,r_{n} \tag{6.60a}\] \[r_{n+1}\,=\,b\,r_{n}\,-\,K\,\sin\theta_{n}\,. \tag{6.60b}\] Eqs. (6.59–60) make it plausible that the pendulum has something to do with the circle map, but they do not, of course, establish a rigorous connection. A numerical proof has been given by Jensen et al. (1983a, 1984), who solved eq. (6.58) on a computer. Fig. 113 shows that subsequent values of the angles \(\theta_{n}=\theta(t=(2\pi/\omega)n)\); \(n=0,1,2,\ldots\) taken at integer multiples of the driving period \(2\pi/\omega\) yield, for special parameter values, a one-dimensional circle map. Let us briefly comment on how mode locking shows up in the solutions of eq. (6.58). Mode locking implies: \[\theta(t_{0}+qT)\,-\,\theta(t_{0})\,=\,2\pi\cdot p \tag{6.61}\] where \(T=2\pi/\omega\) is the driving period.
This yields a mean rotation rate \(\langle\dot{\theta}\rangle\) that is locked to the rational multiple \((p/q)\,\omega\) of the driving frequency. Furthermore, the universal Hausdorff dimension \(D=0.87\) of the Cantor set formed by the spaces between the mode-locked plateaus has also been measured directly in an electronic simulation of a driven pendulum by Yeh et al. (1984). They evaluated the following quantities:

Figure 113: Poincaré maps obtained by numerical integration of the pendulum equation (6.58), \(\theta_{n}=\theta(t=(2\pi/\omega)n)\). 25 000 consecutive points have been plotted; the first 1000 points have been omitted. Parameters: \(A=1.0\), \(\omega=1.76\). a) \(\gamma=1.576\), \(\Omega=1.4\): the function \(f(\theta_{n})\) increases monotonically, and the inset is a magnification emphasizing the one-dimensional character of the map. b) \(\gamma=1.253\), \(\Omega=1.2\): the map develops a cubic inflection point, indicating the transition to chaos. The inset shows an enlargement around the inflection point. c) \(\gamma=1.081\), \(\Omega=1.094\): the map develops a local minimum and wiggles (insets), indicating chaotic behavior. (After Jensen et al., 1984.)

\(S(l)\) = total length of all mode-locking steps larger than \(l\);
\([1-S(l)]/l=N(l)\) = number of intervals of size \(l\) needed to cover the unlocked holes.

From \(N(l)\) they obtained, via \(\lim_{l\to 0}N(l)\propto l^{-D_{0}}\), the Hausdorff dimension \(D_{0}\) shown in Fig. 114. Let us also call attention to the colored plates XVI and XVII at the beginning of this book, which show the parameter dependence of the largest Liapunov exponents of a driven pendulum and the corresponding quantity for the circle map. One sees how, in both cases, the Arnold tongues develop and finally merge together as the nonlinearity parameter is increased. Since eq. (6.58) also describes externally driven Josephson junctions and charge-density waves under the influence of a dc and an ac electric field (Jensen et al., 1984), one expects that the dynamical behavior of these systems can also be modeled by one-dimensional circle maps.
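The reduction (6.59–60) is easy to verify numerically. The following sketch (our own) iterates the two-dimensional map (6.60a, b) and checks that for \(b=0\) the variable \(r_{n}\) decouples, so that (6.60a) collapses to a one-dimensional circle map (here with \(\sin\theta\) rather than the mod-1 convention of (6.13)):

```python
import math

def dissipative_circle_map(theta, r, Omega, K, b):
    # one step of eqs. (6.60a, b)
    theta_next = theta + Omega - K * math.sin(theta) + b * r
    r_next = b * r - K * math.sin(theta)
    return theta_next, r_next

# for b = 0 the r-variable decouples and (6.60a) is a 1D circle map
theta, r = 0.3, 0.0
Omega, K, b = 2.0, 0.8, 0.0
for _ in range(5):
    theta, r = dissipative_circle_map(theta, r, Omega, K, b)

theta1d = 0.3
for _ in range(5):
    theta1d = theta1d + Omega - K * math.sin(theta1d)

print(abs(theta - theta1d))   # 0.0 (identical trajectories)
```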
This has indeed been partly confirmed experimentally (see the references to this section).

### Electrical Conductivity in Barium Sodium Niobate

Another fine example, where the circle map and the devil's staircase (associated with mode locking) have been observed experimentally, is the barium sodium niobate crystal that we already described on page 152 (Martin and Martienssen, 1986). The voltage across the crystal displays, under the influence of a constant dc current, spontaneous oscillations that can be modulated by an additional ac current, as shown in Fig. 115.

Figure 115: a) BSN crystal in humidified oxygen atmosphere at a temperature of 535 °C with an ac current density \(j_{ac}\) superimposed onto a constant dc current density \(j_{dc}\); also indicated are the "domains" shown in plate V at the beginning of this book. b) Poincaré map constructed from the measured voltage signal. c) and d): the circle map (c) constructed from the measured voltage in (a) becomes nonlinear (d) if the dc current density is increased. e) Mode-locked states, measured by varying the driving frequency, display a devil's-staircase behavior near the transition to chaos. (After Martin and Martienssen, 1986.)

### Dynamics of Cardiac Cells

It has been found by M. R. Guevara, L. Glass, and A. Shrier (1981) that circle maps are also relevant for explaining the dynamics of cardiac cells. Fig. 116 shows the temporal behavior of the transmembrane electric potential from an aggregate of embryonic chick heart cells, which beat spontaneously. If the system is _periodically stimulated_ via a current pulse through a microelectrode, the nature of the response depends on the interstimulus interval. The main idea is to _reduce this response to a single stimulus_ by constructing an appropriate circle map.
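This construction can be imitated with a toy response curve: iterating the resulting phase map \(\theta_{i+1}=\theta_{i}+\Omega-g(\theta_{i})\) (eq. (6.65) below) with the hypothetical choice \(g(\theta)=1+0.2\sin(2\pi\theta)\) (our own, not the measured curve of Fig. 118) and \(\Omega=1\), i.e. stimulation at the spontaneous beating period, the phase difference settles to a fixed point — a 1:1 locked pattern:

```python
import math

def next_phase(theta, Omega, g):
    # eq. (6.65): theta_{i+1} = theta_i + Omega - g(theta_i)
    return theta + Omega - g(theta)

# hypothetical response curve; the real g(theta) is the measured curve of Fig. 118
g = lambda th: 1.0 + 0.2 * math.sin(2.0 * math.pi * th)

theta, Omega = 0.3, 1.0
for _ in range(200):
    theta = next_phase(theta, Omega, g)

d = theta % 1.0
print(min(d, 1.0 - d))   # ~ 0: the stimulus phase-locks 1:1 to the beat
```

For this \(g\), the fixed point \(\theta=0\) has slope \(1-0.4\pi\approx-0.26\), so it is stable and the lock is quickly reached; other choices of \(\Omega\) produce the higher-order locked and irregular responses of Fig. 116.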
Figure 116: Influence of periodic stimulation as a function of the interstimulus interval \(t_{s}\): a) stable phase-locked patterns: (i) \(2\!:\!1\), \(t_{s}=210\) msec; (ii) \(1\!:\!1\), \(t_{s}=240\) msec; (iii) \(2\!:\!3\), \(t_{s}=600\) msec. b) Irregular dynamics displaying the Wenckebach phenomenon, \(t_{s}=280\) msec. (After Guevara et al., 1981; copyright 1981 by the AAAS.)

Figure 117: Time course of the transmembrane electrical potential from an aggregate of embryonic heart cells. Left: spontaneous pulses. Right: after administration of a brief depolarizing stimulus (off-scale response) which occurs \(\delta\) msec after the action-potential upstroke, the spontaneous period \(\tau\) is shifted to a new value \(T\). (From Guevara et al., 1981; copyright 1981 by the AAAS.)

Fig. 117 shows that the influence of a single pulse changes the period of the spontaneous beats from \(\tau\) to \(T\). The assumption is now that their ratio \(T/\tau\) depends only on the phase shift \(\theta=\delta/\tau\) of the stimulus with respect to the natural signal, that is, \[T/\tau\,=\,g(\theta)\,. \tag{6.63}\] This assumption is supported by the experimentally determined function \(g(\theta)\) displayed in Fig. 118. Next we consider a train of stimuli separated by a uniform time interval \(t_{s}\). Consultation of Fig. 119 leads to the relation \[\delta_{i+1}\,+\,T_{i}\,=\,\delta_{i}\,+\,t_{s}\,. \tag{6.64}\] Division by \(\tau\), and assuming that the influence of a single stimulus decays sufficiently fast such that eq. (6.63) holds for every \(i\), yields the phase relationship: \[\theta_{i+1}\,=\,\theta_{i}\,+\,\Omega\,-\,g(\theta_{i})\,;\qquad\Omega\,\equiv\,t_{s}/\tau \tag{6.65}\] which has the form of a circle map (see Fig. 120) where the rate of rotation \(\Omega=t_{s}/\tau\) is set by the interstimulus distance \(t_{s}\). Figure 118: The function \(g(\theta)\) defined in eq.
(6.63), as experimentally determined for embryonic chick heart-cell aggregates (from Guevara et al., 1981; copyright 1981 by the AAAS). Using \(g(\theta)\) from Fig. 118, eq. (6.65) has been used to successfully predict the response to a train of stimuli as a function of \(t_{s}\) (see Fig. 121). The so-called Wenckebach phenomenon in Fig. 116b (i.e., the gradual prolongation of the time between a stimulus and the subsequent action potential until an action potential is skipped, either irregularly or in a phase-locked pattern) occurs also in human electrocardiograms (Fig. 122). There the external stimulus is replaced by the stimulus provided by the sinoatrial node. It appears, therefore, from the results in Fig. 121, that circle maps provide a promising tool for the investigation of human cardiac dysrhythmias.

Figure 120: Experimentally determined circle map that describes the dynamics of beating chicken heart-cell aggregates. This graph is obtained by using \(g(\theta)\) from Fig. 118 in eq. (6.65). (From Guevara et al., 1981; copyright 1981 by the AAAS.)

Figure 121: Experimentally determined and theoretically computed responses to periodic stimulation of period \(t_{s}\) with the same pulse durations and amplitudes as in Fig. 116a. a) Experimentally determined dynamics: 2:1, 1:1, 2:3 mode-locking regions and three zones \(\alpha\), \(\beta\), \(\gamma\) of complicated dynamics. b) Theoretically predicted dynamics obtained via eq. (6.65). (After Guevara et al., 1981; copyright 1981 by the AAAS.)

### Forced Rayleigh-Bénard Experiment

Another example, where the global metric properties of the attractor occurring at the transition from quasiperiodicity to chaos at the golden-mean winding number have been measured in some detail, is a forced Rayleigh-Bénard experiment by Jensen et al. (1985). One uses mercury as the fluid in a small Rayleigh-Bénard cell that supports two convection rolls.
The Rayleigh number is chosen in a range where the convection is oscillatory in time. A second frequency is introduced by passing an ac current through the fluid, whose amplitude and frequency serve as control parameters. Fig. 123a shows the reconstructed experimental orbit obtained at the point of breakdown of the torus with golden-mean winding number. The dots in Fig. 123b are the experimental points derived from the data shown in Fig. 123a, and the full line is the \(f(\alpha)\) curve obtained from the time series of the circle map at \(K=1\), \(w^{*}=(\sqrt{5}-1)/2\). The agreement between both sets of data is rather obvious and leads to the conclusion that the experimental data in Fig. 123a, which do not look at all like a smooth circle, and the iterates of the circle map (6.13) belong, from the metric point of view, to the same universality class.

Figure 123: a) The experimental attractor of a forced Rayleigh-Bénard system in two dimensions. 2500 points are plotted. Note the variation in the density of points on the attractor. Part of this variation is, however, due to the projection of the attractor onto the plane. The attractor is nonintersecting in three dimensions, in which it was embedded for the numerical analysis. In the absence of experimental noise, the points should fall on a single curve. The smearing of the observed data set is mostly due to the slow drift in the experimental system during the run over about 2 hours. b) Full line: \(f(\alpha)\) curve obtained from the iterates of the circle map eq. (6.13) at \(K=1\) and the golden-mean winding number; dots: \(f(\alpha)\) values obtained from the experimental data in a). (After Jensen et al., 1985.)

We note that this experiment also yields, via \(D_{-\infty}\), the first measurement of the nontrivial scaling parameter \(\bar{a}\) of the circle map.
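The number quoted in eq. (6.67) below follows from \(w^{*}\) and the critical scaling \(|\bar{a}|=1.28857\) of eq. (6.29c) by a one-line computation:

```python
import math

w_star = (math.sqrt(5.0) - 1.0) / 2.0    # golden mean
a_bar = 1.28857                          # |a-bar| at |K| = 1, eq. (6.29c)
D_minus_infinity = -math.log(w_star) / math.log(a_bar)
print(D_minus_infinity)                  # ~ 1.898, cf. eq. (6.67)
```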
\(D_{-\infty}\) has, at the transition from quasiperiodicity to chaos, the value \[D_{-\infty}\,=\,-\frac{\log w^{*}}{\log|\bar{a}|}\,=\,1.8980\ldots \tag{6.67}\] which is obtained for circle maps, in analogy to eq. (5.81), by replacing the ratio of the numbers of elements in subsequent cycles (which is 2 for period doubling) by \(F_{n+1}/F_{n}=1/w^{*}\) and using \(\bar{a}\) instead of \(a\).

### 6.4 Routes to Chaos

Table 12 summarizes the three different routes to chaos which we have discussed up to now.

\begin{table}
\begin{tabular}{l c c}
\hline
Feigenbaum & Manneville-Pomeau & Ruelle-Takens-Newhouse \\
Pitchfork bifurcation & Tangent bifurcation & Hopf bifurcation \\
Bifurcation diagrams & & \\
\hline
\end{tabular}
\end{table} Table 12: Summary of the three main routes to chaos.

But this table should only be considered as a first approximation to the true variety of transition scenarios. (Let us only recall that we have already discussed three types of intermittency.) While it is natural to focus on common features, it would be premature to make sweeping generalizations about routes to chaos, and it should be emphasized that the range of dynamical behavior observed is quite large. This situation arises, on the one hand, because experiments on hydrodynamic systems (Bénard and Taylor instabilities) depend sensitively on the _aspect ratios_ (i.e. the ratio of the cell dimensions in the Bénard experiment, and the ratio of the gap width between the inner and outer cylinders to the height of the cylinder in the Taylor experiment) such that, for a given set of control parameters, _one can have more than one stable state_. On the other hand, new types of transitions are possible when one has _more than one control parameter_ (Swinney, 1983). Let us finally present a transition to chaos not mentioned above.
### Crises

Crises are collisions between a chaotic attractor and a coexisting unstable fixed point or periodic orbit. Grebogi, Ott and Yorke (1983b) were the first to observe that such collisions lead to _sudden_ changes in the chaotic attractor. A simple example occurs in the period-three window of the logistic map in Fig. 51, where three stable and three unstable fixed points are generated by tangent bifurcations. Fig. 124 shows that the unstable fixed points, having entered the chaotic regions, immediately repel the trajectory out of the sub-bands in such a way that the regions between the bands are also filled chaotically. Similar crises also occur in two- and three-dimensional maps and in three-dimensional flows.

Figure 124: Detail of the bifurcation diagram in the region of the period-three tangent bifurcation. The dashed curves denote the unstable period-three orbit created at the tangent bifurcation; the crisis occurs at \(r^{*}\) (schematic, after Grebogi et al., 1983b).

As the discontinuity is approached, one often finds transient chaos, i.e. seemingly chaotic orbits which decay exponentially towards periodic orbits, with a decay rate that follows a power law in the distance (in parameter space) from the discontinuity. It has been conjectured by Grebogi et al. (1983b) that "almost all" sudden changes in chaotic attractors are due to crises.

## Chapter 7 Regular and Irregular Motion in Conservative Systems

Up to now we have exclusively studied dissipative systems, for which volume elements in phase space shrink with increasing time.
Although there are many physical realizations of dissipative systems, which range from the onset of turbulence in fluids to electronic circuits, there exists another large class of physical systems for which chaotic motion had been found (by Poincaré, 1892) long before the discovery of the strange attractor for dissipative systems (Lorenz, 1963): these are the conservative systems, which encompass all dynamical systems of classical mechanics. Because there already exist excellent review articles by Berry (1978) and Helleman (1980) and a recent book by Lichtenberg and Lieberman (1982) on this subject, our presentation will be rather brief (as compared to the six chapters on dissipative systems). In the following, conservative systems are considered to be either systems that follow Hamilton's equations of motion, \[\dot{\vec{q}}\,=\,\frac{\partial H}{\partial\vec{p}}\,,\qquad\dot{\vec{p}}\,=\,-\frac{\partial H}{\partial\vec{q}} \tag{7.1}\] and for which volume elements in phase space are conserved because of Liouville's theorem, \[\mbox{div}\,\vec{j}\,=\,\mbox{div}\,(\dot{\vec{q}},\,\dot{\vec{p}})\,=\,\sum_{i}\left(\frac{\partial^{2}H}{\partial q_{i}\,\partial p_{i}}\,-\,\frac{\partial^{2}H}{\partial p_{i}\,\partial q_{i}}\right)\,=\,0 \tag{7.2}\] or, in a more general sense, volume-preserving discrete maps. The fact that volumes do not change in conservative systems implies immediately that they display (in contrast to dissipative systems) no attracting regions in phase space, i.e. no attracting fixed points, no attracting limit cycles, and no strange attractors (see Fig. 125 and Appendix G). Nevertheless, in conservative systems one also finds chaos with a positive \(K\)-entropy; i.e. there are "strange" or "chaotic" regions in phase space, but they are not attractive and can be densely interwoven with regular regions. We now present some motivation for the study of conservative systems and then give an overview of the rest of this chapter.
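The discrete-map analogue of Liouville's theorem (7.2) is a Jacobian determinant equal to one. As an illustration (our own check), the area-preserving standard map \(p'=p+K\sin\theta\), \(\theta'=\theta+p'\) — a much-studied model in this field — has \(\det J=(1+K\cos\theta)\cdot 1-1\cdot K\cos\theta=1\) identically, which a finite-difference computation confirms:

```python
import math

def standard_map(theta, p, K):
    # area-preserving standard map: p' = p + K sin(theta), theta' = theta + p'
    p_new = p + K * math.sin(theta)
    return theta + p_new, p_new

def jacobian_det(theta, p, K, h=1e-6):
    # numerical Jacobian determinant of the map at (theta, p), via central differences
    tp, pp = standard_map(theta + h, p, K)
    tm, pm = standard_map(theta - h, p, K)
    tq, pq = standard_map(theta, p + h, K)
    tr, pr = standard_map(theta, p - h, K)
    dt_dth, dp_dth = (tp - tm) / (2 * h), (pp - pm) / (2 * h)
    dt_dp, dp_dp = (tq - tr) / (2 * h), (pq - pr) / (2 * h)
    return dt_dth * dp_dp - dt_dp * dp_dth

# phase-space volume (here: area) is conserved, as demanded by eq. (7.2)
print(jacobian_det(0.7, 0.2, K=1.2))   # ~ 1.0
```

No attractors can exist for such a map: any region of initial conditions keeps its area forever, however chaotically its shape is deformed.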
For some time, attention has shifted from the calculation of individual orbits to consideration of the qualitative properties of families of orbits, as shown in Fig. 126. Today, we are mainly interested in the long-time behavior of conservative systems. There are several reasons for this:

1. We should, for example, be able to answer the question whether the solar system and the galaxy are stable under mutual perturbations of their constituents, or whether they will eventually collapse or disperse to infinity. The long-time limit involved here is of the order of the age of the universe. But "long" times are much shorter in the storage rings used for high-energy physics or in fusion experiments, where the particles make many revolutions in fractions of a second. In such systems irregular or chaotic motion is to be avoided at all costs, and this is only possible if the long-time behavior of these (conservative) systems is known.

Figure 126: Problems of increasing globality in classical mechanics. I. Step-by-step integration of the equations of motion. II. a) Local stability; b) local instability. III. Topological nature of complete trajectories: a) periodic motion on a torus; b) motion on a torus with irrational frequency ratios. IV. Types of flow in phase space: a) nonmixing; b) mixing. (After Balescu, 1975.)

Figure 125: a) In dissipative systems trajectories are attracted to a fixed point, and volume shrinks. b) In conservative systems the points rotate around an elliptic fixed point, and volume is conserved.

2. Another point concerns the foundations of statistical mechanics, where no attempt is made to follow the detailed motion of all constituents of a complicated many-body problem. Instead, the ergodic hypothesis is made, i.e. one assumes that in the course of time the system explores the entire allowed region of phase space (the energy surface) and eventually covers this region uniformly. Time averages can then be replaced by simpler phase-space averages.
But is the ergodic hypothesis correct? To answer this question, the long-time behavior of Hamiltonian systems with \(N\) degrees of freedom in the limit \(N\to\infty\) (and \(N\)/volume = constant) must be known. In the first part of this section, we consider the classical mechanics of simple Hamiltonian systems with a few degrees of freedom and show that in most cases their motion in phase space is extremely complicated and neither regular nor simply ergodic. In other words, it will be shown that the regular motion treated in most textbooks on classical mechanics is an exception and rather uncommon. In the second part, we discuss some simple model systems which behave ergodically although they have only a few degrees of freedom. Finally, a classification scheme for chaotic behavior in conservative systems is described.

### 7.1 Coexistence of Regular and Irregular Motion

In the following, we investigate the stability of the trajectories of a nonintegrable Hamiltonian system in the long-time limit. For this purpose, we start from an integrable Hamiltonian and consider the effect of a small nonintegrable perturbation.

#### Integrable Systems

A Hamiltonian \(H_{0}^{\prime}(\vec{p},\vec{q})\) is called integrable if one can find a canonical transformation, generated by \(S(\vec{q},\vec{J})\), to new action-angle variables \(\vec{\theta}\), \(\vec{J}\):

\[\vec{p}\,=\,\frac{\partial S(\vec{q},\vec{J})}{\partial\vec{q}}\,,\qquad\vec{\theta}\,=\,\frac{\partial S(\vec{q},\vec{J})}{\partial\vec{J}} \tag{7.3}\]

such that in the new coordinates the Hamiltonian depends only on the new momenta \(\vec{J}\), i.e., \(S(\vec{q},\vec{J})\) is a solution of the _Hamilton-Jacobi equation_ (see, e.
g., Arnold, 1978):

\[H_{0}^{\prime}\left[\vec{q}\,,\,\frac{\partial S(\vec{q},\vec{J})}{\partial\vec{q}}\right]\,=\,H_{0}(\vec{J}) \tag{7.4}\]

and the equations of motion in the action-angle variables \(\vec{J}\) and \(\vec{\theta}\),

\[\dot{\vec{J}}\,=\,-\frac{\partial H_{0}}{\partial\vec{\theta}}\,=\,0 \tag{7.5}\]

\[\dot{\vec{\theta}}\,=\,\frac{\partial H_{0}}{\partial\vec{J}}\,=\,\vec{\omega}(\vec{J}) \tag{7.6}\]

can easily be integrated to

\[\vec{J}\,=\,\text{const.}\,,\qquad\vec{\theta}\,=\,\vec{\omega}\,t\,+\,\vec{\delta}\,. \tag{7.7}\]

One of the simplest examples of an integrable system is the harmonic oscillator, with Hamiltonian

\[H_{0}^{\prime}\,=\,\frac{1}{2}\,(p^{2}\,+\,\omega^{2}q^{2})\,. \tag{7.8}\]

The Hamilton-Jacobi equation (7.4) then becomes

\[\frac{1}{2}\left[\left(\frac{\partial S}{\partial q}\right)^{2}\,+\,\omega^{2}q^{2}\right]\,=\,H_{0}(J) \tag{7.9}\]

\[\rightarrow\,\frac{\partial S}{\partial q}\,=\,\sqrt{2H_{0}\,-\,\omega^{2}q^{2}} \tag{7.10}\]

and \(J\) is determined by

\[J\,=\,\frac{1}{2\pi}\oint\frac{\partial S}{\partial q}\,\mathrm{d}q\,=\,\frac{H_{0}(J)}{\omega} \tag{7.11}\]

\[\rightarrow\,H_{0}(J)\,=\,J\omega \tag{7.12}\]

where the integral has been taken over one cycle of \(q\). The equations of motion in the action-angle variables are

\[\dot{J}\,=\,-\frac{\partial H_{0}}{\partial\theta}\,=\,0\;\rightarrow\;J\,=\,\text{const.} \tag{7.13a}\]

\[\dot{\theta}\,=\,\frac{\partial H_{0}}{\partial J}\,=\,\omega\;\rightarrow\;\theta\,=\,\omega\,t\,+\,\delta\,. \tag{7.13b}\]

The motion in the variables \(p\) and \(q\) is obtained from

\[\theta\,=\,\frac{\partial S}{\partial J}\,=\,\frac{\partial}{\partial J}\int\mathrm{d}q\,\sqrt{2H_{0}\,-\,\omega^{2}q^{2}}\,=\,\arccos\left(q\,\sqrt{\frac{\omega}{2J}}\right) \tag{7.14}\]

\[\rightarrow\,q\,=\,\sqrt{\frac{2J}{\omega}}\,\cos\theta \tag{7.15}\]

and

\[p\,=\,\frac{\partial S}{\partial q}\,=\,-\sqrt{2J\omega}\,\sin\theta\,. \tag{7.16}\]

The corresponding trajectory in phase space is an ellipse that becomes a circle with polar coordinates \(\sqrt{2J}\) and \(\theta\) after proper rescaling. Comparing eqns. (7.7) and (7.13), one sees that the equations of motion (in action-angle variables) of any integrable system with \(n\) degrees of freedom are practically the same as those of a set of \(n\) uncoupled harmonic oscillators. The only difference is that in a general integrable system the frequencies \(\omega_{i}\) are still functions of the actions \(J_{i}\), whereas they are independent of \(J_{i}\) for harmonic oscillators. The existence of \(n\) integrals of the motion (\(J_{1}\ldots J_{n}\)) confines the trajectory in the \(2n\)-dimensional phase space (\(q_{1}\ldots q_{n}\), \(p_{1}\ldots p_{n}\)) of an integrable system to an \(n\)-dimensional manifold which has the topology of an \(n\)-torus (in analogy to a circle for a harmonic oscillator with \(n=1\) and a torus for two harmonic oscillators with \(n=2\)). In the following, we will confine ourselves to \(n=2\), but most results can be extended to more degrees of freedom. Fig. 127 shows the motion of an integrable system with two degrees of freedom (i.e. with a 4-dimensional phase space) on a torus. Closed orbits occur only if

\[n\,\Delta\theta_{2}\,=\,2\pi m\,,\;\text{i.e.}\;\frac{\omega_{2}}{\omega_{1}}\,=\,\frac{m}{n}\,=\,\text{rational}\,;\quad m,\,n\,=\,1,\,2,\,3,\ldots\,. \tag{7.17}\]

Fig. 127: Torus in phase space.

For irrational frequency ratios, the orbit never repeats itself but approaches every point on the two-dimensional manifold infinitesimally closely in the course of time. In other words, the motion is ergodic on the torus. (Note that the dimension 2 of the torus is different from the dimension 3 of the manifold defined by \(H(\vec{p},\vec{q})=E=\) const.)
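The distinction in (7.17) between closed and ergodic orbits on the torus can be seen numerically: recording the angle \(\theta_{2}\) each time \(\theta_{1}\) completes a revolution gives the circle map \(\theta\to\theta+2\pi\,\omega_{2}/\omega_{1}\). A minimal sketch; the particular frequency ratios are arbitrary choices:

```python
import math

def section_angles(ratio, n_steps):
    """theta_2 recorded at successive returns of theta_1 to zero:
    the circle map theta -> theta + 2*pi*ratio (mod 2*pi), cf. (7.17)."""
    theta, out = 0.0, []
    for _ in range(n_steps):
        out.append(theta)
        theta = (theta + 2 * math.pi * ratio) % (2 * math.pi)
    return out

# rational ratio omega_2/omega_1 = 2/5: the orbit closes after n = 5 returns
closed = section_angles(2 / 5, 6)
print(abs(closed[5] - closed[0]))      # ~ 0: a closed orbit

# irrational (golden-mean) ratio: the returns fill the circle densely
golden = (math.sqrt(5) - 1) / 2
pts = sorted(section_angles(golden, 500))
gaps = [b - a for a, b in zip(pts, pts[1:])]
print(max(gaps))                       # the largest empty arc keeps shrinking
```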
### Perturbation Theory and Vanishing Denominators

Let us now add to \(H_{0}\) a perturbation \(\varepsilon H_{1}\) and see how it affects the previously regular motion; that is, we consider the Hamiltonian

\[H(\vec{J},\vec{\theta})\,=\,H_{0}(\vec{J})\,+\,\varepsilon\,H_{1}(\vec{J},\vec{\theta}) \tag{7.18}\]

(where we have expressed \(H_{1}\) in the action-angle variables \(\vec{J}=(J_{1},J_{2})\), \(\vec{\theta}=(\theta_{1},\theta_{2})\) of the unperturbed system), and we try to solve the Hamilton-Jacobi equation

\[H\left[\frac{\partial S}{\partial\vec{\theta}}\,,\,\vec{\theta}\right]\,=\,H_{00}(\vec{J}^{\prime})\,. \tag{7.19}\]

Writing the generating function \(S\) as

\[S(\vec{J}^{\prime},\vec{\theta})\,=\,\vec{\theta}\cdot\vec{J}^{\prime}\,+\,\varepsilon\,S_{1}(\vec{J}^{\prime},\vec{\theta}) \tag{7.20}\]

and expanding \(H\) to order \(\varepsilon\), we obtain

\[H_{0}(\vec{J}^{\prime})\,+\,\varepsilon\,\frac{\partial H_{0}}{\partial\vec{J}^{\prime}}\cdot\frac{\partial S_{1}(\vec{J}^{\prime},\vec{\theta})}{\partial\vec{\theta}}\,+\,\varepsilon\,H_{1}(\vec{J}^{\prime},\vec{\theta})\,+\,\mathrm{O}(\varepsilon^{2})\,=\,H_{00}(\vec{J}^{\prime})\,. \tag{7.21}\]

\(S_{1}\) is determined by requiring that the left-hand side of (7.21) be independent of \(\vec{\theta}\), i.e.

\[\vec{\omega}\cdot\frac{\partial S_{1}(\vec{J}^{\prime},\vec{\theta})}{\partial\vec{\theta}}\,=\,-H_{1}(\vec{J}^{\prime},\vec{\theta}) \tag{7.22}\]

where \(\vec{\omega}=\partial H_{0}/\partial\vec{J}^{\prime}\) are the frequencies of the unperturbed system. Eq.
(7.22) can be solved by expanding \(S_{1}\) and \(H_{1}\) (both being periodic in the components of \(\vec{\theta}\)) into Fourier series:

\[S_{1}(\vec{J}^{\prime},\vec{\theta})\,=\,\sum_{\vec{K}\neq 0}S_{1,\vec{K}}(\vec{J}^{\prime})\,\mathrm{e}^{i\vec{K}\cdot\vec{\theta}} \tag{7.23a}\]

\[H_{1}(\vec{J}^{\prime},\vec{\theta})\,=\,\sum_{\vec{K}\neq 0}H_{1,\vec{K}}(\vec{J}^{\prime})\,\mathrm{e}^{i\vec{K}\cdot\vec{\theta}} \tag{7.23b}\]

with \(\vec{K}=(n_{1},n_{2})\); \(n_{1}\), \(n_{2}\) integers. Using both expressions in (7.22) and comparing equal Fourier components finally yields

\[S(\vec{J}^{\prime},\vec{\theta})\,=\,\vec{\theta}\cdot\vec{J}^{\prime}\,+\,i\,\varepsilon\sum_{\vec{K}\neq 0}\frac{H_{1,\vec{K}}(\vec{J}^{\prime})}{\vec{K}\cdot\vec{\omega}(\vec{J}^{\prime})}\,\mathrm{e}^{i\vec{K}\cdot\vec{\theta}}\,. \tag{7.24}\]

Equation (7.24) shows that \(S\) diverges for

\[\omega_{1}\,n_{1}\,+\,\omega_{2}\,n_{2}\,=\,0\,,\;\text{i.e.}\;\frac{\omega_{1}}{\omega_{2}}\,=\,-\frac{n_{2}}{n_{1}}\,=\,\text{rational}\,. \tag{7.25}\]

This is the famous problem of vanishing denominators. It shows that the system cannot be integrated by perturbation theory for rational frequency ratios, because of strong resonances, and that it can at most be integrated for irrational values of \(\omega_{1}/\omega_{2}\), provided the perturbation series in \(\varepsilon\) converges. In the following we consider two problems:

* What happens if an integrable system with \(\omega_{1}/\omega_{2}\) close to an _irrational_ value is perturbed by \(\varepsilon H_{1}\)?
* What happens under a perturbation \(\varepsilon H_{1}\) to the tori of a system for which \(\omega_{1}/\omega_{2}\) has a _rational_ value?

### Stable Tori and KAM Theorem

The first question is answered by a celebrated theorem of Kolmogorov (1954), Arnold (1963), and Moser (1967), the so-called KAM theorem, which we quote here for \(n=2\), without proof.
(The theorem holds for an arbitrary number \(n\) of degrees of freedom, and proofs can be found in the quoted references.) The theorem states that if, among other technical conditions, the Jacobian of the frequencies is nonzero, i.e.

\[\left|\frac{\partial\omega_{i}}{\partial J_{j}}\right|\,\neq\,0 \tag{7.26}\]

then those tori whose frequency ratio \(\omega_{1}/\omega_{2}\) is sufficiently irrational that

\[\left|\frac{\omega_{1}}{\omega_{2}}\,-\,\frac{m}{s}\right|\,>\,\frac{k(\varepsilon)}{s^{2.5}}\qquad(k(\varepsilon\to 0)\,\to\,0) \tag{7.27}\]

holds (\(m\) and \(s\) are mutually prime integers) are stable under the perturbation \(\varepsilon H_{1}\) in the limit \(\varepsilon\ll 1\). It is important to note that the set of frequency ratios for which (7.27) holds, and for which the motion therefore remains regular even after the perturbation, has nonzero measure. This follows because the total length \(L\) of all intervals in \(0\leq\omega_{1}/\omega_{2}\leq 1\), say, for which (7.27) _does not hold_ can be estimated as

\[L\,<\,\sum_{s=1}^{\infty}\frac{k(\varepsilon)}{s^{2.5}}\cdot s\,=\,k(\varepsilon)\sum_{s=1}^{\infty}s^{-1.5}\,=\,\text{const.}\cdot k(\varepsilon)\,\to\,0\;\;\text{for}\;\varepsilon\to 0\,. \tag{7.28}\]

Here \(k(\varepsilon)/s^{2.5}\) is the length of an interval around the rational \(m/s\) where (7.27) does not apply, and \(s\) is the number of \(m\) values with \(m/s\leq 1\) (see Fig. 128). Eq. (7.28) means that the set of frequency ratios for which (under a perturbation by \(\varepsilon H_{1}\)) the original motion on the torus is only slightly disturbed, into motion on a deformed torus, has the finite measure \(1-\text{const.}\cdot k(\varepsilon)\). But, on the \(\omega_{1}/\omega_{2}\) axis, this set has holes around every rational \(\omega_{1}/\omega_{2}\).
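The estimate (7.28) is easy to verify numerically: summing the interval lengths \(k(\varepsilon)/s^{2.5}\) over the \(s\) admissible values of \(m\) for each denominator gives a total bounded by \(k\sum_{s}s^{-1.5}=k\,\zeta(1.5)\approx 2.61\,k\). A minimal sketch; the value \(k=0.01\) is an arbitrary illustrative choice:

```python
k = 0.01          # stands for k(eps); it shrinks to zero with the perturbation
excluded = 0.0
for s in range(1, 2000):
    # s values of m with m/s <= 1, each excluding an interval of length k/s**2.5
    excluded += s * (k / s ** 2.5)
print(excluded)   # well below 1: the surviving KAM tori have large measure
```

Since overlaps between the excluded intervals are ignored, this sum is an upper bound on their total length, which is exactly the point of (7.28).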
For large enough \(\varepsilon\) the perturbation \(\varepsilon H_{1}\) destroys _all_ tori. The last KAM torus to be destroyed is the one whose frequency ratio is the "most irrational" number, the golden mean \(\omega_{1}/\omega_{2}=(\sqrt{5}-1)/2\) (see Sect. 6.2). The destruction of this KAM torus shows some similarity to the Ruelle-Takens route to chaos in dissipative systems. It has indeed been found by Shenker and Kadanoff (1982) and MacKay (1983), who studied the conservative version (\(b=1\)) of the map (6.12) of the annulus onto itself, that the decay of the last KAM trajectory shows scaling behavior and universal features.

### Unstable Tori and Poincaré-Birkhoff Theorem

Let us now discuss the situation when \(\omega_{1}/\omega_{2}\) is rational. We will show that in this case the original torus decomposes into smaller and smaller tori. Some of these newly created tori are again stable according to the KAM theorem, but between the stable tori the motion is completely irregular. It is convenient to visualize what happens (to \(H_{0}\) under a perturbation \(\varepsilon H_{1}\)) in a Poincaré map, which is, in general, defined by the intersection points of the orbit with a hyperplane in phase space. For the case in hand, we consider the intersections with the \(q_{1}\), \(p_{1}\) plane \(S\) shown in Fig. 129, which define an area-preserving two-dimensional map

\[r_{i+1}\,=\,r_{i}\,;\qquad r_{i}\,=\,r\left(t\,=\,i\cdot\frac{2\pi}{\omega_{2}}\right) \tag{7.29}\]
\[\theta_{i+1}\,=\,\theta_{i}\,+\,2\pi\,\frac{\omega_{1}}{\omega_{2}}\]

since the point in phase space hits \(S\) after a period \(2\pi/\omega_{2}\), during which \(\theta\) changes by \(2\pi\,\omega_{1}/\omega_{2}\).

Fig. 128: Intervals of lengths \(k(\varepsilon)/s^{2.5}\) contributing to \(L\).
The frequency ratio \(\omega_{1}/\omega_{2}\) depends only on the radius \(r\) because

\[\left.\begin{array}{l}\displaystyle\frac{\omega_{1}}{\omega_{2}}\,=\,\frac{\partial H_{0}(J_{1},J_{2})/\partial J_{1}}{\partial H_{0}(J_{1},J_{2})/\partial J_{2}}\,=\,f(J_{1},J_{2})\\[2mm] H_{0}(J_{1},J_{2})\,=\,E\,\rightarrow\,J_{2}\,=\,J_{2}(J_{1})\\[2mm] \displaystyle J_{1}\,=\,\frac{1}{2\pi}\oint p_{1}\,\mathrm{d}q_{1}\,=\,\frac{r^{2}}{2}\end{array}\right\}\,\rightarrow\,\frac{\omega_{1}}{\omega_{2}}\,=\,a(r)\,. \tag{7.30}\]

The map (7.29) can therefore be written as

\[\left.\begin{array}{l}r^{\prime}\,=\,r\\ \theta^{\prime}\,=\,\theta\,+\,2\pi\,a(r)\end{array}\right\}\,\equiv\,T\begin{pmatrix}r\\ \theta\end{pmatrix}\,. \tag{7.31}\]

This is Moser's twist map (Moser, 1973). We note that for a rational frequency ratio \(r/s\,=\,a(r_{0})\), every point on the circle \(r_{0}\), \(\theta_{0}\) is a fixed point of \(T^{s}\) since

\[T^{s}\begin{pmatrix}r_{0}\\ \theta_{0}\end{pmatrix}\,=\,\begin{pmatrix}r_{0}\\ \theta_{0}\,+\,2\pi\,\frac{r}{s}\cdot s\end{pmatrix}\,=\,\begin{pmatrix}r_{0}\\ \theta_{0}\,+\,2\pi\,r\end{pmatrix}\,. \tag{7.32}\]

If we now perturb \(H_{0}\) by \(\varepsilon H_{1}\), the twist map becomes

\[\left.\begin{array}{l}r_{i+1}\,=\,r_{i}\,+\,\varepsilon\,f(r_{i},\theta_{i})\\ \theta_{i+1}\,=\,\theta_{i}\,+\,2\pi\,a(r_{i})\,+\,\varepsilon\,g(r_{i},\theta_{i})\end{array}\right\}\,\equiv\,T_{\varepsilon}\begin{pmatrix}r_{i}\\ \theta_{i}\end{pmatrix} \tag{7.33}\]

where \(f\) and \(g\) depend on \(H_{1}\). As a consequence of Liouville's theorem (which also holds for the Hamiltonian \(H_{0}+\varepsilon H_{1}\)), the map \(T_{\varepsilon}\) is area-preserving.

Figure 129: Poincaré map of orbits on the torus in the plane \((q_{1},p_{1})\).

What can we say now about the fixed points of \(T_{\varepsilon}\)?
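A concrete numerical instance of a perturbed twist map of the form (7.33) — our choice for illustration, not one singled out by the text — is the Chirikov standard map, \(r_{i+1}=r_{i}+\varepsilon\sin\theta_{i}\), \(\theta_{i+1}=\theta_{i}+r_{i+1}\ (\mathrm{mod}\ 2\pi)\). For small \(\varepsilon\) an orbit stays close to an invariant circle \(r=\) const., while for large \(\varepsilon\) the same initial condition wanders chaotically in \(r\):

```python
import math

def twist_orbit(r, theta, eps, n):
    # area-preserving perturbed twist map (Chirikov standard map)
    out = []
    for _ in range(n):
        r = r + eps * math.sin(theta)
        theta = (theta + r) % (2 * math.pi)
        out.append(r)
    return out

def spread(rs):
    # total range explored in the radial (action) direction
    return max(rs) - min(rs)

quiet = spread(twist_orbit(1.0, 0.3, 0.1, 2000))   # hugs a KAM circle
loud = spread(twist_orbit(1.0, 0.3, 5.0, 2000))    # wanders in the chaotic sea
print(quiet, loud)
```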
Consider two circles \(C_{+}\) and \(C_{-}\) between which lies the circle \(C\) on which \(a=r/s\). On \(C_{+}\), \(a>r/s\), and on \(C_{-}\), \(a<r/s\). \(T^{s}\) therefore maps \(C_{+}\) anticlockwise, \(C_{-}\) clockwise, and \(C\) not at all (see Fig. 130).

Fig. 130: Action of \(T^{s}\) and \(T^{s}_{\varepsilon}\) on \(C_{+}\) and \(C_{-}\).

Under the perturbed map \(T^{s}_{\varepsilon}\) these relative twists are preserved if \(\varepsilon\) is small enough. Thus, on any radius from \(0\) there must be one point whose angular coordinate is unchanged by \(T^{s}_{\varepsilon}\). These radially mapped points make up a curve \(R_{\varepsilon}\) close to \(C\). Fig. 131 shows the curve \(R_{\varepsilon}\) formed by these points, and its image \(T^{s}_{\varepsilon}(R_{\varepsilon})\), which cuts \(R_{\varepsilon}\) in an even number of points because the areas enclosed by \(R_{\varepsilon}\) and \(T^{s}_{\varepsilon}(R_{\varepsilon})\) must be the same. The points common to \(R_{\varepsilon}\) and \(T^{s}_{\varepsilon}(R_{\varepsilon})\) are the fixed points of \(T^{s}_{\varepsilon}\), and we can see in Fig. 132 that an alternating sequence of elliptic and hyperbolic fixed points emerges. This means that the original torus with rational frequency ratio is not completely destroyed under a perturbation; rather, an even number of fixed points survives. This is the "Poincaré-Birkhoff theorem" (Birkhoff, 1935). Let us first consider the elliptic fixed points, which are surrounded by rotating points (see Fig. 132). The corresponding orbits are the Poincaré sections of smaller tori, for which all our arguments can be repeated; that is, some of these smaller tori are again stable according to the KAM theorem, and other tori decompose into smaller ones according to the Poincaré-Birkhoff theorem. This gives rise to the self-similar structure in Fig. 133.

### Homoclinic Points and Chaos

Which role do the hyperbolic fixed points play? Fig.
134 shows that, near a hyperbolic fixed point \(H\), the motion becomes unstable, and orbits are driven away from it, in contrast to the stable rotational motion around an elliptic fixed point. The stable (\(W_{s}\)) and unstable (\(W_{u}\)) lines which lead to or emanate from \(H\) behave highly irregularly since:

Figure 133: Tori with rational frequency ratio decay into smaller and smaller tori, and the pattern of newly created elliptic and hyperbolic fixed points shows self-similarity.

* They cannot intersect themselves (otherwise the motion on a trajectory in phase space would not be unique for a given set of initial conditions),
* but \(W_{u}\) can intersect \(W_{s}\) at a so-called homoclinic point (see Fig. 135).

Because the map \(T_{\varepsilon}^{s}\) is continuous, and a homoclinic point is not a fixed point, repeated application of \(T_{\varepsilon}^{s}\) produces new homoclinic points. Furthermore, \(T_{\varepsilon}^{s}\) must be applied an infinite number of times to approach the hyperbolic fixed point \(H\) along \(W_{s}\) (Appendix G). Between each homoclinic point \(H_{0}\) and \(H\) there is, therefore, an infinite number of other homoclinic points; that is, the curves \(W_{u}\) and \(W_{s}\) form an extremely complex network. Summarizing: if we disturb the regular orbits of an integrable system on a torus in phase space by adding a nonintegrable perturbation, then, depending on the initial conditions (different \(\vec{J}\), \(\vec{\delta}\) in (7.7) imply different \(\omega_{1}/\omega_{2}\) since \(\vec{\omega}=\vec{\omega}(\vec{J})\)), regular or completely irregular motion results. Although the measure of initial conditions which lead to regular motion is nonzero due to the KAM theorem, for every rational frequency ratio (and the rationals are densely distributed along the real axis) one obtains smaller and smaller stable tori together with irregular orbits due to the hyperbolic fixed points.
Thus, an arbitrarily small change in the initial conditions leads to a completely different long-time behavior, and for the motion in phase space one obtains the complicated pattern in Fig. 136. It shows that in conservative systems regular and irregular motion are densely interwoven.

Fig. 136: Regular and irregular motion in the phase space of a nonintegrable system.

Finally, we also mention that for area-preserving maps one finds "period doubling", i.e. a successive creation of new pairs of _elliptic_ fixed points (Greene et al., 1981). We shall discuss this scenario in Appendix G and show that the corresponding Feigenbaum constants are larger than in the dissipative case.

### Arnold Diffusion

So far in this section we have only dealt with systems having two degrees of freedom, for which the two-dimensional tori stratify the three-dimensional energy surface \(S_{E}\). The irregular orbits which traverse regions where rational tori have been destroyed are therefore trapped between irrational tori. They can only explore a region of the energy surface which, while three-dimensional, is nevertheless restricted and, in particular, disconnected from other irregular regions, as shown in Fig. 137. For more degrees of freedom, however, the tori do not stratify \(S_{E}\) (e.g. for three degrees of freedom the tori are three-dimensional, and the energy surface is five-dimensional). The gaps then form one single connected region. This offers the possibility of so-called _"Arnold diffusion"_ of irregular trajectories (Arnold, 1964). The existence of invariant tori for perturbed motion is, therefore, not a guarantee of stability of motion for systems with more than two degrees of freedom, because irregular wandering orbits that are _not_ trapped exist arbitrarily close to the tori.

Figure 137: Trapping of irregular orbits between stable KAM tori for a system with two degrees of freedom.
Figure 138: Arnold diffusion for Hamiltonian systems with more than two degrees of freedom (schematic).

### Examples of Classical Chaos

Finally, we present some experimental evidence for the coexistence of regular and irregular motion. Fig. 139 shows the Poincaré map in \(S\) for the nonintegrable Hénon-Heiles system,

\[H\,=\,\frac{1}{2}\left(p_{1}^{2}\,+\,q_{1}^{2}\,+\,p_{2}^{2}\,+\,q_{2}^{2}\right)\,+\,q_{1}^{2}\,q_{2}\,-\,\frac{q_{2}^{3}}{3} \tag{7.34}\]

which consists of an integrable pair of harmonic oscillators coupled by nonintegrable cubic terms (Hénon and Heiles, 1964). The left-hand column shows the surfaces of section generated by eighth-order perturbation theory for various energies (after Gustavson, 1966); the right-hand column shows the computed intersections of the trajectory with \(S\). For \(E=1/24\) and \(E=1/12\), the mapping plane is covered with the intersections of (somewhat deformed) tori, which signal regular motion and which are identical with those given by perturbation theory. Above \(E=1/9\), however, most, but not all, tori are destroyed, and all the dots which appear to be random are generated by one trajectory as it crosses \(S\). The figure for \(E=1/8\) clearly shows the coexistence of regular and irregular motion.

Fig. 139: Poincaré maps for the Hénon-Heiles system (after Berry, 1978).

As a further example, we consider the motion of an asteroid around the sun, perturbed by the motion of Jupiter, as shown in Fig. 140. This three-body problem is nonintegrable, and according to eqns. (7.24-25) we expect that the asteroid motion becomes unstable if the ratio of the unperturbed frequency \(\omega\) of the asteroid motion and the angular frequency \(\omega_{J}\) of Jupiter becomes rational. Fig.
141 illustrates that, in fact, gaps occur in the asteroid distribution for rational \(\omega/\omega_{J}\). On the other hand, the existence of stable asteroid orbits (\(J\neq 0\)) can be considered as a confirmation of the KAM theorem. A second sort of solar-system gap occurs in the _rings of Saturn_. In this system Saturn is the attractor; the perturber is any of the inner satellites, and the test masses are the ring particles. One major resonance occurs within the "Cassini division" shown on Plate VII at the beginning of the book.

Figure 141: Fraction \(f\) of asteroids in the belt between Mars and Jupiter as a function of \(\omega/\omega_{J}\) (after Berry, 1978).

Figure 140: Perturbation of an asteroid's motion by Jupiter.

### 7.2 Strongly Irregular Motion and Ergodicity

In the previous section, we linked the origin of irregular motion in Hamiltonian systems to hyperbolic fixed points in the associated area-preserving maps. If we, therefore, want to construct models for strongly irregular motion, it is natural to search for maps for which all fixed points are hyperbolic.

#### Cat Map

One example of such a system is Arnold's cat map on a torus, which is defined by

\[x_{n+1}\,=\,x_{n}\,+\,y_{n}\;\bmod 1\] \[y_{n+1}\,=\,x_{n}\,+\,2\,y_{n}\;\bmod 1 \tag{7.35}\]

Figure 142: Action of the map \(T\) on a cat on a torus. The torus a) is transposed into the unit square of b). \(\hat{T}\) is the map \(T\) without restriction to the torus.
(After Arnold and Avez, 1968.)

This map is area-preserving because the Jacobian of \(T\) is unity, and it has the eigenvalues

\[\lambda_{1}\,=\,(3\,+\,\sqrt{5}\,)/2\,>\,1\qquad\text{and}\qquad\lambda_{2}\,=\,\lambda_{1}^{-1}\,<\,1 \tag{7.36}\]

so that all fixed points of \(T^{n}\) (\(n=1,2,3\ldots\)) are hyperbolic. Any point on the torus for which \(x_{0}\) and \(y_{0}\) are rational fractions is a fixed point of \(T^{n}\) for some \(n\) (e.g. (0, 0) is a fixed point of \(T\), and (2/5, 1/5) and (3/5, 4/5) are fixed points of \(T^{2}\), etc.), and these are the only fixed points because \(T\) has integral coefficients. The action of the cat map is illustrated in Fig. 142. After just one iteration the cat is wound around the torus in complicated filaments; its dissociation arises from the hyperbolic nature of \(T\), which causes initially close points to map far apart.

Figure 144: a) Behavior of a volume element for nonmixing and for mixing transformations. b) Mixing of a drop of ink in a glass of water. (After Arnold and Avez, 1968.)

The axes of stretch (\(W_{u}\)) and compression (\(W_{s}\)) from (0, 0) lie along irrational directions and so wrap densely around the torus, never intersecting themselves but intersecting one another infinitely often, as shown in Fig. 143. Since any set of iterates (which starts from a point \((x_{0},y_{0})\) with \(x_{0}/y_{0}\) irrational) eventually covers the torus, "time" averages over the iterates are equal to "space" averages over the torus, and the motion generated by the cat map is ergodic. However, the cat map has an even stronger property: mixing. In other words, the map distorts any area element so strongly that it is eventually spread over the whole torus, just as a drop of ink (its volume corresponds to an area element in the cat map) is homogeneously distributed throughout a glass of water after it has been stirred (see Fig. 144).
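Both properties of the cat map — periodic orbits on rational points and exponential separation of nearby points — can be checked directly. A minimal sketch; the particular starting points are arbitrary choices:

```python
from fractions import Fraction as F

def cat_map(x, y):
    # one step of Arnold's cat map (7.35) on the unit torus
    return (x + y) % 1, (x + 2 * y) % 1

# rational points are periodic: (2/5, 1/5) and (3/5, 4/5) form a 2-cycle
p = (F(2, 5), F(1, 5))
print(cat_map(*p), cat_map(*cat_map(*p)))  # (3/5, 4/5), then back to p

# hyperbolicity: nearby points separate roughly by a factor
# lambda_1 = (3 + sqrt(5))/2 = 2.618... per iteration
def torus_dist(a, b):
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return min(dx, 1 - dx) + min(dy, 1 - dy)

a, b = (0.30, 0.30), (0.30 + 1e-9, 0.30)
for _ in range(15):
    a, b = cat_map(*a), cat_map(*b)
print(torus_dist(a, b))  # the initial 1e-9 has grown by orders of magnitude
```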
### Hierarchy of Classical Chaos

The first entry in Table 13 contains the well-known Poincaré recurrence theorem for Hamiltonian systems. It is simply a consequence of area-preserving motion in a finite region. We can draw an analogy to what happens if we take a walk in a snow-covered finite square: eventually the area will be covered with footprints, and after some time one is forced to walk in one's own prints (again and again).

\begin{table} \begin{tabular}{l l l} Property & Definition & Example \\ Recurrent & The trajectory returns to a given neighborhood of a point an infinite number of times & Any Hamiltonian system (or area-preserving map) which maps a finite region of phase space onto itself \\ Ergodic & Time averages can be replaced by averages over phase space \(\leftrightarrow\) Zero is a simple eigenvalue of the Liouville operator L & \(x_{n+1}\,=\,x_{n}\,+\,b\;\bmod 1\), \(b\) irrational \\ Mixing & Correlation functions decay to zero in the infinite time limit \(\rightarrow\) L has one simple eigenvalue 0, and the rest of the spectrum is continuous & Cat map \\ \(K\)-system & The map has a positive \(K\)-entropy, i.e. close orbits separate exponentially \(\rightarrow\) L has a Lebesgue spectrum with denumerably infinite multiplicity & Cat map \\ \end{tabular} \end{table} Table 13: Hierarchy of classical chaos.

Recurrence does not imply ergodicity, because the allowed areas need not be connected (there could be two squares). If the phase space is divided, the trajectory is confined to the region in which it started and does not cover the whole phase space. More formally, a map \(f\) is called mixing if

\[\lim_{n\to\infty}\,\rho\left[f^{n}(A)\,\cap\,B\right]\,=\,\rho(A)\,\rho(B) \tag{7.37}\]

for every pair of measurable sets \(A\) and \(B\). Here \(\rho\) is the invariant measure of \(f\). We used the abbreviation

\[\rho(A)\,=\,\int_{A}\mathrm{d}x\,\rho(x) \tag{7.38}\]

and assumed that the measure of the allowed phase space \(G\), on which \(f\) acts, is normalized to unity, i.e. \(\int_{G}\mathrm{d}x\,\rho(x)\,=\,1\). If \(A\) and \(B\) correspond to the same point, eq. (7.37) reduces to

\[\lim_{n\to\infty}\,\int_{G}\mathrm{d}x\,\rho(x)\,f^{n}(x)\,x\,\equiv\,\langle x_{n}\,x_{0}\rangle\,=\,\left[\int_{G}\mathrm{d}x\,\rho(x)\,x\right]^{2}\,=\,\langle x_{0}\rangle^{2} \tag{7.39}\]

i.e. _mixing means that the autocorrelation function_ \(\langle(x_{n}-\langle x_{0}\rangle)(x_{0}-\langle x_{0}\rangle)\rangle\,=\,\langle x_{n}x_{0}\rangle\,-\,\langle x_{0}\rangle^{2}\) _decays to zero_, and "the system relaxes to thermal equilibrium". (The general proof can be found in the book by Arnold and Avez (1968), who actually show that a system is mixing if, and only if, \(\lim_{n\to\infty}\langle F^{*}[f^{n}(x)]\,G(x)\rangle\,=\,\langle F^{*}(x)\rangle\,\langle G(x)\rangle\) for any square-integrable complex-valued functions \(F\) and \(G\).) Although ergodicity (of course) implies recurrence, it does not imply mixing. Consider, for example, the map

\[x_{n+1}\,=\,x_{n}\,+\,b\;\bmod 1\,\equiv\,f(x_{n}) \tag{7.40}\]

which shifts a point \(x_{0}\) on a unit circle by \(b\).
The map is ergodic for irrational values of \(b\), because then a given starting point \(x_{0}\) never returns to itself, as it does for rational \(b\,=\,p/q\) (\(p\), \(q\) integers) after \(q\) steps, and the images of \(x_{0}\) cover the circle uniformly. The Liapunov exponent for this map is \[\lambda\,=\,\lim_{n\to\infty}\,\frac{1}{n}\,\log\,\left|\,\frac{{\rm d}x_{n}}{{\rm d}x_{0}}\,\right|\,=\,0\] i. e. (7.40) is an example that shows a) ergodicity without sensitive dependence on the initial conditions, and b) ergodicity without mixing. The last statement follows because the overlap of the images \(f^{n}(A)\) of a line element \(A\) with another (line) element \(B\) is either finite or zero (according to the number of iterations) and never reaches a finite equilibrium value, as required by eq. (7.37) (see Fig. 145). (Note that for simplicity we have replaced in this example "area" elements by "line" elements.) Typical systems that show mixing are the cat map (Fig. 142) and the baker's transformation (Fig. 70a). In both cases a given volume element becomes distorted into finer and finer filaments that eventually cover the whole phase space uniformly. But the rate at which volume elements become stretched need not be exponential (as in the examples quoted above), i. e. a system that shows mixing need not be a \(K\)-system. These examples show that the properties in Table 13 indeed characterize increasingly chaotic motion. We have also indicated in this table that the hierarchy of classical chaos is mirrored by the spectrum of eigenvalues of the Liouville operator. Let us briefly explain this fundamental relation, which allows a characterization of classical chaos without considering individual trajectories.
The Liouville operator L determines the time evolution of the density \(\rho\,(\vec{p},\,\vec{q})\) in phase space: \[\frac{\mathrm{d}}{\mathrm{d}t}\,\rho\,(\vec{p},\,\vec{q})\,=\,\dot{\vec{q}}\,\frac{\partial\rho}{\partial\vec{q}}\,+\,\dot{\vec{p}}\,\frac{\partial\rho}{\partial\vec{p}}\,= \tag{7.42}\] \[=\,\left[\,\frac{\partial H}{\partial\vec{p}}\,\frac{\partial}{\partial\vec{q}}\,-\,\frac{\partial H}{\partial\vec{q}}\,\frac{\partial}{\partial\vec{p}}\,\right]\rho\,\equiv\,-\,i\,\mathrm{L}\,\rho \tag{7.43}\] \[\rightarrow\,\rho\,(t)\,=\,\mathrm{e}^{-i\mathrm{L}t}\,\rho\,(0)\,. \tag{7.44}\] Here we used Hamilton's equations, and (7.43) defines L. It is useful to introduce the eigenvalues \(\lambda\) of L via \[\mathrm{e}^{-i\mathrm{L}t}\,\varphi\,(\vec{x})\,=\,\mathrm{e}^{i\lambda t}\,\varphi\,(\vec{x})\,;\quad\vec{x}\,=\,(\vec{p},\,\vec{q}) \tag{7.45}\] where \(\varphi\,(\vec{x})\) is a complex, square-integrable function in phase space. According to Table 13, different degrees of classical chaos correspond to different spectra of \(\lambda\) (the arrows indicate the direction of the statement). We explain this correspondence by two examples and refer to the cited literature for the general proofs.

Figure 145: Translations on a circle show ergodicity, but they are not mixing.

First we consider two uncoupled harmonic oscillators whose Hamiltonian reads in action-angle variables: \[\mathrm{H}_{\infty}\,=\,\omega_{1}\,J_{1}\,+\,\omega_{2}\,J_{2} \tag{7.46}\] where \(\omega_{1}\), \(\omega_{2}\) are the oscillator frequencies. Eqns.
(7.43\(-\)45) then become \[-\,i\,\mathrm{L}_{\infty}\,\rho\,=\,\left[\,\omega_{1}\,\frac{\partial}{\partial\theta_{1}}\,+\,\omega_{2}\,\frac{\partial}{\partial\theta_{2}}\,\right]\rho \tag{7.47}\] \[\mathrm{e}^{-i\mathrm{L}_{\infty}t}\,\varphi\,(\theta_{1},\,\theta_{2})\,=\,\mathrm{e}^{i\lambda t}\,\varphi\,(\theta_{1},\,\theta_{2}) \tag{7.48}\] where \(\varphi\) is periodic in the angles \(\theta_{1}\) and \(\theta_{2}\). These equations have the obvious solutions \[\varphi\,(\theta_{1},\,\theta_{2})\,\propto\,\mathrm{e}^{2\pi i(n_{1}\theta_{1}\,+\,n_{2}\theta_{2})} \tag{7.49}\] \[\rightarrow\,\lambda\,=\,2\,\pi\,(n_{1}\,\omega_{1}\,+\,n_{2}\,\omega_{2}) \tag{7.50}\] where \(n_{1}\) and \(n_{2}\) are integers. The motion of the two oscillators on the torus (see Fig. 127) is only ergodic if \(\omega_{1}/\omega_{2}\) is irrational, i. e. \(\lambda\,\propto\,n_{1}\,\omega_{1}\,+\,n_{2}\,\omega_{2}\,=\,0\) only for \(n_{1}\,=\,n_{2}\,=\,0\), and \(\lambda\,=\,0\) is a simple eigenvalue. For nonergodic motion \(\omega_{1}/\omega_{2}\,=\,\) rational, and \(\lambda\,=\,0\) is degenerate. It is quite plausible that ergodicity and a nondegenerate eigenvalue \(\lambda\,=\,0\) correspond to each other, because only then does the equation for the time-invariant density \(\rho\), \[\mathrm{e}^{-i\mathrm{L}t}\,\rho\,=\,\rho \tag{7.51}\] have a unique solution. Eq. (7.44) can be extended to maps \(\vec{x}_{n+1}\,=\,\vec{G}\,(\vec{x}_{n})\) by \[\mathrm{e}^{-i\mathrm{L}}\,\varphi\,(\vec{x})\,\equiv\,\varphi\,[\vec{G}^{-1}\,(\vec{x})]\,=\,\mathrm{e}^{i\lambda}\,\varphi\,(\vec{x})\,. \tag{7.52}\] As a further example, we consider the cat map (7.35), which acts on a torus, so that we can expand \(\varphi\) as \[\varphi\,(\vec{x})\,=\,\sum_{\vec{m}}\,\mathrm{e}^{2\pi i\,\vec{m}\,\cdot\,\vec{x}}\,\bar{\varphi}\,(\vec{m}) \tag{7.53}\] where \(\vec{m}\,=\,(m_{1},\,m_{2})\); \(m_{1},\,m_{2}\) integers.
Using the fact that the transformation matrix \(\mathrm{T}\) is symmetric, we obtain from (7.52\(-\)53) after straightforward manipulations \[\bar{\varphi}\,(\mathrm{T}\,\vec{m})\,=\,\mathrm{e}^{i\lambda}\,\bar{\varphi}\,(\vec{m})\,. \tag{7.54}\] The point \(\vec{m}=0\) yields the only fixed point in (7.54), i. e. \(\lambda=0\) is again a simple eigenvalue that corresponds to a constant invariant density. The action of T on the other \(\vec{m}\)-values is explained in Fig. 146. If we relabel the \(\vec{m}\)'s according to their hyperbolas (\(a\)) and their place on them (\(j\)), i. e. \(\vec{m}\,\triangleq\,(a,j)\), then eq. (7.54) can be written as \[\mathrm{e}^{-i\,\mathrm{L}}\,\bar{\varphi}\,(a,j)\,\equiv\,\bar{\varphi}\,(a,j\,+\,1)\,=\,\mathrm{e}^{i\lambda}\,\bar{\varphi}\,(a,j) \tag{7.55}\] i. e. \(\mathrm{e}^{-i\,\mathrm{L}}\) is a translation operator in the variable \(j\). The corresponding spectrum of L is continuous (note that the \(j\)'s are not limited) and denumerably infinitely degenerate (via the \(a\)'s). A spectrum \(\lambda\) which contains every real number with the same multiplicity, and for which the spectral weight is just \(\mathrm{d}\lambda\), is called a Lebesgue spectrum. The cat map is an example of a \(K\)-system. These systems have (also in general) Lebesgue spectra with denumerably infinite multiplicity.

### Three Classical \(K\)-Systems

Let us now present some physical examples of \(K\)-systems that exhibit ergodic and mixing behavior. First, we consider the famous hard-sphere fluid whose mixing was rigorously established by Sinai (1970). Because of the infinite contact potential, this is clearly not a perturbation to a simple system (e. g. of noninteracting particles). Fig. 147 a shows how exponential separation of the trajectories results from collisions between the spheres' convex surfaces. It is worth emphasizing that Sinai's proof is valid for _two_ discs moving on a torus, i. e. it does _not_ require the thermodynamic limit of infinitely many particles.
Another system, which has only a few degrees of freedom but nevertheless exhibits ergodicity and mixing, is a free particle in a stadium, as shown in Fig. 147 b. The exponential separation of trajectories is generated by the particular form of the boundary (Bunimovich, 1979). Finally, we mention that the geodesic motion of a mass point on a compact surface with _overall_ negative Gaussian curvature is also mixing and ergodic (Anosov, 1969). It can already be seen from the saddle-shaped surface in Fig. 147 c (which has a negative curvature at _one_ point P) how nearby trajectories separate along geodesics.

### 7.2 Strongly Irregular Motion and Ergodicity

Figure 147: Separation of trajectories for three chaotic systems: a) Sinai's billiards, b) a free particle in a stadium, c) a free particle on a surface with negative curvature.

## Chapter 8 Chaos in Quantum Systems?

The existence of chaotic motion in classical conservative systems naturally leads to the question of how this irregularity manifests itself in the corresponding quantum systems. In a broader context, one might inquire about the nature of the solutions to the wave equations that arise, e. g., in plasma physics, optics, or acoustics, whose ray trajectories (WKB solution, geometric optics) are stochastic. The question about the behavior of quantum systems whose classical limit exhibits chaos has been posed since the early days of quantum mechanics (Einstein, 1917) because it raises the problem of how to quantize a system which executes nonperiodic motion (at that time, periodic systems were quantized via the Bohr-Sommerfeld quantization rule \(\oint p\,\mathrm{d}q=nh\), where \(h\) is Planck's constant).
Since the discovery and the establishment of wave mechanics, we know how to proceed if we wish to learn about the time evolution of any quantum system: solve the time-dependent Schrodinger equation \[\hat{\mathbf{H}}\,\Psi\,=\,-\,\frac{\hbar}{i}\,\frac{\partial}{\partial t}\,\Psi \tag{8.1}\] where \(\hat{\mathbf{H}}\) is the Hamilton operator of the system, \(\Psi\) is its wave function, and \(\hbar=h/2\,\pi\). In order to develop some intuition for the changes which will arise if we pass from a classically chaotic system to its quantum-mechanical version, we recall several major differences between classical and quantum systems: * In contrast to classical mechanics (where a statistical description is only necessary if the system becomes chaotic in time), quantum mechanics allows _only_ statistical predictions. Although the Schrodinger equation is linear in \(\Psi\) and can, e. g., be solved exactly for a harmonic oscillator with the result that \(\Psi\) depends regularly on time (i. e., there is no chaotic time behavior), this does _not_ mean that the motion is completely deterministic, since \(|\,\Psi(\vec{x},\,t)\,|^{2}\) is only the probability density to find an electron at a space-time point (\(\vec{x},\,t\)). * Because of Heisenberg's uncertainty principle \[\Delta p\,\Delta q\,>\,\hbar/2 \tag{8.2}\] there are no trajectories in quantum mechanics (if one measures \(q\) with precision \(\Delta q\), one disturbs the momentum \(p\) by \(\Delta p\) according to (8.2)). Therefore, the characterization of chaos based on the exponentially fast separation of nearby trajectories becomes useless for quantum systems. * The uncertainty principle (8.2) also implies that points in \(2n\)-dimensional phase space within a volume \(h^{n}\) cannot be distinguished, i. e. the phase space becomes coarse-grained. This means that regions in phase space in which the motion is classically chaotic (see Fig.
139), but which have volumes smaller than \(h^{n}\), are not "seen" in quantum mechanics; and for the corresponding quantum system, we expect a regular behavior in time. Thus the finite value of Planck's constant tends to suppress chaos. On the other hand, the limit \(h\to 0\) becomes difficult (for quantum systems which have a classical counterpart that displays chaos) because as \(h\) becomes smaller, more and more irregular structures appear. In the following, we _distinguish_ between (time-independent) _stationary Hamiltonians_ and _time-dependent Hamiltonians_, which appear, for example, in the quantum version of the kicked rotator. For systems with stationary Hamiltonians \(\widehat{\mathbf{H}}\), the Schrodinger equation (8.1) can be reduced (with \(\Psi=\Psi_{0}\,\exp\,(-iEt/\hbar)\)) to a linear eigenvalue problem for the energy levels \(E\): \[\widehat{\mathbf{H}}\,\Psi_{0}\,=\,E\,\Psi_{0}\;. \tag{8.3}\] As long as the levels are discrete, \(\Psi\) behaves regularly in time and there is no chaos. But there remain the fundamental questions: under what circumstances will this be the case, and are there still differences between the energy spectra of a quantum system with a regular classical limit and a quantum system whose classical version displays chaos? Information about the behavior of systems with time-dependent Hamiltonians is, for example, relevant for the problem of how energy is distributed in the energy ladder of a molecule excited by a laser beam, i. e. it is related to the practical problem of laser photochemistry. More specifically, the answers to the following questions are sought: Does quantum chaos exist? How can one characterize it? Is there an equivalent to the hierarchy shown in Table 13 in quantum mechanics? What happens to the KAM-theorem for quantized motion, etc.? Up to now there are more questions than answers. To get at least some insight into these problems, we consider several model systems.
In Section 8.1 we investigate the quantized version of the cat map (whose classical motion is purely chaotic) and show that it displays no chaos, because the finite value of Planck's constant, together with the doubly periodic boundary conditions, restricts the eigenvalues of the time-evolution operator to a discrete set, such that the motion becomes completely periodic. In the subsequent section we describe a calculation by McDonald and Kaufmann (1979), which shows that the energy spectrum of a free quantum particle in a stadium (for which the classical motion is chaotic) differs drastically from that of a free (quantum) particle in a circle (for which the classical motion is regular). Finally, in the last section we demonstrate (by mapping the system onto an electron-localization problem) that a kicked quantum rotator shows no diffusion, whereas its classical counterpart displays deterministic diffusion above a certain threshold.

### 8.1 The Quantum Cat Map

To see how a conservative system, which classically behaves completely chaotically, changes its behavior for nonzero values of Planck's constant, we quantize a modification of Arnold's cat map. (The familiar cat map (7.35) cannot be quantized because the corresponding time-evolution operator does not preserve the periodicity of the wave function on the torus; see Hannay and Berry, 1980.) Let us recall that the allowed phase space of a classical cat map is the unit torus. In this example, the phase points develop according to the dynamical law \[\left(\begin{array}{c}p_{n+1}\\ q_{n+1}\end{array}\right)\,=\,\left(\begin{array}{cc}1&2\\ 2&3\end{array}\right)\,\left(\begin{array}{c}p_{n}\\ q_{n}\end{array}\right)\,. \tag{8.4}\] In quantum mechanics, eq.
(8.4) becomes the Heisenberg equation of motion for the coordinate and momentum operators \(\hat{\mathfrak{q}}_{n}\), \(\hat{\mathfrak{p}}_{n}\) at time \(n\), and the restriction of the classical phase space to a torus implies periodic boundary conditions for the quantum-mechanical wave function in coordinate _and_ momentum space. In other words, the eigenvalues of _both_ operators \(\hat{\mathfrak{p}}\) _and_ \(\hat{\mathfrak{q}}\) only have discrete values which cover the torus by a lattice of allowed phase points, as shown in Fig. 148. We will now show that the unit cell of this lattice is a square whose lattice constant is just Planck's quantum of action \(h\). If the eigenvalues of \(\hat{\mathfrak{q}}\) have a spacing \(\Delta q\,=\,\frac{1}{N}\), i. e. \[q\,=\,0\,,\,\frac{1}{N}\,,\ldots,1\quad\mbox{where}\;N\,=\,\mbox{integer} \tag{8.5}\] this implies (via the double periodicity of the wave function) the maximum momentum eigenvalue \[p_{\rm max}\,=\,\hbar\,2\,\pi\Big/\left(\frac{1}{N}\right)\,=\,Nh \tag{8.6}\] and a spacing \(\Delta p\,=\,h\), i. e. the eigenvalues of \(\hat{\mathfrak{p}}\) are \[p\,=\,0,\,h,\,2\,h,\ldots,Nh\,. \tag{8.7}\] Because the allowed phase space has unit area, we have \[1\,=\,q_{\rm max}\,p_{\rm max}\,=\,Nh \tag{8.8}\] \[\mbox{i. e.}\quad h\,=\,\frac{1}{N}\,\rightarrow\,\Delta p\,=\,\Delta q\,=\,h\,. \tag{8.9}\] This requirement makes the quantum version of the cat map somewhat unrealistic. But if we assume for a moment that Planck's constant \(h\) is a free parameter and the quantum case is only defined by \(h\,\neq\,0\) such that eq. (8.9) makes sense, then it follows from (8.5) and (8.7) that in quantum mechanics only phase points with a rational ratio \(p/q\) are allowed. This means that the points with irrational ratios, which were the only ones in the classical cat map that lead to chaotic trajectories, are forbidden in quantum mechanics.
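The discreteness of the allowed phase points already makes the absence of chaos plausible: on the lattice of rationals with denominator \(N\), the map matrix of (8.4) acts as multiplication modulo \(N\) on a finite set, so every classical lattice orbit must be periodic. The sketch below computes this classical lattice period \(n(N)\); note this is an illustration of the finite-lattice argument, not a computation of the quantum propagator \(\hat{\bf U}\) itself.

```python
def matmul_mod(A, B, N):
    """Product of two 2x2 integer matrices, taken modulo N."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % N
             for j in range(2)] for i in range(2)]

def lattice_period(T, N):
    """Smallest n with T^n = identity mod N: after n steps every
    allowed phase point (p, q) = (integers)/N has returned to itself."""
    I = [[1, 0], [0, 1]]
    M = [[x % N for x in row] for row in T]
    n = 1
    while M != I:
        M = matmul_mod(M, T, N)
        n += 1
    return n

T = [[1, 2], [2, 3]]          # map matrix of eq. (8.4)
for N in (3, 5, 10, 50):
    print(N, lattice_period(T, N))
```

The period depends on \(N\) in an irregular, number-theoretic way, which is the classical shadow of the result of Hannay and Berry quoted below for the operator \(\tilde{\bf U}\).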
It is therefore reasonable to expect that the quantum version of the cat map will not exhibit chaos. It has indeed been found by Hannay and Berry (1980) that the time-evolution operator \(\tilde{\bf U}\) for the quantum cat map is periodic (i. e. for every \(N\) there exists an \(n\,(N)\) such that \(\tilde{\bf U}^{n}=\tilde{1}\)) and has a discrete spectrum of eigenvalues. This implies that all expectation values for the cat map are periodic in time. In other words, the finite value of Planck's constant and the doubly periodic boundary conditions restrict the eigenvalues (of the time-evolution operator) in the quantum version of Arnold's cat map such that chaotic motion becomes impossible.

Figure 148: Allowed phase points for the quantized version of a cat map (schematic).

### 8.2 A Quantum Particle in a Stadium

Although we have seen in the previous section that a quantum system with a chaotic classical limit does not necessarily also behave chaotically, one nevertheless expects some difference between a quantum system having a classical counterpart which shows irregular motion, and the quantum version of an integrable classical system having regular trajectories. To cast some light on this problem, McDonald and Kaufmann (1979) calculated numerically the wave functions and spectra of a free particle in a stadium and in a circular disc by solving the Schrodinger equation for a free particle in two dimensions, \[-\,\vec{\nabla}^{2}\psi\,=\,E\psi \tag{8.10}\] (in units where \(\hbar^{2}/2m=1\)), with the boundary condition \(\psi\,(x,\,y)=0\) at the "walls". Their results are summarized in Fig. 149:

Figure 149: Nodal curves [\(\psi\,(x,\,y)=0\)] for one quadrant of the (odd-odd parity) eigenfunctions in a disc (a) and in a stadium (with dimensions \(R=a\)) (b). Distribution \(N\,(\Delta E)\) of (odd-odd parity) energy-level spacings for a circular boundary (c) and for a stadium boundary (d). (After McDonald and Kaufmann, 1979.)
Note that \(\Delta E=E_{j+1}-E_{j}\) is the spacing between neighboring levels, where \(j\) increases with energy. * The eigenfunctions of the stadium problem show irregular nodal curves (where \(\psi\,(x,\,y)=0\)), in contrast to the regular curves for the circle. * The distribution \(N(\Delta E)\) of the eigenvalue spacings \(\Delta E\) for the circle shows a maximum at \(\Delta E=0\), i. e. there is a high probability of level degeneracies, and one finds _level clustering_. It has been proved by Berry and Tabor (1977) that for integrable systems \(N(\Delta E)\propto\exp\,(-\,\Delta E\cdot\,\mbox{const})\). (An exception is the quantum-mechanical oscillator, for which \(N(\Delta E)\) is a delta function at \(\Delta E=\hbar\,\omega_{0}\).) For the stadium, \(N(\Delta E)\) has a maximum at \(\Delta E\neq 0\), i. e. there is _level repulsion_. This level repulsion has also been found for the quantum version of Sinai's billiard (Berry, 1983; O. Bohigas et al., 1984), and it seems to be a characteristic feature of quantum systems whose classical limit shows chaos. It is related to the fact that no symmetries exist in these systems, i. e. there are no degeneracies (and no selection rules which prevent mutual interaction of the levels), such that \(\lim\limits_{\Delta E\to 0}\,N(\Delta E)=0\). Several theoretical explanations for this phenomenon have been offered, and an interesting connection to random-matrix theory (which is used to explain level repulsion in nuclear spectra) has been suggested (Zaslavski, 1981; Berry, 1983; Bohigas et al., 1984). Note that the distribution of level spacings is related to the eigenvalue spectrum of the quantum version of the Liouville operator \(\hat{\mbox{L}}\), because \(\hat{\mbox{L}}\,|\,n\rangle\,\langle\,m\,|\,\propto\,[\hat{\mbox{H}},\,|\,n\rangle\,\langle\,m\,|\,]\,=\,(E_{n}\,-\,E_{m})\,|\,n\rangle\,\langle\,m\,|\), where \(\hat{\mbox{H}}\) is the Hamilton operator, and \(|\,n\rangle\), \(|\,m\rangle\) are its eigenfunctions.
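The random-matrix connection can be made tangible with the smallest possible toy model (a hedged sketch under simplifying assumptions, not the stadium computation): for a \(2\times 2\) real symmetric matrix with independent Gaussian entries the level spacing is \(s=\sqrt{(H_{11}-H_{22})^{2}+4H_{12}^{2}}\), which vanishes only if two independent quantities vanish simultaneously, so small spacings are suppressed; independently drawn levels, by contrast, cluster near \(\Delta E=0\).

```python
import random

random.seed(3)

def spacing_rmt():
    """Level spacing |E2 - E1| of a 2x2 real symmetric random matrix
    [[a, b], [b, d]] with independent Gaussian entries (GOE-like)."""
    a, d = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    b = random.gauss(0.0, 1.0 / 2.0 ** 0.5)   # off-diagonal: half variance
    return ((a - d) ** 2 + 4.0 * b ** 2) ** 0.5

def spacing_indep():
    """Spacing of two independently drawn (uncorrelated) levels."""
    return abs(random.gauss(0.0, 1.0) - random.gauss(0.0, 1.0))

def small_fraction(spacings):
    """Fraction of spacings below 10% of the mean spacing."""
    mean = sum(spacings) / len(spacings)
    return sum(1 for s in spacings if s < 0.1 * mean) / len(spacings)

M = 50_000
rmt = [spacing_rmt() for _ in range(M)]
indep = [spacing_indep() for _ in range(M)]

print(small_fraction(rmt))    # small: level repulsion, N(0) -> 0
print(small_fraction(indep))  # much larger: clustering at small spacings
```

The repulsive case reproduces, qualitatively, the stadium histogram (d) of Fig. 149, and the independent-level case the circular one (c).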
### 8.3 The Kicked Quantum Rotator

We have already seen in Chapter 2 that deterministic diffusion serves as an indicator of chaos. It is, therefore, interesting to see whether this phenomenon also exists in quantum systems. (If the answer is yes, then we know that there is chaos in the quantum system.) We show first that a classical kicked rotator without damping displays (for strong enough kicking forces) deterministic diffusion, and investigate subsequently its quantum version. According to eq. (1.26), the equations of motion for the angle \(\theta\) and the angular momentum \(p\) of a classical kicked rotator are \[p_{n+1}\,=\,p_{n}\,-\,V^{\prime}\,(\theta_{n})\qquad n\,=\,0,\,1,\,2\,\ldots \tag{8.11a}\] \[\theta_{n+1}\,=\,\theta_{n}\,+\,p_{n+1}\,=\,\theta_{n}\,-\,V^{\prime}\,(\theta_{n})\,+\,p_{n} \tag{8.11b}\] where \(V(\theta)\,=\,V(\theta\,+\,2\,\pi)\) is the potential function of the kicking force. Summation of (8.11a) over \(n\) yields \[\langle\,(p_{n+1}\,-\,p_{0})^{2}\,\rangle\,=\,\sum\limits_{i,j}^{n}\,\langle\,V^{\prime}\,(\theta_{i})\,V^{\prime}\,(\theta_{j})\,\rangle \tag{8.12}\] where \(\langle\,\ldots\,\rangle\) denotes the average over all initial points \(\theta_{0}\). If the correlations between the \(V^{\prime}\,(\theta_{j})\) are short-ranged (with range \(n_{0}\)), eq. (8.12) becomes \[\langle\,(p_{n+1}\,-\,p_{0})^{2}\,\rangle\,=\,n\,\sum\limits_{j}^{n_{0}}\,\langle\,V^{\prime}\,(\theta_{j})\,V^{\prime}\,(\theta_{0})\,\rangle\,\propto\,n\quad\mbox{for}\quad n\,\gg\,1 \tag{8.13}\] i. e. the angular momentum of the kicked rotator diffuses. It has, for example, been found numerically that a kicking potential of the form \(V(\theta)\,=\,K\,\cos\theta\) generates deterministic diffusion (of the angular momentum) above a threshold \(K_{c}\,\simeq\,0.972\) (see Fig. 150). Another example is the "open cat map" in which the restriction of periodicity of the \(p_{n}\) is lifted.
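Before turning to the open cat map, the diffusion law (8.13) is easy to check numerically for \(V(\theta)=K\cos\theta\), for which \(V^{\prime}(\theta)=-K\sin\theta\) in (8.11). The sketch below (plain Python, illustrative parameter values) iterates an ensemble well above the threshold \(K_{c}\) and compares \(\langle(p_{n}-p_{0})^{2}\rangle\) at two times:

```python
import math
import random

# One kick of the classical rotator (8.11) with V(theta) = K cos(theta):
#   p_{n+1} = p_n + K sin(theta_n),  theta_{n+1} = theta_n + p_{n+1}
def kick(theta, p, K):
    p = p + K * math.sin(theta)
    return (theta + p) % (2.0 * math.pi), p

random.seed(2)
K = 5.0                                   # well above K_c ~ 0.972
ensemble = [(random.uniform(0.0, 2.0 * math.pi), 0.0)
            for _ in range(2000)]         # p_0 = 0 for every member

msd = {}                                  # <(p_n - p_0)^2> at selected n
for n in range(1, 2001):
    ensemble = [kick(t, p, K) for t, p in ensemble]
    if n in (1000, 2000):
        msd[n] = sum(p * p for _, p in ensemble) / len(ensemble)

print(msd[2000] / msd[1000])   # ratio near 2: linear (diffusive) growth
```

Doubling the number of kicks roughly doubles the mean-squared momentum, i. e. \(\langle(p_{n}-p_{0})^{2}\rangle\propto n\) as in (8.13); for \(K\) well below \(K_{c}\) the same experiment gives a bounded \(\langle p^{2}\rangle\).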
This can be viewed as a kicked rotator with a potential function \(V(\theta)\,=\,-\,(K/2)\,(\theta\,\,{\rm mod}\,2\,\pi)^{2}\) and has the equations of motion \[p_{n+1}\,=\,p_{n}\,+\,K\theta_{n} \tag{8.14a}\] \[\theta_{n+1}\,=\,\theta_{n}\,(1\,+\,K)\,+\,p_{n} \tag{8.14b}\] where \(\theta_{n}\) is always taken modulo 2\(\pi\). Including the modulo restriction, eq. (8.14b) appears (apart from \(p_{n}\), which does not seriously disturb our argument, and after division by 2\(\pi\), which changes mod 2\(\pi\) to mod 1) similar to the map (2.1) that produced the Bernoulli shift. This means that for \(K>0\), eq. (8.14b) generates chaotic motion of the angles, which leads via (8.12) to deterministic diffusion.

We now show that the quantum version of the kicked rotator does not diffuse. Instead, one finds either quantum resonance, i. e. the square of the angular momentum increases quadratically in time, or almost periodicity, i. e. the angular momentum remains bounded and recurs repeatedly arbitrarily close to its original value.

Figure 150: A phase portrait of a classical kicked rotator with a potential function \(K\cos\theta\), obtained by iterating eq. (8.11) and plotting successive points. a) For \(K=0.96\) different orbits in the shaded regions are still separated. b) For \(K=1.13\) the islands overlap and the angular momentum can diffuse. (After Chirikov, 1979.)

To understand this result, we use the idea of Fishman, Grempel, and Prange (1982) and map the kicked quantum rotator onto a one-dimensional electron-localization problem (about which several results are known). (The following derivation is due to V. Emery (private communication).) The time-dependent Hamiltonian of a kicked rotator can be written as \[\hat{\bf H}\,=\,\left\{\begin{array}{ll}\hat{\bf V}\,(\theta)/(1\,-\,\gamma)&\mbox{for}\quad 0\,<\,t\,<\,1\,-\,\gamma\\ \hat{\bf T}/\gamma&\mbox{for}\quad 1\,-\,\gamma\,<\,t\,<\,1,\quad\mbox{with}\quad\hat{\bf T}\,=\,-\,\tau\,\frac{\partial^{2}}{\partial\theta^{2}}\end{array}\right. \tag{8.15}\] where we have ignored the kinetic energy \(\hat{\bf T}\) during the delta kick, which corresponds to the limit \(\gamma\to 1\) in (8.15). The time-evolution operator from time \(t\,=\,n\) to time \(t\,=\,n\,+\,1\), i. e. before and after one kick, therefore becomes \[\hat{\bf U}\,=\,{\rm e}^{-i\hat{\bf T}}\,{\rm e}^{-i\hat{\bf V}} \tag{8.16}\] and its eigenstates \(|\,\psi_{\lambda}\rangle\) are determined by \[\hat{\bf U}\,|\,\psi_{\lambda}\rangle\,=\,{\rm e}^{-i\lambda}\,|\,\psi_{\lambda}\rangle \tag{8.17}\] where \(\lambda\) is the eigenvalue. This equation governs the time dependence of any state \(|\,\varphi\rangle\) that develops with \(\hat{\bf U}\), because \[|\,\varphi\,(n)\rangle\,=\,\hat{\bf U}^{n}\,|\,\varphi\rangle\,=\,\sum_{\lambda}\,{\rm e}^{-in\lambda}\,c_{\lambda}\,|\,\psi_{\lambda}\rangle\,;\quad c_{\lambda}\,=\,\langle\psi_{\lambda}\,|\,\varphi\rangle\;. \tag{8.18}\] We now rewrite (8.17) in the form of a Schrodinger equation for an electron in a one-dimensional random chain. By using the explicit expression (8.16) for \(\hat{\bf U}\), (8.17) reads \[{\rm e}^{-i\hat{\bf T}}\,{\rm e}^{-i\hat{\bf V}}\,|\,\psi_{\lambda}\rangle\,=\,{\rm e}^{-i\lambda}\,|\,\psi_{\lambda}\rangle \tag{8.19}\] which for \(\hat{\bf E}\,\equiv\,\lambda\,\hat{\bf 1}\,-\,\hat{\bf T}\) becomes \[{\rm e}^{i\hat{\bf E}}\,{\rm e}^{-i\hat{\bf V}}\,|\,\psi_{\lambda}\rangle\,=\,|\,\psi_{\lambda}\rangle\;. \tag{8.20}\] With \(|\,\psi_{\lambda}\rangle\,\equiv\,{\rm e}^{i\hat{\bf V}/2}\,|\,\omega\rangle\) this can be rewritten as \[{\rm e}^{i\hat{\bf V}/2}\,|\,\omega\rangle\,-\,{\rm e}^{i\hat{\bf E}}\,{\rm e}^{-i\hat{\bf V}/2}\,|\,\omega\rangle\,=\,0 \tag{8.21}\] or \[\left[\left(1\,-\,{\rm e}^{i\hat{\bf E}}\right)\cos\frac{\hat{\bf V}}{2}\,+\,i\left(1\,+\,{\rm e}^{i\hat{\bf E}}\right)\sin\frac{\hat{\bf V}}{2}\right]|\,\omega\rangle\,=\,0 \tag{8.22}\] from which we obtain \[i\left(1\,+\,{\rm e}^{i\hat{\bf E}}\right)\left[\frac{1}{i}\,\frac{1\,-\,{\rm e}^{i\hat{\bf E}}}{1\,+\,{\rm e}^{i\hat{\bf E}}}\,+\,\frac{\sin\,(\hat{\bf V}/2)}{\cos\,(\hat{\bf V}/2)}\right]\cos\frac{\hat{\bf V}}{2}\,|\,\omega\rangle\,=\,0\,. \tag{8.23}\] We, therefore, have to find the solutions of \[\left[\tan\frac{\hat{\bf E}}{2}\,-\,\tan\frac{\hat{\bf V}}{2}\right]|\,u\rangle\,=\,0\;,\quad\mbox{where}\quad|\,u\rangle\,=\,\cos\frac{\hat{\bf V}}{2}\,|\,\omega\rangle\,. \tag{8.24}\] The periodic boundary conditions \(\psi_{\lambda}\,(\theta\,+\,2\pi)\,=\,\psi_{\lambda}\,(\theta)\) yield \(u\,(\theta\,+\,2\pi)\,=\,u\,(\theta)\), i. e. \(u\,(\theta)\) can be expanded in a Fourier series: \[u\,(\theta)\,=\,\sum_{m}\,u_{m}\,{\rm e}^{im\theta}\,. \tag{8.25}\] Note that \({\rm e}^{im\theta}\) is simply the eigenfunction of the angular momentum operator. Thus, (8.24) can be written as \[T_{m}u_{m}\,+\,\sum_{r\,\neq\,0}\,W_{r}\,u_{m-r}\,=\,\varepsilon\,u_{m}\;;\quad\varepsilon\,=\,W_{0} \tag{8.26}\] where \[T_{m}\,\equiv\,\tan\left[\frac{1}{2}\,(\lambda\,-\,\tau\,m^{2})\right]\quad\mbox{and}\quad W_{r}\,=\,\frac{1}{2\,\pi}\int\limits_{-\pi}^{\pi}{\rm d}\theta\,{\rm e}^{-ir\theta}\,\tan\left[\frac{V\,(\theta)}{2}\right]\,.\] Eq. (8.26) is the Schrodinger equation for an electron on a chain with on-site potentials \(T_{m}\) and hopping matrix elements \(W_{r}\).
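The character of the on-site potential \(T_{m}\) in (8.26) can be seen directly in a small sketch (the value \(\lambda=0.7\) below is an arbitrary illustrative choice): for rational \(\tau/(2\pi)=p/q\) the sequence \(T_{m}\) is periodic in \(m\) with period \(q\), because \(\tau\,[(m+q)^{2}-m^{2}]=2\pi p\,(2m+q)\) is an integer multiple of \(2\pi\); for irrational \(\tau/(2\pi)\) it never repeats and behaves like a random sequence.

```python
import math

def onsite(m, lam, tau):
    """On-site potential T_m = tan[(lambda - tau m^2)/2] of eq. (8.26)."""
    return math.tan(0.5 * (lam - tau * m * m))

lam = 0.7                      # arbitrary illustrative eigenvalue

# Rational tau/(2 pi) = 1/3: T_m is periodic in m with period 3,
# i.e. the "electron" sits in a perfect crystal and delocalizes.
tau_rat = 2.0 * math.pi / 3.0
diffs = [abs(onsite(m, lam, tau_rat) - onsite(m + 3, lam, tau_rat))
         for m in range(6)]
print(max(diffs))              # essentially zero: periodic potential

# Irrational tau/(2 pi): the T_m look like random numbers, which is
# the essence of the mapping onto the localization problem.
tau_irr = 2.0 * math.pi * 0.5 * (math.sqrt(5.0) - 1.0)
sample = [round(onsite(m, lam, tau_irr), 3) for m in range(8)]
print(sample)                  # no repetition, pseudo-random values
```

This is exactly the dichotomy exploited below: a periodic chain supports extended (Bloch-like) states, a random chain localizes them.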
The integer eigenvalues \(m\) of the angular momentum of the kicked rotator correspond to the lattice sites in the conduction problem. Two cases must be distinguished: * For rational values of \(\tau/(2\,\pi)=p/q\), where \(p\) and \(q\) are mutually prime integers, the electrons described by (8.26) move freely in a periodic potential and are completely delocalized. For the rotator problem this means that its angular momentum is unbounded in time, i. e. all eigenvalues \(m\) can be achieved. In fact, the square of the angular momentum increases quadratically in time. This phenomenon is termed quantum resonance and occurs for all rational values of \(\tau/(2\,\pi)\). We will explain this phenomenon for the simplest case \(p/q=1\). The effect of the time-evolution operator on any periodic wave function \(\psi\) then becomes \[\hat{\bf U}\,|\,\psi\rangle\,=\,{\rm e}^{2\pi i\,\partial^{2}/\partial\theta^{2}}\,{\rm e}^{-iV(\theta)}\,\psi\,(\theta)\,=\,{\rm e}^{-iV(\theta)}\,\psi\,(\theta) \tag{8.27}\] since we can expand \({\rm e}^{-iV}\,\psi\) in a Fourier series: \[{\rm e}^{-iV(\theta)}\,\psi\,(\theta)\,=\,\sum_{m\,=\,-\infty}^{\infty}\,A_{m}\,{\rm e}^{im\theta} \tag{8.28}\] and \[{\rm e}^{2\pi i\,\partial^{2}/\partial\theta^{2}}\,{\rm e}^{im\theta}\,=\,{\rm e}^{-2\pi im^{2}}\,{\rm e}^{im\theta}\,=\,{\rm e}^{im\theta}\;. \tag{8.29}\] For the expectation value of the square of the angular momentum with any periodic wave function after \(n\) kicks, we therefore find: \[\langle p^{2}\rangle\,\propto\,\langle\psi\,|\,(\hat{\bf U}^{+})^{n}\,\frac{\partial^{2}}{\partial\theta^{2}}\,\hat{\bf U}^{n}\,|\,\psi\rangle\,\propto\,\int\limits_{-\pi}^{\pi}{\rm d}\theta\,\psi^{*}(\theta)\,{\rm e}^{inV(\theta)}\,\frac{\partial^{2}}{\partial\theta^{2}}\,{\rm e}^{-inV(\theta)}\,\psi\,(\theta)\] \[\propto\,n^{2}\,\langle\psi\,|\left(\frac{\partial V}{\partial\theta}\right)^{2}|\,\psi\rangle\,+\,O\,(n)\;. \tag{8.30}\] This quadratic increase in time is clearly a quantum effect, because (8.27) holds only for integer values of \(m\), i.
e. for a quantized angular momentum. * Next we consider the case where \(\tau/(2\,\pi)\) is irrational. The potential \(T_{m}=\tan\,[(\lambda\,-\,m^{2}\tau)/2]\) then becomes random instead of periodic, because \([(\lambda\,-\,m^{2}\tau)/2]\) mod \(\pi\) behaves like a random-number generator. (Note that \(\tan x\) is periodic with period \(\pi\), and its argument can, after division by \(\pi\), be written as \(x_{m}=[\lambda/(2\,\pi)\,-\,m^{2}\tau/(2\,\pi)]\) mod 1. If \(\tau/(2\,\pi)\) is expressed in binary representation and one considers, for example, values \(m^{2}=2^{n}\), then it is seen that the \(x_{m}\) are generated by a Bernoulli shift from an irrational number and are, hence, truly random.) Intuitively, one expects that an electron in a one-dimensional random potential has a strong tendency to localize, since there is (in contrast to higher dimensions) only one way to move from one point to the next, and this could easily be blocked by a potential barrier. It is in fact well known from the work of Anderson (1958) and Ishii (1973) (but by no means trivial to prove) that all electrons in a one-dimensional random potential are localized (for short-ranged hopping matrix elements). The physical reason for this is that, in the one-dimensional case, the random potential changes the phase of the wave function at every site, and this random dephasing eventually leads to localization. The electron is, therefore, confined to a finite range of \(m\)'s, i. e. the angular momentum of the rotator is bounded and does not increase in time; in other words, there is no diffusion of momentum, in contrast to the classical case. Fig. 152 shows the time dependence of the energy of a periodically kicked rotator, numerically calculated for an irrational value of \(\tau/2\,\pi\). It can be seen that the oscillations in energy are not only bounded but recur many times.
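Returning for a moment to the resonant case, the \(n^{2}\) law (8.30) can be checked directly: at resonance (\(\tau=2\pi\)) the free propagator acts as the identity on every \({\rm e}^{im\theta}\), so after \(n\) kicks \(\psi_{n}(\theta)={\rm e}^{-inV(\theta)}\psi_{0}(\theta)\), and \(\langle p^{2}\rangle=\int|\partial\psi_{n}/\partial\theta|^{2}\,{\rm d}\theta\). A grid evaluation (illustrative kick strength \(K=1.3\), constant initial state):

```python
import cmath
import math

K, N = 1.3, 4096                       # kick strength, grid size
thetas = [2.0 * math.pi * k / N for k in range(N)]

def p2_after(n):
    """<p^2> after n resonant kicks of V(theta) = K cos(theta),
    starting from the constant wave function psi_0 = 1/sqrt(2 pi)."""
    norm = 1.0 / math.sqrt(2.0 * math.pi)
    psi = [norm * cmath.exp(-1j * n * K * math.cos(t)) for t in thetas]
    h = 2.0 * math.pi / N
    total = 0.0
    for k in range(N):                 # central difference for d psi / d theta
        d = (psi[(k + 1) % N] - psi[k - 1]) / (2.0 * h)
        total += abs(d) ** 2 * h
    return total

print(p2_after(10) / p2_after(5))      # ratio near 4: <p^2> grows like n^2
```

Doubling \(n\) quadruples \(\langle p^{2}\rangle\), in line with (8.30); in the irrational (localized) case of Fig. 152 the same quantity stays bounded.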
It has been proved by Hogg and Huberman (1982) that if the wave function can be normalized (i. e., if we know that the angular momentum does not diffuse), then both the wave function and the energy return arbitrarily close to their initial values arbitrarily often. This time dependence is called _almost periodic_, in contrast to the quasiperiodic motion mentioned in Chapter 6. (For almost periodic functions \(f\,(t)\) there exists a relatively dense set \(\{\tau_{c}\}\) such that \(|\,f\,(t\,+\,\tau_{c})\,-\,f\,(t)\,|\,<\,\varepsilon\) for any \(\varepsilon>0\). \(\{\tau_{c}\}\) is relatively dense if there exists a \(T_{c}\) such that each interval of length \(T_{c}\) on the real axis contains at least one \(\tau_{c}\).)

Figure 152: Numerical result for the expectation value of the energy \(E\propto\langle p^{2}\rangle\) (of a kicked quantum rotator) as a function of the number \(n\) of pulses, for an irrational value of \(\tau/(2\,\pi)\) (after Hogg and Huberman, 1982).

We have seen above that up to now no quantum system seems to exist which exhibits deterministic chaos (indicated either by a continuous power spectrum or deterministic diffusion). Nevertheless, there is a difference in the behavior of quantum systems with a chaotic classical counterpart and those (quantum systems) with a regular classical limit. Let us finally mention an interesting calculation by Gutzwiller (1983) for an electron which is scattered from a non-compact surface with negative curvature. It shows that the phase shift as a function of momentum is essentially given by the phase angles of the Riemann zeta function on the imaginary axis, at a distance 0.5 from the famous critical line. This phase shift displays features of chaos because it is able to mimic any given smooth function. It therefore seems that the chaotic nature of quantum systems which are described by _wave_ mechanics is of a rather subtle and "softer" kind than the chaos in classical mechanics.
These comments indicate that the question of stochasticity in quantum mechanics is still far from being solved.

## Outlook

In this book, we presented an introduction to deterministic chaos, stressing the importance of self-similar structures and renormalization-group ideas. Let us now take a glance at possible future developments by indicating several topics not dealt with in previous chapters. First of all, there is the problem of _chaotic motion in spatially coupled nonlinear systems,_ such as coupled heart cells, chemical reactions (Vidal and Pacault, 1984) where the diffusion term is included (i. e. \(\dot{\vec{x}}\,=\,\vec{F}(\vec{x})\,+\,D\,\vec{\nabla}^{2}\vec{x}\) instead of eq. (1.7)), and the Navier-Stokes equations (Ruelle, 1983). Some pertinent questions are: How do these nonlinear oscillators influence each other? Do they synchronize? Does there exist something like spatial chaos? What is the influence of spatial motion on temporal chaos? What are the dimensions of strange attractors if one approaches fully developed turbulence? In addition to the problems mentioned in the previous chapter, another major area is the question of the chaotic behavior of _quantum systems with dissipation,_ such as lasers or Josephson junctions, etc. (Graham, 1984). It is also interesting to note that in _quantum systems with many particles_ the question of chaos is related to the fundamental problem of the "arrow of time" (Misra and Prigogine, 1980). This list is, of course, by no means complete. From the mathematical point of view, the _very nature of a random number_ is still an issue of interest (De Long, 1970), and we have not discussed the possible role that the close coexistence of chaos and regular motion could play for the _formation of structures in biology_ (Hess and Markus, 1984). It should also be noted that cellular automata (i. e.
discrete approximations to partial differential equations in which all variables \(-\) time, space, and the signal \(-\) take only integer values) seem to become an important tool for answering many of the questions mentioned above (Farmer et al., 1984; Wolfram, 1985; Frisch et al., 1986). But the major conclusion should be clear: _Since nature is nonlinear, one has always to reckon with deterministic chaos._ This means, however, that predictions about the future development of the field of deterministic chaos are as difficult or short-ranged as predictions of chaotic motion itself, i. e. there is (fortunately) much room for surprises. Interestingly enough, already about 100 years ago, James Clerk Maxwell (the founder of the theory of electromagnetism) wrote the following far-sighted remark about the predictability of nonlinear, i. e. unstable, systems (quoted after Berry, 1978): _"If, therefore, those cultivators of the physical science from whom the intelligent public deduce their conception of the physicist \(\ldots\) are led in pursuit of the arcana of science to the study of the singularities and instabilities, rather than the continuities and stabilities of things, the promotion of natural knowledge may tend to remove that prejudice in favor of determinism which seems to arise from assuming that the physical science of the future is a mere magnified image of that of the past."_

## Appendix A Derivation of the Lorenz Model

**References: see Chapter 1**

Here we present a rather short derivation of the Lorenz model that should provide the reader with a feeling for the approximations involved. For a more rigorous treatment, we refer the reader to the original articles by Saltzmann (1961) and Lorenz (1963) and the monograph by Chandrasekhar (1961). Consider the Rayleigh-Benard experiment as depicted in Fig. 118. The liquid is described by a velocity field \(\vec{v}\,(\vec{x},t)\) and a temperature field \(T\,(\vec{x},t)\).
The basic equations which describe our system are a) the Navier-Stokes equations:
\[\rho\,\frac{\mathrm{d}\vec{v}}{\mathrm{d}t}\,=\,\vec{F}\,-\,\vec{\nabla}p\,+\,\mu\,\vec{\nabla}^{2}\vec{v}\] (A.1)
b) the equation for heat conduction:
\[\frac{\mathrm{d}T}{\mathrm{d}t}\,=\,\kappa\,\vec{\nabla}^{2}T\] (A.2)
c) the continuity equation:
\[\frac{\partial\rho}{\partial t}\,+\,\mathrm{div}\,(\rho\,\vec{v})\,=\,0\] (A.3)
with the boundary conditions
\[T(x,y,z=0,t)\,=\,T_{0}\,+\,\Delta T\] (A.4)
\[T(x,y,z=h,t)\,=\,T_{0}\,.\]
Here \(\rho\) is the density of the fluid, \(\mu\) is its viscosity, \(p\) is the pressure, \(\kappa\) is the thermal conductivity, and \(\vec{F}=\rho\,g\,\vec{e}_{z}\) is the external force in the \(\vec{e}_{z}\)-direction due to gravity. The _fundamental nonlinearity in hydrodynamics_ comes from the term \((\vec{v}\cdot\vec{\nabla})\,\vec{v}\) (which is quadratic in \(\vec{v}\)) in the total time derivative \(\mathrm{d}\vec{v}/\mathrm{d}t\,=\,\partial\vec{v}/\partial t\,+\,(\vec{v}\cdot\vec{\nabla})\,\vec{v}\) in the Navier-Stokes equation (A.1). To simplify the calculation, it is assumed a) that the system is translationally invariant in the \(y\)-direction so that convection rolls extend to infinity as shown in Fig. 153, and b) that the \(\Delta T\)-dependence of all coefficients \(-\) except in \(\rho=\bar{\rho}\,(1\,-\,a\,\Delta T)\) \(-\) can be neglected (Boussinesq approximation). The continuity equation thus becomes
\[\frac{\partial u}{\partial x}\,+\,\frac{\partial w}{\partial z}\,=\,0\quad\text{with}\quad u\,=\,v_{x}\quad\text{and}\quad w\,=\,v_{z}\] (A.5)
and it is, therefore, convenient to introduce a function \(\psi\,(x,z,t)\) with
\[u\,=\,-\,\frac{\partial\psi}{\partial z}\quad\text{and}\quad w\,=\,\frac{\partial\psi}{\partial x}\] (A.6)
such that (A.5) is automatically fulfilled.
As a next step we introduce the deviation \(\theta\,(x,z,t)\) from the linear temperature profile via
\[T(x,z,t)\,=\,T_{0}\,+\,\Delta T\,-\,\frac{\Delta T}{h}\,z\,+\,\theta\,(x,z,t)\,.\] (A.7)
Using (A.6) and (A.7), the basic equations can, according to Saltzmann, be written as
\[\frac{\partial}{\partial t}\,\vec{\nabla}^{2}\psi\,=\,-\,\frac{\partial\,(\psi,\vec{\nabla}^{2}\psi)}{\partial\,(x,z)}\,+\,\nu\,\vec{\nabla}^{4}\psi\,+\,g\,a\,\frac{\partial\theta}{\partial x}\] (A.8)
\[\frac{\partial}{\partial t}\,\theta\,=\,-\,\frac{\partial\,(\psi,\theta)}{\partial\,(x,z)}\,+\,\frac{\Delta T}{h}\,\frac{\partial\psi}{\partial x}\,+\,\kappa\,\vec{\nabla}^{2}\theta\] (A.9)
where
\[\frac{\partial\,(a,b)}{\partial\,(x,z)}\,\equiv\,\frac{\partial a}{\partial x}\cdot\frac{\partial b}{\partial z}\,-\,\frac{\partial a}{\partial z}\cdot\frac{\partial b}{\partial x}\,,\qquad\vec{\nabla}^{4}\,\equiv\,(\vec{\nabla}^{2})^{2}\,,\] (A.10)
\(\nu\equiv\mu/\bar{\rho}\) is the kinematic viscosity, and the pressure term was eliminated by taking the curl in the Navier-Stokes equations.
In order to simplify (A.8) and (A.9), Lorenz used free boundary conditions:
\[\theta\,(0,0,t)\,=\,\theta\,(0,h,t)\,=\,\psi\,(0,0,t)\,=\,\psi\,(0,h,t)\,=\,\vec{\nabla}^{2}\psi\,(0,0,t)\,=\,\vec{\nabla}^{2}\psi\,(0,h,t)\,=\,0\] (A.11)
and retained only the lowest-order terms in the Fourier expansions of \(\psi\) and \(\theta\).

## Appendix B Stability Analysis and the Onset of Convection and Turbulence in the Lorenz Model

**References: see Chapter 1**

Let us write the Lorenz equations (A.14) in the compact form
\[\dot{\vec{X}}\,=\,\vec{F}(\vec{X})\] (B.1)
and linearize around the fixed points
\[\vec{X}_{1}\,=\,\vec{0}\,;\quad\vec{X}_{2}\,=\,\left(\pm\sqrt{b\,(r\,-\,1)}\,;\ \pm\sqrt{b\,(r\,-\,1)}\,;\ r\,-\,1\right)\] (B.2)
which are determined by
\[\vec{F}(\vec{X}_{1,2})\,=\,\vec{0}\,.\] (B.3)
The first fixed point \(\vec{X}_{1}\,=\,\vec{0}\) corresponds to heat conduction without motion of the liquid, and its stability matrix
\[\left.\frac{\partial F_{i}}{\partial X_{j}}\,\right|_{\vec{X}_{1}}\,=\,\left(\begin{array}{ccc}-\,\sigma&\sigma&0\\ r&-1&0\\ 0&0&-b\end{array}\right)\] (B.4)
has the eigenvalues
\[\lambda_{1,2}\,=\,-\,\frac{\sigma\,+\,1}{2}\,\pm\,\frac{1}{2}\,\sqrt{(\sigma\,+\,1)^{2}\,+\,4\,(r\,-\,1)\,\sigma}\;;\quad\lambda_{3}\,=\,-\,b\,.\] (B.5)
Thus, \(\vec{X}_{1}\,=\,\vec{0}\) is stable \(-\) i. e. all eigenvalues are negative \(-\) for \(0<r<1\). The Benard convection starts at \(r=1\) because then \(\lambda_{1}=0\), and this is just where the second fixed point \(\vec{X}_{2}\) (which corresponds to convection rolls, as shown in Fig. 153) takes over.
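The exchange of stability at \(r=1\) can be verified numerically. The sketch below (an illustrative check, not part of the original derivation) builds the stability matrix (B.4) for the standard parameter values \(\sigma=10\), \(b=8/3\) and inspects the sign of its eigenvalues on both sides of \(r=1\).

```python
import numpy as np

def jacobian_origin(r, sigma=10.0, b=8.0 / 3.0):
    # Stability matrix (B.4) of the Lorenz model at the fixed point X_1 = 0.
    return np.array([[-sigma, sigma, 0.0],
                     [r, -1.0, 0.0],
                     [0.0, 0.0, -b]])

for r in (0.5, 1.5):
    lam = np.linalg.eigvals(jacobian_origin(r))
    print(r, sorted(lam.real))

# For r = 0.5 all eigenvalues are negative (heat conduction is stable);
# for r = 1.5 one eigenvalue has become positive (onset of convection).
```

One of the eigenvalues \(\lambda_{1,2}\) of (B.5) crosses zero exactly at \(r=1\), while \(\lambda_{3}=-b\) stays negative throughout.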
The stability matrix for \(\vec{X}_{2}\) is
\[\left.\frac{\partial F_{i}}{\partial X_{j}}\,\right|_{\vec{X}_{2}}\,=\,\left(\begin{array}{ccc}-\,\sigma&\sigma&0\\ 1&-1&-\,c\\ c&c&-\,b\end{array}\right)\,;\quad c\,=\,\pm\sqrt{b\,(r\,-\,1)}\,.\] (B.6)
Its eigenvalues are the roots of the polynomial
\[P(\lambda)\,=\,\lambda^{3}\,+\,(\sigma\,+\,b\,+\,1)\,\lambda^{2}\,+\,b\,(\sigma\,+\,r)\,\lambda\,+\,2\,b\,\sigma\,(r\,-\,1)\,=\,0\,.\] (B.7)
One sees immediately that for \(r=1\) we have \(\lambda_{1}=0\), \(\lambda_{2}=-b\), and \(\lambda_{3}=-(\sigma+1)\), i. e. the convection fixed point is marginally stable, and Fig. 154 shows that it is stable for \(1<r<r_{1}\). At \(r_{1}\,(<r_{c})\) two of the eigenvalues become complex, i. e. the trajectories spiral around the two fixed points, which remain stable so long as the real part of the complex eigenvalues is smaller than zero. For \(r=r_{c}\) these real parts become zero, i. e. we have two eigenvalues \(\lambda=\pm\,i\lambda_{0}\), which lead via (B.7) to
\[r_{c}\,=\,\sigma\,\frac{\sigma\,+\,b\,+\,3}{\sigma\,-\,b\,-\,1}\,\left(\,=\,24.7368\quad\mbox{for}\quad\sigma\,=\,10,\ b\,=\,\frac{8}{3}\right)\,.\]
Above \(r_{c}\) the fixed points become unstable (the complex eigenvalues have positive real parts), and chaos sets in. This analysis is consistent with the numerical result obtained by Lorenz, who found chaotic behavior for \(\sigma=10\), \(b=8/3\) above \(r_{c}=24.74\).

## Appendix C The Schwarzian Derivative

**References: see Chapter 3**

Not all unimodal functions (i. e. continuously differentiable maps \(f\) that map the unit interval [0,1] onto itself with a single maximum at \(x=1/2\) and are monotonic for \(0\leq x\leq 1/2\) and \(1/2<x\leq 1\)) display an infinite sequence of pitchfork bifurcations.
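The threshold \(r_{c}\) can be checked directly from the characteristic polynomial (B.7): at \(r=r_{c}\) a conjugate pair of roots sits exactly on the imaginary axis. The following check (an illustration, not from the text) uses `numpy.roots`.

```python
import numpy as np

sigma, b = 10.0, 8.0 / 3.0
r_c = sigma * (sigma + b + 3) / (sigma - b - 1)   # = 470/19 = 24.7368...

# Characteristic polynomial (B.7) of the convection fixed point X_2:
# lambda^3 + (sigma+b+1) lambda^2 + b (sigma+r) lambda + 2 b sigma (r-1)
def roots(r):
    return np.roots([1.0, sigma + b + 1, b * (sigma + r),
                     2 * b * sigma * (r - 1)])

print(r_c)                                       # 24.7368...
pair = [z for z in roots(r_c) if abs(z.imag) > 1e-8]
print(max(abs(z.real) for z in pair))            # ~ 0: purely imaginary pair
```

At \(r=r_{c}\) the real root equals \(-(\sigma+b+1)\) and the complex pair is \(\pm i\lambda_{0}\) with \(\lambda_{0}^{2}=b(\sigma+r_{c})\), which is exactly the condition that yields the closed formula for \(r_{c}\).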
In addition to being unimodal, \(f\) must have a negative Schwarzian derivative
\[Sf\,=\,\frac{f^{\prime\prime\prime}}{f^{\prime}}\,-\,\frac{3}{2}\,\left(\frac{f^{\prime\prime}}{f^{\prime}}\right)^{2}\,\propto\,\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}\,\left[f^{\prime}\,(x)\right]^{-1/2}\] (C.1)
over the whole interval \([0,1]\). This is, for example, true for the logistic map, since \(f^{\prime\prime\prime}\,(x)=0\).

Figure 154: Qualitative behavior of the polynomial \(P(\lambda)\).

To make this requirement, which at first sight appears unusual, more plausible, we note the important property that \(Sf<0\) implies a negative Schwarzian derivative for all iterates of \(f\), i. e. \(Sf^{n}<0\). This can be verified by direct calculation. As a consequence, it is found that at a fixed point \(x_{0}\) of \(f\) that just becomes unstable, i. e.
\[f^{\prime}\,(x_{0})\,=\,-\,1\] (C.2)
and
\[f^{2\prime}\,(x_{0})\,=\,\left[f^{\prime}\,(x_{0})\right]^{2}\,=\,1\,,\qquad f^{2\prime\prime}\,(x_{0})\,=\,f^{\prime\prime}\,(x_{0})\left[f^{\prime}\,(x_{0})\,+\,\left[f^{\prime}\,(x_{0})\right]^{2}\right]\,=\,0\,,\] (C.3)
the third derivative of \(f^{2}\) at \(x_{0}\) becomes negative for \(Sf<0\), and, near \(x_{0}\), \(f^{2}\,(x)\) behaves as shown in Fig. 155, which can lead to a pitchfork bifurcation. The same figure shows that a pitchfork bifurcation becomes impossible for \(Sf>0\). The importance of the Schwarzian derivative was first noted by Singer (1978), who showed that unimodal maps with \(Sf<0\) cannot have more than one periodic attractor. Later Guckenheimer and Misiurewicz proved that, in this case, almost all points in \([0,1]\) (i. e. all points with the exception of a set of measure zero) are attracted to it. The proofs and references can be found in the monograph by Collet and Eckmann (1980).
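For the logistic map \(f(x)=rx(1-x)\), (C.1) can be evaluated in closed form: \(f'''=0\), \(f''=-2r\), \(f'=r(1-2x)\), so \(Sf=-6/(1-2x)^{2}<0\) independently of \(r\). A quick numerical check (an illustration, not from the text):

```python
# Schwarzian derivative (C.1) of the logistic map f(x) = r x (1 - x):
# f' = r (1 - 2x), f'' = -2r, f''' = 0, hence
# Sf = f'''/f' - (3/2) (f''/f')^2 = -6 / (1 - 2x)^2  <  0.
def schwarzian_logistic(x, r=4.0):
    fp = r * (1 - 2 * x)           # f'(x)
    fpp = -2 * r                   # f''(x)
    fppp = 0.0                     # f'''(x)
    return fppp / fp - 1.5 * (fpp / fp) ** 2

# sample the interval [0,1], avoiding x = 1/2 where f' vanishes
values = [schwarzian_logistic(i / 100.0) for i in range(100) if i != 50]
print(all(v < 0 for v in values))   # True: Sf < 0 on [0,1] except x = 1/2
```

The parameter \(r\) drops out of \(f''/f'\), which makes the sign statement uniform over the whole logistic family.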
## Appendix D Renormalization of the One-Dimensional Ising Model

**References: see Chapter 3**

The functional renormalization group which is used in this book has been constructed in analogy to the renormalization-group method for critical phenomena. This section explains the method for critical phenomena (which is simpler than the functional renormalization method) for the example of the one-dimensional Ising model. Although the one-dimensional Ising model has several strange features (its transition temperature is zero, etc., see below), these are outweighed by the fact that every renormalization-group step can be performed explicitly. It is assumed that the reader is familiar with the usual exact solution of this model, which can be found in most textbooks on statistical mechanics. The partition function of the one-dimensional Ising model has the well-known form
\[Z\,=\,\sum_{\{\sigma_{i}\}}\mathrm{e}^{\beta\sum_{i}\sigma_{i}\sigma_{i+1}}\] (D.1)
where \(\beta=J/T\) is the ratio of coupling constant \(J\) and temperature \(T\); the spin variables \(\sigma_{i}\) take the values \(\sigma_{i}=\pm 1\), and the sites are \(i=0\ldots N\). The renormalization-group steps are visualized in Fig. 156: First, we sum in (D.1) over all spin variables \(\sigma_{i}\) with odd \(i\). Then we relabel the remaining variables with even \(i\):
\[\sigma_{2i}\rightarrow\sigma_{i}/a\] (D.2)

Fig. 156: Renormalization-group steps for the one-dimensional Ising model. a) Spins with odd indices are integrated out. b) The correlation length in the renormalized system (\(\beta\rightarrow\beta^{\prime}\), \(2i\rightarrow i\)) becomes smaller.

(for our simple example, we have \(a=1\); but already for the two-dimensional Ising model, one needs \(a\neq 1\)). Fig. 156 shows that the system of residual spins exhibits the same pattern as before and only two factors have changed: all lengths are reduced by a factor of two, and the coupling between the residual spins becomes renormalized (\(\beta\rightarrow\beta^{\prime}\)).
At the transition temperature \(T\,=\,T_{c}\,=\,0\), the correlation length is infinite and the spin pattern is self-similar for all length scales, i. e. repeated applications of the renormalization-group procedure always lead to similar results. To perform these steps explicitly, we consider a typical sum over an odd variable in (D.1):
\[Z_{3}\,=\,\sum_{\sigma_{3}}\,\mathrm{e}^{\beta(\sigma_{2}\sigma_{3}+\,\sigma_{3}\sigma_{4})}\,=\,2\,[(\cosh\beta)^{2}\,+\,\sigma_{2}\,\sigma_{4}\,(\sinh\beta)^{2}]\,.\] (D.3)
This can be written as
\[Z_{3}\,=\,c\cdot\mathrm{e}^{\beta^{\prime}\sigma_{2}\sigma_{4}}\,=\,c\,[\cosh\beta^{\prime}\,+\,\sigma_{2}\,\sigma_{4}\sinh\beta^{\prime}]\] (D.4)
with
\[\mathrm{th}\,\beta^{\prime}\,=\,(\mathrm{th}\,\beta)^{2}\,.\] (D.5)
Eq. (D.5) is obtained by comparing the right-hand sides of (D.3) and (D.4), keeping in mind that \(\sigma_{2}\) and \(\sigma_{4}\) have only the values \(\pm 1\). In the next step, we relabel the spins according to (D.2) and obtain the renormalized version of \(Z\):
\[Z\,(\beta)\,=\,c^{N/2}\,Z\,(\beta^{\prime})\,=\,c^{N/2}\sum_{\{\sigma_{i}\}}\,\mathrm{e}^{\beta^{\prime}\sum_{i}^{N/2}\sigma_{i}\sigma_{i+1}}\,.\] (D.6)
(The constant \(c\) will not be further considered because it cancels in all thermodynamic averages.) The renormalized coupling \(\beta^{\prime}\), between the residual spins, is according to (D.6):
\[\beta^{\prime}\,=\,\mathrm{Arth}\,[(\mathrm{th}\,\beta)^{2}]\,\equiv\,R_{2}\,(\beta)\,.\] (D.7)
Iteration of the renormalization procedure yields
\[\beta^{\prime\prime}\,=\,R_{2}\,[R_{2}\,(\beta)]\,=\,R_{4}\,(\beta)\,.\] (D.8)
The last equal sign means that two repeated renormalizations are equivalent to one renormalization where only every fourth spin is retained, i. e. the renormalization-group operators \(R\) form a semigroup ("semi" because there exists no inverse element). The fixed points of (D.7) are
\[\beta^{\star}\,=\,\infty\quad\mathrm{and}\quad\beta^{\star}\,=\,0\] (D.9)
i. e.
they occur at zero temperature (the transition temperature of the one-dimensional Ising model) and at infinite temperature. In both limits, the spin pattern is self-similar (the spin system is completely disordered at \(T=\infty\), and at \(T=0\) all spins are aligned). For finite \(\beta>0\), the system is always driven (by repeated applications of \(R_{2}\)) to the stable fixed point \(\beta^{\star}=0\); the zero-temperature fixed point \(\beta^{\star}=\infty\) is unstable. Because the correlation length \(\xi\) is reduced by a factor of two after one renormalization step, we can immediately determine the temperature dependence of \(\xi\) via the following scaling argument:
\[\xi\,(\beta)\,=\,2\,\xi\,(\beta^{\prime})\] (D.10)
\[\rightarrow\quad\xi\,(\beta)\,=\,2\,\xi\,[\mathrm{Arth}\,[(\mathrm{th}\,\beta)^{2}]]\,=\,2^{n}\,\xi\,[\mathrm{Arth}\,[(\mathrm{th}\,\beta)^{2^{n}}]]\,.\] (D.11)
For \(\beta\gg 1\), the variable \(n\) can be chosen such that
\[(\mathrm{th}\,\beta)^{2^{n}}\,=\,\mathrm{const.}\] (D.12)
\[\rightarrow\quad 2^{n}\,\propto\,1/\log\,(\mathrm{th}\,\beta)\] (D.13)
\[\rightarrow\quad\xi\,\propto\,1/\log\,(\mathrm{th}\,\beta)\,.\] (D.14)
This last relation can be verified by direct computation of the correlation function:
\[\langle\sigma_{j+r}\,\sigma_{j}\rangle\,\equiv\,\sum_{\{\sigma_{i}\}}\,\mathrm{e}^{\beta\sum_{i}\sigma_{i}\sigma_{i+1}}\,\sigma_{j+r}\,\sigma_{j}\Big{/}\Big{(}\sum_{\{\sigma_{i}\}}\,\mathrm{e}^{\beta\sum_{i}\sigma_{i}\sigma_{i+1}}\Big{)}\] (D.15)
\[=\,(\mathrm{th}\,\beta)^{r}\,\equiv\,\mathrm{e}^{-r/\xi}\] (D.16)
where we used
\[\sum_{\sigma_{i+1}}\,\mathrm{e}^{\beta\sigma_{i}\sigma_{i+1}}\,=\,2\,\cosh\beta\] (D.17)
and
\[\sum_{\sigma_{i+1}}\,\mathrm{e}^{\beta\sigma_{i}\sigma_{i+1}}\,\sigma_{i+1}\,=\,2\,\sigma_{i}\,\sinh\beta\,.\] (D.18)
It should be noted that for more complicated systems (e. g.
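The recursion (D.5) and the scaling relation (D.10) can be checked in a few lines; the starting coupling \(\beta=2\) below is an arbitrary illustrative choice.

```python
import math

def renormalize(beta):
    # One decimation step, eqs. (D.5)/(D.7): th(beta') = (th beta)^2
    return math.atanh(math.tanh(beta) ** 2)

def xi(beta):
    # Correlation length from (D.16): xi = -1/log(th beta)
    return -1.0 / math.log(math.tanh(beta))

beta = 2.0
print(xi(beta), 2 * xi(renormalize(beta)))   # equal: xi(beta) = 2 xi(beta')

# The flow is toward the stable high-temperature fixed point beta* = 0:
for _ in range(5):
    beta = renormalize(beta)
print(beta)   # decreases toward 0
```

Since \(\mathrm{th}\,\beta'=(\mathrm{th}\,\beta)^{2}\) implies \(\log\mathrm{th}\,\beta'=2\log\mathrm{th}\,\beta\), the halving of \(\xi\) holds exactly at every step, not just asymptotically.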
the two- or three-dimensional Ising model), the elimination of spin variables in one renormalization step leads to next-nearest-neighbor and higher-order couplings (between the spins), and it is part of the art of renormalization to keep track of them.

## Appendix E Decimation and Path Integrals for External Noise

**References: see Chapter 3**

Here we present a derivation of the scaling form of the Liapunov exponent (3.91) that follows an important article by Feigenbaum and Hasslacher (1982). Our main aim is to explain their decimation method, which has, on the one hand, a wide range of potential applications (for example to the transition from quasi-periodicity to chaos in Chapter 6), and, on the other hand, close parallels to the renormalization of the one-dimensional Ising model (explained in Appendix D). As a first step, we write the iterates of (3.87)
\[x_{n+1}\,=\,f(x_{n})\,+\,\xi_{n}\] (E.1)
as integrals over \(\delta\)-functions:
\[x_{1}\,=\,f(x_{0})\,+\,\xi_{0}\,=\,\int\mathrm{d}x_{1}\,x_{1}\,\delta\,[x_{1}\,-\,f(x_{0})\,-\,\xi_{0}]\] (E.2a)
\[x_{2}\,=\,f[f(x_{0})\,+\,\xi_{0}]\,+\,\xi_{1}\,=\,\int\mathrm{d}x_{1}\,\mathrm{d}x_{2}\,x_{2}\,\delta\,[x_{2}\,-\,f(x_{1})\,-\,\xi_{1}]\,\delta\,[x_{1}\,-\,f(x_{0})\,-\,\xi_{0}]\] (E.2b)
\[\vdots\]
\[x_{n}\,=\,\int\,\prod_{j=1}^{n}\,\mathrm{d}x_{j}\,x_{n}\,\delta\,[x_{j}\,-\,f(x_{j-1})\,-\,\xi_{j-1}]\] (E.2c)
The \(\xi_{j}\) are independent random variables with Gaussian probability distributions, so that averaging (E.2c) over the noise replaces each \(\delta\)-function by a Gaussian \(P\,[x_{j}\,-\,f(x_{j-1});\sigma^{2}]\) of variance \(\sigma^{2}\). The idea is to perform the integration over the \(x_{n}\) _step by step,_ i.e.
the renormalization-group treatment consists of integrating out all \(x_{i}\) with odd \(i\)'s (this is called "decimation") and rescaling the variables such that the whole operation can be _repeated._ Let us choose \(n=2^{q}\), \(q\) integer, and separate variables with even and odd indices in (E.4):
\[\langle x_{n}\rangle\,=\,\int\,\prod_{i}^{n/2}\,\mathrm{d}x_{2i}\,x_{n}\,\prod_{i}^{n/2}\,\mathrm{d}x_{2i+1}\,\prod_{i=0}^{n/2-1}\,P\,[x_{2i+2}\,-\,f(x_{2i+1});\sigma^{2}]\,\times\,P\,[x_{2i+1}\,-\,f(x_{2i});\sigma^{2}]\,.\] (E.5)
The relevant integrals over the odd variables,
\[I\,=\,\int\mathrm{d}x_{2i+1}\,\exp\,\{-\,[x_{2i+2}\,-\,f(x_{2i+1})]^{2}/2\,\sigma^{2}\,-\,[x_{2i+1}\,-\,f(x_{2i})]^{2}/2\,\sigma^{2}\}\] (E.6)
are evaluated using the saddle-point approximation that is valid for small noise amplitudes \(\sigma\ll 1\). If we have an integral over a function which is sharply peaked at \(x^{*}\), the simplest form of the saddle-point approximation consists of replacing the integral by the integrand taken at \(x^{*}\). Consider, for example, for \(N\gg 1\) the integral
\[I_{0}\,=\,\int\mathrm{d}x\,\mathrm{e}^{-NE(x)}\,.\] (E.7)
Evaluating (E.6) in this way (the exponent is maximal near \(x_{2i+1}=f(x_{2i})\)) yields
\[I\,=\,\exp\,\{-\,[x_{2i+2}\,-\,f^{2}\,(x_{2i})]^{2}/2\,\bar{\sigma}^{2}\}\] (E.11)
(where we have omitted all pre-exponential factors because they will cancel out in \(\langle x_{n}\rangle\)) and
\[\bar{\sigma}^{2}\,=\,\sigma^{2}\,+\,\{f^{\prime}\,[f(x_{2i})]\}^{2}\,\sigma^{2}\,.\] (E.12)
Thus, \(\bar{\sigma}\) depends on \(x_{2i}\) after one integration, i. e.
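The content of (E.12) can be tested by direct Monte Carlo sampling: after two noisy iterations, the effective variance of \(x_{2}\) around the noise-free value \(f^{2}(x_{0})\) is \(\sigma^{2}(1+\{f'[f(x_{0})]\}^{2})\) for small \(\sigma\). The logistic map and all parameter values in this sketch are illustrative choices, not from the text.

```python
import numpy as np

# Monte-Carlo check of (E.12): Var[x2 - f(f(x0))] = sigma^2 (1 + f'(f(x0))^2)
# in the small-noise (saddle-point) limit.
rng = np.random.default_rng(0)

def f(x):
    return 3.5 * x * (1 - x)

def fprime(x):
    return 3.5 * (1 - 2 * x)

x0, sigma, samples = 0.3, 1e-3, 200000
xi0 = rng.normal(0.0, sigma, samples)   # noise in the first step
xi1 = rng.normal(0.0, sigma, samples)   # noise in the second step
x2 = f(f(x0) + xi0) + xi1

measured = np.var(x2 - f(f(x0)))
predicted = sigma**2 * (1 + fprime(f(x0))**2)
print(measured / predicted)   # close to 1
```

The agreement degrades for larger \(\sigma\), exactly as expected from the validity condition \(\sigma\ll 1\) of the saddle-point approximation.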
when we repeat this procedure (see below) we will always encounter \(x\)-dependent \(\sigma\)'s, and instead of (E.6) we should therefore consider from the very beginning
\[I\,=\,\int\mathrm{d}x_{2i+1}\,\exp\,\{-\,[x_{2i+2}\,-\,f(x_{2i+1})]^{2}/2\,\sigma^{2}\,(x_{2i+1})\,-\,[x_{2i+1}\,-\,f(x_{2i})]^{2}/2\,\sigma^{2}\,(x_{2i})\}\,.\] (E.13)
In analogy to our previous calculation, we also obtain eq. (E.11) for this \(I\), but with (E.12) replaced by
\[\bar{\sigma}^{2}\,(x_{2i})\,=\,\sigma^{2}\,[f(x_{2i})]\,+\,\{f^{\prime}\,[f(x_{2i})]\}^{2}\,\cdot\,\sigma^{2}\,[x_{2i}]\,.\] (E.14)
If we combine eqns. (E.5, 11, 14) and rescale and relabel the variables, i. e.
\[x_{2i}\,\equiv\,\bar{x}_{i}/a\,,\qquad(a\,=\,-\,|a|)\] (E.15)
we obtain
\[\langle x_{n}\rangle\,\propto\,\int\,\prod_{1}^{n/2}\mathrm{d}\bar{x}_{i}\,\bar{x}_{n/2}\,\prod_{0}^{n/2-1}\,P\,[\bar{x}_{i+1}\,-\,\mathrm{T}f(\bar{x}_{i});\,\bar{\sigma}^{2}\,(\bar{x}_{i})]\] (E.16)
where \(\mathrm{T}\) is again the doubling operator
\[\mathrm{T}f(x)\,=\,af\,\Big{[}\,f\,\Big{(}\frac{x}{a}\Big{)}\Big{]}\] (E.17)
and
\[\bar{\sigma}^{2}\,(x)\,=\,a^{2}\,\Big{\{}\sigma^{2}\,\Big{[}\,f\,\Big{(}\frac{x}{a}\Big{)}\Big{]}\,+\,\Big{[}\,f^{\prime}\,\Big{[}\,f\,\Big{(}\frac{x}{a}\Big{)}\Big{]}\Big{]}^{2}\,\cdot\,\sigma^{2}\,\Big{(}\frac{x}{a}\Big{)}\Big{\}}\,=\,\hat{\mathsf{L}}_{f}\,\sigma^{2}\,(x)\] (E.18)
i. e. \(\bar{\sigma}^{2}\,(x)\) is obtained by acting on \(\sigma^{2}\,(x)\) with a linear operator \(\hat{\mathsf{L}}_{f}\). We note that the rescaling and relabeling were necessary to bring the expression (E.16) for \(\langle x_{n}\rangle\) (after the odd variables had been integrated out) back into the _old form_ (E.4) such that the whole renormalization-group transformation can be iterated.
After \(m\) renormalization steps we obtain finally
\[\langle x_{n}\rangle\,\propto\,\int\prod_{i}^{n/2^{m}}\,\mathrm{d}\bar{x}_{i}\,\bar{x}_{n/2^{m}}\,\prod_{0}^{n/2^{m}-1}\,P\,[\bar{x}_{i+1}\,-\,\mathrm{T}^{m}f(\bar{x}_{i});\,\hat{\mathsf{L}}_{\mathrm{T}^{m-1}f}\ldots\hat{\mathsf{L}}_{\mathrm{T}f}\,\hat{\mathsf{L}}_{f}\,\sigma^{2}(\bar{x}_{i})]\,.\] (E.19)
For \(m\gg 1\) we have again (see 3.53)
\[\mathrm{T}^{m}f_{R}(x)\,=\,g(x)\,+\,r\,\delta^{m}\,a\,h(x)\quad\mathrm{with}\quad r\,=\,R_{\infty}\,-\,R\] (E.20)
and in analogy to (3.36-3.43)
\[\hat{\mathsf{L}}_{\mathrm{T}^{m-1}f}\ldots\hat{\mathsf{L}}_{f}\,\sigma^{2}(x)\,\equiv\,\hat{\mathsf{L}}_{g}^{m}\,\sigma^{2}(x)\,\equiv\,\hat{\beta}^{2m}\,\hat{\sigma}^{2}(x)\] (E.21)
where \(\hat{\beta}^{2}\) and \(\hat{\sigma}^{2}\) denote the largest eigenvalue and eigenfunction of \(\hat{\mathsf{L}}_{g}\), respectively. Thus, \(\langle x_{n}\rangle\) can be written as
\[\langle x_{n}\rangle\,\propto\,\int\prod_{1}^{n/2^{m}}\,\mathrm{d}\bar{x}_{i}\,\bar{x}_{n/2^{m}}\,\prod_{0}^{n/2^{m}-1}\,P\,[\bar{x}_{i+1}\,-\,g\,(\bar{x}_{i})\,-\,r\,\delta^{m}\,a\,h\,(\bar{x}_{i});\,\hat{\beta}^{2m}\,\hat{\sigma}^{2}(\bar{x}_{i})]\,.\] (E.22)
For the Liapunov exponent \(\lambda\), this yields
\[\exp\,[n\,\lambda\,(r;\,\sigma)]\,=\,\Big{|}\,\frac{\mathrm{d}}{\mathrm{d}x_{0}}\,\langle x_{n}\rangle\,\Big{|}\,=\,\exp\,[(n/2^{m})\,\lambda\,[r\,\delta^{m};\,\sigma\cdot\hat{\beta}^{m}]]\] (E.23)
where \(\sigma\) denotes the initial noise amplitude.
If we set \(\hat{\beta}^{m}\cdot\sigma\,=\,1\) and \(\lambda\,(x;\,1)\,=\,L\,(x)\), we obtain the desired scaling behavior for \(\lambda\):
\[\lambda\,(r,\sigma)\,=\,\sigma^{\theta}L\,[r\sigma^{-\gamma}]\] (E.24)
with
\[\theta\,=\,\log 2/\log\hat{\beta}\,=\,0.367\quad\mathrm{and}\quad\gamma\,=\,\log\delta/\log\hat{\beta}\,=\,0.815\,.\] (E.25)
Note that the numerical value for \(\hat{\beta}\,(\hat{\beta}\,=\,6.618)\) that was obtained as the solution of the eigenvalue equation
\[\hat{\mathsf{L}}_{g}\hat{\sigma}^{2}(x)\,=\,\hat{\beta}^{2}\hat{\sigma}^{2}(x)\] (E.26)
agrees closely with the best value for \(\mu\,(\mu\,=\,6.557)\). This justifies our earlier treatment of external noise.

## Appendix F Shannon's Measure of Information

**References: see Chapter 5**

This short heuristic introduction to Shannon's measure of information should enable the reader to understand Chapters 2 and 5. For a more detailed treatment we recommend the book by Shannon and Weaver (1949).

### Information Capacity of a Store

Fig. 157 a) shows a system with two possible states. If the position of the point is unknown a priori, and we learn that it is in the left box, say, we gain by definition information amounting to one bit. If we obtain this information, we save one question (with possible answer yes or no) which we would have needed to locate the point. Thus, the maximum information content of a system with two states is one bit. For a box with four possible states, one needs two questions to locate the point, i. e. its maximum information content \(I\) is
\[I\,=\,2\ (\text{bits})\] (F.1)
(we will drop the unit "bit" in the following). This can be written as the logarithm to the base two (ld) of the number of possible states:
\[I\,=\,\mathrm{ld}\,4\,.\] (F.2)

Fig. 157: Information capacity of a store: a) a box with two states. b) It takes two questions (and their answers) to locate a point in a system with four states: right or left? up or down?
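The exponents in (E.25) are fixed by \(\hat{\beta}=6.618\) together with the Feigenbaum constant \(\delta=4.6692\) of the period-doubling route (Chapter 3); the arithmetic can be verified directly:

```python
import math

beta_hat = 6.618    # largest eigenvalue of L_g, eq. (E.26)
delta = 4.6692      # Feigenbaum constant of the period-doubling route

theta = math.log(2) / math.log(beta_hat)       # eq. (E.25)
gamma = math.log(delta) / math.log(beta_hat)   # eq. (E.25)
print(round(theta, 3), round(gamma, 3))        # 0.367 0.815
```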
c) In order to locate a point on a checkerboard with \(64=2^{6}\) states, one needs six questions.

According to Fig. 157c, this logarithmic relation between the maximum information content \(I\) and the number of states \(N\),
\[I\,=\,\mathrm{ld}\,N\] (F.3)
is true in general.

### Information Gain

Let us now calculate the average gain of information if one learns the outcome of statistical events. Suppose we toss a coin such that heads or tails occur with equal probabilities
\[p_{1}\,=\,p_{2}\,=\,\frac{1}{2}\,.\] (F.4)
The information \(I\) acquired by learning that the outcome of this experiment is heads, say, is
\[I\,=\,1\] (F.5)
because there are two equally probable states as in Fig. 157a. This result can be expressed via the \(\{p_{i}\}\) as
\[I\,=\,-\,\left(\frac{1}{2}\,\mathrm{ld}\,\frac{1}{2}\,+\,\frac{1}{2}\,\mathrm{ld}\,\frac{1}{2}\right)\] (F.6)
or
\[I\,=\,-\,\sum_{i}\,p_{i}\,\mathrm{ld}\,p_{i}\,.\] (F.7)
Eq. (F.7) can be generalized to situations where the \(p_{i}\)'s are different:
\[p_{1}\,\neq\,p_{2}\,=\,1\,-\,p_{1}\,.\] (F.8)
It then gives the average gain of information if we toss a deformed coin many times. Let \(p_{1}\,=\,r/q\), where \(r\) and \(q\) are mutually prime integers, and let us choose the number \(m\) of events such that \(mr/q\) is again an integer. The total number of distinct states which occur if one tosses a (deformed) coin \(m\) times is
\[N\,=\,\frac{m!}{(p_{1}m)\,!\,(p_{2}m)!}\] (F.9)
where we eliminated, by division, the permutations that correspond to a rearrangement of equal events. (The sequences \(hht\) and \(hht\), with \(h=\) head and \(t=\) tail, in which the two \(h\)'s have been interchanged, correspond to the same state.) In the limit \(m\rightarrow\infty\) we can use Stirling's formula, and, for the average information gain, eq.
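The counting argument can be tried out numerically: for a biased coin, \(\frac{1}{m}\,\mathrm{ld}\,N\) computed from (F.9) approaches \(-\sum_{i}p_{i}\,\mathrm{ld}\,p_{i}\) as \(m\) grows. The values \(p_{1}=1/4\) and \(m=4000\) are illustrative choices.

```python
import math

# Compare the entropy formula (F.7) with direct state counting (F.9).
def ld(x):
    return math.log2(x)

p1, p2 = 0.25, 0.75
entropy = -(p1 * ld(p1) + p2 * ld(p2))           # eq. (F.7)

m = 4000
k = int(p1 * m)                                   # number of heads
# ld of the binomial coefficient in (F.9), via log-factorials:
ld_N = (math.lgamma(m + 1) - math.lgamma(k + 1)
        - math.lgamma(m - k + 1)) / math.log(2)
print(entropy, ld_N / m)   # nearly equal for large m
```

The residual difference is the \(O\!\left(\frac{\log m}{m}\right)\) correction of Stirling's formula, which vanishes in the limit \(m\rightarrow\infty\) used in the text.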
(F.3) yields
\[I\,=\,\frac{1}{m}\,\mathrm{ld}\,N\,=\,\frac{1}{m}\,\mathrm{ld}\left[\left(\frac{m}{\mathrm{e}}\right)^{m}\left(\frac{\mathrm{e}}{p_{1}m}\right)^{p_{1}m}\left(\frac{\mathrm{e}}{p_{2}m}\right)^{p_{2}m}\right]\,=\,-\,(p_{1}\,\mathrm{ld}\,p_{1}\,+\,p_{2}\,\mathrm{ld}\,p_{2})\,.\] (F.10)

## Appendix G Period Doubling for the Conservative Hénon Map

**References: see Chapter 6**

Let us consider the quadratic area-preserving Henon map
\[x_{n+1}\,=\,1\,-\,a\,x_{n}^{2}\,-\,y_{n}\] (G.1 a)
\[y_{n+1}\,=\,x_{n}\] (G.1 b)
that describes (as we have seen in Chapter 1) a periodically kicked rotator for zero damping and small amplitudes. We want to show that this map (which represents a whole class of two-dimensional maps with a quadratic maximum) also leads to a cascade of period doublings, but with Feigenbaum constants that are larger than those for one-dimensional maps. It is convenient to transform (G.1 a, b) using
\[x_{n}\,=\,-\,\frac{2}{a}\,\bar{x}_{n}\,+\,\beta\,;\quad a\,\beta^{2}\,+\,2\,\beta\,-\,1\,=\,0\,;\quad C\,=\,-\,a\,\beta\] (G.2)
into the form
\[y_{n+1}\,=\,x_{n}\,,\qquad x_{n+1}\,=\,2\,Cx_{n}\,+\,2\,x_{n}^{2}\,-\,y_{n}\,\Bigg{\}}\,=\,T\,\binom{x_{n}}{y_{n}}\] (G.3)
(where we have omitted the bar notation). We will first discuss the fixed points of \(T\) and \(T^{2}\) and their stability, and finally introduce Helleman's renormalization scheme (Helleman, 1980), which sheds some light on the doubling mechanism and allows a convenient estimate of the relevant Feigenbaum constants.
The fixed points of \(T\) are
\[x_{1}^{\ast}\,=\,y_{1}^{\ast}\,=\,0\quad\text{and}\quad x_{2}^{\ast}\,=\,y_{2}^{\ast}\,=\,1\,-\,C\] (G.4)
and those of the second iterate \(T^{2}\),
\[T^{2}\,\binom{x_{n}}{y_{n}}\,=\,\begin{cases}x_{n+2}\,=\,2\,C\,[2\,Cx_{n}+2x_{n}^{2}-y_{n}]+2\,[2\,Cx_{n}+2x_{n}^{2}\,-\,y_{n}]^{2}\,-\,x_{n}\\ y_{n+2}\,=\,2\,Cx_{n}\,+\,2x_{n}^{2}\,-\,y_{n}\end{cases}\] (G.5)
are the solutions of
\[(Cx\,+\,x^{2})^{2}\,+\,C\,(Cx\,+\,x^{2})\,-\,x\,=\,0\,.\] (G.6)
To solve this equation it is noted that the fixed points (G.4) of \(T\) are also fixed points of \(T^{2}\), i.e. (G.6) can be reduced to a quadratic equation with the solutions:
\[x_{3,4}^{\ast}\,=\,y_{3,4}^{\ast}\,=\,\frac{1}{2}\,\left[-\,(C\,+\,1)\,\pm\,\sqrt{(C\,+\,1)\,(C\,-\,3)}\,\right]\,.\] (G.7)
The stability of the fixed points is (by analogy to the one-dimensional case) determined by the eigenvalues \(\lambda_{1,2}\) of the matrix of derivatives
\[L\,(x^{\ast},y^{\ast})\,=\,\left(\begin{array}{cc}\frac{\partial T_{x}}{\partial x}&\frac{\partial T_{x}}{\partial y}\\ \frac{\partial T_{y}}{\partial x}&\frac{\partial T_{y}}{\partial y}\end{array}\right)_{x^{\ast},y^{\ast}}=\,\left(\begin{array}{cc}2\,C\,+\,4x^{\ast}&-1\\ 1&0\end{array}\right)\] (G.8)
which are
\[\lambda_{1,2}\,=\,\frac{1}{2}\,\left[\mathrm{Tr}\,L\,\pm\,\sqrt{(\mathrm{Tr}\,L)^{2}\,-\,4}\,\right],\quad\mathrm{Tr}\,L\,=\,2\,C\,+\,4x^{\ast}\,.\] (G.9)
Since \(T\) is an area-preserving map, \(\det L\,=\,1\), i.e. \(\lambda_{2}=1/\lambda_{1}\). This leaves (apart from parabolic fixed points that we do not consider here because they are not generic) only two essentially different types of fixed points: 1. Hyperbolic fixed point: The \(\lambda\)'s are real, and \(\lambda_{1}>1\) implies \(\lambda_{2}=1/\lambda_{1}<1\), i.e.
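The fixed points (G.4) and the classification by (G.8)-(G.9) can be checked directly; \(C=0.5\), an arbitrary value inside the stable window \(|C|<1\), is an illustrative choice.

```python
import numpy as np

C = 0.5   # arbitrary parameter inside the stable window |C| < 1

def T(x, y):
    # Transformed area-preserving Henon map (G.3)
    return 2 * C * x + 2 * x**2 - y, x

def L(x_star):
    # Matrix of derivatives (G.8) at a fixed point
    return np.array([[2 * C + 4 * x_star, -1.0], [1.0, 0.0]])

# Both fixed points (G.4) are reproduced by T:
for xs in (0.0, 1 - C):
    assert T(xs, xs) == (xs, xs)

lam = np.linalg.eigvals(L(0.0))
print(np.prod(lam))    # det L = 1: area preservation
print(np.abs(lam))     # both |lambda| = 1: elliptic, stable fixed point
```

For \(|C|<1\) we have \(|\mathrm{Tr}\,L|=|2C|<2\) at the origin, so the eigenvalues form a complex-conjugate pair on the unit circle, which is exactly the elliptic case discussed below.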
along the directions of the eigenvectors \(e_{1}\), \(e_{2}\) the behavior shown in Fig. 159 is found, which can be described by
\[T\left[\begin{array}{c}x\\ y\end{array}\right]=\left[\begin{array}{c}x^{\ast}\\ y^{\ast}\end{array}\right]\,+\,L\,\left[\begin{array}{c}\Delta x\\ \Delta y\end{array}\right]\,,\qquad L\,\left[\begin{array}{c}\Delta x\\ \Delta y\end{array}\right]=\left[\begin{array}{c}\lambda_{1}\,\Delta x\\ (1/\lambda_{1})\,\Delta y\end{array}\right]\] (G.10)
(in the basis of the eigenvectors \(e_{1}\), \(e_{2}\)), i. e. this fixed point is unstable since all points which are not on the stable manifold along \(e_{2}\) are driven away from \((x^{\ast},y^{\ast})\), and an infinite number of iterations is required to approach the fixed point along \(e_{2}\):
\[\lim_{s\to\infty}\,L^{s}\left[\begin{array}{c}0\\ \Delta y\end{array}\right]\,=\,\lim_{s\to\infty}\,\left[\begin{array}{c}0\\ (1/\lambda_{1})^{s}\,\Delta y\end{array}\right]=\left[\begin{array}{c}0\\ 0\end{array}\right]\,.\] (G.11)
2. Elliptic fixed point: The \(\lambda\)'s, as solutions of a quadratic equation, are complex conjugates and can be written as
\[\lambda_{1,2}\,=\,\mathrm{e}^{\pm i\varphi}\quad\text{because}\quad\det L\,=\,\lambda_{1}\,\lambda_{2}\,=\,1\,.\] (G.12)
After an appropriate coordinate transformation, \(L\) can be written as a simple rotation:
\[L\,\left[\begin{matrix}\Delta x\\ \Delta y\end{matrix}\right]=\left[\begin{matrix}\cos\varphi&-\sin\varphi\\ \sin\varphi&\cos\varphi\end{matrix}\right]\,\left[\begin{matrix}\Delta x\\ \Delta y\end{matrix}\right]\] (G.13)
and the fixed point is stable as shown in Fig. 160 because every point in its close vicinity remains there and is never driven away by applying \(L\).
\[\mathrm{Tr}\,L_{T^{2}}=\mathrm{Tr}\,[L_{T}(x_{3}^{*},y_{3}^{*})\cdot L_{T}(x_{4}^{*},y_{4}^{*})]=2\,[-2\,(C+1)(C-3)+1]=\begin{cases}+2&\text{for}\quad C=-1\\ -2&\text{for}\quad C=1-\sqrt{5}\end{cases}\] (G.16) where we denoted the functional matrix of \(T^{2}\) by \(L_{T^{2}}\) and used \((x_{3}^{*},y_{3}^{*})=T(x_{4}^{*},y_{4}^{*})\). Collecting (G.15\(-\)16) together, we find: \((x_{1}^{*},y_{1}^{*})\) is an attractor of period 1 and is stable for \(-1<C<1\), and \((x_{3}^{*},y_{3}^{*})\) is an attractor of period 2 and stable for \(1-\sqrt{5}<C<-1\). We, therefore, see the beginning of a bifurcation cascade. Let us now demonstrate the self-similarity which leads to the whole sequence of period doublings by introducing Helleman's renormalization scheme. It starts from (G.3), which can be written as \[x_{n+1}+x_{n-1}=2Cx_{n}+2x_{n}^{2}\;.\] (G.17) A linearization of this equation around the fixed points of period two, \[x_{n}^{*}=\frac{1}{2}\left[-(C+1)+(-1)^{n}\sqrt{(C+1)(C-3)}\right];\qquad n=0,1,2,3\] (G.18) yields \[\Delta x_{n+1}+\Delta x_{n-1}=(2C+4x_{n}^{*})\,\Delta x_{n}+2\,(\Delta x_{n})^{2}\;.\] (G.19) If we add (G.19) for \(n=2m+1\) and \(n=2m-1\), we obtain \[\Delta x_{2m+2}+\Delta x_{2m-2}=-2\,\Delta x_{2m}+(2C+4x_{1}^{*})\,[\Delta x_{2m+1}+\Delta x_{2m-1}]+2\,[(\Delta x_{2m-1})^{2}+(\Delta x_{2m+1})^{2}]\;.\] (G.20) Now we take (G.19) at \(n=2m\), \[\Delta x_{2m+1}+\Delta x_{2m-1}=(2C+4x_{0}^{*})\,\Delta x_{2m}+2\,(\Delta x_{2m})^{2}\] (G.21) and insert it into (G.20): \[\Delta x_{2m+2}+\Delta x_{2m-2}=2C^{\prime}\,\Delta x_{2m}+2\alpha\,(\Delta x_{2m})^{2}+O[(\Delta x)^{3}]\;.\] (G.22) This equation can be put into the same form as
(G.17) by rescaling \(x_{m}^{\prime}=\alpha\,\Delta x_{2m}\): \[x_{m+1}^{\prime}+x_{m-1}^{\prime}=2C^{\prime}x_{m}^{\prime}+2x_{m}^{\prime\,2}\] (G.23) where \[C^{\prime}=2\,(C+2x_{1}^{*})(C+2x_{0}^{*})-1=-2C^{2}+4C+7\] (G.24) \[\alpha=2\,(C+2x_{1}^{*})+2\,(C+2x_{0}^{*})^{2}\;.\] (G.25) The meaning of eq. (G.23) is as follows: If the two-dimensional map is developed to second order around the two-cycle and the result is rescaled, one obtains the old map, i.e. the stability of \(x^{*}=y^{*}=0\) for \(|C|<1\) implies (because of the similarity of (G.17) and (G.23)) the stability of \(x^{*\prime}=y^{*\prime}=\Delta x=\Delta y=0\), i.e. of the two-cycle, for \(|C^{\prime}|=|-2C^{2}+4C+7|<1\) or \(1-\sqrt{5}<C<-1\). Repeating this argument, we see that (G.23) also holds for the deviations around a four-cycle, etc. A cascade of bifurcations with cycles of period \(2^{n}\) is obtained which are stable for \(C_{n}<C<C_{n-1}\) where \[C_{n-1}=-2C_{n}^{2}+4C_{n}+7\;.\] (G.26) Fig. 161: Orbits of the Hénon map: a) at \(a=0.95\) and b) at \(a=3.02\) (after Bountis, 1981). The bifurcation points accumulate at \(C_{\infty}\), which is determined by \[C_{\infty}=-2C_{\infty}^{2}+4C_{\infty}+7\;\to\;C_{\infty}=-1.2656\;(-1.266311\ldots)\] (G.27) which yields \[\alpha=\alpha(C_{\infty})=-4.128\;(-4.018077\ldots)\] (G.28) and the Feigenbaum constant \(\delta\) \[C_{n}=C_{\infty}+A\,\delta^{-n}\quad\text{with}\quad\delta=9.06\;(8.72109\ldots)\] (G.29) where the numbers in parentheses give the best current numerical values for the constants. Fig. 161 shows the orbits of the Hénon map (G.1a, b) near a stable fixed point and after the first bifurcation.
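The scaling relation can be iterated numerically. The short sketch below assumes nothing beyond the recursion \(C_{n-1}=-2C_{n}^{2}+4C_{n}+7\) of (G.26) (the minus sign follows from (G.24)) and recovers the crude estimates quoted in (G.27)–(G.29): the accumulation point is the fixed point of the recursion, \(\delta\) is its slope there, and successive bifurcation values \(C_{n}\) follow by inverting (G.26):

```python
from math import sqrt

# Accumulation point: C = -2C^2 + 4C + 7  =>  2C^2 - 3C - 7 = 0
C_inf = (3 - sqrt(65)) / 4                 # -1.2656..., cf. (G.27)

# delta is the slope of the recursion at C_inf: d/dC(-2C^2 + 4C + 7) = 4 - 4C
delta = 4 - 4*C_inf                        # 9.06..., cf. (G.29)

# Successive bifurcation points, inverting (G.26) starting from C_0 = -1:
# -2C^2 + 4C + 7 = C_prev  =>  C = 1 - sqrt(72 - 8*C_prev)/4  (relevant branch)
C = [-1.0]
for n in range(10):
    C.append(1 - sqrt(72 - 8*C[-1]) / 4)   # C_1 = 1 - sqrt(5), then C_2, ...

print(C_inf, delta)
print(C[:4])                               # approaches C_inf geometrically
```

The sequence \(C_{n}\) contracts toward \(C_{\infty}\) by a factor \(1/\delta\approx 0.11\) per step, which is exactly the scaling law \(C_{n}=C_{\infty}+A\,\delta^{-n}\) of (G.29).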
## Overview

### 1.0 Chaos, Fractals, and Dynamics

There is a tremendous fascination today with chaos and fractals. James Gleick's book _Chaos_ (Gleick 1987) was a bestseller for months--an amazing accomplishment for a book about mathematics and science. Picture books like _The Beauty of Fractals_ by Peitgen and Richter (1986) can be found on coffee tables in living rooms everywhere. It seems that even nonmathematical people are captivated by the infinite patterns found in fractals (Figure 1.0.1). Perhaps most important of all, chaos and fractals represent hands-on mathematics that is alive and changing. You can turn on a home computer and create stunning mathematical images that no one has ever seen before. The aesthetic appeal of chaos and fractals may explain why so many people have become intrigued by these ideas. But maybe you feel the urge to go deeper--to learn the mathematics behind the pictures, and to see how the ideas can be applied to problems in science and engineering. If so, this is a textbook for you. The style of the book is informal (as you can see), with an emphasis on concrete examples and geometric thinking, rather than proofs and abstract arguments. It is also an extremely "applied" book--virtually every idea is illustrated by some application to science or engineering. In many cases, the applications are drawn from the recent research literature. Of course, one problem with such an applied approach is that not everyone is an expert in physics _and_ biology _and_ fluid mechanics \(\ldots\) so the science as well as the mathematics will need to be explained from scratch. But that should be fun, and it can be instructive to see the connections among different fields. Before we start, we should agree about something: chaos and fractals are part of an even grander subject known as _dynamics._ This is the subject that deals with change, with systems that evolve in time.
Whether the system in question settles down to equilibrium, keeps repeating in cycles, or does something more complicated, it is dynamics that we use to analyze the behavior. You have probably been exposed to dynamical ideas in various places--in courses in differential equations, classical mechanics, chemical kinetics, population biology, and so on. Viewed from the perspective of dynamics, all of these subjects can be placed in a common framework, as we discuss at the end of this chapter. Our study of dynamics begins in earnest in Chapter 2. But before digging in, we present two overviews of the subject, one historical and one logical. Our treatment is intuitive; careful definitions will come later. This chapter concludes with a "dynamical view of the world," a framework that will guide our studies for the rest of the book. ### 1.1 Capsule History of Dynamics Although dynamics is an interdisciplinary subject today, it was originally a branch of physics. The subject began in the mid-1600s, when Newton invented differential equations, discovered his laws of motion and universal gravitation, and combined them to explain Kepler's laws of planetary motion. Specifically, Newton solved the two-body problem--the problem of calculating the motion of the earth around the sun, given the inverse-square law of gravitational attraction between them. Subsequent generations of mathematicians and physicists tried to extend Newton's analytical methods to the three-body problem (e.g., sun, earth, and moon) but curiously this problem turned out to be much more difficult to solve. After decades of effort, it was eventually realized that the three-body problem was essentially _impossible_ to solve, in the sense of obtaining explicit formulas for the motions of the three bodies. At this point the situation seemed hopeless. The breakthrough came with the work of Poincare in the late 1800s. He introduced a new point of view that emphasized qualitative rather than quantitative questions. 
For example, instead of asking for the exact positions of the planets at all times, he asked "Is the solar system stable forever, or will some planets eventually fly off to infinity?" Poincare developed a powerful _geometric_ approach to analyzing such questions. That approach has flowered into the modern subject of dynamics, with applications reaching far beyond celestial mechanics. Poincare was also the first person to glimpse the possibility of _chaos,_ in which a deterministic system exhibits aperiodic behavior that depends sensitively on the initial conditions, thereby rendering long-term prediction impossible. But chaos remained in the background in the first half of the twentieth century; instead dynamics was largely concerned with nonlinear oscillators and their applications in physics and engineering. Nonlinear oscillators played a vital role in the development of such technologies as radio, radar, phase-locked loops, and lasers. On the theoretical side, nonlinear oscillators also stimulated the invention of new mathematical techniques--pioneers in this area include van der Pol, Andronov, Littlewood, Cartwright, Levinson, and Smale. Meanwhile, in a separate development, Poincare's geometric methods were being extended to yield a much deeper understanding of classical mechanics, thanks to the work of Birkhoff and later Kolmogorov, Arnol'd, and Moser. The invention of the high-speed computer in the 1950s was a watershed in the history of dynamics. The computer allowed one to experiment with equations in a way that was impossible before, and thereby to develop some intuition about nonlinear systems. Such experiments led to Lorenz's discovery in 1963 of chaotic motion on a strange attractor. He studied a simplified model of convection rolls in the atmosphere to gain insight into the notorious unpredictability of the weather. 
Lorenz found that the solutions to his equations never settled down to equilibrium or to a periodic state--instead they continued to oscillate in an irregular, aperiodic fashion. Moreover, if he started his simulations from two slightly different initial conditions, the resulting behaviors would soon become totally different. The implication was that the system was _inherently_ unpredictable--tiny errors in measuring the current state of the atmosphere (or any other chaotic system) would be amplified rapidly, eventually leading to embarrassing forecasts. But Lorenz also showed that there was structure in the chaos--when plotted in three dimensions, the solutions to his equations fell onto a butterfly-shaped set of points (Figure 1.1.1). He argued that this set had to be "an infinite complex of surfaces"--today we would regard it as an example of a fractal.

Lorenz's work had little impact until the 1970s, the boom years for chaos. Here are some of the main developments of that glorious decade. In 1971, Ruelle and Takens proposed a new theory for the onset of turbulence in fluids, based on abstract considerations about strange attractors. A few years later, May found examples of chaos in iterated mappings arising in population biology, and wrote an influential review article that stressed the pedagogical importance of studying simple nonlinear systems, to counterbalance the often misleading linear intuition fostered by traditional education. Next came the most surprising discovery of all, due to the physicist Feigenbaum. He discovered that there are certain universal laws governing the transition from regular to chaotic behavior; roughly speaking, completely different systems can go chaotic in the same way. His work established a link between chaos and phase transitions, and enticed a generation of physicists to the study of dynamics.
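The passage does not write Lorenz's equations down, but his sensitive dependence on initial conditions is easy to reproduce with the standard Lorenz system \(\dot{x}=\sigma(y-x)\), \(\dot{y}=rx-y-xz\), \(\dot{z}=xy-bz\) at the classic parameter values \(\sigma=10\), \(r=28\), \(b=8/3\) (these values are the conventional choice, an assumption here rather than something stated in the text). Two trajectories started \(10^{-8}\) apart separate to a macroscopic distance:

```python
import numpy as np

def lorenz(v, sigma=10.0, r=28.0, beta=8.0/3.0):
    x, y, z = v
    return np.array([sigma*(y - x), r*x - y - x*z, x*y - beta*z])

def rk4(f, v, dt, steps):
    """Fixed-step fourth-order Runge-Kutta integrator."""
    for _ in range(steps):
        k1 = f(v); k2 = f(v + 0.5*dt*k1); k3 = f(v + 0.5*dt*k2); k4 = f(v + dt*k3)
        v = v + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    return v

v0 = np.array([1.0, 1.0, 1.0])
va = rk4(lorenz, v0, 0.01, 2500)                         # integrate to t = 25
vb = rk4(lorenz, v0 + np.array([0.0, 0.0, 1e-8]), 0.01, 2500)

print(np.linalg.norm(va - vb))   # the 1e-8 difference has grown enormously
```

Both trajectories stay on the bounded butterfly-shaped attractor, yet their separation grows by many orders of magnitude: prediction fails while the geometry remains structured.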
Finally, experimentalists such as Gollub, Libchaber, Swinney, Linsay, Moon, and Westervelt tested the new ideas about chaos in experiments on fluids, chemical reactions, electronic circuits, mechanical oscillators, and semiconductors. Although chaos stole the spotlight, there were two other major developments in dynamics in the 1970s. Mandelbrot codified and popularized fractals, produced magnificent computer graphics of them, and showed how they could be applied in a variety of subjects. And in the emerging area of mathematical biology, Winfree applied the geometric methods of dynamics to biological oscillations, especially circadian (roughly 24-hour) rhythms and heart rhythms. By the 1980s many people were working on dynamics, with contributions too numerous to list. Table 1.1 summarizes this history. ### 1.2 The Importance of Being Nonlinear Now we turn from history to the logical structure of dynamics. First we need to introduce some terminology and make some distinctions. There are two main types of dynamical systems: _differential equations_ and _iterated maps_ (also known as difference equations). Differential equations describe the evolution of systems in continuous time, whereas iterated maps arise in problems where time is discrete. Differential equations are used much more widely in science and engineering, and we shall therefore concentrate on them. Later in the book we will see that iterated maps can also be very useful, both for providing simple examples of chaos, and also as tools for analyzing periodic or chaotic solutions of differential equations. Now confining our attention to differential equations, the main distinction is between ordinary and partial differential equations. For instance, the equation for a damped harmonic oscillator \[m\frac{d^{2}x}{dt^{2}}+b\frac{dx}{dt}+kx=0 \tag{1}\] is an ordinary differential equation, because it involves only ordinary derivatives \(dx/dt\) and \(d^{2}x/dt^{2}\). 
That is, there is only one independent variable, the time \(t\). In contrast, the heat equation \[\frac{\partial u}{\partial t}=\frac{\partial^{2}u}{\partial x^{2}}\] is a partial differential equation--it has both time \(t\) and space \(x\) as independent variables. Our concern in this book is with purely temporal behavior, and so we deal with ordinary differential equations almost exclusively. A very general framework for ordinary differential equations is provided by the system \[\begin{array}{l}\dot{x}_{1}=f_{1}(x_{1},\ldots,x_{n})\\ \vdots\\ \dot{x}_{n}=f_{n}(x_{1},\ldots,x_{n}).\end{array} \tag{2}\] Here the overdots denote differentiation with respect to \(t\). Thus \(\dot{x}_{i}\equiv dx_{i}/dt\). The variables \(x_{1},\ldots,\,x_{n}\) might represent concentrations of chemicals in a reactor, populations of different species in an ecosystem, or the positions and velocities of the planets in the solar system. The functions \(f_{1},\ldots,f_{n}\) are determined by the problem at hand. For example, the damped oscillator (1) can be rewritten in the form of (2), thanks to the following trick: we introduce new variables \(x_{1}=x\) and \(x_{2}=\dot{x}\). Then \(\dot{x}_{1}=x_{2}\), from the definitions, and \[\begin{array}{l}\dot{x}_{2}=\ddot{x}=-\frac{b}{m}\dot{x}-\frac{k}{m}x\\ =-\frac{b}{m}\,x_{2}-\frac{k}{m}\,x_{1}\end{array}\] from the definitions and the governing equation (1). Hence the equivalent system (2) is \[\begin{array}{l}\dot{x}_{1}=x_{2}\\ \dot{x}_{2}=-\frac{b}{m}\,x_{2}-\frac{k}{m}\,x_{1}.\end{array}\] This system is said to be _linear_, because all the \(x_{i}\) on the right-hand side appear to the first power only. Otherwise the system would be _nonlinear_. Typical nonlinear terms are products, powers, and functions of the \(x_{i}\), such as \(x_{1}\,x_{2}\), \((x_{1})^{3}\), or \(\cos\,x_{2}\).
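As a check on this change of variables, the sketch below integrates the first-order system with a simple Runge–Kutta step and compares the result with the familiar closed-form underdamped solution of (1); the parameter values \(m=1\), \(b=0.2\), \(k=1\) and the initial condition are illustrative choices, not taken from the text:

```python
import numpy as np

m, b, k = 1.0, 0.2, 1.0           # illustrative mass, damping, stiffness

def f(v):
    x1, x2 = v                     # x1 = x, x2 = x-dot, as in the text's trick
    return np.array([x2, -(b/m)*x2 - (k/m)*x1])

def rk4_step(v, dt):
    k1 = f(v); k2 = f(v + 0.5*dt*k1); k3 = f(v + 0.5*dt*k2); k4 = f(v + dt*k3)
    return v + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

v, dt = np.array([1.0, 0.0]), 0.001       # x(0) = 1, x-dot(0) = 0
for _ in range(10000):                    # integrate to t = 10
    v = rk4_step(v, dt)

# closed-form underdamped solution x(t) = e^{-gamma t}(cos w_d t + (gamma/w_d) sin w_d t)
gamma, wd = b/(2*m), np.sqrt(k/m - (b/(2*m))**2)
exact = np.exp(-gamma*10)*(np.cos(wd*10) + (gamma/wd)*np.sin(wd*10))
print(v[0], exact)                        # numerical and closed-form x(10) agree
```

The agreement confirms that nothing is lost in rewriting the second-order equation as the pair \((\dot{x}_{1},\dot{x}_{2})\); it is the same dynamics in phase-space clothing.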
For example, the swinging of a pendulum is governed by the equation \[\ddot{x}+\frac{g}{L}\sin\,x=0,\] where \(x\) is the angle of the pendulum from vertical, \(g\) is the acceleration due to gravity, and \(L\) is the length of the pendulum. The equivalent system \[\begin{array}{l}\dot{x}_{1}=x_{2}\\ \dot{x}_{2}=-\frac{g}{L}\sin\,x_{1}\end{array}\] is nonlinear. Nonlinearity makes the pendulum equation very difficult to solve analytically. The usual way around this is to fudge, by invoking the small angle approximation \(\sin\,x\approx x\) for \(x\ll 1\). This converts the problem to a linear one, which can then be solved easily. But by restricting to small \(x\), we're throwing out some of the physics, like motions where the pendulum whirls over the top. Is it really necessary to make such drastic approximations? It turns out that the pendulum equation _can_ be solved analytically, in terms of elliptic functions. But there ought to be an easier way. After all, the motion of the pendulum is simple: at low energy, it swings back and forth, and at high energy it whirls over the top. There should be some way of extracting this information from the system directly. This is the sort of problem we'll learn how to solve, using geometric methods. Here's the rough idea. Suppose we happen to know a solution to the pendulum system, for a particular initial condition. This solution would be a pair of functions \(x_{1}(t)\) and \(x_{2}(t)\), representing the position and velocity of the pendulum. If we construct an abstract space with coordinates \((x_{1},x_{2})\), then the solution \((x_{1}(t),x_{2}(t))\) corresponds to a point moving along a curve in this space (Figure 1.2.1). This curve is called a _trajectory_, and the space is called the _phase space_ for the system. The phase space is completely filled with trajectories, since each point can serve as an initial condition.
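The two kinds of motion just mentioned--back-and-forth swinging versus whirling over the top--can be told apart numerically, with no elliptic functions in sight. A minimal sketch, taking \(g/L=1\) and two illustrative initial angular velocities (these numbers are my choices, not the text's):

```python
import numpy as np

def f(v):                          # the pendulum as a first-order system
    x, xdot = v
    return np.array([xdot, -np.sin(x)])   # g/L = 1

def integrate(v, dt, steps):
    traj = [v]
    for _ in range(steps):
        k1 = f(v); k2 = f(v + 0.5*dt*k1); k3 = f(v + 0.5*dt*k2); k4 = f(v + dt*k3)
        v = v + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        traj.append(v)
    return np.array(traj)

low  = integrate(np.array([0.0, 0.5]), 0.01, 3000)   # low energy
high = integrate(np.array([0.0, 2.5]), 0.01, 3000)   # high energy

print(low[:, 0].min(), low[:, 0].max())   # angle stays in a band: libration
print(high[-1, 0])                        # angle keeps growing: rotation
```

The low-energy trajectory oscillates inside a bounded band of angles; the high-energy one has enough kinetic energy to pass the inverted position, so its angle increases without bound. In phase space these are closed orbits versus running trajectories--exactly the distinction the geometric method will read off directly.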
Our goal is to run this construction _in reverse:_ given the system, we want to draw the trajectories, and thereby extract information about the solutions. In many cases, geometric reasoning will allow us to draw the trajectories _without actually solving the system_! Some terminology: the phase space for the general system (2) is the space with coordinates \(x_{1},\ldots,\,x_{n}\). Because this space is \(n\)-dimensional, we will refer to (2) as an _**n-dimensional system**_ or an _**nth-order**_ system. Thus \(n\) represents the dimension of the phase space.

### Nonautonomous Systems

You might worry that (2) is not general enough because it doesn't include any explicit _time dependence_. How do we deal with time-dependent or _nonautonomous_ equations like the forced harmonic oscillator \(m\ddot{x}+b\dot{x}+kx=F\cos t\)? In this case too there's an easy trick that allows us to rewrite the system in the form (2). We let \(x_{1}=x\) and \(x_{2}=\dot{x}\) as before but now we introduce \(x_{3}=t\). Then \(\dot{x}_{3}=1\) and so the equivalent system is \[\begin{split}\dot{x}_{1}&=x_{2}\\ \dot{x}_{2}&=\frac{1}{m}(-kx_{1}-bx_{2}+F\cos x_{3})\\ \dot{x}_{3}&=1\end{split} \tag{3}\] which is an example of a _three_-dimensional system. Similarly, an \(n\)th-order time-dependent equation is a special case of an (\(n+1\))-dimensional system. By this trick, we can always remove any time dependence by adding an extra dimension to the system. The virtue of this change of variables is that it allows us to visualize a phase space with trajectories _frozen_ in it. Otherwise, if we allowed explicit time dependence, the vectors and the trajectories would always be wiggling--this would ruin the geometric picture we're trying to build. A more physical motivation is that the _state_ of the forced harmonic oscillator is truly three-dimensional: we need to know three numbers, \(x,\ \dot{x}\), and \(t\), to predict the future, given the present.
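System (3) can be coded exactly as written; the extra equation \(\dot{x}_{3}=1\) simply makes the state carry its own clock. A sketch with illustrative values \(m=1\), \(b=0.2\), \(k=1\), \(F=0.5\) (my choices, not from the text):

```python
import numpy as np

m, b, k, F = 1.0, 0.2, 1.0, 0.5    # illustrative parameter values

def f(v):
    x1, x2, x3 = v                 # x3 stands in for the time t
    return np.array([x2,
                     (-k*x1 - b*x2 + F*np.cos(x3)) / m,
                     1.0])

def rk4_step(v, dt):
    k1 = f(v); k2 = f(v + 0.5*dt*k1); k3 = f(v + 0.5*dt*k2); k4 = f(v + dt*k3)
    return v + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

v = np.array([0.0, 0.0, 0.0])      # start at rest with the clock at zero
for _ in range(5000):              # integrate to t = 50
    v = rk4_step(v, 0.01)

print(v)    # v[2] is (to rounding) 50: the autonomous system reproduces time
```

Notice that \(f\) never mentions \(t\): the right-hand side depends only on the state \((x_{1},x_{2},x_{3})\), which is exactly what "autonomous" means, and it is why the vector field in the three-dimensional phase space can be drawn frozen.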
So a three-dimensional phase space is natural. The cost, however, is that some of our terminology is nontraditional. For example, the forced harmonic oscillator would traditionally be regarded as a second-order linear equation, whereas we will regard it as a third-order nonlinear system, since (3) is nonlinear, thanks to the cosine term. As we'll see later in the book, forced oscillators have many of the properties associated with nonlinear systems, and so there are genuine conceptual advantages to our choice of language.

### Why Are Nonlinear Problems So Hard?

As we've mentioned earlier, most nonlinear systems are impossible to solve analytically. Why are nonlinear systems so much harder to analyze than linear ones? The essential difference is that _linear systems can be broken down into parts_. Then each part can be solved separately and finally recombined to get the answer. This idea allows a fantastic simplification of complex problems, and underlies such methods as normal modes, Laplace transforms, superposition arguments, and Fourier analysis. In this sense, a linear system is precisely equal to the sum of its parts. But many things in nature don't act this way. Whenever parts of a system interfere, or cooperate, or compete, there are nonlinear interactions going on. Most of everyday life is nonlinear, and the principle of superposition fails spectacularly. If you listen to your two favorite songs at the same time, you won't get double the pleasure! Within the realm of physics, nonlinearity is vital to the operation of a laser, the formation of turbulence in a fluid, and the superconductivity of Josephson junctions.

### 1.3 A Dynamical View of the World

Now that we have established the ideas of nonlinearity and phase space, we can present a framework for dynamics and its applications. Our goal is to show the logical structure of the entire subject. The framework presented in Figure 1.3.1 will guide our studies throughout this book. The framework has two axes.
One axis tells us the number of variables needed to characterize the state of the system. Equivalently, this number is the _dimension of the phase space_. The other axis tells us whether the system is linear or _nonlinear_. For example, consider the exponential growth of a population of organisms. This system is described by the first-order differential equation \(\dot{x}=rx\) where \(x\) is the population at time \(t\) and \(r\) is the growth rate. We place this system in the column labeled "\(n=1\)" because _one_ piece of information--the current value of the population \(x\)--is sufficient to predict the population at any later time. The system is also classified as linear because the differential equation \(\dot{x}=rx\) is linear in \(x\). As a second example, consider the swinging of a pendulum, governed by \[\ddot{x}+\frac{g}{L}\sin x=0.\] In contrast to the previous example, the state of this system is given by _two_ variables: its current angle \(x\) and angular velocity \(\dot{x}\). (Think of it this way: we need the initial values of both \(x\) and \(\dot{x}\) to determine the solution uniquely. For example, if we knew only \(x\), we wouldn't know which way the pendulum was swinging.) Because two variables are needed to specify the state, the pendulum belongs in the \(n=2\) column of Figure 1.3.1. Moreover, the system is nonlinear, as discussed in the previous section. Hence the pendulum is in the lower, nonlinear half of the \(n=2\) column. One can continue to classify systems in this way, and the result will be something like the framework shown here. Admittedly, some aspects of the picture are debatable (Figure 1.3.1). You might think that some topics should be added, or placed differently, or even that more axes are needed--the point is to think about classifying systems on the basis of their dynamics. There are some striking patterns in Figure 1.3.1. All the simplest systems occur in the upper left-hand corner.
These are the small linear systems that we learn about in the first few years of college. Roughly speaking, these linear systems exhibit growth, decay, or equilibrium when \(n=1\), or oscillations when \(n=2\). The italicized phrases in Figure 1.3.1 indicate that these broad classes of phenomena first arise in this part of the diagram. For example, an \(RC\) circuit has \(n=1\) and cannot oscillate, whereas an \(RLC\) circuit has \(n=2\) and can oscillate. The next most familiar part of the picture is the upper right-hand corner. This is the domain of classical applied mathematics and mathematical physics where the linear partial differential equations live. Here we find Maxwell's equations of electricity and magnetism, the heat equation, Schrodinger's wave equation in quantum mechanics, and so on. These partial differential equations involve an infinite "continuum" of variables because each point in space contributes additional degrees of freedom. Even though these systems are large, they are tractable, thanks to such linear techniques as Fourier analysis and transform methods. In contrast, the lower half of Figure 1.3.1--the nonlinear half--is often ignored or deferred to later courses. But no more! In this book we start in the lower left corner and systematically head to the right. As we increase the phase space dimension from \(n=1\) to \(n=3\), we encounter new phenomena at every step, from fixed points and bifurcations when \(n=1\), to nonlinear oscillations when \(n=2\), and finally chaos and fractals when \(n=3\). In all cases, a geometric approach proves to be very powerful, and gives us most of the information we want, even though we usually can't solve the equations in the traditional sense of finding a formula for the answer. Our journey will also take us to some of the most exciting parts of modern science, such as mathematical biology and condensed-matter physics. 
You'll notice that the framework also contains a region forbiddingly marked "The frontier." It's like in those old maps of the world, where the mapmakers wrote, "Here be dragons" on the unexplored parts of the globe. These topics are not completely unexplored, of course, but it is fair to say that they lie at the limits of current understanding. The problems are very hard, because they are both large and nonlinear. The resulting behavior is typically complicated in _both space and time_, as in the motion of a turbulent fluid or the patterns of electrical activity in a fibrillating heart. Toward the end of the book we will touch on some of these problems--they will certainly pose challenges for years to come.

## Part I One-Dimensional Flows

### 2.0 Introduction

In Chapter 1, we introduced the general system \[\begin{array}{c}\dot{x}_{1}=f_{1}(x_{1},...,x_{n})\\ \vdots\\ \dot{x}_{n}=f_{n}(x_{1},...,x_{n})\end{array}\] and mentioned that its solutions could be visualized as trajectories flowing through an \(n\)-dimensional phase space with coordinates \((x_{1},...,x_{n})\). At the moment, this idea probably strikes you as a mind-bending abstraction. So let's start slowly, beginning here on earth with the simple case \(n=1\). Then we get a single equation of the form \[\dot{x}=f(x).\] Here \(x(t)\) is a real-valued function of time \(t\), and \(f(x)\) is a smooth real-valued function of \(x\). We'll call such equations _one-dimensional_ or _first-order systems_. Before there's any chance of confusion, let's dispense with two fussy points of terminology:

1. The word _system_ is being used here in the sense of a dynamical system, not in the classical sense of a collection of two or more equations. Thus a single equation can be a "system."
2. We do not allow \(f\) to depend explicitly on time.
Time-dependent or "nonautonomous" equations of the form \(\dot{x}=f(x,t)\) are more complicated, because one needs _two_ pieces of information, \(x\) and \(t\), to predict the future state of the system. Thus \(\dot{x}=f(x,t)\) should really be regarded as a _two-dimensional_ or _second-order_ system, and will therefore be discussed later in the book.

### 2.1 A Geometric Way of Thinking

Pictures are often more helpful than formulas for analyzing nonlinear systems. Here we illustrate this point by a simple example. Along the way we will introduce one of the most basic techniques of dynamics: _interpreting a differential equation as a vector field._ Consider the following nonlinear differential equation: \[\dot{x}=\sin x. \tag{1}\] To emphasize our point about formulas versus pictures, we have chosen one of the few nonlinear equations that can be solved in closed form. We separate the variables and then integrate: \[dt=\frac{dx}{\sin x},\] which implies \[t=\int\csc x\ dx=-\ln\left|\csc x+\cot x\right|+C.\] To evaluate the constant \(C\), suppose that \(x=x_{0}\) at \(t=0\). Then \(C=\ln\left|\csc x_{0}+\cot x_{0}\right|\). Hence the solution is \[t=\ln\left|\frac{\csc x_{0}+\cot x_{0}}{\csc x+\cot x}\right|. \tag{2}\] This result is exact, but a headache to interpret. For example, can you answer the following questions? 1. Suppose \(x_{0}=\pi/4\); describe the qualitative features of the solution \(x(t)\) for all \(t>0\). In particular, what happens as \(t\to\infty\)? 2. For an _arbitrary_ initial condition \(x_{0}\), what is the behavior of \(x(t)\) as \(t\to\infty\)? Think about these questions for a while, to see that formula (2) is not transparent. In contrast, a graphical analysis of (1) is clear and simple, as shown in Figure 2.1.1. We think of \(t\) as time, \(x\) as the position of an imaginary particle moving along the real line, and \(\dot{x}\) as the velocity of that particle.
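As a sanity check on formula (2), one can integrate \(\dot{x}=\sin x\) directly and confirm both that plugging the endpoint back into (2) returns the elapsed time and that the solution creeps toward \(x=\pi\). A minimal sketch (the step size and time horizon are arbitrary choices):

```python
import math

def t_of_x(x, x0):
    """Travel time from x0 to x under xdot = sin x, from formula (2)."""
    g = lambda u: 1/math.sin(u) + math.cos(u)/math.sin(u)   # csc u + cot u
    return math.log(abs(g(x0) / g(x)))

# crude Euler integration of xdot = sin x from x0 = pi/4 up to t = 5
x, dt = math.pi/4, 1e-4
for _ in range(50000):
    x += dt*math.sin(x)

print(x)                     # has crept close to the stable fixed point pi
print(t_of_x(x, math.pi/4))  # formula (2) recovers the elapsed time, about 5
```

The particle speeds up until \(x=\pi/2\), then decelerates and approaches \(\pi\) asymptotically, which is exactly the qualitative answer the vector-field picture gives at a glance.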
Then the differential equation \(\dot{x}=\sin x\) represents a _vector field_ on the line: it dictates the velocity vector \(\dot{x}\) at each \(x\). To sketch the vector field, it is convenient to plot \(\dot{x}\) versus \(x\), and then draw arrows on the \(x\)-axis to indicate the corresponding velocity vector at each \(x\). The arrows point to the right when \(\dot{x}>0\) and to the left when \(\dot{x}<0\). Here's a more physical way to think about the vector field: imagine that fluid is flowing steadily along the \(x\)-axis with a velocity that varies from place to place, according to the rule \(\dot{x}=\sin x\). As shown in Figure 2.1.1, the _flow_ is to the right when \(\dot{x}>0\) and to the left when \(\dot{x}<0\). At points where \(\dot{x}=0\), there is no flow; such points are therefore called _fixed points_. You can see that there are two kinds of fixed points in Figure 2.1.1: solid black dots represent _stable_ fixed points (often called _attractors_ or _sinks_, because the flow is toward them) and open circles represent _unstable_ fixed points (also known as _repellers_ or _sources_). Armed with this picture, we can now easily understand the solutions to the differential equation \(\dot{x}=\sin x\). We just start our imaginary particle at \(x_{0}\) and watch how it is carried along by the flow. This approach allows us to answer the questions above as follows: 1. Figure 2.1.1 shows that a particle starting at \(x_{0}=\pi/4\) moves to the right faster and faster until it crosses \(x=\pi/2\) (where \(\sin x\) reaches its maximum). Then the particle starts slowing down and eventually approaches the stable fixed point \(x=\pi\) from the left. Thus, the qualitative form of the solution is as shown in Figure 2.1.2. Note that the curve is concave up at first, and then concave down; this corresponds to the initial acceleration for \(x<\pi/2\), followed by the deceleration toward \(x=\pi\). 2. The same reasoning applies to any initial condition \(x_{0}\). 
Figure 2.1.1 shows that if \(\dot{x}>0\) initially, the particle heads to the right and asymptotically approaches the nearest stable fixed point. Similarly, if \(\dot{x}<0\) initially, the particle approaches the nearest stable fixed point to its left. If \(\dot{x}=0\), then \(x\) remains constant. The qualitative form of the solution for any initial condition is sketched in Figure 2.1.3. In all honesty, we should admit that a picture can't tell us certain _quantitative_ things: for instance, we don't know the time at which the speed \(\left|\,\dot{x}\,\right|\) is greatest. But in many cases _qualitative_ information is what we care about, and then pictures are fine.

### 2.2 Fixed Points and Stability

The ideas developed in the last section can be extended to any one-dimensional system \(\dot{x}=f(x)\). We just need to draw the graph of \(f(x)\) and then use it to sketch the vector field on the real line (the \(x\)-axis in Figure 2.2.1). As before, we imagine that a fluid is flowing along the real line with a local velocity \(f(x)\). This imaginary fluid is called the _phase fluid_, and the real line is the _phase space_. The flow is to the right where \(f(x)>0\) and to the left where \(f(x)<0\). To find the solution to \(\dot{x}=f(x)\) starting from an arbitrary initial condition \(x_{0}\), we place an imaginary particle (known as a _phase point_) at \(x_{0}\) and watch how it is carried along by the flow. As time goes on, the phase point moves along the \(x\)-axis according to some function \(x(t)\). This function is called the _trajectory_ based at \(x_{0}\), and it represents the solution of the differential equation starting from the initial condition \(x_{0}\). A picture like Figure 2.2.1, which shows all the qualitatively different trajectories of the system, is called a _phase portrait_.
The appearance of the phase portrait is controlled by the fixed points \(x*\), defined by \(f(x*)=0\); they correspond to stagnation points of the flow. In Figure 2.2.1, the solid black dot is a stable fixed point (the local flow is toward it) and the open dot is an unstable fixed point (the flow is away from it). In terms of the original differential equation, fixed points represent _equilibrium_ solutions (sometimes called steady, constant, or rest solutions, since if \(x=x*\) initially, then \(x(t)=x*\) for all time). An equilibrium is defined to be stable if all sufficiently small disturbances away from it damp out in time. Thus stable equilibria are represented geometrically by stable fixed points. Conversely, unstable equilibria, in which disturbances grow in time, are represented by unstable fixed points.

**Example 2.2.1:** Find all fixed points for \(\dot{x}=x^{2}-1\), and classify their stability.

_Solution:_ Here \(f(x)=x^{2}-1\). To find the fixed points, we set \(f(x*)=0\) and solve for \(x*\). Thus \(x*=\pm 1\). To determine stability, we plot \(x^{2}-1\) and then sketch the vector field (Figure 2.2.2). The flow is to the right where \(x^{2}-1>0\) and to the left where \(x^{2}-1<0\). Thus \(x*=-1\) is stable, and \(x*=1\) is unstable. Note that the definition of stable equilibrium is based on _small_ disturbances; certain large disturbances may fail to decay. In Example 2.2.1, all small disturbances to \(x*=-1\) will decay, but a large disturbance that sends \(x\) to the right of \(x=1\) will _not_ decay--in fact, the phase point will be repelled out to \(+\infty\). To emphasize this aspect of stability, we sometimes say that \(x*=-1\) is _locally stable_, but not globally stable.

**Example 2.2.2:** Consider the electrical circuit shown in Figure 2.2.3. A resistor \(R\) and a capacitor \(C\) are in series with a battery of constant dc voltage \(V_{\circ}\).
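The sign test used in Example 2.2.1 can be mechanized: sample \(f\) just to the left and right of a candidate fixed point and read off the flow directions. A minimal sketch (the helper `classify` and its tolerance are our own constructions, not from the text):

```python
def classify(f, x_star, h=1e-4):
    """Classify a fixed point of x' = f(x) by the flow direction on either side:
    flow toward x* from both sides -> stable; away from both sides -> unstable."""
    left, right = f(x_star - h), f(x_star + h)
    if left > 0 and right < 0:
        return "stable"
    if left < 0 and right > 0:
        return "unstable"
    return "half-stable or degenerate"

f = lambda x: x**2 - 1
labels = {x: classify(f, x) for x in (-1.0, 1.0)}
```

Applied to \(f(x)=x^{2}-1\), this reproduces the conclusion of Example 2.2.1: the flow converges on \(-1\) and runs away from \(+1\).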
Suppose that the switch is closed at \(t=0\), and that there is no charge on the capacitor initially. Let \(Q(t)\) denote the charge on the capacitor at time \(t\geq 0\). Sketch the graph of \(Q(t)\).

_Solution:_ This type of circuit problem is probably familiar to you. It is governed by linear equations and can be solved analytically, but we prefer to illustrate the geometric approach. First we write the circuit equations. As we go around the circuit, the total voltage drop must equal zero; hence \(-V_{\circ}+RI+Q/C=0\), where \(I\) is the current flowing through the resistor. This current causes charge to accumulate on the capacitor at a rate \(\dot{Q}=I\). Hence \[-V_{\circ}+R\dot{Q}+Q/C=0\ \ \text{or}\] \[\dot{Q}=f(Q)=\frac{V_{\circ}}{R}-\frac{Q}{RC}.\] The graph of \(f(Q)\) is a straight line with a negative slope (Figure 2.2.4). The corresponding vector field has a fixed point where \(f(Q)=0\), which occurs at \(Q*=CV_{\circ}\). The flow is to the right where \(f(Q)>0\) and to the left where \(f(Q)<0\). Thus the flow is always toward \(Q*\)--it is a _stable_ fixed point. In fact, it is _globally stable_, in the sense that it is approached from _all_ initial conditions. To sketch \(Q(t)\), we start a phase point at the origin of Figure 2.2.4 and imagine how it would move. The flow carries the phase point monotonically toward \(Q*\). Its speed \(\dot{Q}\) decreases linearly as it approaches the fixed point; therefore \(Q(t)\) is increasing and concave down, as shown in Figure 2.2.5.

**Example 2.2.3:** Sketch the phase portrait corresponding to \(\dot{x}=x-\cos x\), and determine the stability of all the fixed points.

_Solution:_ One approach would be to plot the function \(f(x)=x-\cos x\) and then sketch the associated vector field. This method is valid, but it requires you to figure out what the graph of \(x-\cos x\) looks like.
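Returning to Example 2.2.2, the monotonic approach to the globally stable fixed point \(Q*=CV_{\circ}\) can be checked numerically. In this sketch the parameter values are hypothetical, chosen only for illustration, and the closed-form comparison curve is the familiar exponential charging law (the subject of Exercise 2.2.11):

```python
import math

V0, R, C = 5.0, 2.0, 0.5      # hypothetical values, so Q* = C*V0 = 2.5

def Q_numeric(t, dt=1e-4):
    """Step Q' = V0/R - Q/(R*C) forward from Q(0) = 0 by small Euler steps."""
    Q = 0.0
    for _ in range(int(t / dt)):
        Q += (V0 / R - Q / (R * C)) * dt
    return Q

def Q_exact(t):
    """Exponential charging law Q(t) = C*V0*(1 - exp(-t/(R*C)))."""
    return C * V0 * (1.0 - math.exp(-t / (R * C)))
```

The numerical flow tracks the exponential law and climbs monotonically toward, but never past, \(Q*=CV_{\circ}\), exactly as the phase portrait predicts.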
There's an easier solution, which exploits the fact that we know how to graph \(y=x\) and \(y=\cos x\) _separately_. We plot both graphs on the same axes and then observe that they intersect in exactly one point (Figure 2.2.6). This intersection corresponds to a fixed point, since \(x*=\cos x*\) and therefore \(f(x*)=0\). Moreover, when the line lies above the cosine curve, we have \(x>\cos x\) and so \(\dot{x}>0\): the flow is to the right. Similarly, the flow is to the left where the line is below the cosine curve. Hence \(x*\) is the only fixed point, and it is unstable. Note that we can classify the stability of \(x*\), even though we don't have a formula for \(x*\) itself!

### 2.3 Population Growth

The simplest model for the growth of a population of organisms is \(\dot{N}=rN\), where \(N(t)\) is the population at time \(t\), and \(r>0\) is the growth rate. This model predicts exponential growth: \(N(t)=N_{0}e^{rt}\), where \(N_{0}\) is the population at \(t=0\). Of course such exponential growth cannot go on forever. To model the effects of overcrowding and limited resources, population biologists and demographers often assume that the per capita growth rate \(\dot{N}/N\) decreases when \(N\) becomes sufficiently large, as shown in Figure 2.3.1. For small \(N\), the growth rate equals \(r\), just as before. However, for populations larger than a certain _carrying capacity_ \(K\), the growth rate actually becomes negative; the death rate is higher than the birth rate. A mathematically convenient way to incorporate these ideas is to assume that the per capita growth rate \(\dot{N}/N\) decreases _linearly_ with \(N\) (Figure 2.3.2). This leads to the _logistic equation_ \[\dot{N}=rN\left(1-\frac{N}{K}\right)\] first suggested to describe the growth of human populations by Verhulst in 1838. This equation can be solved analytically (Exercise 2.3.1) but once again we prefer a graphical approach.
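Although the fixed point of Example 2.2.3 has no closed form, it is easy to pin down numerically: since \(f(x)=x-\cos x\) changes sign on \([0,1]\), bisection homes in on \(x*\). A minimal sketch (the helper name `bisect` and the tolerance are our own choices):

```python
import math

def bisect(f, a, b, tol=1e-10):
    """Simple bisection root finder; assumes f(a) and f(b) have opposite signs."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

f = lambda x: x - math.cos(x)   # f(0) = -1 < 0 and f(1) > 0, so a root lies in [0, 1]
x_star = bisect(f, 0.0, 1.0)
```

The result satisfies \(x*=\cos x*\) to high accuracy, confirming the graphical argument that the line and the cosine curve cross exactly once.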
We plot \(\dot{N}\) versus \(N\) to see what the vector field looks like. Note that we plot only \(N\geq 0\), since it makes no sense to think about a negative population (Figure 2.3.3). Fixed points occur at \(N*=0\) and \(N*=K\), as found by setting \(\dot{N}=0\) and solving for \(N\). By looking at the flow in Figure 2.3.3, we see that \(N*=0\) is an unstable fixed point and \(N*=K\) is a stable fixed point. In biological terms, \(N=0\) is an unstable equilibrium: a small population will grow exponentially fast and run away from \(N=0\). On the other hand, if \(N\) is disturbed slightly from \(K\), the disturbance will decay monotonically and \(N(t)\to K\) as \(t\to\infty\). In fact, Figure 2.3.3 shows that if we start a phase point at _any_ \(N_{0}>0\), it will always flow toward \(N=K\). Hence _the population always approaches the carrying capacity._ The only exception is if \(N_{0}=0\); then there's nobody around to start reproducing, and so \(N=0\) for all time. (The model does not allow for spontaneous generation!) Figure 2.3.3 also allows us to deduce the qualitative shape of the solutions. For example, if \(N_{0}<K/2\), the phase point moves faster and faster until it crosses \(N=K/2\), where the parabola in Figure 2.3.3 reaches its maximum. Then the phase point slows down and eventually creeps toward \(N=K\). In biological terms, this means that the population initially grows in an accelerating fashion, and the graph of \(N(t)\) is concave up. But after \(N=K/2\), the derivative \(\dot{N}\) begins to decrease, and so \(N(t)\) is concave down as it asymptotes to the horizontal line \(N=K\) (Figure 2.3.4). Thus the graph of \(N(t)\) is S-shaped or _sigmoid_ for \(N_{0}<K/2\). Something qualitatively different occurs if the initial condition \(N_{0}\) lies between \(K/2\) and \(K\); now the solutions are decelerating from the start. Hence these solutions are concave down for all \(t\).
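These qualitative conclusions can be spot-checked by integrating the logistic equation numerically. The sketch below (with \(r=K=1\) and a small initial population, our own choices) verifies that the trajectory grows monotonically and levels off at the carrying capacity:

```python
def logistic_trajectory(N0, r=1.0, K=1.0, dt=1e-3, t_max=20.0):
    """Euler-integrate N' = r*N*(1 - N/K) and record the whole trajectory."""
    N, traj = N0, [N0]
    for _ in range(int(t_max / dt)):
        N += r * N * (1 - N / K) * dt
        traj.append(N)
    return traj

traj = logistic_trajectory(0.01)   # start far below K, expect sigmoid growth to K
```

The recorded values never decrease and end up at \(N\approx K\), matching the sigmoid picture of Figure 2.3.4.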
If the population initially exceeds the carrying capacity (\(N_{0}>K\)), then \(N(t)\) decreases toward \(N=K\) and is concave up. Finally, if \(N_{0}=0\) or \(N_{0}=K\), then the population stays constant.

### Critique of the Logistic Model

Before leaving this example, we should make a few comments about the biological validity of the logistic equation. The algebraic form of the model is not to be taken literally. The model should really be regarded as a metaphor for populations that have a tendency to grow from zero population up to some carrying capacity \(K\). Originally a much stricter interpretation was proposed, and the model was argued to be a universal law of growth (Pearl 1927). The logistic equation was tested in laboratory experiments in which colonies of bacteria, yeast, or other simple organisms were grown in conditions of constant climate, food supply, and absence of predators. For a good review of this literature, see Krebs (1972, pp. 190-200). These experiments often yielded sigmoid growth curves, in some cases with an impressive match to the logistic predictions. On the other hand, the agreement was much worse for fruit flies, flour beetles, and other organisms that have complex life cycles involving eggs, larvae, pupae, and adults. In these organisms, the predicted asymptotic approach to a steady carrying capacity was never observed--instead the populations exhibited large, persistent fluctuations after an initial period of logistic growth. See Krebs (1972) for a discussion of the possible causes of these fluctuations, including age structure and time-delayed effects of overcrowding in the population. For further reading on population biology, see Pielou (1969) or May (1981). Edelstein-Keshet (1988) and Murray (2002, 2003) are excellent textbooks on mathematical biology in general.
### 2.4 Linear Stability Analysis So far we have relied on graphical methods to determine the stability of fixed points. Frequently one would like to have a more quantitative measure of stability, such as the rate of decay to a stable fixed point. This sort of information may be obtained by _linearizing_ about a fixed point, as we now explain. Let \(x*\) be a fixed point, and let \(\eta(t)=x(t)-x*\) be a small perturbation away from \(x*\). To see whether the perturbation grows or decays, we derive a differential equation for \(\eta\). Differentiation yields \[\dot{\eta}=\frac{d}{dt}(x-x*)=\dot{x},\] since \(x*\) is constant. Thus \(\dot{\eta}=\dot{x}=f(x)=f(x*+\eta)\). Now using Taylor's expansion we obtain \[f(x*+\eta)=f(x*)+\eta\,f^{\prime}(x*)+O(\eta^{2})\,,\] where \(O(\eta^{2})\) denotes quadratically small terms in \(\eta\). Finally, note that \(f(x*)=0\) since \(x*\) is a fixed point. Hence \[\dot{\eta}=\eta f^{\prime}(x*)+O(\eta^{2}).\] Now if \(f^{\prime}(x*)\neq 0\), the \(O(\eta^{2})\) terms are negligible and we may write the approximation \[\dot{\eta}\approx\eta f^{\prime}(x*)\,.\] This is a linear equation in \(\eta\), and is called the _linearization about_\(x*\). It shows that _the perturbation \(\eta(t)\) grows exponentially if \(f^{\prime}(x*)>0\) and decays if \(f^{\prime}(x*)<0\)_. If \(f^{\prime}(x*)=0\), the \(O(\eta^{2})\) terms are not negligible and a nonlinear analysis is needed to determine stability, as discussed in Example 2.4.3 below. The upshot is that the slope \(f^{\prime}(x*)\) at the fixed point determines its stability. If you look back at the earlier examples, you'll see that the slope was always negative at a stable fixed point. The importance of the _sign_ of \(f^{\prime}(x*)\) was clear from our graphical approach; the new feature is that now we have a measure of _how_ stable a fixed point is--that's determined by the _magnitude_ of \(f^{\prime}(x*)\). This magnitude plays the role of an exponential growth or decay rate. 
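The exponential prediction of the linearization is easy to test. For \(\dot{x}=\sin x\) near the stable fixed point \(x*=\pi\) we have \(f^{\prime}(x*)=-1\), so a small perturbation should decay like \(\eta_{0}e^{-t}\). A sketch comparing the full nonlinear flow with this prediction (perturbation size, step, and horizon are our own choices):

```python
import math

x_star = math.pi
fp = math.cos(x_star)              # f'(x*) for f(x) = sin(x); equals -1

eta0, dt, T = 0.01, 1e-4, 2.0
x = x_star + eta0
for _ in range(int(T / dt)):       # integrate the full nonlinear equation
    x += math.sin(x) * dt
eta_numeric = x - x_star
eta_linear = eta0 * math.exp(fp * T)   # prediction of the linearization
```

For a perturbation this small, the nonlinear decay and the linearized decay agree to many digits, and both confirm that \(1/|f^{\prime}(x*)|=1\) sets the decay time scale.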
Its reciprocal \(1/|f^{\prime}(x*)|\) is a _characteristic time scale_; it determines the time required for \(x(t)\) to vary significantly in the neighborhood of \(x*\).

**Example 2.4.1:** Using linear stability analysis, determine the stability of the fixed points for \(\dot{x}=\sin x\).

_Solution:_ The fixed points occur where \(f(x)=\sin x=0\). Thus \(x*=k\pi\), where \(k\) is an integer. Then \[f^{\prime}(x*)=\cos k\pi=\begin{cases}\ \ 1,\ k\ \text{even}\\ -1,\ k\ \text{odd}.\end{cases}\] Hence \(x*\) is unstable if \(k\) is even and stable if \(k\) is odd. This agrees with the results shown in Figure 2.1.1.

**Example 2.4.2:** Classify the fixed points of the logistic equation, using linear stability analysis, and find the characteristic time scale in each case.

_Solution:_ Here \(f(N)=rN\left(1-\frac{N}{K}\right)\), with fixed points \(N*=0\) and \(N*=K\). Then \(f^{\prime}(N)=r-\frac{2rN}{K}\) and so \(f^{\prime}(0)=r\) and \(f^{\prime}(K)=-r\). Hence \(N*=0\) is unstable and \(N*=K\) is stable, as found earlier by graphical arguments. In either case, the characteristic time scale is \(1/|f^{\prime}(N*)|=1/r\).

**Example 2.4.3:** What can be said about the stability of a fixed point when \(f^{\prime}(x*)=0\)?

_Solution:_ Nothing can be said in general. The stability is best determined on a case-by-case basis, using graphical methods. Consider the following examples: \[\text{(a)}\ \ \dot{x}=-x^{3}\qquad\text{(b)}\ \ \dot{x}=x^{3}\qquad\text{(c)}\ \ \dot{x}=x^{2}\qquad\text{(d)}\ \ \dot{x}=0\] Each of these systems has a fixed point \(x*=0\) with \(f^{\prime}(x*)=0\). However the stability is different in each case. Figure 2.4.1 shows that (a) is stable and (b) is unstable. Case (c) is a hybrid case we'll call _half-stable_, since the fixed point is attracting from the left and repelling from the right. We therefore indicate this type of fixed point by a half-filled circle. Case (d) is a whole line of fixed points; perturbations neither grow nor decay.
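Case (c) of Example 2.4.3 can be seen directly in a simulation: trajectories of \(\dot{x}=x^{2}\) started to the left of the origin creep toward it, while trajectories started to the right run away. A sketch (the step size, step count, and escape cutoff are arbitrary choices of ours):

```python
def evolve(x0, dt=1e-3, steps=20000):
    """Euler-step x' = x^2; bail out once the trajectory is clearly escaping."""
    x = x0
    for _ in range(steps):
        x += x * x * dt
        if abs(x) > 1e6:
            break
    return x

from_left = evolve(-0.5)    # attracted toward 0, but only algebraically slowly
from_right = evolve(0.5)    # repelled; grows without bound in finite time
```

The left trajectory has crept most of the way to the origin (its exact value at \(t=20\) is \(-0.5/(1+10)\approx-0.045\)), while the right trajectory has already escaped past the cutoff: half-stability in action.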
These examples may seem artificial, but we will see that they arise naturally in the context of _bifurcations_--more about that later.

### 2.5 Existence and Uniqueness

Our treatment of vector fields has been very informal. In particular, we have taken a cavalier attitude toward questions of existence and uniqueness of solutions to the system \(\dot{x}=f(x)\). That's in keeping with the "applied" spirit of this book. Nevertheless, we should be aware of what can go wrong in pathological cases.

**Example 2.5.1:** Show that the solution to \(\dot{x}=x^{1/3}\) starting from \(x_{0}=0\) is _not_ unique.

_Solution:_ The point \(x=0\) is a fixed point, so one obvious solution is \(x(t)=0\) for all \(t\). The surprising fact is that there is _another_ solution. To find it we separate variables and integrate: \[\int x^{-1/3}dx=\int dt\] so \(\frac{3}{2}x^{2/3}=t+C\). Imposing the initial condition \(x(0)=0\) yields \(C=0\). Hence \(x(t)=\left(\frac{2}{3}t\right)^{3/2}\) is also a solution! When uniqueness fails, our geometric approach collapses because the phase point doesn't know how to move; if a phase point were started at the origin, would it stay there or would it move according to \(x(t)=\left(\frac{2}{3}t\right)^{3/2}\)? (Or as my friends in elementary school used to say when discussing the problem of the irresistible force and the immovable object, perhaps the phase point would explode!) Actually, the situation in Example 2.5.1 is even worse than we've let on--there are _infinitely_ many solutions starting from the same initial condition (Exercise 2.5.4). What's the source of the non-uniqueness? A hint comes from looking at the vector field (Figure 2.5.1). We see that the fixed point \(x*=0\) is _very_ unstable--the slope \(f^{\prime}(0)\) is infinite. Chastened by this example, we state a theorem that provides sufficient conditions for existence and uniqueness of solutions to \(\dot{x}=f(x)\).
**Existence and Uniqueness Theorem:** Consider the initial value problem \[\dot{x}=f(x),\hskip 28.452756ptx(0)=x_{0}.\] Suppose that \(f(x)\) and \(f^{\prime}(x)\) are continuous on an open interval \(R\) of the \(x\)-axis, and suppose that \(x_{0}\) is a point in \(R\). Then the initial value problem has a solution \(x(t)\) on some time interval \((-\tau,\tau)\) about \(t=0\), and the solution is unique. For proofs of the existence and uniqueness theorem, see Borrelli and Coleman (1987), Lin and Segel (1988), or virtually any text on ordinary differential equations. This theorem says that _if \(f(x)\) is smooth enough,_ then solutions exist and are unique. Even so, there's no guarantee that solutions exist forever, as shown by the next example.

**Example 2.5.2:** _Discuss the existence and uniqueness of solutions to the initial value problem \(\dot{x}=1+x^{2},\ x(0)=x_{0}\). Do solutions exist for all time?_

_Solution:_ Here \(f(x)=1+x^{2}\). This function is continuous and has a continuous derivative for all \(x\). Hence the theorem tells us that solutions exist and are unique for any initial condition \(x_{0}\). But _the theorem does not say that the solutions exist for all time;_ they are only guaranteed to exist in a (possibly very short) time interval around \(t=0\). For example, consider the case where \(x(0)=0\). Then the problem can be solved analytically by separation of variables: \[\int\frac{dx}{1+x^{2}}=\int dt,\] which yields \[\tan^{-1}x=t+C.\] The initial condition \(x(0)=0\) implies \(C=0\). Hence \(x(t)=\tan t\) is the solution. But notice that this solution exists only for \(-\pi/2<t<\pi/2\), because \(x(t)\rightarrow\pm\infty\) as \(t\rightarrow\pm\pi/2\). Outside of that time interval, there is no solution to the initial value problem for \(x_{0}=0\). The amazing thing about Example 2.5.2 is that the system has solutions that reach infinity _in finite time._ This phenomenon is called _blow-up_.
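Blow-up is easy to observe numerically. The sketch below integrates \(\dot{x}=1+x^{2}\) from \(x(0)=0\) with a small Euler step (step size ours) and compares with the exact solution \(x(t)=\tan t\); as \(t\) nears \(\pi/2\approx 1.57\), the solution climbs explosively:

```python
import math

def integrate(t_end, dt=1e-5):
    """Euler-integrate x' = 1 + x^2 from x(0) = 0 up to time t_end."""
    x = 0.0
    for _ in range(int(t_end / dt)):
        x += (1.0 + x * x) * dt
    return x

x_at_1_0 = integrate(1.0)   # tan(1.0), still moderate
x_at_1_5 = integrate(1.5)   # tan(1.5), already climbing steeply toward blow-up
```

The numerical values track \(\tan t\) closely while the solution exists, and the growth between \(t=1.0\) and \(t=1.5\) foreshadows the escape to infinity at \(t=\pi/2\).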
As the name suggests, it is of physical relevance in models of combustion and other runaway processes. There are various ways to extend the existence and uniqueness theorem. One can allow \(f\) to depend on time \(t\), or on several variables \(x_{1},...,x_{n}\). One of the most useful generalizations will be discussed later in Section 6.2. From now on, we will not worry about issues of existence and uniqueness--our vector fields will typically be smooth enough to avoid trouble. If we happen to come across a more dangerous example, we'll deal with it then. ### 2.6 Impossibility of Oscillations Fixed points dominate the dynamics of first-order systems. In all our examples so far, all trajectories either approached a fixed point, or diverged to \(\pm\infty\). In fact, those are the _only_ things that can happen for a vector field on the real line. The reason is that trajectories are forced to increase or decrease monotonically, or remain constant (Figure 2.6.1). To put it more geometrically, the phase point never reverses direction. Thus, if a fixed point is regarded as an equilibrium solution, the approach to equilibrium is always _monotonic_--overshoot and damped oscillations can never occur in a first-order system. For the same reason, undamped oscillations are impossible. Hence _there are no periodic solutions to_ \(\dot{x}=f(x)\). These general results are fundamentally topological in origin. They reflect the fact that \(\dot{x}=f(x)\) corresponds to flow on a _line_. If you flow monotonically on a line, you'll never come back to your starting place--that's why periodic solutions are impossible. (Of course, if we were dealing with a _circle_ rather than a line, we _could_ eventually return to our starting place. Thus vector fields on the circle can exhibit periodic solutions, as we discuss in Chapter 4.) ### Mechanical Analog: Overdamped Systems It may seem surprising that solutions to \(\dot{x}=f(x)\) can't oscillate. 
But this result becomes obvious if we think in terms of a mechanical analog. We regard \(\dot{x}=f(x)\) as a limiting case of Newton's law, in the limit where the "inertia term" \(m\ddot{x}\) is negligible. For example, suppose a mass \(m\) is attached to a nonlinear spring whose restoring force is \(F(x)\), where \(x\) is the displacement from the origin. Furthermore, suppose that the mass is immersed in a vat of very viscous fluid, like honey or motor oil (Figure 2.6.2), so that it is subject to a damping force \(b\dot{x}\). Then Newton's law is \(m\ddot{x}+b\dot{x}=F(x)\). If the viscous damping is strong compared to the inertia term \(\left(b\dot{x}\gg m\ddot{x}\right)\), the system should behave like \(b\dot{x}=F(x)\), or equivalently \(\dot{x}=f(x)\), where \(f(x)=b^{-1}F(x)\). In this _overdamped_ limit, the behavior of the mechanical system is clear. The mass prefers to sit at a stable equilibrium, where \(f(x)=0\) and \(f^{\prime}(x)<0\). If displaced a bit, the mass is slowly dragged back to equilibrium by the restoring force. No overshoot can occur, because the damping is enormous. And undamped oscillations are out of the question! These conclusions agree with those obtained earlier by geometric reasoning. Actually, we should confess that this argument contains a slight swindle. The neglect of the inertia term \(m\ddot{x}\) is valid, but only after a rapid initial transient during which the inertia and damping terms are of comparable size. An honest discussion of this point requires more machinery than we have available. We'll return to this matter in Section 3.5.

### 2.7 Potentials

There's another way to visualize the dynamics of the first-order system \(\dot{x}=f(x)\), based on the physical idea of potential energy.
We picture a particle sliding down the walls of a potential well, where the _potential_ \(V(x)\) is defined by \[f(x)=-\frac{dV}{dx}.\] As before, you should imagine that the particle is heavily damped--its inertia is completely negligible compared to the damping force and the force due to the potential. For example, suppose that the particle has to slog through a thick layer of goo that covers the walls of the potential (Figure 2.7.1). The negative sign in the definition of \(V\) follows the standard convention in physics; it implies that the particle always moves "downhill" as the motion proceeds. To see this, we think of \(x\) as a function of \(t\), and then calculate the time-derivative of \(V(x(t))\). Using the chain rule, we obtain \[\frac{dV}{dt}=\frac{dV}{dx}\frac{dx}{dt}.\] Now for a first-order system, \[\frac{dx}{dt}=-\frac{dV}{dx},\] since \(\dot{x}=f(x)=-dV/dx,\) by the definition of the potential. Hence, \[\frac{dV}{dt}=-\left(\frac{dV}{dx}\right)^{2}\leq 0.\] Thus \(V(t)\) _decreases along trajectories_, and so the particle always moves toward lower potential. Of course, if the particle happens to be at an _equilibrium_ point where \(dV/dx=0\), then \(V\) remains constant. This is to be expected, since \(dV/dx=0\) implies \(\dot{x}=0\); equilibria occur at the fixed points of the vector field. Note that local minima of \(V(x)\) correspond to _stable_ fixed points, as we'd expect intuitively, and local maxima correspond to unstable fixed points.

**Example 2.7.1:** Graph the potential for the system \(\dot{x}=-x,\) and identify all the equilibrium points.

_Solution:_ We need to find \(V(x)\) such that \(-dV/dx=-x.\) The general solution is \(V(x)=\frac{1}{2}x^{2}+C,\) where \(C\) is an arbitrary constant. (It always happens that the potential is only defined up to an additive constant. For convenience, we usually choose \(C=0.\)) The graph of \(V(x)\) is shown in Figure 2.7.2. The only equilibrium point occurs at \(x=0,\) and it's stable.
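The claim that \(V\) decreases along trajectories can be verified numerically for Example 2.7.1. The sketch below follows \(\dot{x}=-x\) from \(x_{0}=2\) (our choice) and records \(V(x)=\frac{1}{2}x^{2}\) at each step; the recorded values should be monotonically nonincreasing and approach the minimum of the potential well:

```python
def V(x):
    """Potential for x' = -x (Example 2.7.1, with the constant C = 0)."""
    return 0.5 * x * x

x, dt = 2.0, 1e-3
vals = [V(x)]
for _ in range(5000):       # Euler-integrate out to t = 5
    x += -x * dt
    vals.append(V(x))
```

The particle slides steadily downhill into the well at \(x=0\), just as the inequality \(dV/dt\leq 0\) demands.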
**Example 2.7.2:** Graph the potential for the system \(\dot{x}=x-x^{3},\) and identify all equilibrium points. _Solution:_ Solving \(-dV/dx=x-x^{3}\) yields \(V=-\frac{1}{2}x^{2}+\frac{1}{4}x^{4}+C\). Once again we set \(C=0.\) Figure 2.7.3 shows the graph of \(V.\) The local minima at \(x=\pm 1\) correspond to stable equilibria, and the local maximum at \(x=0\) corresponds to an unstable equilibrium. The potential shown in Figure 2.7.3 is often called a _double-well potential,_ and the system is said to be _bistable,_ since it has two stable equilibria. ### 2.8 Solving Equations on the Computer Throughout this chapter we have used graphical and analytical methods to analyze first-order systems. Every budding dynamicist should master a third tool: numerical methods. In the old days, numerical methods were impractical because they required enormous amounts of tedious hand-calculation. But all that has changed, thanks to the computer. Computers enable us to approximate the solutions to analytically intractable problems, and also to visualize those solutions. In this section we take our first look at dynamics on the computer, in the context of _numerical integration_ of \(\dot{x}=f(x)\). Numerical integration is a vast subject. We will barely scratch the surface. See Chapter 17 of Press et al. (2007) for an excellent treatment. ### Euler's Method The problem can be posed this way: given the differential equation \(\dot{x}=f(x)\), subject to the condition \(x=x_{0}\) at \(t=t_{0}\), find a systematic way to approximate the solution \(x(t)\). Suppose we use the vector field interpretation of \(\dot{x}=f(x)\). That is, we think of a fluid flowing steadily on the \(x\)-axis, with velocity \(f(x)\) at the location \(x\). Imagine we're riding along with a phase point being carried downstream by the fluid. Initially we're at \(x_{0}\), and the local velocity is \(f(x_{0})\). 
If we flow for a short time \(\Delta t\), we'll have moved a distance \(f(x_{0})\Delta t\), because distance \(=\) rate \(\times\) time. Of course, that's not quite right, because our velocity was changing a little bit throughout the step. But over a sufficiently _small_ step, the velocity will be nearly constant and our approximation should be reasonably good. Hence our new position \(x(t_{0}+\Delta t)\) is approximately \(x_{0}+f(x_{0})\Delta t\). Let's call this approximation \(x_{1}\). Thus \[x(t_{0}+\Delta t)\approx x_{1}=x_{0}+f(x_{0})\,\Delta t\,.\] Now we iterate. Our approximation has taken us to a new location \(x_{1}\); our new velocity is \(f(x_{1})\); we step forward to \(x_{2}=x_{1}+f(x_{1})\Delta t\); and so on. In general, the update rule is \[x_{n+1}=x_{n}+f(x_{n})\Delta t\,.\] This is the simplest possible numerical integration scheme. It is known as _Euler's method_. Euler's method can be visualized by plotting \(x\) versus \(t\) (Figure 2.8.1). The curve shows the exact solution \(x(t)\), and the open dots show its values \(x(t_{n})\) at the discrete times \(t_{n}=t_{0}+n\Delta t\). The black dots show the approximate values given by the Euler method. As you can see, the approximation gets bad in a hurry unless \(\Delta t\) is extremely small. Hence Euler's method is not recommended in practice, but it contains the conceptual essence of the more accurate methods to be discussed next. A far more accurate scheme in widespread use is the fourth-order _Runge-Kutta method_. In each step one computes \[k_{1}=f(x_{n})\Delta t\,,\quad k_{2}=f(x_{n}+\tfrac{1}{2}k_{1})\Delta t\,,\quad k_{3}=f(x_{n}+\tfrac{1}{2}k_{2})\Delta t\,,\quad k_{4}=f(x_{n}+k_{3})\Delta t\,,\] and then updates \[x_{n+1}=x_{n}+\tfrac{1}{6}(k_{1}+2k_{2}+2k_{3}+k_{4}).\] This method generally gives accurate results without requiring an excessively small stepsize \(\Delta t\). Of course, some problems are nastier, and may require small steps in certain time intervals, while permitting very large steps elsewhere. In such cases, you may want to use a Runge-Kutta routine with an automatic stepsize control; see Press et al. (2007) for details.
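To make the accuracy comparison concrete, here is a sketch implementing both an Euler step and a standard fourth-order Runge-Kutta step for the logistic equation \(\dot{x}=x(1-x)\), whose exact solution is known in closed form; with the same step size, the Runge-Kutta error should be far smaller. (The step size, horizon, and initial condition are our own choices.)

```python
import math

f = lambda x: x * (1.0 - x)              # logistic equation with r = K = 1

def euler(x, dt, steps):
    """Repeated Euler updates x <- x + f(x)*dt."""
    for _ in range(steps):
        x += f(x) * dt
    return x

def rk4(x, dt, steps):
    """Repeated fourth-order Runge-Kutta updates."""
    for _ in range(steps):
        k1 = f(x) * dt
        k2 = f(x + 0.5 * k1) * dt
        k3 = f(x + 0.5 * k2) * dt
        k4 = f(x + k3) * dt
        x += (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return x

x0, dt, T = 0.1, 0.1, 5.0
steps = int(T / dt)
exact = 1.0 / (1.0 + (1.0 / x0 - 1.0) * math.exp(-T))   # closed-form logistic solution
err_euler = abs(euler(x0, dt, steps) - exact)
err_rk4 = abs(rk4(x0, dt, steps) - exact)
```

Even at the fairly coarse step \(\Delta t=0.1\), the Runge-Kutta error is orders of magnitude below the Euler error, which is why the extra function evaluations per step are worth their cost.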
Now that computers are so fast, you may wonder why we don't just pick a tiny \(\Delta t\) once and for all. The trouble is that excessively many computations will occur, and each one carries a penalty in the form of _round-off error_. Computers don't have infinite accuracy--they don't distinguish between numbers that differ by some small amount \(\delta\). For numbers of order 1, typically \(\delta\approx 10^{-7}\) for single precision and \(\delta\approx 10^{-16}\) for double precision. Round-off error occurs during every calculation, and will begin to accumulate in a serious way if \(\Delta t\) is too small. See Hubbard and West (1991) for a good discussion. ### Practical Matters You have several options if you want to solve differential equations on the computer. If you like to do things yourself, you can write your own numerical integration routines in your favorite programming language, and plot the results using whatever graphics programs are available. The information given above should be enough to get you started. For further guidance, consult Press et al. (2007). A second option is to use existing packages for numerical methods. _Matlab, Mathematica_, and _Maple_ all have programs for solving ordinary differential equations and graphing their solutions. The final option is for people who want to explore dynamics, not computing. Dynamical systems software is available for personal computers. All you have to do is type in the equations and the parameters; the program solves the equations numerically and plots the results. Some recommended programs are _PPLane_ (written by John Polking and available online as a Java applet; this is a pleasant choice for beginners) and _XPP_ (by Bard Ermentrout, available on many platforms including iPhone and iPad; this is a more powerful tool for researchers and serious users). 
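The quoted value \(\delta\approx 10^{-16}\) for double precision can be measured directly: keep halving a candidate increment until adding it to 1 no longer changes anything. A minimal sketch:

```python
eps = 1.0
while 1.0 + eps / 2.0 > 1.0:    # halve until the addition is lost to round-off
    eps /= 2.0
# eps is now the machine epsilon: roughly 2.2e-16 for double precision
```

Any two numbers near 1 that differ by less than this \(\delta\) are indistinguishable to the machine, which is why shrinking \(\Delta t\) indefinitely eventually makes the integration worse, not better.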
**Example 2.8.1:** _Solve the system \(\dot{x}=x(1-x)\) numerically._

_Solution:_ This is a logistic equation (Section 2.3) with parameters \(r=1\), \(K=1\). Previously we gave a rough sketch of the solutions, based on geometric arguments; now we can draw a more quantitative picture. As a first step, we plot the _slope field_ for the system in the \((t,x)\) plane (Figure 2.8.2). Here the equation \(\dot{x}=x(1-x)\) is being interpreted in a new way: for each point \((t,x)\), the equation gives the slope \(dx/dt\) of the solution passing through that point. The slopes are indicated by little line segments in Figure 2.8.2. Finding a solution now becomes a problem of drawing a curve that is always tangent to the local slope. Figure 2.8.3 shows four solutions starting from various points in the \((t,x)\) plane. These numerical solutions were computed using the Runge-Kutta method with a stepsize \(\Delta t=0.1\). The solutions have the shape expected from Section 2.3. Computers are indispensable for studying dynamical systems. We will use them liberally throughout this book, and you should do likewise.

### A Geometric Way of Thinking

In the next three exercises, interpret \(\dot{x}=\sin x\) as a flow on the line.

**2.1.1** Find all the fixed points of the flow.

**2.1.2** At which points \(x\) does the flow have greatest velocity to the right?

**2.1.3** Find the flow's acceleration \(\ddot{x}\) as a function of \(x\). Find the points where the flow has maximum positive acceleration.

**2.1.4** (Exact solution of \(\dot{x}=\sin x\)) As shown in the text, \(\dot{x}=\sin x\) has the solution \(t=\ln|(\csc x_{0}+\cot x_{0})/(\csc x+\cot x)|\), where \(x_{0}=x(0)\) is the initial value of \(x\). Given the specific initial condition \(x_{0}=\pi/4\), show that the solution above can be inverted to obtain \[x(t)=2\tan^{-1}\left(\frac{e^{t}}{1+\sqrt{2}}\right).\] Conclude that \(x(t)\to\pi\) as \(t\to\infty\), as claimed in Section 2.1.
(You need to be good with trigonometric identities to solve this problem.) Try to find the analytical solution for \(x(t)\), given an _arbitrary_ initial condition \(x_{0}\).

**2.1.5** (A mechanical analog) Find a mechanical system that is approximately governed by \(\dot{x}=\sin x\). Using your physical intuition, explain why it now becomes obvious that \(x^{\ast}=0\) is an unstable fixed point and \(x^{\ast}=\pi\) is stable.

### Fixed Points and Stability

Analyze the following equations graphically. In each case, sketch the vector field on the real line, find all the fixed points, classify their stability, and sketch the graph of \(x(t)\) for different initial conditions. Then try for a few minutes to obtain the analytical solution for \(x(t)\); if you get stuck, don't try for too long since in several cases it's impossible to solve the equation in closed form! \[\begin{array}{llll}\mathbf{2.2.1}&\dot{x}=4x^{2}-16&\mathbf{2.2.2}&\dot{x}=1 -x^{14}\\ \mathbf{2.2.3}&\dot{x}=x-x^{3}&\mathbf{2.2.4}&\dot{x}=e^{-x}\sin x\\ \mathbf{2.2.5}&\dot{x}=1+\frac{1}{2}\cos x&\mathbf{2.2.6}&\dot{x}=1-2\cos x\\ \end{array}\]

**2.2.7** \(\dot{x}=e^{x}-\cos x\) (Hint: Sketch the graphs of \(e^{x}\) and \(\cos x\) on the same axes, and look for intersections. You won't be able to find the fixed points explicitly, but you can still find the qualitative behavior.)

#### 2.2.8 (Working backwards, from flows to equations) Given an equation \(\dot{x}=f(x)\), we know how to sketch the corresponding flow on the real line. Here you are asked to solve the opposite problem: For the phase portrait shown in Figure 1, find an equation that is consistent with it. (There are an infinite number of correct answers--and wrong ones too.)

#### 2.2.9 (Backwards again, now from solutions to equations) Find an equation \(\dot{x}=f(x)\) whose solutions \(x(t)\) are consistent with those shown in Figure 2.
#### 2.2.10 (Fixed points) For each of (a)-(e), find an equation \(\dot{x}=f(x)\) with the stated properties, or if there are no examples, explain why not. (In all cases, assume that \(f(x)\) is a smooth function.) a) Every real number is a fixed point. b) Every integer is a fixed point, and there are no others. c) There are precisely three fixed points, and all of them are stable. d) There are no fixed points. e) There are precisely 100 fixed points.

#### 2.2.11 (Analytical solution for charging capacitor) Obtain the analytical solution of the initial value problem \(\dot{Q}=\frac{V_{0}}{R}-\frac{Q}{RC}\), with \(Q(0)=0\), which arose in Example 2.2.2.

#### 2.2.12 (A nonlinear resistor) Suppose the resistor in Example 2.2.2 is replaced by a nonlinear resistor; in other words, this resistor does not have a linear relation between voltage and current. Such nonlinearity arises in certain solid-state devices. Instead of \(I_{R}=V/R\), suppose we have \(I_{R}=g(V)\), where \(g(V)\) has the shape shown in Figure 3. Redo Example 2.2.2 in this case: derive the circuit equations, find all the fixed points, and analyze their stability. What qualitative effects does the nonlinearity introduce (if any)?

#### 2.2.13 (Terminal velocity) The velocity \(v(t)\) of a skydiver falling to the ground is governed by \(m\dot{v}=mg-kv^{2}\), where \(m\) is the mass of the skydiver, \(g\) is the acceleration due to gravity, and \(k>0\) is a constant related to the amount of air resistance. * Obtain the analytical solution for \(v(t)\), assuming that \(v(0)=0\). * Find the limit of \(v(t)\) as \(t\rightarrow\infty\). This limiting velocity is called the _terminal velocity_. (Beware of bad jokes about the word _terminal_ and parachutes that fail to open.) * Give a graphical analysis of this problem, and thereby re-derive a formula for the terminal velocity. * An experimental study (Carlson et al. 1942) confirmed that the equation \(m\dot{v}=mg-kv^{2}\) gives a good quantitative fit to data on human skydivers.
Six men were dropped from altitudes varying from 10,600 feet to 31,400 feet to a terminal altitude of 2,100 feet, at which they opened their parachutes. The long free fall from 31,400 to 2,100 feet took 116 seconds. The average weight of the men and their equipment was 261.2 pounds. In these units, \(g=32.2\) ft/sec\({}^{2}\). Compute the average velocity \(V_{avg}\). * Using the data given here, estimate the terminal velocity, and the value of the drag constant \(k\). (Hints: First you need to find an exact formula for \(s(t)\), the distance fallen, where \(s(0)=0\), \(\dot{s}=v\), and \(v(t)\) is known from part (a). You should get \(s(t)=\frac{V^{2}}{g}\ln\bigl{(}\cosh\frac{gt}{V}\bigr{)}\), where \(V\) is the terminal velocity. Then solve for \(V\) graphically or numerically, using \(s=29{,}300\), \(t=116\), and \(g=32.2\).) A slicker way to estimate \(V\) is to suppose \(V\approx V_{avg}\) as a rough first approximation. Then show that \(gt/V\approx 15\). Since \(gt/V\gg 1\), we may use the approximation \(\ln(\cosh x)\approx x-\ln 2\) for \(x\gg 1\). Derive this approximation and then use it to obtain an analytical estimate of \(V\). Then \(k\) follows from part (b). This analysis is from Davis (1962).

### 2.3 Population Growth

#### 2.3.1 (Exact solution of logistic equation) There are two ways to solve the logistic equation \(\dot{N}=rN(1-N/K)\) analytically for an arbitrary initial condition \(N_{0}\). 1. Separate variables and integrate, using partial fractions. 2. Make the change of variables \(x=1/N\). Then derive and solve the resulting differential equation for \(x\).

#### 2.3.2 (Autocatalysis) Consider the model chemical reaction \[A+X\mathop{\rightleftharpoons}\limits_{k_{-1}}^{k_{1}}2X\] in which one molecule of \(X\) combines with one molecule of \(A\) to form two molecules of \(X\). This means that the chemical \(X\) stimulates its own production, a process called _autocatalysis_.
This positive feedback process leads to a chain reaction, which eventually is limited by a "back reaction" in which \(2X\) returns to \(A+X\). According to the _law of mass action_ of chemical kinetics, the rate of an elementary reaction is proportional to the product of the concentrations of the reactants. We denote the concentrations by lowercase letters \(x=[X]\) and \(a=[A]\). Assume that there's an enormous surplus of chemical \(A\), so that its concentration \(a\) can be regarded as constant. Then the equation for the kinetics of \(x\) is \[\dot{x}=k_{1}ax-k_{-1}x^{2}\] where \(k_{1}\) and \(k_{-1}\) are positive parameters called rate constants. 1. Find all the fixed points of this equation and classify their stability. 2. Sketch the graph of \(x(t)\) for various initial values \(x_{0}\).

#### 2.3.3 (Tumor growth) The growth of cancerous tumors can be modeled by the Gompertz law \(\dot{N}=-aN\ln(bN)\), where \(N(t)\) is proportional to the number of cells in the tumor, and \(a\), \(b>0\) are parameters. 1. Interpret \(a\) and \(b\) biologically. 2. Sketch the vector field and then graph \(N(t)\) for various initial values. The predictions of this simple model agree surprisingly well with data on tumor growth, as long as \(N\) is not too small; see Aroesty et al. (1973) and Newton (1980) for examples.

#### 2.3.4 (The Allee effect) For certain species of organisms, the effective growth rate \(\dot{N}/N\) is highest at intermediate \(N\). This is called the Allee effect (Edelstein-Keshet 1988). For example, imagine that it is too hard to find mates when \(N\) is very small, and there is too much competition for food and other resources when \(N\) is large. 1. Show that \(\dot{N}/N=r-a(N-b)^{2}\) provides an example of the Allee effect, if \(r\), \(a\), and \(b\) satisfy certain constraints, to be determined. 2. Find all the fixed points of the system and classify their stability. 3. Sketch the solutions \(N(t)\) for different initial conditions. 4.
Compare the solutions \(N(t)\) to those found for the logistic equation. What are the qualitative differences, if any?

#### 2.3.5 (Dominance of the fittest) Suppose \(X\) and \(Y\) are two species that reproduce exponentially fast: \(\dot{X}=aX\) and \(\dot{Y}=bY\), respectively, with initial conditions \(X_{0},Y_{0}>0\) and growth rates \(a>b>0\). Here \(X\) is "fitter" than \(Y\) in the sense that it reproduces faster, as reflected by the inequality \(a>b\). So we'd expect \(X\) to keep increasing its share of the total population \(X+Y\) as \(t\to\infty\). The goal of this exercise is to demonstrate this intuitive result, first analytically and then geometrically. 1. Let \(x(t)=X(t)/[X(t)+Y(t)]\) denote \(X\)'s share of the total population. By solving for \(X(t)\) and \(Y(t)\), show that \(x(t)\) increases monotonically and approaches \(1\) as \(t\to\infty\). 2. Alternatively, we can arrive at the same conclusions by deriving a differential equation for \(x(t)\). To do so, take the time derivative of \(x(t)=X(t)/[X(t)+Y(t)]\) using the quotient and chain rules. Then substitute for \(\dot{X}\) and \(\dot{Y}\) and thereby show that \(x(t)\) obeys the logistic equation \(\dot{x}=(a-b)x(1-x)\). Explain why this implies that \(x(t)\) increases monotonically and approaches \(1\) as \(t\to\infty\).

#### 2.3.6 (Language death) Thousands of the world's languages are vanishing at an alarming rate, with 90 percent of them expected to disappear by the end of this century. Abrams and Strogatz (2003) proposed the following model of language competition, and compared it to historical data on the decline of Welsh, Scottish Gaelic, Quechua (the most common surviving indigenous language in the Americas), and other endangered languages. Let \(X\) and \(Y\) denote two languages competing for speakers in a given society.
The proportion of the population speaking \(X\) evolves according to \[\dot{x}=(1-x)P_{YX}-xP_{XY}\] where \(0\leq x\leq 1\) is the current fraction of the population speaking \(X\), \(1-x\) is the complementary fraction speaking \(Y\), \(P_{YX}\) is the rate at which individuals switch from \(Y\) to \(X\), and \(P_{XY}\) is the rate at which they switch from \(X\) to \(Y\). This deliberately idealized model assumes that the population is well mixed (meaning that it lacks all spatial and social structure) and that all speakers are monolingual. Next, the model posits that the attractiveness of a language increases with both its number of speakers and its perceived status, as quantified by a parameter \(0\leq s\leq 1\) that reflects the social or economic opportunities afforded to its speakers. Specifically, assume that \(P_{YX}=sx^{a}\) and, by symmetry, \(P_{XY}=(1-s)(1-x)^{a}\), where the exponent \(a>1\) is an adjustable parameter. Then the model becomes \[\dot{x}=s(1-x)x^{a}-(1-s)x(1-x)^{a}\,.\] 1. Show that this equation for \(\dot{x}\) has three fixed points. 2. Show that for all \(a>1\), the fixed points at \(x=0\) and \(x=1\) are both stable. 3. Show that the third fixed point, \(0<x^{\ast}<1\), is unstable. This model therefore predicts that two languages cannot coexist stably--one will eventually drive the other to extinction. For a review of generalizations of the model that allow for bilingualism, social structure, etc., see Castellano et al. (2009).

### 2.4 Linear Stability Analysis

Use linear stability analysis to classify the fixed points of the following systems. If linear stability analysis fails because \(f^{\prime}(x^{\ast})=0\), use a graphical argument to decide the stability.

Consider the equation \(\dot{x}=rx+x^{3}\), where \(r>0\) is fixed.
Show that \(x(t)\to\pm\infty\) in finite time, starting from any initial condition \(x_{0}\neq 0\).

### 2.5 Existence and Uniqueness

(Infinitely many solutions with the same initial condition) Show that the initial value problem \(\dot{x}=x^{1/3}\), \(x(0)=0\), has an infinite number of solutions. (Hint: Construct a solution that stays at \(x=0\) until some arbitrary time \(t_{0}\), after which it takes off.)

(A general example of non-uniqueness) Consider the initial value problem \(\dot{x}=\left|x\right|^{p/q}\), \(x(0)=0\), where \(p\) and \(q\) are positive integers with no common factors. * Show that there are an infinite number of solutions for \(x(t)\) if \(p<q\). * Show that there is a unique solution if \(p>q\).

(The leaky bucket) The following example (Hubbard and West 1991, p. 159) shows that in some physical situations, non-uniqueness is natural and obvious, not pathological. Consider a water bucket with a hole in the bottom. If you see an empty bucket with a puddle beneath it, can you figure out when the bucket was full? No, of course not! It could have finished emptying a minute ago, ten minutes ago, or whatever. The solution to the corresponding differential equation must be non-unique when integrated backwards in time. Here's a crude model of the situation. Let \(h(t)=\) height of the water remaining in the bucket at time \(t\); \(a=\) area of the hole; \(A=\) cross-sectional area of the bucket (assumed constant); \(v(t)=\) velocity of the water passing through the hole. * Show that \(av(t)=A\dot{h}(t)\). What physical law are you invoking? * To derive an additional equation, use conservation of energy. First, find the change in potential energy in the system, assuming that the height of the water in the bucket decreases by an amount \(\Delta h\) and that the water has density \(\rho\). Then find the kinetic energy transported out of the bucket by the escaping water.
Finally, assuming all the potential energy is converted into kinetic energy, derive the equation \(v^{2}=2gh\). * Combining (b) and (c), show \(\dot{h}=-C\sqrt{h}\), where \(C=\sqrt{2g}\left(\frac{a}{A}\right)\). * Given \(h(0)=0\) (bucket empty at \(t=0\)), show that the solution for \(h(t)\) is non-unique _in backwards time_, i.e., for \(t<0\).

### 2.6 Impossibility of Oscillations

Explain this paradox: a simple harmonic oscillator \(m\ddot{x}=-kx\) is a system that oscillates in one dimension (along the \(x\)-axis). But the text says one-dimensional systems can't oscillate.

(No periodic solutions to \(\dot{x}=f(x)\)) Here's an analytic proof that periodic solutions are impossible for a vector field on a line. Suppose on the contrary that \(x(t)\) is a nontrivial periodic solution, i.e., \(x(t)=x(t+T)\) for some \(T>0\), and \(x(t)\neq x(t+s)\) for all \(0<s<T\). Derive a contradiction by considering \(\int_{t}^{t+T}f(x)\frac{dx}{dt}\,dt\).

### 2.7 Potentials

For each of the following vector fields, plot the potential function \(V(x)\) and identify all the equilibrium points and their stability.

**2.7.1**: \(\dot{x}=x(1-x)\) **2.7.2**: \(\dot{x}=3\) **2.7.3**: \(\dot{x}=\sin x\) **2.7.4**: \(\dot{x}=2+\sin x\) **2.7.5**: \(\dot{x}=-\sinh x\) **2.7.6**: \(\dot{x}=r+x-x^{3}\), for various values of \(r\).

**2.7.7**: (Another proof that solutions to \(\dot{x}=f(x)\) can't oscillate) Let \(\dot{x}=f(x)\) be a vector field on the line. Use the existence of a potential function \(V(x)\) to show that solutions \(x(t)\) cannot oscillate.

### 2.8 Solving Equations on the Computer

**2.8.1**: (Slope field) The slope is constant along horizontal lines in Figure 2.8.2. Why should we have expected this?

**2.8.2**: Sketch the slope field for the following differential equations. Then "integrate" the equation manually by drawing trajectories that are everywhere parallel to the local slope.
a) \(\dot{x}=x\) b) \(\dot{x}=1-x^{2}\) c) \(\dot{x}=1-4x(1-x)\) d) \(\dot{x}=\sin x\)

**2.8.3**: (Calibrating the Euler method) The goal of this problem is to test the Euler method on the initial value problem \(\dot{x}=-x\), \(x(0)=1\). a) Solve the problem analytically. What is the exact value of \(x(1)\)? b) Using the Euler method with step size \(\Delta t=1\), estimate \(x(1)\) numerically--call the result \(\hat{x}(1)\). Then repeat, using \(\Delta t=10^{-n}\), for \(n=1\), \(2\), \(3\), \(4\). c) Plot the error \(E=\left|\hat{x}(1)-x(1)\right|\) as a function of \(\Delta t\). Then plot \(\ln E\) vs. \(\ln\Delta t\). Explain the results.

**2.8.4**: Redo Exercise 2.8.3, using the improved Euler method.

**2.8.5**: Redo Exercise 2.8.3, using the Runge-Kutta method.

**2.8.6**: (Analytically intractable problem) Consider the initial value problem \(\dot{x}=x+e^{-x}\), \(x(0)=0\). In contrast to Exercise 2.8.3, this problem can't be solved analytically. a) Sketch the solution \(x(t)\) for \(t\geq 0\). b) Using some analytical arguments, obtain rigorous bounds on the value of \(x\) at \(t=1\). In other words, prove that \(a<x(1)<b\), for \(a,b\) to be determined. By being clever, try to make \(a\) and \(b\) as close together as possible. (Hint: Bound the given vector field by approximate vector fields that can be integrated analytically.) c) Now for the numerical part: Using the Euler method, compute \(x\) at \(t=1\), correct to three decimal places. How small does the stepsize need to be to obtain the desired accuracy? (Give the order of magnitude, not the exact number.) d) Repeat part (c), now using the Runge-Kutta method. Compare the results for stepsizes \(\Delta t=1\), \(\Delta t=0.1\), and \(\Delta t=0.01\).

**2.8.7**: (Error estimate for Euler method) In this question you'll use Taylor series expansions to estimate the error in taking one step by the Euler method. The exact solution and the Euler approximation both start at \(x=x_{0}\) when \(t=t_{0}\).
We want to compare the exact value \(x(t_{1})\equiv x(t_{0}+\Delta t)\) with the Euler approximation \(x_{1}=x_{0}+f(x_{0})\Delta t\). 1. Expand \(x(t_{1})=x(t_{0}+\Delta t)\) as a Taylor series in \(\Delta t\), through terms of \(O(\Delta t^{2})\). Express your answer solely in terms of \(x_{0}\), \(\Delta t\), and \(f\) and its derivatives at \(x_{0}\). 2. Show that the local error \(\left|x(t_{1})-x_{1}\right|\sim C(\Delta t)^{2}\) and give an explicit expression for the constant \(C\). (Generally one is more interested in the global error incurred after integrating over a time interval of fixed length \(T=n\Delta t\). Since each step produces an \(O(\Delta t^{2})\) error, and we take \(n=T/\Delta t=O(\Delta t^{-1})\) steps, the global error \(\left|x(t_{n})-x_{n}\right|\) is \(O(\Delta t)\), as claimed in the text.)

**2.8.8**: (Error estimate for the improved Euler method) Use the Taylor series arguments of Exercise 2.8.7 to show that the local error for the improved Euler method is \(O(\Delta t^{3})\).

**2.8.9**: (Error estimate for Runge-Kutta) Show that the Runge-Kutta method produces a local error of size \(O(\Delta t^{5})\). (Warning: This calculation involves massive amounts of algebra, but if you do it correctly, you'll be rewarded by seeing many wonderful cancellations. Teach yourself _Mathematica_, _Maple_, or some other symbolic manipulation language, and do the problem on the computer.)

### 3.0 Introduction

As we've seen in Chapter 2, the dynamics of vector fields on the line is very limited: all solutions either settle down to equilibrium or head out to \(\pm\infty\). Given the triviality of the dynamics, what's interesting about one-dimensional systems? Answer: _Dependence on parameters._ The qualitative structure of the flow can change as parameters are varied. In particular, fixed points can be created or destroyed, or their stability can change.
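This claim is easy to observe numerically (an illustrative sketch, not from the text): count the fixed points of \(\dot{x}=r+x^{2}\) by looking for sign changes of \(f\) on a grid, and watch the count change as \(r\) is varied.

```python
def count_fixed_points(f, xs):
    """Count sign changes of f along the grid xs; each sign change
    brackets a root of f, i.e., a fixed point of x' = f(x)."""
    vals = [f(x) for x in xs]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

# Grid on roughly [-5, 5], offset so no grid point lands exactly on a root.
xs = [i * 0.01 - 4.995 for i in range(1000)]

for r in (-1.0, 0.0, 1.0):
    n = count_fixed_points(lambda x, r=r: r + x * x, xs)
    print(f"r = {r:5.1f}: {n} fixed point(s) detected")
```

Two fixed points are detected for \(r=-1\) and none for \(r=1\). Note that the borderline case \(r=0\) is also reported as zero, since \(f(x)=x^{2}\) touches the axis without changing sign; that degenerate tangency is exactly the half-stable fixed point discussed in Section 3.1.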
These qualitative changes in the dynamics are called _bifurcations_, and the parameter values at which they occur are called _bifurcation points._ Bifurcations are important scientifically--they provide models of transitions and instabilities as some _control parameter_ is varied. For example, consider the buckling of a beam. If a small weight is placed on top of the beam in Figure 3.0.1, the beam can support the load and remain vertical. But if the load is too heavy, the vertical position becomes unstable, and the beam may buckle. Here the weight plays the role of the control parameter, and the deflection of the beam from vertical plays the role of the dynamical variable \(x\). One of the main goals of this book is to help you develop a solid and practical understanding of bifurcations. This chapter introduces the simplest examples: bifurcations of fixed points for flows on the line. We'll use these bifurcations to model such dramatic phenomena as the onset of coherent radiation in a laser and the outbreak of an insect population. (In later chapters, when we step up to two- and three-dimensional phase spaces, we'll explore additional types of bifurcations and their scientific applications.) We begin with the most fundamental bifurcation of all.

### 3.1 Saddle-Node Bifurcation

The saddle-node bifurcation is the basic mechanism by which fixed points are _created and destroyed_. As a parameter is varied, two fixed points move toward each other, collide, and mutually annihilate. The prototypical example of a saddle-node bifurcation is given by the first-order system \[\dot{x}=r+x^{2} \tag{1}\] where \(r\) is a parameter, which may be positive, negative, or zero. When \(r\) is negative, there are two fixed points, one stable and one unstable (Figure 3.1.1a). As \(r\) approaches 0 from below, the parabola moves up and the two fixed points move toward each other.
When \(r=0\), the fixed points coalesce into a half-stable fixed point at \(x^{\ast}=0\) (Figure 3.1.1b). This type of fixed point is extremely delicate--it vanishes as soon as \(r>0\), and now there are no fixed points at all (Figure 3.1.1c). In this example, we say that a _bifurcation_ occurred at \(r=0\), since the vector fields for \(r<0\) and \(r>0\) are qualitatively different.

However, the most common way to depict the bifurcation is to invert the axes of Figure 3.1.3. The rationale is that \(r\) plays the role of an independent variable, and so should be plotted horizontally (Figure 3.1.4). The drawback is that now the \(x\)-axis has to be plotted vertically, which looks strange at first. Arrows are sometimes included in the picture, but not always. This picture is called the _bifurcation diagram_ for the saddle-node bifurcation.
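The branches of the bifurcation diagram can also be tabulated directly (an illustrative sketch, not from the text): for \(\dot{x}=r+x^{2}\) the fixed points are \(x^{\ast}=\pm\sqrt{-r}\), which exist only for \(r\leq 0\), with stability set by the sign of \(f^{\prime}(x^{\ast})=2x^{\ast}\).

```python
import math

def branches(r):
    """Fixed points of x' = r + x**2 with their stability:
    x* = -sqrt(-r) has f'(x*) < 0 (stable), x* = +sqrt(-r) has
    f'(x*) > 0 (unstable); they exist only for r <= 0."""
    if r > 0:
        return []                      # no fixed points
    root = math.sqrt(-r)
    if root == 0.0:
        return [(0.0, "half-stable")]  # the two branches have merged
    return [(-root, "stable"), (root, "unstable")]

for r in (-1.0, -0.25, 0.0, 0.5):
    print(r, branches(r))
```

Sweeping \(r\) through zero reproduces the diagram in words: two branches for \(r<0\), a single half-stable point at \(r=0\), and nothing for \(r>0\).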
### Terminology

Bifurcation theory is rife with conflicting terminology. The subject really hasn't settled down yet, and different people use different words for the same thing. For example, the saddle-node bifurcation is sometimes called a _fold bifurcation_ (because the curve in Figure 3.1.4 has a fold in it) or a _turning-point bifurcation_ (because the point \((x,r)=(0,0)\) is a "turning point"). Admittedly, the term _saddle-node_ doesn't make much sense for vector fields on the line. The name derives from a completely analogous bifurcation seen in a higher-dimensional context, such as vector fields on the plane, where fixed points known as saddles and nodes can collide and annihilate (see Section 8.1). The prize for most inventive terminology must go to Abraham and Shaw (1988), who write of a _blue sky bifurcation_. This term comes from viewing a saddle-node bifurcation in the other direction: a pair of fixed points appears "out of the clear blue sky" as a parameter is varied. For example, the vector field \[\dot{x}=r-x^{2}\] has no fixed points for \(r<0\), but then one materializes when \(r=0\) and splits into two when \(r>0\) (Figure 3.1.5). Incidentally, this example also explains why we use the word "bifurcation": it means "splitting into two branches."

**Example 3.1.1:** Give a linear stability analysis of the fixed points in Figure 3.1.5. _Solution:_ The fixed points for \(\dot{x}=f(x)=r-x^{2}\) are given by \(x^{*}=\pm\sqrt{r}\). There are two fixed points for \(r>0\), and none for \(r<0\). To determine linear stability, we compute \(f^{\prime}(x^{*})=-2x^{*}\). Thus \(x^{*}=+\sqrt{r}\) is stable, since \(f^{\prime}(x^{*})<0\). Similarly \(x^{*}=-\sqrt{r}\) is unstable. At the bifurcation point \(r=0\), we find \(f^{\prime}(x^{*})=0\); the linearization vanishes when the fixed points coalesce.
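As a numerical cross-check of Example 3.1.1 (an illustrative aside, not part of the text), the slope \(f^{\prime}(x^{\ast})\) can be estimated by a finite difference; for \(\dot{x}=r-x^{2}\) with, say, \(r=4\), it should come out to \(-2x^{\ast}\):

```python
import math

def fprime(f, x, h=1e-6):
    """Central finite-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

r = 4.0
f = lambda x: r - x * x                  # Example 3.1.1 with r = 4
for x_star in (math.sqrt(r), -math.sqrt(r)):
    slope = fprime(f, x_star)            # should equal -2 * x_star
    kind = "stable" if slope < 0 else "unstable"
    print(x_star, round(slope, 6), kind)
```

The computed slopes are \(-4\) at \(x^{\ast}=+2\) (stable) and \(+4\) at \(x^{\ast}=-2\) (unstable), in agreement with the linear stability analysis.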
**Example 3.1.2:** Show that the first-order system \(\dot{x}=r-x-e^{-x}\) undergoes a saddle-node bifurcation as \(r\) is varied, and find the value of \(r\) at the bifurcation point.

_Solution:_ The fixed points satisfy \(f(x)=r-x-e^{-x}=0\). But now we run into a difficulty--in contrast to Example 3.1.1, we can't find the fixed points explicitly as a function of \(r\). Instead we adopt a geometric approach. One method would be to graph the function \(f(x)=r-x-e^{-x}\) for different values of \(r\), look for its roots \(x^{*}\), and then sketch the vector field on the \(x\)-axis. This method is fine, but there's an easier way. The point is that the two functions \(r-x\) and \(e^{-x}\) have much more familiar graphs than their difference \(r-x-e^{-x}\). So we plot \(r-x\) and \(e^{-x}\) on the same picture (Figure 3.1.6a). Where the line \(r-x\) intersects the curve \(e^{-x}\), we have \(r-x=e^{-x}\) and so \(f(x)=0\). _Thus, intersections of the line and the curve correspond to fixed points for the system._ This picture also allows us to read off the direction of flow on the \(x\)-axis: the flow is to the right where the line lies above the curve, since \(r-x>e^{-x}\) and therefore \(\dot{x}>0\). Hence, the fixed point on the right is stable, and the one on the left is unstable.

Now imagine we start decreasing the parameter \(r\). The line \(r-x\) slides down and the fixed points approach each other. At some critical value \(r=r_{c}\), the line becomes _tangent_ to the curve and the fixed points coalesce in a saddle-node bifurcation (Figure 3.1.6b). For \(r\) below this critical value, the line lies below the curve and there are no fixed points (Figure 3.1.6c). To find the bifurcation point \(r_{c}\), we impose the condition that the graphs of \(r-x\) and \(e^{-x}\) intersect _tangentially_.
Thus we demand equality of the functions _and_ their derivatives: \[e^{-x}=r-x\] and \[\frac{d}{dx}e^{-x}=\frac{d}{dx}(r-x)\,.\] The second equation implies \(-e^{-x}=-1\), so \(x=0\). Then the first equation yields \(r=1\). Hence the bifurcation point is \(r_{c}=1\), and the bifurcation occurs at \(x=0\).

### Normal Forms

In a certain sense, the examples \(\dot{x}=r-x^{2}\) or \(\dot{x}=r+x^{2}\) are representative of _all_ saddle-node bifurcations; that's why we called them "prototypical." The idea is that, close to a saddle-node bifurcation, the dynamics typically look like \(\dot{x}=r-x^{2}\) or \(\dot{x}=r+x^{2}\). For instance, consider Example 3.1.2 near the bifurcation at \(x=0\) and \(r=1\). Using the Taylor expansion for \(e^{-x}\) about \(x=0\), we find \[\dot{x}=r-x-e^{-x}=r-x-\left[1-x+\frac{x^{2}}{2!}-\cdots\right]=(r-1)-\frac{x^{2}}{2}+\cdots\] to leading order in \(x\). This has the same algebraic form as \(\dot{x}=r-x^{2}\), and can be made to agree exactly by appropriate rescalings of \(x\) and \(r\).

It's easy to understand why saddle-node bifurcations typically have this algebraic form. We just ask ourselves: how can two fixed points of \(\dot{x}=f(x)\) collide and disappear as a parameter \(r\) is varied? Graphically, fixed points occur where the graph of \(f(x)\) intersects the \(x\)-axis. For a saddle-node bifurcation to be possible, we need two nearby roots of \(f(x)\); this means \(f(x)\) must look locally "bowl-shaped" or parabolic (Figure 3.1.7). Now we use a microscope to zoom in on the behavior near the bifurcation. As \(r\) varies, we see a parabola intersecting the \(x\)-axis, then becoming tangent to it, and then failing to intersect. This is exactly the scenario in the prototypical Figure 3.1.1. Here's a more algebraic version of the same argument.
We regard \(f\) as a function of both \(x\) and \(r\), and examine the behavior of \(\dot{x}=f(x,r)\) near the bifurcation at \(x=x^{*}\) and \(r=r_{c}\). Taylor's expansion yields \[\dot{x}=f(x,r)=f(x^{*},r_{c})+(x-x^{*})\frac{\partial f}{\partial x}\bigg|_{(x^{*},r_{c})}+(r-r_{c})\frac{\partial f}{\partial r}\bigg|_{(x^{*},r_{c})}+\tfrac{1}{2}(x-x^{*})^{2}\frac{\partial^{2}f}{\partial x^{2}}\bigg|_{(x^{*},r_{c})}+\cdots\] where we have neglected quadratic terms in \((r-r_{c})\) and cubic terms in \((x-x^{*})\). Two of the terms in this equation vanish: \(f(x^{*},r_{c})=0\) since \(x^{*}\) is a fixed point, and \(\partial f/\partial x\big|_{(x^{*},r_{c})}=0\) by the tangency condition of a saddle-node bifurcation. Thus \[\dot{x}=a(r-r_{c})+b(x-x^{*})^{2}+\cdots \tag{3}\] where \(a=\partial f/\partial r\big|_{(x^{*},r_{c})}\) and \(b=\tfrac{1}{2}\,\partial^{2}f/\partial x^{2}\big|_{(x^{*},r_{c})}\). Equation (3) agrees with the form of our prototypical examples. (We are assuming that \(a,b\neq 0\), which is the typical case; for instance, it would be a very special situation if the second derivative \(\partial^{2}f/\partial x^{2}\) also happened to vanish at the fixed point.)

What we have been calling prototypical examples are more conventionally known as _normal forms_ for the saddle-node bifurcation. There is much, much more to normal forms than we have indicated here. We will be seeing their importance throughout this book. For a more detailed and precise discussion, see Guckenheimer and Holmes (1983) or Wiggins (1990).

### 3.2 Transcritical Bifurcation

There are certain scientific situations where a fixed point must exist for all values of a parameter and can never be destroyed. For example, in the logistic equation and other simple models for the growth of a single species, there is a fixed point at zero population, regardless of the value of the growth rate.
However, such a fixed point may _change its stability_ as the parameter is varied. The transcritical bifurcation is the standard mechanism for such changes in stability. The normal form for a transcritical bifurcation is \[\dot{x}=rx-x^{2}. \tag{1}\] This looks like the logistic equation of Section 2.3, but now we allow \(x\) and \(r\) to be either positive or negative. Figure 3.2.1 shows the vector field as \(r\) varies. Note that there is a fixed point at \(x^{*}=0\) for _all_ values of \(r\).

For \(r<0\), there is an unstable fixed point at \(x^{*}=r\) and a stable fixed point at \(x^{*}=0\). As \(r\) increases, the unstable fixed point approaches the origin, and coalesces with it when \(r=0\). Finally, when \(r>0\), the origin has become unstable, and \(x^{*}=r\) is now stable. Some people say that an _exchange of stabilities_ has taken place between the two fixed points. Please note the important difference between the saddle-node and transcritical bifurcations: in the transcritical case, the two fixed points don't disappear after the bifurcation--instead they just switch their stability.

Figure 3.2.2 shows the bifurcation diagram for the transcritical bifurcation. As in Figure 3.1.4, the parameter \(r\) is regarded as the independent variable, and the fixed points \(x^{*}=0\) and \(x^{*}=r\) are shown as dependent variables.

**Example 3.2.1:** Show that the first-order system \(\dot{x}=x(1-x^{2})-a(1-e^{-bx})\) undergoes a transcritical bifurcation at \(x=0\) when the parameters \(a\), \(b\) satisfy a certain equation, to be determined. (This equation defines a _bifurcation curve_ in the \((a,b)\) parameter space.) Then find an approximate formula for the fixed point that bifurcates from \(x=0\), assuming that the parameters are close to the bifurcation curve.

_Solution:_ Note that \(x=0\) is a fixed point for all \((a,b)\). This makes it plausible that the fixed point will bifurcate transcritically, if it bifurcates at all.
For small \(x\), we find \[1-e^{-bx}=1-\left[1-bx+\tfrac{1}{2}b^{2}x^{2}+O(x^{3})\right]=bx-\tfrac{1}{2}b^{2}x^{2}+O(x^{3})\] and so \[\dot{x}=x-a(bx-\tfrac{1}{2}b^{2}x^{2})+O(x^{3})=(1-ab)x+(\tfrac{1}{2}ab^{2})x^{2}+O(x^{3}).\] Hence a transcritical bifurcation occurs when \(ab=1\); this is the equation for the bifurcation curve. The nonzero fixed point is given by the solution of \(1-ab+(\tfrac{1}{2}ab^{2})x\approx 0\), i.e., \[x^{*}\approx\frac{2(ab-1)}{ab^{2}}\,.\] This formula is approximately correct only if \(x^{*}\) is small, since our series expansions are based on the assumption of small \(x\). Thus the formula holds only when \(ab\) is close to \(1\), which means that the parameters must be close to the bifurcation curve.

**Example 3.2.2:** Analyze the dynamics of \(\dot{x}=r\ln x+x-1\) near \(x=1\), and show that the system undergoes a transcritical bifurcation at a certain value of \(r\). Then find new variables \(X\) and \(R\) such that the system reduces to the approximate normal form \(\dot{X}\approx RX-X^{2}\) near the bifurcation.

_Solution:_ First note that \(x=1\) is a fixed point for all values of \(r\). Since we are interested in the dynamics near this fixed point, we introduce a new variable \(u=x-1\), where \(u\) is small. Then \[\dot{u}=\dot{x}=r\ln(1+u)+u=r\big[u-\tfrac{1}{2}u^{2}+O(u^{3})\big]+u=(r+1)u-\tfrac{1}{2}ru^{2}+O(u^{3}).\] Hence a transcritical bifurcation occurs at \(r_{c}=-1\). To put this equation into normal form, we first need to get rid of the coefficient of \(u^{2}\). Let \(u=av\), where \(a\) will be chosen later. Then the equation for \(v\) is \[\dot{v}=(r+1)v-(\tfrac{1}{2}ra)v^{2}+O(v^{3}).\] So if we choose \(a=2/r\), the equation becomes \[\dot{v}=(r+1)v-v^{2}+O(v^{3}).\] Now if we let \(R=r+1\) and \(X=v\), we have achieved the approximate normal form \(\dot{X}\approx RX-X^{2}\), where cubic terms of order \(O(X^{3})\) have been neglected.
In terms of the original variables, \(X=v=u/a=\tfrac{1}{2}r(x-1)\). To be a bit more accurate, the theory of normal forms assures us that we can find a change of variables such that the system becomes \(\dot{X}=RX-X^{2}\), with _strict_, rather than approximate, equality. Our solution above gives an approximation to the necessary change of variables. For careful treatments of normal form theory, see the books of Guckenheimer and Holmes (1983), Wiggins (1990), or Manneville (1990).

### 3.3 Laser Threshold

Now it's time to apply our mathematics to a scientific example. We analyze an extremely simplified model for a laser, following the treatment given by Haken (1983).

#### Physical Background

We are going to consider a particular type of laser known as a solid-state laser, which consists of a collection of special "laser-active" atoms embedded in a solid-state matrix, bounded by partially reflecting mirrors at either end. An external energy source is used to excite or "pump" the atoms out of their ground states (Figure 3.3.1). Each atom can be thought of as a little antenna radiating energy. When the pumping is relatively weak, the laser acts just like an ordinary _lamp_: the excited atoms oscillate independently of one another and emit randomly phased light waves.

Now suppose we increase the strength of the pumping. At first nothing different happens, but then suddenly, when the pump strength exceeds a certain threshold, the atoms begin to oscillate in phase--the lamp has turned into a _laser_. Now the trillions of little antennas act like one giant antenna and produce a beam of radiation that is much more coherent and intense than that produced below the laser threshold. This sudden onset of coherence is amazing, considering that the atoms are being excited completely at random by the pump! Hence the process is _self-organizing_: the coherence develops because of a cooperative interaction among the atoms themselves.
#### Model

A proper explanation of the laser phenomenon would require us to delve into quantum mechanics. See Milonni and Eberly (1988) for an intuitive discussion. Instead we consider a simplified model of the essential physics (Haken 1983, p. 127). The dynamical variable is the number of photons \(n(t)\) in the laser field. Its rate of change is given by \[\dot{n}=\text{gain}-\text{loss}=GnN-kn.\] The gain term comes from the process of _stimulated emission_, in which photons stimulate excited atoms to emit additional photons. Because this process occurs via random encounters between photons and excited atoms, it occurs at a rate proportional to \(n\) and to the number of excited atoms, denoted by \(N(t)\). The parameter \(G>0\) is known as the gain coefficient. The loss term models the escape of photons through the endfaces of the laser. The parameter \(k>0\) is a rate constant; its reciprocal \(\tau=1/k\) represents the typical lifetime of a photon in the laser.

Now comes the key physical idea: after an excited atom emits a photon, it drops down to a lower energy level and is no longer excited. Thus \(N\) decreases by the emission of photons. To capture this effect, we need to write an equation relating \(N\) to \(n\). Suppose that in the absence of laser action, the pump keeps the number of excited atoms fixed at \(N_{0}\). Then the _actual_ number of excited atoms will be reduced by the laser process. Specifically, we assume \[N(t)=N_{0}-\alpha n,\] where \(\alpha>0\) is the rate at which atoms drop back to their ground states. Then \[\dot{n}=Gn(N_{0}-\alpha n)-kn=(GN_{0}-k)n-(\alpha G)n^{2}.\] We're finally on familiar ground--this is a first-order system for \(n(t)\). Figure 3.3.2 shows the corresponding vector field for different values of the pump strength \(N_{0}\). Note that only positive values of \(n\) are physically meaningful.
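The threshold behavior is easy to check numerically. In this illustrative sketch (the parameter values \(G=k=\alpha=1\) are assumptions for demonstration, not from the text), we integrate \(\dot{n}=(GN_{0}-k)n-(\alpha G)n^{2}\) by forward Euler for pump strengths below and above threshold:

```python
def photon_number(N0, G=1.0, k=1.0, alpha=1.0, n0=0.1, dt=0.01, t_max=50.0):
    """Integrate ndot = (G*N0 - k)*n - alpha*G*n**2 by forward Euler."""
    n = n0
    for _ in range(int(t_max / dt)):
        n += dt * ((G * N0 - k) * n - alpha * G * n * n)
    return n

# Threshold is N0 = k/G = 1 for these parameter choices.
print(photon_number(0.5))   # below threshold: lamp, n decays toward 0
print(photon_number(2.0))   # above threshold: laser, n approaches (G*N0 - k)/(alpha*G)
```

Below threshold the photon number dies out; above it, \(n\) settles onto the stable fixed point \(n^{*}=(GN_{0}-k)/\alpha G\), which equals 1 for these parameters.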
When \(N_{0}<k/G\), the fixed point at \(n^{*}=0\) is stable. This means that there is no stimulated emission and the laser acts like a lamp. As the pump strength \(N_{0}\) is increased, the system undergoes a transcritical bifurcation when \(N_{0}=k/G\). For \(N_{0}>k/G\), the origin loses stability and a stable fixed point appears at \(n^{*}=(GN_{0}-k)/\alpha G>0\), corresponding to spontaneous laser action. Thus \(N_{0}=k/G\) can be interpreted as the _laser threshold_ in this model. Figure 3.3.3 summarizes our results.

Although this model correctly predicts the existence of a threshold, it ignores the dynamics of the excited atoms, the existence of spontaneous emission, and several other complications. See Exercises 3.3.1 and 3.3.2 for improved models.

### 3.4 Pitchfork Bifurcation

We turn now to a third kind of bifurcation, the so-called pitchfork bifurcation. This bifurcation is common in physical problems that have a _symmetry_. For example, many problems have a spatial symmetry between left and right. In such cases, fixed points tend to appear and disappear in symmetrical pairs. In the buckling example of Figure 3.0.1, the beam is stable in the vertical position if the load is small. In this case there is a stable fixed point corresponding to zero deflection. But if the load exceeds the buckling threshold, the beam may buckle to _either_ the left or the right. The vertical position has gone unstable, and two new symmetrical fixed points, corresponding to left- and right-buckled configurations, have been born.

There are two very different types of pitchfork bifurcation. The simpler type is called _supercritical_, and will be discussed first.

### Supercritical Pitchfork Bifurcation

The normal form of the supercritical pitchfork bifurcation is \[\dot{x}=rx-x^{3}\,. \tag{1}\] Note that this equation is _invariant_ under the change of variables \(x\to-x\).
That is, if we replace \(x\) by \(-x\) and then cancel the resulting minus signs on both sides of the equation, we get (1) back again. This invariance is the mathematical expression of the left-right symmetry mentioned earlier. (More technically, one says that the vector field is _equivariant_, but we'll use the more familiar language.)

Figure 3.4.1 shows the vector field for different values of \(r\). When \(r<0\), the origin is the only fixed point, and it is stable. When \(r=0\), the origin is still stable, but much more weakly so, since the linearization vanishes. Now solutions no longer decay exponentially fast--instead the decay is a much slower algebraic function of time (recall Exercise 2.4.9). This lethargic decay is called _critical slowing down_ in the physics literature. Finally, when \(r>0\), the origin has become unstable. Two new stable fixed points appear on either side of the origin, symmetrically located at \(x^{*}=\pm\sqrt{r}\). The reason for the term "pitchfork" becomes clear when we plot the bifurcation diagram (Figure 3.4.2). Actually, pitchfork trifurcation might be a better word!

**Example 3.4.1:** Equations similar to \(\dot{x}=-x+\beta\tanh x\) arise in statistical mechanical models of magnets and neural networks (see Exercise 3.6.7 and Palmer 1989). Show that this equation undergoes a supercritical pitchfork bifurcation as \(\beta\) is varied. Then give a _numerically accurate_ plot of the fixed points for each \(\beta\).

_Solution:_ We use the strategy of Example 3.1.2 to find the fixed points. The graphs of \(y=x\) and \(y=\beta\tanh x\) are shown in Figure 3.4.3; their intersections correspond to fixed points. The key thing to realize is that as \(\beta\) increases, the \(\tanh\) curve becomes steeper at the origin (its slope there is \(\beta\)). Hence for \(\beta<1\) the origin is the only fixed point.
A pitchfork bifurcation occurs at \(\beta=1\), \(x^{*}=0\), when the \(\tanh\) curve develops a slope of 1 at the origin. Finally, when \(\beta>1\), two new stable fixed points appear, and the origin becomes unstable.

Now we want to compute the fixed points \(x^{*}\) for each \(\beta\). Of course, one fixed point always occurs at \(x^{*}=0\); we are looking for the other, nontrivial fixed points. One approach is to solve the equation \(x^{*}=\beta\tanh x^{*}\) numerically, using the Newton-Raphson method or some other root-finding scheme. (See Press et al. (2007) for a friendly and informative discussion of numerical methods.) But there's an easier way, which comes from changing our point of view. Instead of studying the dependence of \(x^{*}\) on \(\beta\), we think of \(x^{*}\) as the _independent_ variable, and then compute \(\beta=x^{*}/\tanh x^{*}\). This gives us a table of pairs \((x^{*},\beta)\). For each pair, we plot \(\beta\) horizontally and \(x^{*}\) vertically. This yields the bifurcation diagram (Figure 3.4.4).

The shortcut used here exploits the fact that \(f(x,\beta)=-x+\beta\tanh x\) depends more simply on \(\beta\) than on \(x\). This is frequently the case in bifurcation problems--the dependence on the control parameter is usually simpler than the dependence on \(x\).

**Example 3.4.2:** Plot the potential \(V(x)\) for the system \(\dot{x}=rx-x^{3}\), for the cases \(r<0\), \(r=0\), and \(r>0\).

_Solution:_ Recall from Section 2.7 that the potential for \(\dot{x}=f(x)\) is defined by \(f(x)=-dV/dx\). Hence we need to solve \(-dV/dx=rx-x^{3}\). Integration yields \(V(x)=-\frac{1}{2}rx^{2}+\frac{1}{4}x^{4}\), where we neglect the arbitrary constant of integration. The corresponding graphs are shown in Figure 3.4.5. When \(r<0\), there is a quadratic minimum at the origin. At the bifurcation value \(r=0\), the minimum becomes a much flatter quartic.
For \(r>0\), a local _maximum_ appears at the origin, and a symmetric pair of minima occur to either side of it.

### Subcritical Pitchfork Bifurcation

In the supercritical case \(\dot{x}=rx-x^{3}\) discussed above, the cubic term is _stabilizing_: it acts as a restoring force that pulls \(x(t)\) back toward \(x=0\). If instead the cubic term were _destabilizing_, as in \[\dot{x}=rx+x^{3}, \tag{2}\] then we'd have a _subcritical_ pitchfork bifurcation. Figure 3.4.6 shows the bifurcation diagram.

Compared to Figure 3.4.2, the pitchfork is inverted. The nonzero fixed points \(x^{*}=\pm\sqrt{-r}\) are _unstable_, and exist only _below_ the bifurcation (\(r<0\)), which motivates the term "subcritical." More importantly, the origin is stable for \(r<0\) and unstable for \(r>0\), as in the supercritical case, but now the instability for \(r>0\) is not opposed by the cubic term--in fact the cubic term lends a helping hand in driving the trajectories out to infinity! This effect leads to _blow-up_: one can show that \(x(t)\to\pm\infty\) in finite time, starting from any initial condition \(x_{0}\neq 0\) (Exercise 2.5.3).

In real physical systems, such an explosive instability is usually opposed by the stabilizing influence of higher-order terms. Assuming that the system is still symmetric under \(x\to-x\), the first stabilizing term must be \(x^{5}\). Thus the canonical example of a system with a subcritical pitchfork bifurcation is \[\dot{x}=rx+x^{3}-x^{5}. \tag{3}\] There's no loss in generality in assuming that the coefficients of \(x^{3}\) and \(x^{5}\) are 1 (Exercise 3.5.8). The detailed analysis of (3) is left to you (Exercises 3.4.14 and 3.4.15). But we will summarize the main results here. Figure 3.4.7 shows the bifurcation diagram for (3).
For small \(x\), the picture looks just like Figure 3.4.6: the origin is locally stable for \(r<0\), and two backward-bending branches of unstable fixed points bifurcate from the origin when \(r=0\). The new feature, due to the \(x^{5}\) term, is that the unstable branches turn around and become stable at \(r=r_{s}\), where \(r_{s}<0\). These stable _large-amplitude_ branches exist for all \(r>r_{s}\). There are several things to note about Figure 3.4.7:

1. In the range \(r_{s}<r<0\), two qualitatively different stable states coexist, namely the origin and the large-amplitude fixed points. The initial condition \(x_{0}\) determines which fixed point is approached as \(t\rightarrow\infty\). One consequence is that the origin is stable to small perturbations, but not to large ones--in this sense the origin is _locally_ stable, but not _globally_ stable.

2. The existence of different stable states allows for the possibility of _jumps_ and _hysteresis_ as \(r\) is varied. Suppose we start the system in the state \(x^{*}=0\), and then slowly increase the parameter \(r\) (indicated by an arrow along the \(r\)-axis of Figure 3.4.8). Then the state remains at the origin until \(r=0\), when the origin loses stability. Now the slightest nudge will cause the state to _jump_ to one of the large-amplitude branches. With further increases of \(r\), the state moves out along the large-amplitude branch. If \(r\) is now decreased, the state remains on the large-amplitude branch, even when \(r\) is decreased below 0! We have to lower \(r\) even further (down past \(r_{s}\)) to get the state to jump back to the origin. This lack of reversibility as a parameter is varied is called _hysteresis_.

3. The bifurcation at \(r_{s}\) is a saddle-node bifurcation, in which stable and unstable fixed points are born "out of the clear blue sky" as \(r\) is increased (see Section 3.1).
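The jump-and-hysteresis scenario in item 2 can be reproduced numerically. The sketch below (forward Euler; the sweep range, integration times, and the tiny "nudge" are illustrative assumptions, not from the text) sweeps \(r\) up and then back down for \(\dot{x}=rx+x^{3}-x^{5}\), recording the state the system settles into at each \(r\):

```python
def settle(x, r, dt=0.01, t_max=200.0):
    """Integrate xdot = r*x + x**3 - x**5 to a nearby attractor (forward Euler)."""
    for _ in range(int(t_max / dt)):
        x += dt * (r * x + x**3 - x**5)
    return x

rs = [round(-0.3 + 0.02 * i, 2) for i in range(26)]   # r from -0.3 to 0.2

up, x = {}, 0.0
for r in rs:                     # slowly increase r
    x = settle(x + 1e-3, r)      # the tiny nudge mimics unavoidable noise
    up[r] = x

down = {}
for r in reversed(rs):           # now slowly decrease r
    x = settle(x + 1e-3, r)
    down[r] = x

print(up[-0.1], down[-0.1])      # near 0 on the way up, on the large branch coming down
```

On the way up the state stays near the origin until \(r\) passes 0 and then jumps to the large-amplitude branch; on the way down it rides that branch until \(r\) drops past \(r_{s}=-1/4\), reproducing the hysteresis loop of Figure 3.4.8.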
### Terminology

As usual in bifurcation theory, there are several other names for the bifurcations discussed here. The supercritical pitchfork is sometimes called a forward bifurcation, and is closely related to a continuous or second-order phase transition in statistical mechanics. The subcritical bifurcation is sometimes called an inverted or backward bifurcation, and is related to discontinuous or first-order phase transitions. In the engineering literature, the supercritical bifurcation is sometimes called soft or safe, because the nonzero fixed points are born at small amplitude; in contrast, the subcritical bifurcation is hard or dangerous, because of the jump from zero to large amplitude.

### 3.5 Overdamped Bead on a Rotating Hoop

In this section we analyze a classic problem from first-year physics, the bead on a rotating hoop. This problem provides an example of a bifurcation in a mechanical system. It also illustrates the subtleties involved in replacing Newton's law, which is a second-order equation, by a simpler first-order equation.

The mechanical system is shown in Figure 3.5.1. A bead of mass \(m\) slides along a wire hoop of radius \(r\). The hoop is constrained to rotate at a constant angular velocity \(\omega\) about its vertical axis. The problem is to analyze the motion of the bead, given that it is acted on by both gravitational and centrifugal forces. This is the usual statement of the problem, but now we want to add a new twist: suppose that there's also a frictional force on the bead that opposes its motion. To be specific, imagine that the whole system is immersed in a vat of molasses or some other very viscous fluid, and that the friction is due to viscous damping.

Let \(\phi\) be the angle between the bead and the downward vertical direction. By convention, we restrict \(\phi\) to the range \(-\pi<\phi\leq\pi\), so there's only one angle for each point on the hoop.
Also, let \(\rho=r\sin\phi\) denote the distance of the bead from the vertical axis. Then the coordinates are as shown in Figure 3.5.2.

Now we write Newton's law for the bead. There's a downward gravitational force \(mg\), a sideways centrifugal force \(m\rho\omega^{2}\), and a tangential damping force \(b\dot{\phi}\). (The constants \(g\) and \(b\) are taken to be positive; negative signs will be added later as needed.) The hoop is assumed to be rigid, so we only have to resolve the forces along the tangential direction, as shown in Figure 3.5.3. After substituting \(\rho=r\sin\phi\) in the centrifugal term, and recalling that the tangential acceleration is \(r\ddot{\phi}\), we obtain the governing equation \[mr\ddot{\phi}=-b\dot{\phi}-mg\sin\phi+mr\omega^{2}\sin\phi\cos\phi. \tag{1}\]

This is a _second-order_ differential equation, since the second derivative \(\ddot{\phi}\) is the highest one that appears. We are not yet equipped to analyze second-order equations, so we would like to find some conditions under which we can safely neglect the \(mr\ddot{\phi}\) term. Then (1) reduces to a first-order equation, and we can apply our machinery to it. Of course, this is a dicey business: we can't just neglect terms because we feel like it! But we will for now, and then at the end of this section we'll try to find a regime where our approximation is valid.

### Analysis of the First-Order System

Our concern now is with the first-order system \[b\dot{\phi}=-mg\sin\phi+mr\omega^{2}\sin\phi\cos\phi=mg\sin\phi\left(\frac{r\omega^{2}}{g}\cos\phi-1\right). \tag{2}\] The fixed points of (2) correspond to equilibrium positions for the bead. What's your intuition about where such equilibria can occur? We would expect the bead to remain at rest if placed at the top or the bottom of the hoop. Can other fixed points occur? And what about stability? Is the bottom always stable?
Equation (2) shows that there are always fixed points where \(\sin\phi=0\), namely \(\phi^{*}=0\) (the bottom of the hoop) and \(\phi^{*}=\pi\) (the top). The more interesting result is that there are two _additional_ fixed points if \[\frac{r\omega^{2}}{g}>1,\] that is, _if the hoop is spinning fast enough._ These fixed points satisfy \(\phi^{*}=\pm\cos^{-1}(g/r\omega^{2})\). To visualize them, we introduce a parameter \[\gamma=\frac{r\omega^{2}}{g}\] and solve \(\cos\phi^{*}=1/\gamma\) graphically. We plot \(\cos\phi\) vs. \(\phi\), and look for intersections with the constant function \(1/\gamma\), shown as a horizontal line in Figure 3.5.4. For \(\gamma<1\) there are no intersections, whereas for \(\gamma>1\) there is a symmetrical pair of intersections to either side of \(\phi^{*}=0\). As \(\gamma\to\infty\), these intersections approach \(\pm\pi/2\). Figure 3.5.5 plots the fixed points on the hoop for the cases \(\gamma<1\) and \(\gamma>1\).

To summarize our results so far, let's plot _all_ the fixed points as a function of the parameter \(\gamma\) (Figure 3.5.6). As usual, solid lines denote stable fixed points and broken lines denote unstable fixed points. We now see that a _supercritical pitchfork bifurcation_ occurs at \(\gamma=1\). It's left to you to check the stability of the fixed points, using linear stability analysis or graphical methods (Exercise 3.5.2).

Here's the physical interpretation of the results: When \(\gamma<1\), the hoop is rotating slowly and the centrifugal force is too weak to balance the force of gravity. Thus the bead slides down to the bottom and stays there. But if \(\gamma>1\), the hoop is spinning fast enough that the bottom becomes unstable. Since the centrifugal force _grows_ as the bead moves farther from the bottom, any slight displacement of the bead will be _amplified_.
The bead is therefore pushed up the hoop until gravity balances the centrifugal force; this balance occurs at \(\phi^{*}=\pm\cos^{-1}(g/r\omega^{2})\). Which of these two fixed points is actually selected depends on the initial disturbance. Even though the two fixed points are entirely symmetrical, an asymmetry in the initial conditions will lead to one of them being chosen--physicists sometimes refer to these as _symmetry-broken_ solutions. In other words, the solution has less symmetry than the governing equation.

What _is_ the symmetry of the governing equation? Clearly the left and right halves of the hoop are physically equivalent--this is reflected by the invariance of (1) and (2) under the change of variables \(\phi\to-\phi\). As we mentioned in Section 3.4, pitchfork bifurcations are to be expected in situations where such a symmetry exists.

### Dimensional Analysis and Scaling

Now we need to address the question: When is it valid to neglect the inertia term \(mr\ddot{\phi}\) in (1)? At first sight the limit \(m\to 0\) looks promising, but then we notice that we're throwing out the baby with the bathwater: the centrifugal and gravitational terms vanish in this limit too! So we have to be more careful.

In problems like this, it is helpful to express the equation in _dimensionless_ form (at present, all the terms in (1) have the dimensions of force). The advantage of a dimensionless formulation is that we know how to define _small_--it means "much less than 1." Furthermore, nondimensionalizing the equation reduces the number of parameters by lumping them together into _dimensionless groups_. This reduction always simplifies the analysis. For an excellent introduction to dimensional analysis, see Lin and Segel (1988).

There are often several ways to nondimensionalize an equation, and the best choice might not be clear at first. Therefore we proceed in a flexible fashion.
We define a dimensionless time \(\tau\) by \[\tau=\frac{t}{T}\] where \(T\) is a _characteristic time scale_ to be chosen later. When \(T\) is chosen correctly, the new derivatives \(d\phi/d\tau\) and \(d^{2}\phi/d\tau^{2}\) should be \(O(1)\), i.e., of order unity. To express these new derivatives in terms of the old ones, we use the chain rule: \[\dot{\phi}=\frac{d\phi}{dt}=\frac{d\phi}{d\tau}\frac{d\tau}{dt}=\frac{1}{T}\frac{d\phi}{d\tau}\] and similarly \[\ddot{\phi}=\frac{1}{T^{2}}\frac{d^{2}\phi}{d\tau^{2}}.\] (The easy way to remember these formulas is to formally substitute \(T\tau\) for \(t\).) Hence (1) becomes \[\frac{mr}{T^{2}}\frac{d^{2}\phi}{d\tau^{2}}=-\frac{b}{T}\frac{d\phi}{d\tau}-mg\sin\phi+mr\omega^{2}\sin\phi\cos\phi.\] Now since this equation is a balance of forces, we nondimensionalize it by dividing by a force \(mg\). This yields the dimensionless equation \[\left(\frac{r}{gT^{2}}\right)\frac{d^{2}\phi}{d\tau^{2}}=-\left(\frac{b}{mgT}\right)\frac{d\phi}{d\tau}-\sin\phi+\left(\frac{r\omega^{2}}{g}\right)\sin\phi\cos\phi. \tag{3}\] Each of the terms in parentheses is a dimensionless group. We recognize the group \(r\omega^{2}/g\) in the last term--that's our old friend \(\gamma\) from earlier in the section.

We are interested in the regime where the left-hand side of (3) is negligible compared to all the other terms, and where all the terms on the right-hand side are of comparable size. Since the derivatives are \(O(1)\) by assumption, and \(\sin\phi\sim O(1)\), we see that we need \[\frac{b}{mgT}\sim O(1),\quad\text{and}\quad\frac{r}{gT^{2}}\ll 1.\] The first of these requirements sets the time scale \(T\): a natural choice is \[T=\frac{b}{mg}.\] Then the condition \(r/gT^{2}\ll 1\) becomes \[\frac{r}{g}\left(\frac{mg}{b}\right)^{2}\ll 1, \tag{4}\] or equivalently, \[b^{2}\gg m^{2}gr.\] This can be interpreted as saying that the _damping is very strong_, or that the mass is very small, now in a precise sense.
The condition (4) motivates us to introduce a dimensionless group \[\varepsilon=\frac{m^{2}gr}{b^{2}}. \tag{5}\] Then (3) becomes \[\varepsilon\frac{d^{2}\phi}{d\tau^{2}}=-\frac{d\phi}{d\tau}-\sin\phi+\gamma\sin \phi\cos\phi. \tag{6}\] As advertised, the dimensionless equation (6) is simpler than (1): the five parameters \(m,g,r,\omega,\) and \(b\) have been replaced by two dimensionless groups \(\gamma\) and \(\varepsilon\). In summary, our dimensional analysis suggests that in the _overdamped_ limit \(\varepsilon\to 0\), (6) should be well approximated by the first-order system \[\frac{d\phi}{d\tau}=f(\phi) \tag{7}\] where \[f(\phi) = -\sin\phi+\gamma\sin\phi\cos\phi\] \[= \sin\phi\ (\gamma\cos\phi-1).\]

### A Paradox

Unfortunately, _there is something fundamentally wrong with our idea of replacing a second-order equation by a first-order equation_. The trouble is that a second-order equation requires _two_ initial conditions, whereas a first-order equation has only _one_. In our case, the bead's motion is determined by its initial position and velocity. These two quantities can be chosen completely independently of each other. But that's not true for the first-order system: given the initial position, the initial velocity is dictated by the equation \(d\phi/d\tau=f(\phi)\). Thus the solution to the first-order system will not, in general, be able to satisfy _both_ initial conditions. We seem to have run into a paradox. Is (7) valid in the overdamped limit or not? If it is valid, how can we satisfy the two arbitrary initial conditions demanded by (6)? The resolution of the paradox requires us to analyze the second-order system (6). We haven't dealt with second-order systems before--that's the subject of Chapter 5. But read on if you're curious; some simple ideas are all we need to finish the problem.
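Since (6) is just (1) rewritten in dimensionless variables, the two forms should describe the identical motion. As a sanity check, here is a sketch that integrates both versions with a simple Runge-Kutta stepper and compares \(\phi\) at corresponding times \(t=T\tau\); the parameter values are made up for illustration, chosen so that \(\gamma>1\) and \(\varepsilon\) is small.

```python
import math

def rk4_step(deriv, t, state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = deriv(t, state)
    k2 = deriv(t + dt/2, [s + dt/2*k for s, k in zip(state, k1)])
    k3 = deriv(t + dt/2, [s + dt/2*k for s, k in zip(state, k2)])
    k4 = deriv(t + dt,   [s + dt*k   for s, k in zip(state, k3)])
    return [s + dt/6*(a + 2*p + 2*q + d)
            for s, a, p, q, d in zip(state, k1, k2, k3, k4)]

# Illustrative (made-up) parameter values:
m, g, r, b, w = 0.1, 9.8, 0.5, 2.0, 6.0
T     = b/(m*g)            # slow time scale T = b/mg
eps   = m**2*g*r/b**2      # dimensionless group (5)
gamma = r*w**2/g           # dimensionless group gamma = r w^2 / g

def dimensional(t, s):
    # m r phi'' = -b phi' - m g sin(phi) + m r w^2 sin(phi) cos(phi)   ... (1)
    phi, phidot = s
    acc = (-b*phidot - m*g*math.sin(phi)
           + m*r*w**2*math.sin(phi)*math.cos(phi)) / (m*r)
    return [phidot, acc]

def dimensionless(tau, s):
    # eps phi'' = -phi' - sin(phi) + gamma sin(phi) cos(phi)           ... (6)
    phi, dphi = s
    return [dphi, (-dphi - math.sin(phi) + gamma*math.sin(phi)*math.cos(phi))/eps]

t_end, n = 2.0, 20000
dt, dtau = t_end/n, (t_end/T)/n

s1, s2 = [0.3, 0.0], [0.3, 0.0]   # phi(0) = 0.3, released from rest
for i in range(n):
    s1 = rk4_step(dimensional,   i*dt,   s1, dt)
    s2 = rk4_step(dimensionless, i*dtau, s2, dtau)

print(abs(s1[0] - s2[0]))   # the two formulations agree
```

Note that the initial dimensionless velocity is \(\phi'(0)=T\dot{\phi}(0)=0\), so the two initial conditions correspond exactly.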
### Phase Plane Analysis

Throughout Chapters 2 and 3, we have exploited the idea that a first-order system \(\dot{x}=f(x)\) can be regarded as a vector field on a line. By analogy, the _second_-order system (6) can be regarded as a vector field on a _plane_, the so-called _phase plane_. The plane is spanned by two axes, one for the angle \(\phi\) and one for the angular velocity \(d\phi/d\tau\). To simplify the notation, let \[\Omega=\phi^{\prime}\equiv d\phi\,/\,d\tau\] where prime denotes differentiation with respect to \(\tau\). Then an initial condition for (6) corresponds to a point \((\phi_{0},\Omega_{0})\) in the phase plane (Figure 3.5.7). As time evolves, the phase point \((\phi(\tau),\Omega(\tau))\) moves around in the phase plane along a _trajectory_ determined by the solution to (6). Our goal now is to see what those trajectories actually look like. As before, the key idea is that _the differential equation can be interpreted as a vector field on the phase space._ To convert (6) into a vector field, we first rewrite it as \[\varepsilon\Omega^{\prime}=f(\phi)-\Omega.\] Along with the definition \(\phi^{\prime}=\Omega\), this yields the _vector field_ \[\phi^{\prime} =\Omega \tag{8a}\] \[\Omega^{\prime} =\frac{1}{\varepsilon}(f(\phi)-\Omega). \tag{8b}\] We interpret the vector \((\phi^{\prime},\Omega^{\prime})\) at the point \((\phi,\Omega)\) as the local velocity of a phase fluid flowing steadily on the plane. Note that the velocity vector now has two components, one in the \(\phi\)-direction and one in the \(\Omega\)-direction. To visualize the trajectories, we just imagine how the phase point would move as it is carried along by the phase fluid.

Figure 3.5.7

In general, the pattern of trajectories would be difficult to picture, but the present case is simple because we are only interested in the limit \(\varepsilon\to 0\).
In this limit, _all trajectories slam straight up or down onto the curve \(C\) defined by \(f(\phi)=\Omega\), and then slowly ooze along this curve until they reach a fixed point_ (Figure 3.5.8). To arrive at this striking conclusion, let's do an order-of-magnitude calculation. Suppose that the phase point lies off the curve \(C\). For instance, suppose \((\phi,\Omega)\) lies an \(O(1)\) distance below the curve \(C\), i.e., \(\Omega<f(\phi)\) and \(f(\phi)-\Omega\approx O(1)\). Then (8b) shows that \(\Omega^{\prime}\) is enormously positive: \(\Omega^{\prime}\approx O(1/\varepsilon)>>1\). Thus the phase point zaps like lightning up to the region where \(f(\phi)-\Omega\approx O(\varepsilon)\). In the limit \(\varepsilon\to 0\), this region is indistinguishable from \(C\). Once the phase point is on \(C\), it evolves according to \(\Omega\approx f(\phi)\); that is, it approximately satisfies the first-order equation \(\phi^{\prime}=f(\phi)\). Our conclusion is that a typical trajectory is made of two parts: a rapid initial _transient_, during which the phase point zaps onto the curve where \(\phi^{\prime}=f(\phi)\), followed by a much slower drift along this curve. Now we see how the paradox is resolved: The second-order system (6) _does_ behave like the first-order system (7), but only after a rapid initial transient. During this transient, it is _not_ correct to neglect the term \(\varepsilon d^{2}\phi/d\tau^{2}\). The problem with our earlier approach is that we used only a single time scale \(T=b/mg\); this time scale is characteristic of the slow drift process, but not of the rapid transient (Exercise 3.5.5).

Figure 3.5.8

### A Singular Limit

The difficulty we have encountered here occurs throughout science and engineering. In some limit of interest (here, the limit of strong damping), the term containing the highest order derivative drops out of the governing equation.
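This fast-slow picture is easy to check numerically. The sketch below integrates the vector field (8a,b) for a small \(\varepsilon\), starting an \(O(1)\) distance below the curve \(C\), and records how quickly the distance \(|\Omega-f(\phi)|\) collapses; the values of \(\gamma\), \(\varepsilon\), and the initial condition are made up for illustration.

```python
import math

gamma, eps = 2.0, 0.01

def f(phi):
    # f(phi) = sin(phi) (gamma cos(phi) - 1); the curve C is Omega = f(phi)
    return math.sin(phi)*(gamma*math.cos(phi) - 1.0)

def step(phi, om, dtau):
    """One RK4 step of the vector field (8a,b)."""
    def d(s):
        p, o = s
        return (o, (f(p) - o)/eps)
    k1 = d((phi, om))
    k2 = d((phi + dtau/2*k1[0], om + dtau/2*k1[1]))
    k3 = d((phi + dtau/2*k2[0], om + dtau/2*k2[1]))
    k4 = d((phi + dtau*k3[0],   om + dtau*k3[1]))
    phi += dtau/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    om  += dtau/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return phi, om

phi, om = 1.0, -2.0          # starts well below C, since f(1.0) is small
dtau, tau, tau_hit = 1e-3, 0.0, None
while tau < 1.0:
    phi, om = step(phi, om, dtau)
    tau += dtau
    if tau_hit is None and abs(om - f(phi)) < 10*eps:
        tau_hit = tau        # time to reach an O(eps) neighborhood of C

print(tau_hit)               # transient lasts only a time of order eps*ln(1/eps)
print(abs(om - f(phi)))      # afterwards the trajectory hugs C
```

After the collapse, \(\phi\) drifts along \(C\) toward the stable fixed point \(\phi^{*}=\cos^{-1}(1/\gamma)\), exactly as the reduced equation (7) predicts.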
Then the initial conditions or boundary conditions can't be satisfied. Such a limit is often called _singular_. For example, in fluid mechanics, the limit of high Reynolds number is a singular limit; it accounts for the presence of extremely thin "boundary layers" in the flow over airplane wings. In our problem, the rapid transient played the role of a boundary layer--it is a thin layer of _time_ that occurs near the boundary \(t=0\). The branch of mathematics that deals with singular limits is called _singular perturbation theory_. See Jordan and Smith (1987) or Lin and Segel (1988) for an introduction. Another problem with a singular limit will be discussed briefly in Section 7.5.

### 3.6 Imperfect Bifurcations and Catastrophes

As we mentioned earlier, pitchfork bifurcations are common in problems that have a symmetry. For example, in the problem of the bead on a rotating hoop (Section 3.5), there was a perfect symmetry between the left and right sides of the hoop. But in many real-world circumstances, the symmetry is only approximate--an imperfection leads to a slight difference between left and right. We now want to see what happens when such imperfections are present. For example, consider the system \[\dot{x}=h+rx-x^{3}. \tag{1}\] If \(h=0\), we have the normal form for a supercritical pitchfork bifurcation, and there's a perfect symmetry between \(x\) and \(-x\). But this symmetry is broken when \(h\neq 0\); for this reason we refer to \(h\) as an _imperfection parameter_. Equation (1) is a bit harder to analyze than other bifurcation problems we've considered previously, because we have _two_ independent parameters to worry about (\(h\) and \(r\)). To keep things straight, we'll think of \(r\) as fixed, and then examine the effects of varying \(h\). The first step is to analyze the fixed points of (1). These can be found explicitly, but we'd have to invoke the messy formula for the roots of a cubic equation.
It's clearer to use a graphical approach, as in Example 3.1.2. We plot the graphs of \(y=rx-x^{3}\) and \(y=-h\) on the same axes, and look for intersections (Figure 3.6.1). These intersections occur at the fixed points of (1). When \(r\leq 0\), the cubic is monotonically decreasing, and so it intersects the horizontal line \(y=-h\) in exactly one point (Figure 3.6.1a). The more interesting case is \(r>0\); then one, two, or three intersections are possible, depending on the value of \(h\) (Figure 3.6.1b). The critical case occurs when the horizontal line is just _tangent_ to either the local minimum or maximum of the cubic; then we have a _saddle-node bifurcation_. To find the values of \(h\) at which this bifurcation occurs, note that the cubic has a local maximum when \(\frac{d}{dx}(rx-x^{3})=r-3x^{2}=0\). Hence \[x_{\max}=\sqrt{\frac{r}{3}},\] and the value of the cubic at the local maximum is \[rx_{\max}-(x_{\max})^{3}=\frac{2r}{3}\sqrt{\frac{r}{3}}.\] Similarly, the value at the minimum is the negative of this quantity. Hence saddle-node bifurcations occur when \(h=\pm h_{c}(r)\), where \[h_{c}(r)=\frac{2r}{3}\sqrt{\frac{r}{3}}.\] Equation (1) has three fixed points for \(|h|<h_{c}(r)\) and one fixed point for \(|h|>h_{c}(r)\). To summarize the results so far, we plot the _bifurcation curves_ \(h=\pm h_{c}(r)\) in the \((r,h)\) plane (Figure 3.6.2). Note that the two bifurcation curves meet tangentially at \((r,h)=(0,0)\); such a point is called a _cusp point_. We also label the regions that correspond to different numbers of fixed points. Saddle-node bifurcations occur all along the boundary of the regions, except at the cusp point, where we have a _codimension-2 bifurcation_. (This fancy terminology essentially means that we have had to tune _two_ parameters, \(h\) and \(r\), to achieve this type of bifurcation. Until now, all our bifurcations could be achieved by tuning a single parameter, and were therefore _codimension-1_ bifurcations.)
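The fixed-point counts can be verified directly: the fixed points of (1) are the real roots of \(x^{3}-rx-h=0\), a depressed cubic, so the classical discriminant tells us how many there are without any root-finding. A small sketch (parameter values chosen only for illustration):

```python
import math

def h_c(r):
    # tangency value derived above: h_c(r) = (2r/3) sqrt(r/3), for r > 0
    return 2.0*r/3.0*math.sqrt(r/3.0)

def n_fixed_points(r, h):
    # Fixed points of x' = h + r x - x^3 are the real roots of x^3 - r x - h = 0.
    # A depressed cubic x^3 + p x + q has discriminant D = -4 p^3 - 27 q^2:
    # three real roots if D > 0, one if D < 0. Here p = -r and q = -h.
    disc = 4.0*r**3 - 27.0*h**2
    return 3 if disc > 0 else 1

r = 1.0
print(h_c(r))                     # about 0.3849
print(n_fixed_points(r, 0.2))     # |h| < h_c(1): three fixed points
print(n_fixed_points(r, 0.5))     # |h| > h_c(1): one fixed point
print(n_fixed_points(-1.0, 0.5))  # r <= 0: always one fixed point
```

Notice that the discriminant vanishes exactly on the bifurcation curves: \(4r^{3}=27h^{2}\) is the same condition as \(h=\pm(2r/3)\sqrt{r/3}\).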
Figure 3.6.1

Pictures like Figure 3.6.2 will prove very useful in our future work. We will refer to such pictures as _stability diagrams_. They show the different types of behavior that occur as we move around in _parameter space_ (here, the \((r,h)\) plane). Now let's present our results in a more familiar way by showing the bifurcation diagram of \(x\)* vs. \(r\), for fixed \(h\) (Figure 3.6.3). When \(h=0\) we have the usual pitchfork diagram (Figure 3.6.3a) but when \(h\neq 0\), the pitchfork disconnects into two pieces (Figure 3.6.3b). The upper piece consists entirely of stable fixed points, whereas the lower piece has both stable and unstable branches. As we increase \(r\) from negative values, there's no longer a sharp transition at \(r=0\); the fixed point simply glides smoothly along the upper branch. Furthermore, the lower branch of stable points is not accessible unless we make a fairly large disturbance. Alternatively, we could plot \(x\)* vs. \(h\), for fixed \(r\) (Figure 3.6.4).

Figure 3.6.2

Figure 3.6.3

When \(r\leq 0\) there's one stable fixed point for each \(h\) (Figure 3.6.4a). However, when \(r>0\) there are three fixed points when \(|h|<h_{c}(r)\), and one otherwise (Figure 3.6.4b). In the triple-valued region, the middle branch is unstable and the upper and lower branches are stable. Note that these graphs look like Figure 3.6.1 rotated by \(90^{\circ}\). There is one last way to plot the results, which may appeal to you if you like to picture things in three dimensions. This method of presentation contains all of the others as cross sections or projections. If we plot the fixed points \(x\)* above the (\(r\),\(h\)) plane, we get the _cusp catastrophe_ surface shown in Figure 3.6.5. The surface folds over on itself in certain places. The projection of these folds onto the (\(r\),\(h\)) plane yields the bifurcation curves shown in Figure 3.6.2.
A cross section at fixed \(h\) yields Figure 3.6.3, and a cross section at fixed \(r\) yields Figure 3.6.4. The term _catastrophe_ is motivated by the fact that as parameters change, the state of the system can be carried over the edge of the upper surface, after which it drops discontinuously to the lower surface (Figure 3.6.6). This jump could be truly catastrophic for the equilibrium of a bridge or a building. We will see scientific examples of catastrophes in the context of insect outbreaks (Section 3.7) and in the following example from mechanics. For more about catastrophe theory, see Zeeman (1977) or Poston and Stewart (1978). Incidentally, there was a violent controversy about this subject in the late 1970s. If you like watching fights, have a look at Zahler and Sussman (1977) and Kolata (1977).

Figure 3.6.4

Figure 3.6.5

A bead of mass \(m\) is constrained to slide along a straight wire inclined at an angle \(\theta\) with respect to the horizontal. The mass is attached to a spring of stiffness \(k\) and relaxed length \(L_{0}\), and is also acted on by gravity. We choose coordinates along the wire so that \(x=0\) occurs at the point closest to the support point of the spring; let \(a\) be the distance between this support point and the wire. In Exercises 3.5.4 and 3.6.5, you are asked to analyze the equilibrium positions of the bead. But first let's get some physical intuition. When the wire is horizontal (\(\theta=0\)), there is perfect symmetry between the left and right sides of the wire, and \(x=0\) is always an equilibrium position. The stability of this equilibrium depends on the relative sizes of \(L_{0}\) and \(a\): if \(L_{0}<a\), the spring is in tension and so the equilibrium should be stable. But if \(L_{0}>a\), the spring is compressed and so we expect an _unstable_ equilibrium at \(x=0\) and a pair of stable equilibria to either side of it. Exercise 3.5.4 deals with this simple case.
The problem becomes more interesting when we tilt the wire (\(\theta\neq 0\)). For small tilting, we expect that there are still three equilibria if \(L_{0}>a\). However if the tilt becomes too steep, perhaps you can see intuitively that the uphill equilibrium might suddenly disappear, causing the bead to jump catastrophically to the downhill equilibrium. You might even want to build this mechanical system and try it. Exercise 3.6.5 asks you to work through the mathematical details.

### 3.7 Insect Outbreak

For a biological example of bifurcation and catastrophe, we turn now to a model for the sudden outbreak of an insect called the spruce budworm. This insect is a serious pest in eastern Canada, where it attacks the leaves of the balsam fir tree. When an outbreak occurs, the budworms can defoliate and kill most of the fir trees in the forest in about four years.

Figure 3.6.7

Ludwig et al. (1978) proposed and analyzed an elegant model of the interaction between budworms and the forest. They simplified the problem by exploiting a separation of time scales: the budworm population evolves on a _fast_ time scale (they can increase their density fivefold in a year, so they have a characteristic time scale of months), whereas the trees grow and die on a _slow_ time scale (they can completely replace their foliage in about 7-10 years, and their life span in the absence of budworms is 100-150 years). Thus, as far as the budworm dynamics are concerned, the forest variables may be treated as constants. At the end of the analysis, we will allow the forest variables to drift very slowly--this drift ultimately triggers an outbreak.

### Model

The proposed model for the budworm population dynamics is \[\dot{N}=RN\left(1-\frac{N}{K}\right)-p(N).\] In the absence of predators, the budworm population \(N(t)\) is assumed to grow logistically with growth rate \(R\) and carrying capacity \(K\).
The carrying capacity depends on the amount of foliage left on the trees, and so it is a slowly drifting parameter; at this stage we treat it as fixed. The term \(p(N)\) represents the death rate due to _predation_, chiefly by birds, and is assumed to have the shape shown in Figure 3.7.1. There is almost no predation when budworms are scarce; the birds seek food elsewhere. However, once the population exceeds a certain critical level \(N=A\), the predation turns on sharply and then saturates (the birds are eating as fast as they can). Ludwig et al. (1978) assumed the specific form \[p(N)=\frac{BN^{2}}{A^{2}+N^{2}}\] where \(A\), \(B>0\). Thus the full model is \[\dot{N}=RN\left(1-\frac{N}{K}\right)-\frac{BN^{2}}{A^{2}+N^{2}}\,. \tag{1}\]

Figure 3.7.1

We now have several questions to answer. What do we mean by an "outbreak" in the context of this model? The idea must be that, as parameters drift, the budworm population suddenly jumps from a low to a high level. But what do we mean by "low" and "high," and are there solutions with this character? To answer these questions, it is convenient to recast the model into a dimensionless form, as in Section 3.5.

### Dimensionless Formulation

The model (1) has four parameters: \(R\), \(K\), \(A\), and \(B\). As usual, there are various ways to nondimensionalize the system. For example, both \(A\) and \(K\) have the same dimension as \(N\), and so either \(N/A\) or \(N/K\) could serve as a dimensionless population level. It often takes some trial and error to find the best choice. In this case, our heuristic will be to scale the equation so that all the dimensionless groups are pushed into the _logistic_ part of the dynamics, with none in the _predation_ part. This turns out to ease the graphical analysis of the fixed points. To get rid of the parameters in the predation term, we divide (1) by \(B\) and then let \[x=N/A,\] which yields \[\frac{A}{B}\frac{dx}{dt}=\frac{R}{B}Ax\left(1-\frac{Ax}{K}\right)-\frac{x^{2}} {1+x^{2}}.
\tag{2}\] Equation (2) suggests that we should introduce a dimensionless time \(\tau\) and dimensionless groups \(r\) and \(k\), as follows: \[\tau=\frac{Bt}{A},\hskip 28.452756ptr=\frac{RA}{B},\hskip 28.452756ptk=\frac{K}{A}.\] Then (2) becomes \[\frac{dx}{d\tau}=rx\left(1-\frac{x}{k}\right)-\frac{x^{2}}{1+x^{2}}, \tag{3}\] which is our final dimensionless form. Here \(r\) and \(k\) are the dimensionless growth rate and carrying capacity, respectively.

### Analysis of Fixed Points

Equation (3) has a fixed point at \(x^{*}=0\); it is _always unstable_ (Exercise 3.7.1). The intuitive explanation is that the predation is extremely weak for small \(x\), and so the budworm population grows exponentially for \(x\) near zero. The other fixed points of (3) are given by the solutions of \[r\left(1-\frac{x}{k}\right)=\frac{x}{1+x^{2}}. \tag{4}\] Graphically, these fixed points are the intersections of the straight line \(r(1-x/k)\) with the curve \(x/(1+x^{2})\); depending on \(r\) and \(k\), there can be one, two, or three of them (Figures 3.7.2 and 3.7.3). When there are three, the smallest stable fixed point \(a\) is the _refuge_ level and the largest stable fixed point \(c\) is the _outbreak_ level, with an unstable fixed point \(b\) in between (Figure 3.7.4).

Figure 3.7.2

Figure 3.7.3

Figure 3.7.4

From the point of view of pest control, one would like to keep the population at \(a\) and away from \(c\). The fate of the system is determined by the initial condition \(x_{0}\); an outbreak occurs if and only if \(x_{0}>b\). In this sense the unstable equilibrium \(b\) plays the role of a _threshold_. An outbreak can also be triggered by a saddle-node bifurcation. If the parameters \(r\) and \(k\) drift in such a way that the fixed point \(a\) disappears, then the population will jump suddenly to the outbreak level \(c\). The situation is made worse by the hysteresis effect--even if the parameters are restored to their values before the outbreak, the population will not drop back to the refuge level.

### Calculating the Bifurcation Curves

Now we compute the curves in (\(k,r\)) space where the system undergoes saddle-node bifurcations. The calculation is somewhat harder than that in Section 3.6: we will not be able to write \(r\) explicitly as a function of \(k\), for example.
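Before doing the calculation, it is reassuring to see the one-vs-three fixed point structure numerically: just count the sign changes of \(r(1-x/k)-x/(1+x^{2})\) on a fine grid of positive \(x\). A sketch, with parameter values picked only for illustration:

```python
def n_positive_fixed_points(r, k, x_max=20.0, n=40000):
    """Count positive roots of r(1 - x/k) = x/(1 + x^2) by sign changes on a grid."""
    g = lambda x: r*(1.0 - x/k) - x/(1.0 + x*x)
    count, dx = 0, x_max/n
    prev = g(dx)
    for i in range(2, n + 1):
        cur = g(i*dx)
        if prev == 0.0 or prev*cur < 0.0:
            count += 1
        prev = cur
    return count

# For k = 10, saddle-node bifurcations bound a bistable window of r
# (roughly 0.38 < r < 0.57, by the calculation that follows in the text):
print(n_positive_fixed_points(0.30, 10.0))  # below the window: refuge level only
print(n_positive_fixed_points(0.45, 10.0))  # inside: refuge a, threshold b, outbreak c
print(n_positive_fixed_points(0.70, 10.0))  # above: outbreak level only
```

The grid method is crude but adequate here because the roots are well separated for these parameter values.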
Instead, the bifurcation curves will be written in the _parametric form_ (\(k(x)\), \(r(x)\)), where \(x\) runs through all positive values. (Please don't be confused by this traditional terminology--one would call \(x\) the "parameter" in these parametric equations, even though \(r\) and \(k\) are themselves parameters in a different sense.) As discussed earlier, the condition for a saddle-node bifurcation is that the line \(r(1-x/k)\) intersects the curve \(x/(1+x^{2})\) tangentially. Thus we require _both_ \[r\biggl{(}1-\frac{x}{k}\biggr{)}=\frac{x}{1+x^{2}} \tag{5}\] and \[\frac{d}{dx}\biggl{[}r\biggl{(}1-\frac{x}{k}\biggr{)}\biggr{]}=\frac{d}{dx} \biggl{[}\frac{x}{1+x^{2}}\biggr{]}. \tag{6}\] After differentiation, (6) reduces to \[-\frac{r}{k}=\frac{1-x^{2}}{\bigl{(}1+x^{2}\bigr{)}^{2}}. \tag{7}\] We substitute this expression for \(r/k\) into (5), which allows us to express \(r\) solely in terms of \(x\). The result is \[r=\frac{2x^{3}}{\bigl{(}1+x^{2}\bigr{)}^{2}}. \tag{8}\] Then inserting (8) into (7) yields \[k=\frac{2x^{3}}{x^{2}-1}. \tag{9}\] The condition \(k>0\) implies that \(x\) must be restricted to the range \(x>1\). Together (8) and (9) define the bifurcation curves. For each \(x>1\), we plot the corresponding point \((k(x),r(x))\) in the \((k,r)\) plane. The resulting curves are shown in Figure 3.7.5. (Exercise 3.7.2 deals with some of the analytical properties of these curves.) The different regions in Figure 3.7.5 are labeled according to the stable fixed points that exist. The refuge level \(a\) is the only stable state for low \(r\), and the outbreak level \(c\) is the only stable state for large \(r\). In the _bistable_ region, both stable states exist. The stability diagram is very similar to Figure 3.6.2. It too can be regarded as the projection of a cusp catastrophe surface, as schematically illustrated in Figure 3.7.6. You are hereby challenged to graph the surface accurately! 
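The parametric formulas (8) and (9) can be spot-checked numerically: for any \(x>1\), the point \((k(x),r(x))\) should satisfy both the intersection condition (5) and the tangency condition (7). A quick sketch:

```python
def r_of(x):
    # equation (8): r = 2 x^3 / (1 + x^2)^2
    return 2.0*x**3/(1.0 + x*x)**2

def k_of(x):
    # equation (9): k = 2 x^3 / (x^2 - 1), valid for x > 1
    return 2.0*x**3/(x*x - 1.0)

for x in [1.5, 2.0, 5.0, 10.0]:
    r, k = r_of(x), k_of(x)
    lhs5 = r*(1.0 - x/k)               # line side of the intersection condition (5)
    rhs5 = x/(1.0 + x*x)               # curve side of (5)
    lhs7 = -r/k                        # left side of the tangency condition (7)
    rhs7 = (1.0 - x*x)/(1.0 + x*x)**2  # right side of (7)
    assert abs(lhs5 - rhs5) < 1e-12 and abs(lhs7 - rhs7) < 1e-12

print(r_of(2.0), k_of(2.0))  # e.g. x = 2 gives r = 0.64, k = 16/3
```

To trace the bifurcation curves of Figure 3.7.5, one would simply sweep \(x\) through values greater than 1 and plot the resulting \((k(x),r(x))\) pairs.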
### Comparison with Observations

The parameters \(r\) and \(k\) drift slowly as the condition of the forest changes. According to Ludwig et al. (1978), \(r\) increases as the forest grows, while \(k\) remains fixed. They reason as follows: let \(S\) denote the average size of the trees, interpreted as the total surface area of the branches in a stand. Then the carrying capacity \(K\) should be proportional to the available foliage, so \(K=K^{\prime}S\). Similarly, the half-saturation parameter \(A\) in the predation term should be proportional to \(S\); predators such as birds search _units of foliage_, not acres of forest, and so the relevant quantity \(A^{\prime}\) must have the dimensions of budworms per unit of branch area. Hence \(A=A^{\prime}S\) and therefore \[r=\frac{RA^{\prime}}{B}S,\hskip 28.452756ptk=\frac{K^{\prime}}{A^{\prime}}\,. \tag{10}\] The experimental observations suggest that for a young forest, typically \(k\approx 300\) and \(r<1/2\), so the parameters lie in the bistable region. The budworm population is kept down by the birds, which find it easy to search the small number of branches per acre. However, as the forest grows, \(S\) increases and therefore the point \((k,r)\) drifts upward in parameter space toward the outbreak region of Figure 3.7.5. Ludwig et al. (1978) estimate that \(r\approx 1\) for a fully mature forest, which lies dangerously in the outbreak region. After an outbreak occurs, the fir trees die and the forest is taken over by birch trees. But they are less efficient at using nutrients and eventually the fir trees come back--this recovery takes about 50-100 years (Murray 2002). We conclude by mentioning some of the approximations in the model presented here. The tree dynamics have been neglected; see Ludwig et al. (1978) for a discussion of this longer time-scale behavior.
We've also neglected the _spatial_ distribution of budworms and their possible dispersal--see Ludwig et al. (1979) and Murray (2002) for treatments of this aspect of the problem.

## Exercises for Chapter 3

### 3.1 Saddle-Node Bifurcation

For each of the following exercises, sketch all the qualitatively different vector fields that occur as \(r\) is varied. Show that a saddle-node bifurcation occurs at a critical value of \(r\), to be determined. Finally, sketch the bifurcation diagram of fixed points \(x\)* versus \(r\).

**3.1.1**: \(\dot{x}=1+rx+x^{2}\)

**3.1.2**: \(\dot{x}=r-\cosh x\)

**3.1.3**: \(\dot{x}=r+x-\ln(1+x)\)

**3.1.4**: \(\dot{x}=r+\frac{1}{2}x-x/(1+x)\)

**3.1.5**: (Unusual bifurcations) In discussing the normal form of the saddle-node bifurcation, we mentioned the assumption that \(a=\partial f/\partial r\,\big|_{(x^{*},\,r_{c})}\neq 0\). To see what can happen if \(a=\partial f/\partial r\,\big|_{(x^{*},\,r_{c})}=0\), sketch the vector fields for the following examples, and then plot the fixed points as a function of \(r\). (a) \(\dot{x}=r^{2}-x^{2}\) (b) \(\dot{x}=r^{2}+x^{2}\)

### 3.2 Transcritical Bifurcation

For each of the following exercises, sketch all the qualitatively different vector fields that occur as \(r\) is varied. Show that a transcritical bifurcation occurs at a critical value of \(r\), to be determined. Finally, sketch the bifurcation diagram of fixed points \(x\)* vs. \(r\).

**3.2.1**: \(\dot{x}=rx+x^{2}\)

**3.2.2**: \(\dot{x}=rx-\ln(1+x)\)

**3.2.3**: \(\dot{x}=x-rx(1-x)\)

**3.2.4**: \(\dot{x}=x(r-e^{x})\)

**3.2.5**: (Chemical kinetics) Consider the chemical reaction system \[A+X\;\overset{k_{1}}{\underset{k_{-1}}{\rightleftharpoons}}\;2X,\qquad X+B\;\overset{k_{2}}{\longrightarrow}\;C.\] This is a generalization of Exercise 2.3.2; the new feature is that \(X\) is used up in the production of \(C\).
a) Assuming that both \(A\) and \(B\) are kept at constant concentrations \(a\) and \(b\), show that the law of mass action leads to an equation of the form \(\dot{x}=c_{1}x-c_{2}x^{2}\), where \(x\) is the concentration of \(X\), and \(c_{1}\) and \(c_{2}\) are constants to be determined.

b) Show that \(x\)* = 0 is stable when \(k_{2}b>k_{1}a\), and explain why this makes sense chemically.

### 3.3 Laser Threshold

**3.3.1**: (An improved model of a laser) In the simple laser model considered in Section 3.3, we wrote an _algebraic_ equation relating \(N\), the number of excited atoms, to \(n\), the number of laser photons. In more realistic models, this would be replaced by a _differential_ equation. For instance, Milonni and Eberly (1988) show that after certain reasonable approximations, quantum mechanics leads to the system \[\begin{array}{l}\dot{n}=GnN-kn\\ \dot{N}=-GnN-fN+p.\end{array}\] Here \(G\) is the gain coefficient for stimulated emission, \(k\) is the decay rate due to loss of photons by mirror transmission, scattering, etc., \(f\) is the decay rate for spontaneous emission, and \(p\) is the pump strength. All parameters are positive, except \(p\), which can have either sign. This two-dimensional system will be analyzed in Exercise 8.1.13. For now, let's convert it to a one-dimensional system, as follows.

a) Suppose that \(N\) relaxes much more rapidly than \(n\). Then we may make the quasi-static approximation \(\dot{N}\approx 0\). Given this approximation, express \(N(t)\) in terms of \(n(t)\) and derive a first-order system for \(n\). (This procedure is often called _adiabatic elimination_, and one says that the evolution of \(N(t)\) is _slaved_ to that of \(n(t)\). See Haken (1983).)

b) Show that \(n\)* = 0 becomes unstable for \(p>p_{c}\), where \(p_{c}\) is to be determined.

c) What type of bifurcation occurs at the laser threshold \(p_{c}\)?

d) (Hard question) For what range of parameters is it valid to make the approximation used in (a)?
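The "slaving" in part (a) can be seen directly in a simulation: when \(f\) is much larger than the other rates, \(N(t)\) quickly locks onto the value obtained by setting \(\dot{N}=0\). The sketch below (forward Euler, with made-up parameter values; it illustrates the approximation rather than solving the exercise) integrates the two-dimensional system and measures how closely \(N\) tracks its quasi-static value.

```python
# Illustrative (made-up) parameter values with f much larger than k,
# so N relaxes much faster than n:
G, k, f, p = 1.0, 0.5, 50.0, 100.0

n, N = 1.0, 0.0
dt, t_end = 1e-3, 30.0
worst = 0.0
for i in range(int(t_end/dt)):
    dn = G*n*N - k*n
    dN = -G*n*N - f*N + p
    n += dt*dn
    N += dt*dN
    if i*dt > 1.0:  # skip the brief initial transient
        # quasi-static value obtained from dN/dt = 0: N_qs = p/(G n + f)
        N_qs = p/(G*n + f)
        worst = max(worst, abs(N - N_qs)/N_qs)

print(n)      # settles at the nonzero fixed point, here (G p / k - f)/G = 150
print(worst)  # N(t) stays very close to its slaved value throughout
```

Rerunning with \(f\) comparable to \(k\) degrades the tracking noticeably, which is the issue part (d) asks you to make precise.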
**3.3.2**: (Maxwell-Bloch equations) The Maxwell-Bloch equations provide an even more sophisticated model for a laser. These equations describe the dynamics of the electric field \(E\), the mean polarization \(P\) of the atoms, and the population inversion \(D\): \[\dot{E} = \kappa(P-E)\] \[\dot{P} = \gamma_{1}(ED-P)\] \[\dot{D} = \gamma_{2}(\lambda+1-D-\lambda EP)\] where \(\kappa\) is the decay rate in the laser cavity due to beam transmission, \(\gamma_{1}\) and \(\gamma_{2}\) are decay rates of the atomic polarization and population inversion, respectively, and \(\lambda\) is a pumping energy parameter. The parameter \(\lambda\) may be positive, negative, or zero; all the other parameters are positive. These equations are similar to the Lorenz equations and can exhibit chaotic behavior (Haken 1983, Weiss and Vilaseca 1991). However, many practical lasers do not operate in the chaotic regime. In the simplest case \(\gamma_{1}\), \(\gamma_{2}>>\kappa\); then \(P\) and \(D\) relax rapidly to steady values, and hence may be adiabatically eliminated, as follows.

a) Assuming \(\dot{P}\approx 0\), \(\dot{D}\approx 0\), express \(P\) and \(D\) in terms of \(E\), and thereby derive a first-order equation for the evolution of \(E\).

b) Find all the fixed points of the equation for \(E\).

c) Draw the bifurcation diagram of \(E\)* vs. \(\lambda\). (Be sure to distinguish between stable and unstable branches.)

### 3.4 Pitchfork Bifurcation

In the following exercises, sketch all the qualitatively different vector fields that occur as \(r\) is varied. Show that a pitchfork bifurcation occurs at a critical value of \(r\) (to be determined) and classify the bifurcation as supercritical or subcritical. Finally, sketch the bifurcation diagram of \(x\)* vs. \(r\).
**3.4.1**: \(\dot{x}=rx+4x^{3}\)

**3.4.2**: \(\dot{x}=rx-\sinh x\)

**3.4.3**: \(\dot{x}=rx-4x^{3}\)

**3.4.4**: \(\dot{x}=x+\frac{rx}{1+x^{2}}\)

The next exercises are designed to test your ability to distinguish among the various types of bifurcations--it's easy to confuse them! In each case, find the values of \(r\) at which bifurcations occur, and classify those as saddle-node, transcritical, supercritical pitchfork, or subcritical pitchfork. Finally, sketch the bifurcation diagram of fixed points \(x\)* vs. \(r\).

**3.4.5**: \(\dot{x}=r-3x^{2}\)

**3.4.6**: \(\dot{x}=rx-\frac{x}{1+x}\)

**3.4.7**: \(\dot{x}=5-re^{-x^{2}}\)

**3.4.8**: \(\dot{x}=rx-\frac{x}{1+x^{2}}\)

**3.4.9**: \(\dot{x}=x+\tanh(rx)\)

**3.4.10**: \(\dot{x}=rx+\frac{x^{3}}{1+x^{2}}\)

**3.4.11**: (An interesting bifurcation diagram) Consider the system \(\dot{x}=rx-\sin x\).

a) For the case \(r=0\), find and classify all the fixed points, and sketch the vector field.

b) Show that when \(r>1\), there is only one fixed point. What kind of fixed point is it?

c) As \(r\) decreases from \(\infty\) to \(0\), classify _all_ the bifurcations that occur.

d) For \(0<r<<1\), find an approximate formula for values of \(r\) at which bifurcations occur.

e) Now classify all the bifurcations that occur as \(r\) decreases from \(0\) to \(-\infty\).

f) Plot the bifurcation diagram for \(-\infty<r<\infty\), and indicate the stability of the various branches of fixed points.

**3.4.12**: ("Quadfurcation") With tongue in cheek, we pointed out that the pitchfork bifurcation could be called a "trifurcation," since three branches of fixed points appear for \(r>0\). Can you construct an example of a "quadfurcation," in which \(\dot{x}=f(x,r)\) has no fixed points for \(r<0\) and four branches of fixed points for \(r>0\)? Extend your results to the case of an arbitrary number of branches, if possible.

**3.4.13**: (Computer work on bifurcation diagrams) For the vector fields below, use a computer to obtain a quantitatively accurate plot of the values of \(x\)* vs.
\(r\), where \(0\leq r\leq 3\). In each case, there's an easy way to do this, and a harder way using the Newton-Raphson method.

a) \(\dot{x}=r-x-e^{-x}\)

b) \(\dot{x}=1-x-e^{-rx}\)

**3.4.14**: (Subcritical pitchfork) Consider the system \(\dot{x}=rx+x^{3}-x^{5}\), which exhibits a subcritical pitchfork bifurcation.

a) Find algebraic expressions for all the fixed points as \(r\) varies.

b) Sketch the vector fields as \(r\) varies. Be sure to indicate all the fixed points and their stability.

c) Calculate \(r_{s}\), the parameter value at which the nonzero fixed points are born in a saddle-node bifurcation.

**3.4.15**: (First-order phase transition) Consider the potential \(V(x)\) for the system \(\dot{x}=rx+x^{3}-x^{5}\). Calculate \(r_{c}\), where \(r_{c}\) is defined by the condition that \(V\) has three equally deep wells, i.e., the values of \(V\) at the three local minima are equal. (Note: In equilibrium statistical mechanics, one says that a _first-order phase transition_ occurs at \(r=r_{c}\). For this value of \(r\), there is equal probability of finding the system in the state corresponding to any of the three minima. The freezing of water into ice is the most familiar example of a first-order phase transition.)

**3.4.16**: (Potentials) In parts (a)-(c), let \(V(x)\) be the potential, in the sense that \(\dot{x}=-dV/dx\). Sketch the potential as a function of \(r\). Be sure to show all the qualitatively different cases, including bifurcation values of \(r\).

a) (Saddle-node) \(\dot{x}=r-x^{2}\)

b) (Transcritical) \(\dot{x}=rx-x^{2}\)

c) (Subcritical pitchfork) \(\dot{x}=rx+x^{3}-x^{5}\)

### 3.5 Overdamped Bead on a Rotating Hoop

**3.5.1**: Consider the bead on the rotating hoop discussed in Section 3.5. Explain in physical terms why the bead cannot have an equilibrium position with \(\phi>\pi/2\).

**3.5.2**: Do the linear stability analysis for all the fixed points for Equation (3.5.7), and confirm that Figure 3.5.6 is correct.
#### 3.5.3

Show that Equation (3.5.7) reduces to \(\frac{d\phi}{d\tau}=A\phi-B\phi^{3}+O(\phi^{5})\) near \(\phi=0\). Find \(A\) and \(B\).

#### 3.5.4 (Bead on a horizontal wire)

A bead of mass \(m\) is constrained to slide along a straight horizontal wire. A spring of relaxed length \(L_{0}\) and spring constant \(k\) is attached to the mass and to a support point a distance \(h\) from the wire (Figure 1). Finally, suppose that the motion of the bead is opposed by a viscous damping force \(b\dot{x}\).

Figure 1

a) Write Newton's law for the motion of the bead.

b) Find all possible equilibria, i.e., fixed points, as functions of \(k\), \(h\), \(m\), \(b\), and \(L_{0}\).

c) Suppose \(m=0\). Classify the stability of all the fixed points, and draw a bifurcation diagram.

d) If \(m\neq 0\), how small does \(m\) have to be to be considered negligible? In what sense is it negligible?

#### 3.5.5 (Time scale for the rapid transient)

While considering the bead on the rotating hoop, we used phase plane analysis to show that the equation \[\varepsilon\frac{d^{2}\phi}{d\tau^{2}}+\frac{d\phi}{d\tau}=f(\phi)\] has solutions that rapidly relax to the curve where \(\frac{d\phi}{d\tau}=f(\phi)\).

a) Estimate the time scale \(T_{\mbox{\tiny fast}}\) for this rapid transient in terms of \(\varepsilon\), and then express \(T_{\mbox{\tiny fast}}\) in terms of the original dimensional quantities \(m\), \(g\), \(r\), \(\omega\), and \(b\).

b) Rescale the original differential equation, using \(T_{\mbox{\tiny fast}}\) as the characteristic time scale, instead of \(T_{\mbox{\tiny slow}}=b/mg\). Which terms in the equation are negligible on this time scale?

c) Show that \(T_{\mbox{\tiny fast}}<<T_{\mbox{\tiny slow}}\) if \(\varepsilon<<1\). (In this sense, the time scales \(T_{\mbox{\tiny fast}}\) and \(T_{\mbox{\tiny slow}}\) are _widely separated_.)
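Part (a) of the rapid-transient exercise can be illustrated numerically. Taking \(f(\phi)=-\phi\) (an arbitrary illustrative choice, not the hoop's actual \(f\)), the sketch below integrates \(\varepsilon\,\phi''+\phi'=f(\phi)\) by Euler's method and measures how long the trajectory takes to relax toward the curve \(\phi'=f(\phi)\); the relaxation time comes out of order \(\varepsilon\), as claimed.

```python
import math

eps = 0.01    # assumed small parameter
dt = 1e-5     # Euler step, chosen much smaller than eps

def f(phi):
    return -phi   # illustrative choice of f, labeled an assumption

# Integrate eps*phi'' + phi' = f(phi) as a first-order system,
# starting off the slow curve: phi = 1, v = phi' = 0.
phi, v, t = 1.0, 0.0, 0.0
r0 = abs(v - f(phi))          # initial distance from the slow curve
t_relax = None
while t < 0.2:
    phi += dt * v
    v += dt * (f(phi) - v) / eps
    t += dt
    if t_relax is None and abs(v - f(phi)) < r0 / math.e:
        t_relax = t           # time for that distance to fall by 1/e
```

With \(\varepsilon=0.01\), the measured \(1/e\) relaxation time is about \(0.01\), i.e., of order \(\varepsilon\), while \(\phi\) itself decays on the \(O(1)\) slow time scale.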
#### 3.5.6 (A model problem about singular limits)

Consider the _linear_ differential equation \[\varepsilon\ddot{x}+\dot{x}+x=0,\] subject to the initial conditions \(x(0)=1,\ \dot{x}(0)=0\).

a) Solve the problem analytically for all \(\varepsilon>0\).

b) Now suppose \(\varepsilon<<1\). Show that there are two widely separated time scales in the problem, and estimate them in terms of \(\varepsilon\).

c) Graph the solution \(x(t)\) for \(\varepsilon<<1\), and indicate the two time scales on the graph.

d) What do you conclude about the validity of replacing \(\varepsilon\ddot{x}+\dot{x}+x=0\) with its singular limit \(\dot{x}+x=0\)?

e) Give two physical analogs of this problem, one involving a mechanical system, and another involving an electrical circuit. In each case, find the dimensionless combination of parameters corresponding to \(\varepsilon\), and state the physical meaning of the limit \(\varepsilon<<1\).

#### 3.5.7 (Nondimensionalizing the logistic equation)

Consider the logistic equation \(\dot{N}=rN(1-N/K)\), with initial condition \(N(0)=N_{0}\).

a) This system has three dimensional parameters \(r\), \(K\), and \(N_{0}\). Find the dimensions of each of these parameters.

b) Show that the system can be rewritten in the dimensionless form \[\frac{dx}{d\tau}=x(1-x),\quad x(0)=x_{0}\] for appropriate choices of the dimensionless variables \(x\), \(x_{0}\), and \(\tau\).

c) Find a different nondimensionalization in terms of variables \(u\) and \(\tau\), where \(u\) is chosen such that the initial condition is always \(u_{0}=1\).

d) Can you think of any advantage of one nondimensionalization over the other?

#### 3.5.8 (Nondimensionalizing the subcritical pitchfork)

The first-order system \(\dot{u}=au+bu^{3}-cu^{5}\), where \(b\), \(c>0\), has a subcritical pitchfork bifurcation at \(a=0\).
Show that this equation can be rewritten as \[\frac{dx}{d\tau}=rx+x^{3}-x^{5}\] where \(x=u/U\), \(\tau=t/T\), and \(U\), \(T\), and \(r\) are to be determined in terms of \(a\), \(b\), and \(c\).

### 3.6 Imperfect Bifurcations and Catastrophes

#### 3.6.1 (Warm-up question about imperfect bifurcation)

Does Figure 3.6.3b correspond to \(h>0\) or to \(h<0\)?

#### 3.6.2 (Imperfect transcritical bifurcation)

Consider the system \(\dot{x}=h+rx-x^{2}\). When \(h=0\), this system undergoes a transcritical bifurcation at \(r=0\). Our goal is to see how the bifurcation diagram of \(x^{*}\) vs. \(r\) is affected by the imperfection parameter \(h\).

a) Plot the bifurcation diagram for \(\dot{x}=h+rx-x^{2}\), for \(h<0\), \(h=0\), and \(h>0\).

b) Sketch the regions in the \((r,h)\) plane that correspond to qualitatively different vector fields, and identify the bifurcations that occur on the boundaries of those regions.

c) Plot the potential \(V(x)\) corresponding to all the different regions in the \((r,h)\) plane.

#### 3.6.3 (A perturbation to the supercritical pitchfork)

Consider the system \(\dot{x}=rx+ax^{2}-x^{3}\), where \(-\infty<a<\infty\). When \(a=0\), we have the normal form for the supercritical pitchfork. The goal of this exercise is to study the effects of the new parameter \(a\).

a) For each \(a\), there is a bifurcation diagram of \(x^{*}\) vs. \(r\). As \(a\) varies, these bifurcation diagrams can undergo qualitative changes. Sketch all the qualitatively different bifurcation diagrams that can be obtained by varying \(a\).

b) Summarize your results by plotting the regions in the \((r,a)\) plane that correspond to qualitatively different classes of vector fields. Bifurcations occur on the boundaries of these regions; identify the types of bifurcations that occur.

#### 3.6.4 (Imperfect saddle-node)

What happens if you add a small imperfection to a system that has a saddle-node bifurcation?
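For the imperfect transcritical bifurcation above, the fixed points of \(\dot{x}=h+rx-x^{2}\) are the roots of a quadratic, so their number is controlled by the discriminant \(r^{2}+4h\): two fixed points for \(r^{2}+4h>0\), one on the saddle-node boundary \(h=-r^{2}/4\), and none below it. A short numerical check of this count (the parameter values are arbitrary):

```python
def fixed_points(r, h):
    # Zeros of h + r x - x^2, i.e. roots of x^2 - r x - h = 0.
    disc = r * r + 4.0 * h
    if disc < 0:
        return []
    s = disc ** 0.5
    # A set collapses the double root when disc == 0.
    return sorted({(r - s) / 2.0, (r + s) / 2.0})

# Above the curve h = -r^2/4: two fixed points.
assert len(fixed_points(1.0, 1.0)) == 2
# Exactly on the curve: the two roots merge (saddle-node).
assert len(fixed_points(1.0, -0.25)) == 1
# Below the curve: no fixed points at all.
assert len(fixed_points(1.0, -1.0)) == 0
```

The curve \(h=-r^{2}/4\) is exactly the boundary to be identified in part (b).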
#### 3.6.5 (Mechanical example of imperfect bifurcation and catastrophe)

Consider the bead on a tilted wire discussed at the end of Section 3.6.

a) Show that the equilibrium positions of the bead satisfy \[mg\sin\theta=kx\left(1-\frac{L_{0}}{\sqrt{x^{2}+a^{2}}}\right).\]

b) Show that this equilibrium equation can be written in dimensionless form as \[1-\frac{h}{u}=\frac{R}{\sqrt{1+u^{2}}}\] for appropriate choices of \(R\), \(h\), and \(u\).

c) Give a graphical analysis of the dimensionless equation for the cases \(R<1\) and \(R>1\). How many equilibria can exist in each case?

d) Let \(r=R-1\). Show that the equilibrium equation reduces to \(h+ru-\frac{1}{2}u^{3}\approx 0\) for small \(r\), \(h\), and \(u\).

e) Find an approximate formula for the saddle-node bifurcation curves in the limit of small \(r\), \(h\), and \(u\).

f) Show that the _exact_ equations for the bifurcation curves can be written in parametric form as \[h(u)=-u^{3},\qquad R(u)=(1+u^{2})^{3/2},\] where \(-\infty<u<\infty\). (Hint: You may want to look at Section 3.7.) Check that this result reduces to the approximate result in part (e).

g) Give a numerically accurate plot of the bifurcation curves in the \((r,h)\) plane.

h) Interpret your results physically, in terms of the original dimensional variables.

#### 3.6.6 (Patterns in fluids)

Ahlers (1989) gives a fascinating review of experiments on one-dimensional patterns in fluid systems. In many cases, the patterns first emerge via supercritical or subcritical pitchfork bifurcations from a spatially uniform state. Near the bifurcation, the dynamics of the amplitude of the patterns are given approximately by \(\tau\dot{A}=\varepsilon A-gA^{3}\) in the supercritical case, or \(\tau\dot{A}=\varepsilon A-gA^{3}-kA^{5}\) in the subcritical case. Here \(A(t)\) is the amplitude, \(\tau\) is a typical time scale, and \(\varepsilon\) is a small dimensionless parameter that measures the distance from the bifurcation.
The parameter \(g>0\) in the supercritical case, whereas \(g<0\) and \(k>0\) in the subcritical case. (In this context, the equation \(\tau\dot{A}=\varepsilon A-gA^{3}\) is often called the _Landau equation_.)

a) Dubois and Berge (1978) studied the supercritical bifurcation that arises in Rayleigh-Benard convection, and showed experimentally that the steady-state amplitude depended on \(\varepsilon\) according to the power law \(A^{*}\propto\varepsilon^{\beta}\), where \(\beta=0.50\pm 0.01\). What does the Landau equation predict?

b) The equation \(\tau\dot{A}=\varepsilon A-gA^{3}-kA^{5}\) is said to undergo a _tricritical bifurcation_ when \(g=0\); this case is the borderline between supercritical and subcritical bifurcations. Find the relation between \(A^{*}\) and \(\varepsilon\) when \(g=0\).

c) In experiments on Taylor-Couette vortex flow, Aitta et al. (1985) were able to change the parameter \(g\) continuously from positive to negative by varying the aspect ratio of their experimental set-up. Assuming that the equation is modified to \(\tau\dot{A}=h+\varepsilon A-gA^{3}-kA^{5}\), where \(h>0\) is a slight imperfection, sketch the bifurcation diagram of \(A^{*}\) vs. \(\varepsilon\) in the three cases \(g>0\), \(g=0\), and \(g<0\). Then look up the actual data in Aitta et al. (1985, Figure 2) or see Ahlers (1989, Figure 15).

d) In the experiments of part (c), the amplitude \(A(t)\) was found to evolve toward a steady state in the manner shown in Figure 2 (redrawn from Ahlers (1989), Figure 18). The results are for the imperfect subcritical case \(g<0\), \(h\approx 0\). In the experiments, the parameter \(\varepsilon\) was switched at \(t=0\) from a negative value to a positive value \(\varepsilon_{f}\). In Figure 2, \(\varepsilon_{f}\) increases from the bottom to the top. Explain intuitively why the curves have this strange shape.
Why do the curves for large \(\varepsilon_{f}\) go almost straight up to their steady state, whereas the curves for small \(\varepsilon_{f}\) rise to a plateau before increasing sharply to their final level? (Hint: Graph \(\dot{A}\) vs. \(A\) for different \(\varepsilon_{f}\).)

Figure 2

#### 3.6.7 (Simple model of a magnet)

A magnet can be modeled as an enormous collection of electronic spins. In the simplest model, known as the _Ising model_, the spins can point only up or down, and are assigned the values \(S_{i}=\pm 1\), for \(i=1,\ldots,N>>1\). For quantum mechanical reasons, the spins like to point in the same direction as their neighbors; on the other hand, the randomizing effects of temperature tend to disrupt any such alignment. An important macroscopic property of the magnet is its average spin or _magnetization_ \[m=\left|\frac{1}{N}\sum_{i=1}^{N}S_{i}\right|.\] At high temperature the spins point in random directions and so \(m\approx 0\); the material is in the _paramagnetic_ state. As the temperature is lowered, \(m\) remains near zero until a critical temperature \(T_{c}\) is reached. Then a _phase transition_ occurs and the material spontaneously magnetizes. Now \(m>0\); we have a _ferromagnet_.

But the symmetry between up and down spins means that there are _two_ possible ferromagnetic states. This symmetry can be broken by applying an external magnetic field \(h\), which favors either the up or down direction. Then, in an approximation called _mean-field theory_, the equation governing the equilibrium value of \(m\) is \[h=T\tanh^{-1}m-Jnm\] where \(J\) and \(n\) are constants; \(J>0\) is the ferromagnetic coupling strength and \(n\) is the number of neighbors of each spin (Ma 1985, p. 459).

a) Analyze the solutions \(m^{*}\) of \(h=T\tanh^{-1}m-Jnm\), using a graphical approach.

b) For the special case \(h=0\), find the critical temperature \(T_{c}\) at which a phase transition occurs.
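Part (b) of the magnet exercise can be anticipated numerically. For \(h=0\) the equation reads \(T\tanh^{-1}m=Jnm\); since \(\tanh^{-1}m\approx m\) for small \(m\), a nonzero root exists only when the line \(Jnm\) initially rises faster than \(T\tanh^{-1}m\), suggesting \(T_{c}=Jn\). A bisection sketch (with \(Jn\) set to 1 as an arbitrary scale):

```python
import math

Jn = 1.0   # coupling strength times number of neighbors (arbitrary scale)

def g(m, T):
    # A zero of g corresponds to h = 0 in  h = T atanh(m) - Jn m.
    return T * math.atanh(m) - Jn * m

def spontaneous_m(T, lo=1e-9, hi=1.0 - 1e-9):
    # Bisect for a root with 0 < m < 1; return None when g does not
    # change sign there, i.e. only the paramagnetic m* = 0 exists.
    if g(lo, T) * g(hi, T) > 0:
        return None
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo, T) * g(mid, T) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

m_below = spontaneous_m(0.5)   # T < Tc = Jn: ferromagnet, m* > 0
m_above = spontaneous_m(1.5)   # T > Tc: only m* = 0 survives
```

Running this for a range of \(T\) traces out the spontaneous magnetization curve, which vanishes continuously as \(T\to T_{c}^{-}\).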
### 3.7 Insect Outbreak

#### 3.7.1 (Warm-up question about insect outbreak model)

Show that the fixed point \(x^{*}=0\) is _always unstable_ for Equation (3.7.3).

#### 3.7.2 (Bifurcation curves for insect outbreak model)

a) Using Equations (3.7.8) and (3.7.9), sketch \(r(x)\) and \(k(x)\) vs. \(x\). Determine the limiting behavior of \(r(x)\) and \(k(x)\) as \(x\to 1\) and \(x\rightarrow\infty\).

b) Find the exact values of \(r\), \(k\), and \(x\) at the cusp point shown in Figure 3.7.5.

#### 3.7.3 (A model of a fishery)

The equation \(\dot{N}=rN\big{(}1-\frac{N}{K}\big{)}-H\) provides an extremely simple model of a fishery. In the absence of fishing, the population is assumed to grow logistically. The effects of fishing are modeled by the term \(-H\), which says that fish are caught or "harvested" at a constant rate \(H>0\), independent of their population \(N\). (This assumes that the fishermen aren't worried about fishing the population dry--they simply catch the same number of fish every day.)

a) Show that the system can be rewritten in dimensionless form as \[\frac{dx}{d\tau}=x(1-x)-h,\] for suitably defined dimensionless quantities \(x\), \(\tau\), and \(h\).

b) Plot the vector field for different values of \(h\).

c) Show that a bifurcation occurs at a certain value \(h_{c}\), and classify this bifurcation.

d) Discuss the long-term behavior of the fish population for \(h<h_{c}\) and \(h>h_{c}\), and give the biological interpretation in each case.

There's something silly about this model--the population can become negative! A better model would have a fixed point at zero population for all values of \(H\). See the next exercise for such an improvement.

#### 3.7.4 (Improved model of a fishery)

A refinement of the model in the last exercise is \[\dot{N}=rN\left(1-\frac{N}{K}\right)-H\frac{N}{A+N}\] where \(H>0\) and \(A>0\). This model is more realistic in two respects: it has a fixed point at \(N=0\) for all values of the parameters, and the rate at which fish are caught decreases with \(N\).
This is plausible--when fewer fish are available, it is harder to find them and so the daily catch drops.

a) Give a biological interpretation of the parameter \(A\); what does it measure?

b) Show that the system can be rewritten in dimensionless form as \[\frac{dx}{d\tau}=x(1-x)-h\frac{x}{a+x},\] for suitably defined dimensionless quantities \(x\), \(\tau\), \(a\), and \(h\).

c) Show that the system can have one, two, or three fixed points, depending on the values of \(a\) and \(h\). Classify the stability of the fixed points in each case.

d) Analyze the dynamics near \(x=0\) and show that a bifurcation occurs when \(h=a\). What type of bifurcation is it?

e) Show that another bifurcation occurs when \(h=\frac{1}{4}(a+1)^{2}\), for \(a<a_{c}\), where \(a_{c}\) is to be determined. Classify this bifurcation.

f) Plot the stability diagram of the system in \((a,h)\) parameter space. Can hysteresis occur in any of the stability regions?

#### 3.7.5 (A biochemical switch)

Zebra stripes and butterfly wing patterns are two of the most spectacular examples of biological pattern formation. Explaining the development of these patterns is one of the outstanding problems of biology; see Murray (2003) for an excellent review. As one ingredient in a model of pattern formation, Lewis et al. (1977) considered a simple example of a biochemical switch, in which a gene \(G\) is activated by a biochemical signal substance \(S\). For example, the gene may normally be inactive but can be "switched on" to produce a pigment or other gene product when the concentration of \(S\) exceeds a certain threshold. Let \(g(t)\) denote the concentration of the gene product, and assume that the concentration \(s_{0}\) of \(S\) is fixed. The model is \[\dot{g}=k_{1}s_{0}-k_{2}g+\frac{k_{3}g^{2}}{k_{4}^{2}+g^{2}}\] where the \(k\)'s are positive constants. The production of \(g\) is stimulated by \(s_{0}\) at a rate \(k_{1}\), and by an _autocatalytic_ or positive feedback process (the nonlinear term).
There is also a linear degradation of \(g\) at a rate \(k_{2}\).

a) Show that the system can be put in the dimensionless form \[\frac{dx}{d\tau}=s-rx+\frac{x^{2}}{1+x^{2}}\] where \(r>0\) and \(s\geq 0\) are dimensionless groups.

b) Show that if \(s=0\), there are two positive fixed points \(x^{*}\) if \(r<r_{c}\), where \(r_{c}\) is to be determined.

c) Assume that initially there is no gene product, i.e., \(g(0)=0\), and suppose \(s\) is slowly increased from zero (the activating signal is turned on); what happens to \(g(t)\)? What happens if \(s\) then goes back to zero? Does the gene turn off again?

d) Find parametric equations for the bifurcation curves in \((r,s)\) space, and classify the bifurcations that occur.

e) Use the computer to give a quantitatively accurate plot of the stability diagram in \((r,s)\) space.

For further discussion of this model, see Lewis et al. (1977); Edelstein-Keshet (1988), Section 7.5; or Murray (2002), Chapter 6.

#### 3.7.6 (Model of an epidemic)

In pioneering work in epidemiology, Kermack and McKendrick (1927) proposed the following simple model for the evolution of an epidemic. Suppose that the population can be divided into three classes: \(x(t)=\) number of healthy people; \(y(t)=\) number of sick people; \(z(t)=\) number of dead people. Assume that the total population remains constant in size, except for deaths due to the epidemic. (That is, the epidemic evolves so rapidly that we can ignore the slower changes in the populations due to births, emigration, or deaths by other causes.) Then the model is \[\begin{array}{l}\dot{x}=-kxy\\ \dot{y}=kxy-ly\\ \dot{z}=ly\end{array}\] where \(k\) and \(l\) are positive constants. The equations are based on two assumptions:

(i) Healthy people get sick at a rate proportional to the product of \(x\) and \(y\).
This would be true if healthy and sick people encounter each other at a rate proportional to their numbers, and if there were a constant probability that each such encounter would lead to transmission of the disease.

(ii) Sick people die at a constant rate \(l\).

The goal of this exercise is to reduce the model, which is a _third-order system_, to a first-order system that can be analyzed by our methods. (In Chapter 6 we will see a simpler analysis.)

a) Show that \(x+y+z=N\), where \(N\) is constant.

b) Use the \(\dot{x}\) and \(\dot{z}\) equations to show that \(x(t)=x_{0}\exp(-kz(t)/l)\), where \(x_{0}=x(0)\).

c) Show that \(z\) satisfies the first-order equation \(\dot{z}=l[N-z-x_{0}\exp(-kz/l)]\).

d) Show that this equation can be nondimensionalized to \[\frac{du}{d\tau}=a-bu-e^{-u}\] by an appropriate rescaling.

e) Show that \(a\geq 1\) and \(b>0\).

f) Determine the number of fixed points \(u^{*}\) and classify their stability.

g) Show that the maximum of \(\dot{u}(t)\) occurs at the same time as the maximum of both \(\dot{z}(t)\) and \(y(t)\). (This time is called the _peak_ of the epidemic, denoted \(t_{\mbox{\tiny peak}}\). At this time, there are more sick people and a higher daily death rate than at any other time.)

h) Show that if \(b<1\), then \(\dot{u}(t)\) is increasing at \(t=0\) and reaches its maximum at some time \(t_{\mbox{\tiny peak}}>0\). Thus things get worse before they get better. (The term _epidemic_ is reserved for this case.) Show that \(\dot{u}(t)\) eventually decreases to 0.

i) On the other hand, show that \(t_{\mbox{\tiny peak}}=0\) if \(b>1\). (Hence no epidemic occurs if \(b>1\).)

j) The condition \(b=1\) is the _threshold_ condition for an epidemic to occur. Can you give a biological interpretation of this condition?

k) Kermack and McKendrick showed that their model gave a good fit to data from the Bombay plague of 1906. How would you improve the model to make it more appropriate for AIDS? Which assumptions need revising?
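Parts (a) and (h) can be checked by direct simulation of the three equations. The sketch below uses Euler's method with arbitrarily chosen parameters in the epidemic regime \(b=l/(kx_{0})<1\); it confirms that \(x+y+z\) stays constant and that \(y(t)\) rises to an interior peak before decaying.

```python
# Euler simulation of the Kermack-McKendrick model (parameters arbitrary).
k, l = 0.5, 0.2
x, y, z = 0.99, 0.01, 0.0          # initial healthy, sick, dead
N0 = x + y + z
dt, T = 0.01, 200.0

ys = []
t = 0.0
while t < T:
    dx = -k * x * y
    dy = k * x * y - l * y
    dz = l * y
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    ys.append(y)
    t += dt

b = l / (k * 0.99)                  # threshold parameter from part (d)
peak = max(range(len(ys)), key=lambda i: ys[i])
```

Since the three rates sum to zero, the Euler scheme conserves \(x+y+z\) to roundoff, and with \(b\approx 0.4\) the sick population rises before dying out: things get worse before they get better.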
For an introduction to models of epidemics, see Murray (2002), Chapter 10, or Edelstein-Keshet (1988). Models of AIDS are discussed by Murray (2002) and May and Anderson (1987). An excellent review and commentary on the Kermack-McKendrick papers is given by Anderson (1991).

The next two exercises involve applications of nonlinear dynamics to systems biology, and were kindly suggested by Jordi Garcia-Ojalvo.

#### 3.7.7 (Hysteretic activation)

Consider a protein that activates its own transcription in a positive feedback loop, while its promoter has a certain level of basal expression: \[\dot{p}=\alpha+\frac{\beta p^{n}}{K^{n}+p^{n}}-\delta p\,.\] Here \(\alpha\) is the basal transcription rate, \(\beta\) is the maximal transcription rate, \(K\) is the activation coefficient, and \(\delta\) is the decay rate of the protein. To ease the analysis, assume that \(n\) is large (\(n>>1\)).

a) Sketch the graph of the nonlinear function \(g(p)=\beta p^{n}/(K^{n}+p^{n})\) for \(n>>1\). What simple shape does it approach as \(n\rightarrow\infty\)?

b) The right hand side of the equation for \(\dot{p}\) can be rewritten as \(g(p)-h(p)\), where \(h(p)=\delta p-\alpha\). Use this decomposition to plot the phase portrait for the system for the following three cases: (i) \(\delta K-\alpha>\beta\), (ii) \(\delta K-\alpha=\beta/2\), and (iii) \(\delta K-\alpha<0\).

c) From now on, assume \(\delta K>\beta\). Plot the bifurcation diagram for the system. Be sure to indicate clearly how the location and stability of the fixed points \(p^{*}\) vary with respect to \(\alpha\).

d) Discuss how the level of protein \(p\) behaves if \(\alpha\) is very slowly increased from \(\alpha=0\) to \(\alpha>\delta K\), and then very slowly decreased back to \(\alpha=0\). Show that such a pulsed stimulation leads to hysteresis.
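Part (d) can be illustrated by a quasi-static parameter sweep: at each value of \(\alpha\), let \(p\) relax to a nearby stable fixed point, then nudge \(\alpha\) and repeat. With the arbitrary choices \(n=20\), \(K=1\), \(\delta=2\), \(\beta=1\) (so that \(\delta K>\beta\), as in part (c)), the upward and downward sweeps disagree over a band of \(\alpha\) — the hysteresis loop.

```python
def pdot(p, alpha, beta=1.0, K=1.0, n=20, delta=2.0):
    return alpha + beta * p**n / (K**n + p**n) - delta * p

def relax(p, alpha, dt=0.01, steps=4000):
    # Crude Euler relaxation toward the nearby stable fixed point.
    for _ in range(steps):
        p += dt * pdot(p, alpha)
    return p

alphas = [0.05 * i for i in range(41)]   # alpha swept from 0 up to 2.0
p = 0.0
up = []
for a in alphas:
    p = relax(p, a)
    up.append(p)

down = []
for a in reversed(alphas):
    p = relax(p, a)
    down.append(p)
down.reverse()

i = 30   # alphas[i] = 1.5, a value inside the hysteresis band
```

On the way up, `up[i]` sits on the low branch \(p\approx\alpha/\delta\); on the way down, `down[i]` is still on the high branch, so the protein level at the same \(\alpha\) depends on the history of the stimulus.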
#### 3.7.8 (Irreversible response to a transient stimulus)

Many types of molecules within cells can be turned on and off by adding phosphate groups to them, a process known as phosphorylation. The addition of the phosphate group changes the conformation of the molecule to which it is bound, in effect flipping a switch and thereby altering the molecule's activity. This is one of the most common ways that cells control a diverse array of important processes, ranging from enzyme activity to cellular reproduction, movement, signaling, and metabolism. The reverse reaction, in which a phosphate group is removed, is called dephosphorylation. For further information, see Hardie (1999) and Alon (2006).

To illustrate how a cell's fate can be determined irreversibly by a transient stimulus, Xiong and Ferrell (2003) considered a model of a phosphorylation/dephosphorylation cycle in which phosphorylation is induced in two different ways: by a stimulus signal \(S\), and by the phosphorylated protein itself via a positive feedback loop. Assuming that the latter process is cooperative, the dynamics of the phosphorylated protein are governed by \[\dot{A}_{p}=k_{p}SA+\beta\,\frac{A_{p}^{n}}{K^{n}+A_{p}^{n}}-k_{d}A_{p}\,.\] Here \(A\) is the concentration of the unphosphorylated protein and \(A_{p}\) is the concentration of the phosphorylated protein. We'll assume that the total protein concentration, \(A_{T}=A+A_{p}\), is constant. In the model, \(k_{p}\) is the activation (phosphorylation) rate and \(k_{d}\) is the inactivation (dephosphorylation) rate. For simplicity, we'll consider the convenient special case \(n>>1\), \(K=A_{T}/2\), and \(\beta=k_{d}A_{T}\).

a) Nondimensionalize the system by rewriting it in terms of the dimensionless quantities \(x=A_{p}/K\), \(\tau=k_{d}t\), \(s=k_{p}S/k_{d}\), and \(b=\beta/(k_{d}K)\).

b) Assume first that there is no stimulus, so \(s=0\). Plot the phase portrait of the system.
c) Plot all the qualitatively different phase portraits that can occur for a constant stimulus \(s>0\).

d) Taking into account the behavior of the fixed points that you calculated in part (b), plot the bifurcation diagram of the system for increasing stimulus \(s\).

e) Show that the system is _irreversible_ in the following sense: if the cell starts with no phosphorylated proteins and no stimulus, then the protein activates if a sufficiently large stimulus is applied--but it does _not_ deactivate if \(s\) is later decreased back to 0.

f) Repeat the analysis above, but this time for a value of \(\beta<<k_{d}A_{T}\). What happens with the reversibility in this case?

## Chapter 4 Flows on the Circle

### 4.0 Introduction

So far we've concentrated on the equation \(\dot{x}=f(x)\), which we visualized as a vector field on the line. Now it's time to consider a new kind of differential equation and its corresponding phase space. This equation, \[\dot{\theta}=f(\theta),\] corresponds to a _vector field on the circle._ Here \(\theta\) is a point on the circle and \(\dot{\theta}\) is the velocity vector at that point, determined by the rule \(\dot{\theta}=f(\theta)\). Like the line, the circle is one-dimensional, but it has an important new property: by flowing in one direction, a particle can eventually return to its starting place (Figure 4.0.1). Thus periodic solutions become possible for the first time in this book! To put it another way, _vector fields on the circle provide the most basic model of systems that can oscillate._

However, in all other respects, flows on the circle are similar to flows on the line, so this will be a short chapter. We will discuss the dynamics of some simple oscillators, and then show that these equations arise in a wide variety of applications.
For example, the flashing of fireflies and the voltage oscillations of superconducting Josephson junctions have been modeled by the same equation, even though their oscillation frequencies differ by about ten orders of magnitude!

### 4.1 Examples and Definitions

Let's begin with some examples, and then give a more careful definition of vector fields on the circle.

Figure 4.0.1: Flow on the circle

**EXAMPLE 4.1.1:** Sketch the vector field on the circle corresponding to \(\dot{\theta}=\sin\theta\).

_Solution:_ We assign coordinates to the circle in the usual way, with \(\theta=0\) in the direction of "east," and with \(\theta\) increasing counterclockwise. To sketch the vector field, we first find the fixed points, defined by \(\dot{\theta}=0\). These occur at \(\theta^{*}=0\) and \(\theta^{*}=\pi\). To determine their stability, note that \(\sin\theta>0\) on the upper semicircle. Hence \(\dot{\theta}>0\), so the flow is counterclockwise. Similarly, the flow is clockwise on the lower semicircle, where \(\dot{\theta}<0\). Hence \(\theta^{*}=\pi\) is stable and \(\theta^{*}=0\) is unstable, as shown in Figure 4.1.1.

Actually, we've seen this example before--it's given in Section 2.1. There we regarded \(\dot{x}=\sin x\) as a vector field on the _line_. Compare Figure 2.1.1 with Figure 4.1.1 and notice how much clearer it is to think of this system as a vector field on the circle.

**EXAMPLE 4.1.2:** Explain why \(\dot{\theta}=\theta\) cannot be regarded as a vector field on the circle, for \(\theta\) in the range \(-\infty<\theta<\infty\).

_Solution:_ The velocity is not uniquely defined. For example, \(\theta=0\) and \(\theta=2\pi\) are two labels for the same point on the circle, but the first label implies a velocity of \(0\) at that point, while the second implies a velocity of \(2\pi\).
If we try to avoid this non-uniqueness by restricting \(\theta\) to the range \(-\pi<\theta\leq\pi\), then the velocity vector jumps discontinuously at the point corresponding to \(\theta=\pi\). Try as we might, there's no way to consider \(\dot{\theta}=\theta\) as a smooth vector field on the entire circle. Of course, there's no problem regarding \(\dot{\theta}=\theta\) as a vector field on the _line_, because then \(\theta=0\) and \(\theta=2\pi\) are different points, and so there's no conflict about how to define the velocity at each of them.

Example 4.1.2 suggests how to define vector fields on the circle. Here's a geometric definition: A _vector field on the circle_ is a rule that assigns a unique velocity vector to each point on the circle. In practice, such vector fields arise when we have a first-order system \(\dot{\theta}=f(\theta)\), where \(f(\theta)\) is a real-valued, \(2\pi\)-_periodic_ function. That is, \(f(\theta+2\pi)=f(\theta)\) for all real \(\theta\). Moreover, we assume (as usual) that \(f(\theta)\) is smooth enough to guarantee existence and uniqueness of solutions. Although this system could be regarded as a special case of a vector field on the line, it is usually clearer to think of it as a vector field on the circle (as in Example 4.1.1). This means that we don't distinguish between \(\theta\)'s that differ by an integer multiple of \(2\pi\). Here's where the periodicity of \(f(\theta)\) becomes important--it ensures that the velocity \(\dot{\theta}\) is uniquely defined at each point \(\theta\) on the circle, in the sense that \(\dot{\theta}\) is the same, whether we call that point \(\theta\) or \(\theta+2\pi\), or \(\theta+2\pi k\) for any integer \(k\).

### 4.2 Uniform Oscillator

A point on a circle is often called an _angle_ or a _phase_. Then the simplest oscillator of all is one in which the phase \(\theta\) changes uniformly: \[\dot{\theta}=\omega\] where \(\omega\) is a constant.
The solution is \[\theta(t)=\omega t+\theta_{0},\] which corresponds to uniform motion around the circle at an angular frequency \(\omega\). This solution _is periodic_, in the sense that \(\theta(t)\) changes by \(2\pi\), and therefore returns to the same point on the circle, after a time \(T=2\pi/\omega\). We call \(T\) the _period_ of the oscillation.

Notice that we have said nothing about the _amplitude_ of the oscillation. There really is no amplitude variable in our system. If we had an amplitude as well as a phase variable, we'd be in a _two-dimensional_ phase space; this situation is more complicated and will be discussed later in the book. (Or if you prefer, you can imagine that the oscillation occurs at some _fixed_ amplitude, corresponding to the radius of our circular phase space. In any case, amplitude plays no role in the dynamics.)

**EXAMPLE 4.2.1:** Two joggers, Speedy and Pokey, are running at a steady pace around a circular track. It takes Speedy \(T_{1}\) seconds to run once around the track, whereas it takes Pokey \(T_{2}>T_{1}\) seconds. Of course, Speedy will periodically overtake Pokey; how long does it take for Speedy to lap Pokey once, assuming that they start together?

_Solution:_ Let \(\theta_{1}(t)\) be Speedy's position on the track. Then \(\dot{\theta}_{1}=\omega_{1}\) where \(\omega_{1}=2\pi/T_{1}\). This equation says that Speedy runs at a steady pace and completes a circuit every \(T_{1}\) seconds. Similarly, suppose that \(\dot{\theta}_{2}=\omega_{2}=2\pi/T_{2}\) for Pokey.

The condition for Speedy to lap Pokey is that the angle between them has increased by \(2\pi\). Thus if we define the _phase difference_ \(\phi=\theta_{1}-\theta_{2}\), we want to find how long it takes for \(\phi\) to increase by \(2\pi\) (Figure 4.2.1). By subtraction we find \(\dot{\phi}=\dot{\theta}_{1}-\dot{\theta}_{2}=\omega_{1}-\omega_{2}\).
Thus \(\phi\) increases by \(2\pi\) after a time \[T_{\text{lap}}=\frac{2\pi}{\omega_{1}-\omega_{2}}=\left(\frac{1}{T_{1}}-\frac{1}{T_{2}}\right)^{-1}.\qed\]

Figure 4.2.1

Example 4.2.1 illustrates an effect called the _beat phenomenon_. Two noninteracting oscillators with different frequencies will periodically go in and out of phase with each other. You may have heard this effect on a Sunday morning: sometimes the bells of two different churches will ring simultaneously, then slowly drift apart, and then eventually ring together again. If the oscillators _interact_ (for example, if the two joggers try to stay together or the bell ringers can hear each other), then we can get more interesting effects, as we will see in Section 4.5 on the flashing rhythm of fireflies.

### 4.3 Nonuniform Oscillator

The equation \[\dot{\theta}=\omega-a\sin\theta \tag{1}\] arises in many different branches of science and engineering. Here is a partial list:

_Electronics_ (phase-locked loops)

_Biology_ (oscillating neurons, firefly flashing rhythm, human sleep-wake cycle)

_Condensed-matter physics_ (Josephson junction, charge-density waves)

_Mechanics_ (overdamped pendulum driven by a constant torque)

Some of these applications will be discussed later in this chapter and in the exercises.

To analyze (1), we assume that \(\omega>0\) and \(a\geq 0\) for convenience; the results for negative \(\omega\) and \(a\) are similar. A typical graph of \(f(\theta)=\omega-a\sin\theta\) is shown in Figure 4.3.1. Note that \(\omega\) is the mean and \(a\) is the amplitude.

**Vector Fields**

If \(a=0\), (1) reduces to the uniform oscillator. The parameter \(a\) introduces a _nonuniformity_ in the flow around the circle: the flow is fastest at \(\theta=-\pi/2\) and slowest at \(\theta=\pi/2\) (Figure 4.3.2a). This nonuniformity becomes more pronounced as \(a\) increases.
When \(a\) is slightly less than \(\omega\), the oscillation is very jerky: the phase point \(\theta(t)\) takes a long time to pass through a _bottleneck_ near \(\theta=\pi/2\), after which it zips around the rest of the circle on a much faster time scale. When \(a=\omega\), the system stops oscillating altogether: a half-stable fixed point has been born in a _saddle-node bifurcation_ at \(\theta=\pi/2\) (Figure 4.3.2b). Finally, when \(a>\omega\), the half-stable fixed point splits into a stable and unstable fixed point (Figure 4.3.2c). All trajectories are attracted to the stable fixed point as \(t\to\infty\). The same information can be shown by plotting the vector fields on the circle (Figure 4.3.3).

**Example 4.3.1:** Use linear stability analysis to classify the fixed points of (1) for \(a>\omega\).

_Solution:_ The fixed points \(\theta^{*}\) satisfy \[\sin\theta^{*}=\omega/a\,,\qquad\cos\theta^{*}=\pm\sqrt{1-\left(\omega/a\right)^{2}}\,.\] Their linear stability is determined by \[f^{\prime}(\theta^{*})=-a\cos\theta^{*}=\mp a\sqrt{1-\left(\omega/a\right)^{2}}\,.\] Thus the fixed point with \(\cos\theta^{*}>0\) is the stable one, since \(f^{\prime}(\theta^{*})<0\). This agrees with Figure 4.3.2c.

### Oscillation Period

For \(a<\omega\), the period of the oscillation can be found analytically, as follows: the time required for \(\theta\) to change by \(2\pi\) is given by \[T=\int dt=\int_{0}^{2\pi}\frac{dt}{d\theta}d\theta=\int_{0}^{2\pi}\frac{d\theta}{\omega-a\sin\theta}\] where we have used (1) to replace \(dt/d\theta\). This integral can be evaluated by complex variable methods, or by the substitution \(u=\tan\frac{\theta}{2}\). (See Exercise 4.3.2 for details.) The result is \[T=\frac{2\pi}{\sqrt{\omega^{2}-a^{2}}}\;. \tag{2}\] Figure 4.3.4 shows the graph of \(T\) as a function of \(a\).
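As a numerical sanity check on (2), one can evaluate the period integral by quadrature and compare it with the closed form. The sketch below uses composite Simpson's rule and the arbitrary sample values \(\omega=1\), \(a=0.5\):

```python
import math

# Check T = 2π/√(ω² - a²), Eq. (2), against direct numerical quadrature
# of T = ∫₀^{2π} dθ / (ω - a sin θ). Sample values ω = 1, a = 0.5.
omega, a = 1.0, 0.5

def period_exact(omega, a):
    return 2 * math.pi / math.sqrt(omega**2 - a**2)

def period_numeric(omega, a, n=20000):
    # composite Simpson's rule; n must be even
    h = 2 * math.pi / n
    f = lambda th: 1.0 / (omega - a * math.sin(th))
    s = f(0.0) + f(2 * math.pi)
    s += sum((4 if k % 2 else 2) * f(k * h) for k in range(1, n))
    return s * h / 3

print(period_exact(omega, a))     # ≈ 7.2552
print(period_numeric(omega, a))   # agrees to many digits
```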
When \(a=0\), Equation (2) reduces to \(T=2\pi/\omega\), the familiar result for a uniform oscillator. The period increases with \(a\) and diverges as \(a\) approaches \(\omega\) from below (we denote this limit by \(a\rightarrow\omega^{-}\)). We can estimate the order of the divergence by noting that \[\sqrt{\omega^{2}-a^{2}}=\sqrt{\omega+a}\sqrt{\omega-a}\approx\sqrt{2\omega}\sqrt{\omega-a}\] as \(a\rightarrow\omega^{-}\). Hence \[T\approx\left(\frac{\pi\sqrt{2}}{\sqrt{\omega}}\right)\frac{1}{\sqrt{\omega-a}}\;, \tag{3}\] which shows that \(T\) blows up like \((a_{c}-a)^{-1/2}\), where \(a_{c}=\omega\). Now let's explain the origin of this _square-root scaling law_.

### Ghosts and Bottlenecks

The square-root scaling law found above is a _very general feature of systems that are close to a saddle-node bifurcation._ Just after the fixed points collide, there is a saddle-node remnant or _ghost_ that leads to slow passage through a bottleneck. For example, consider \(\dot{\theta}=\omega-a\sin\theta\) for decreasing values of \(a\), starting with \(a>\omega\). As \(a\) decreases, the two fixed points approach each other, collide, and disappear (this sequence was shown earlier in Figure 4.3.3, except now you have to read from right to left.) For \(a\) slightly less than \(\omega\), the fixed points near \(\pi/2\) no longer exist, but they still make themselves felt through a saddle-node ghost (Figure 4.3.5). A graph of \(\theta(t)\) would have the shape shown in Figure 4.3.6. Notice how the trajectory spends practically all its time getting through the bottleneck. Now we want to derive a general scaling law for the time required to pass through a bottleneck. The only thing that matters is the behavior of \(\dot{\theta}\) in the immediate vicinity of the minimum, since the time spent there dominates all other time scales in the problem. Generically, \(\dot{\theta}\) looks _parabolic_ near its minimum.
Then the problem simplifies tremendously: the dynamics can be reduced to the normal form for a saddle-node bifurcation! By a local rescaling of space, we can rewrite the vector field as \[\dot{x}=r+x^{2}\] where \(r\) is proportional to the distance from the bifurcation, and \(0<r\ll 1\). The graph of \(\dot{x}\) is shown in Figure 4.3.7. To estimate the time spent in the bottleneck, we calculate the time taken for \(x\) to go from \(-\infty\) (all the way on one side of the bottleneck) to \(+\infty\) (all the way on the other side). The result is \[T_{\text{bottleneck}}\approx\int_{-\infty}^{\infty}\frac{dx}{r+x^{2}}=\frac{\pi}{\sqrt{r}}\,, \tag{4}\] which shows the generality of the square-root scaling law. (Exercise 4.3.1 reminds you how to evaluate the integral in (4).)

**Example 4.3.2:** Estimate the period of \(\dot{\theta}=\omega-a\sin\theta\) in the limit \(a\to\omega^{-}\), using the normal form method instead of the exact result.

_Solution:_ The period will be essentially the time required to get through the bottleneck. To estimate this time, we use a Taylor expansion about \(\theta=\pi/2\), where the bottleneck occurs. Let \(\phi=\theta-\pi/2\), where \(\phi\) is small. Then \[\dot{\phi}=\omega-a\sin(\phi+\tfrac{\pi}{2})=\omega-a\cos\phi=\omega-a+\tfrac{1}{2}a\phi^{2}+\cdots\] which is close to the desired normal form. If we let \[x=(a/2)^{1/2}\phi,\qquad r=\omega-a\] then \((2/a)^{1/2}\dot{x}\approx r+x^{2}\), to leading order in \(x\). Separating variables yields \[T\approx(2/a)^{1/2}\int_{-\infty}^{\infty}\frac{dx}{r+x^{2}}=(2/a)^{1/2}\frac{\pi}{\sqrt{r}}\,.\] Now we substitute \(r=\omega-a\). Furthermore, since \(a\rightarrow\omega^{-}\), we may replace \(2/a\) by \(2/\omega\). Hence \[T\approx\left(\frac{\pi\sqrt{2}}{\sqrt{\omega}}\right)\frac{1}{\sqrt{\omega-a}}\,,\] which agrees with (3).
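The integral in (4) can also be checked directly, since \(\int dx/(r+x^{2})=(1/\sqrt{r})\arctan(x/\sqrt{r})\). The sketch below truncates the infinite limits at \(\pm L\) and confirms the \(\pi/\sqrt{r}\) scaling (the values of \(r\) are arbitrary samples):

```python
import math

# Passage time through the bottleneck of ẋ = r + x², from x = -L to x = +L,
# via the exact antiderivative (1/√r) arctan(x/√r). As L → ∞ this tends
# to π/√r, the square-root scaling law of Eq. (4).
def t_bottleneck(r, L=1e6):
    s = math.sqrt(r)
    return (math.atan(L / s) - math.atan(-L / s)) / s

for r in (0.1, 0.01, 0.001):
    # each pair of columns should nearly coincide
    print(r, t_bottleneck(r), math.pi / math.sqrt(r))
```

Halving \(r\) by a factor of 100 multiplies the passage time by 10, as the exponent \(-1/2\) demands.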
### 4.4 Overdamped Pendulum

We now consider a simple mechanical example of a nonuniform oscillator: an overdamped pendulum driven by a constant torque. Let \(\theta\) denote the angle between the pendulum and the downward vertical, and suppose that \(\theta\) increases counterclockwise (Figure 4.4.1). Then Newton's law yields \[mL^{2}\ddot{\theta}+b\dot{\theta}+mgL\sin\theta=\Gamma \tag{1}\] where \(m\) is the mass and \(L\) is the length of the pendulum, \(b\) is a viscous damping constant, \(g\) is the acceleration due to gravity, and \(\Gamma\) is a constant applied torque. All of these parameters are positive. In particular, \(\Gamma>0\) implies that the applied torque drives the pendulum counterclockwise, as shown in Figure 4.4.1. Equation (1) is a second-order system, but in the _overdamped limit_ of extremely large \(b\), it may be approximated by a first-order system (see Section 3.5 and Exercise 4.4.1). In this limit the inertia term \(mL^{2}\ddot{\theta}\) is negligible and so (1) becomes \[b\dot{\theta}+mgL\sin\theta=\Gamma\,. \tag{2}\] To think about this problem physically, you should imagine that the pendulum is immersed in molasses. The torque \(\Gamma\) enables the pendulum to plow through its viscous surroundings. Please realize that this is the _opposite_ limit from the familiar frictionless case in which energy is conserved, and the pendulum swings back and forth forever. In the present case, energy is lost to damping and pumped in by the applied torque. To analyze (2), we first nondimensionalize it. Dividing by \(mgL\) yields \[\frac{b}{mgL}\dot{\theta}=\frac{\Gamma}{mgL}-\sin\theta.\] Hence, if we let \[\tau=\frac{mgL}{b}t,\qquad\gamma=\frac{\Gamma}{mgL} \tag{3}\] then \[\theta^{\prime}=\gamma-\sin\theta \tag{4}\] where \(\theta^{\prime}=d\theta/d\tau\). The dimensionless group \(\gamma\) is the ratio of the applied torque to the maximum gravitational torque.
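Equation (4) is the nonuniform oscillator (1) of Section 4.3 with \(\omega\) replaced by \(\gamma\) and \(a=1\); so by (2) of that section, an overturning pendulum (\(\gamma>1\)) has dimensionless period \(T=2\pi/\sqrt{\gamma^{2}-1}\). A sketch integrating (4) with a fourth-order Runge-Kutta step confirms this (the value \(\gamma=1.5\) is an arbitrary sample):

```python
import math

# Overturning period of θ' = γ - sin θ for γ > 1 (dimensionless time τ),
# integrated with RK4 and compared to T = 2π/√(γ² - 1). γ is arbitrary.
gamma = 1.5
f = lambda th: gamma - math.sin(th)

def rotation_period(dt=1e-4):
    th, t = 0.0, 0.0
    while th < 2 * math.pi:
        k1 = f(th)
        k2 = f(th + 0.5 * dt * k1)
        k3 = f(th + 0.5 * dt * k2)
        k4 = f(th + dt * k3)
        step = dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        if th + step >= 2 * math.pi:
            return t + dt * (2 * math.pi - th) / step  # interpolate crossing
        th, t = th + step, t + dt

print(rotation_period())                        # numerical period
print(2 * math.pi / math.sqrt(gamma**2 - 1))    # ≈ 5.6199
```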
If \(\gamma>1\) then the applied torque can never be balanced by the gravitational torque and _the pendulum will overturn continually._ The rotation rate is nonuniform, since gravity helps the applied torque on one side and opposes it on the other (Figure 4.4.2). As \(\gamma\to 1^{+}\), the pendulum takes longer and longer to climb past \(\theta=\pi/2\) on the slow side. When \(\gamma=1\) a fixed point appears at \(\theta^{*}=\pi/2\), and then splits into two when \(\gamma<1\) (Figure 4.4.3). On physical grounds, it's clear that the lower of the two equilibrium positions is the stable one. As \(\gamma\) decreases, the two fixed points move farther apart. Finally, when \(\gamma=0\), the applied torque vanishes and there is an unstable equilibrium at the top (inverted pendulum) and a stable equilibrium at the bottom.

### 4.5 Fireflies

Fireflies provide one of the most spectacular examples of synchronization in nature. In some parts of southeast Asia, thousands of male fireflies gather in trees at night and flash on and off in unison. Meanwhile the female fireflies cruise overhead, looking for males with a handsome light. To really appreciate this amazing display, you have to see a movie or videotape of it. A good example is shown in David Attenborough's (1992) television series _The Trials of Life_, in the episode called "Talking to Strangers." See Buck and Buck (1976) for a beautifully written introduction to synchronous fireflies, and Buck (1988) for a comprehensive review. For mathematical models of synchronous fireflies, see Mirollo and Strogatz (1990) and Ermentrout (1991). How does the synchrony occur? Certainly the fireflies don't start out synchronized; they arrive in the trees at dusk, and the synchrony builds up gradually as the night goes on. The key is that _the fireflies influence each other_: When one firefly sees the flash of another, it slows down or speeds up so as to flash more nearly in phase on the next cycle.
Hanson (1978) studied this effect experimentally, by periodically flashing a light at a firefly and watching it try to synchronize. For a range of periods close to the firefly's natural period (about 0.9 sec), the firefly was able to match its frequency to the periodic stimulus. In this case, one says that the firefly had been _entrained_ by the stimulus. However, if the stimulus was too fast or too slow, the firefly could not keep up and entrainment was lost--then a kind of beat phenomenon occurred. But in contrast to the simple beat phenomenon of Section 4.2, the phase difference between stimulus and firefly did not increase uniformly. The phase difference increased slowly during part of the beat cycle, as the firefly struggled in vain to synchronize, and then it increased rapidly through \(2\pi\), after which the firefly tried again on the next beat cycle. This process is called _phase walkthrough_ or _phase drift_.

### Model

Ermentrout and Rinzel (1984) proposed a simple model of the firefly's flashing rhythm and its response to stimuli. Suppose that \(\theta(t)\) is the phase of the firefly's flashing rhythm, where \(\theta=0\) corresponds to the instant when a flash is emitted. Assume that in the absence of stimuli, the firefly goes through its cycle at a frequency \(\omega\), according to \(\dot{\theta}=\omega\). Now suppose there's a periodic stimulus whose phase \(\Theta\) satisfies \[\dot{\Theta}=\Omega\;, \tag{1}\] where \(\Theta=0\) corresponds to the flash of the stimulus. We model the firefly's response to this stimulus as follows: If the stimulus is ahead in the cycle, then we assume that the firefly speeds up in an attempt to synchronize. Conversely, the firefly slows down if it's flashing too early. A simple model that incorporates these assumptions is \[\dot{\theta}=\omega+A\sin(\Theta-\theta) \tag{2}\] where \(A>0\).
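Equations (1) and (2) are easy to integrate numerically. In the sketch below the stimulus is driven at the firefly's own frequency (\(\Omega=\omega\)), so the model predicts that the firefly locks on with zero phase lag; all parameter values are arbitrary illustrations, not fitted to real fireflies:

```python
import math

# Euler integration of the firefly model: Θ' = Ω, θ' = ω + A sin(Θ - θ).
# With Ω = ω the firefly should synchronize with zero phase difference.
# All numbers are illustrative samples, not measured firefly parameters.
omega = Omega = 2 * math.pi   # one flash per time unit
A = 1.0                       # resetting strength
theta, Theta = 0.0, 1.0       # stimulus starts 1 radian ahead
dt = 0.001
for _ in range(20000):        # integrate out to t = 20
    theta += dt * (omega + A * math.sin(Theta - theta))
    Theta += dt * Omega

print(Theta - theta)   # ≈ 0: the firefly has caught up and locked on
```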
For example, if \(\Theta\) is ahead of \(\theta\) (i.e., \(0<\Theta-\theta<\pi\)), the firefly speeds up (\(\dot{\theta}>\omega\)). The _resetting strength_ \(A\) measures the firefly's ability to modify its instantaneous frequency.

### Analysis

To see whether entrainment can occur, we look at the dynamics of the phase difference \(\phi=\Theta-\theta\). Subtracting (2) from (1) yields \[\dot{\phi}=\dot{\Theta}-\dot{\theta}=\Omega-\omega-A\sin\phi\;, \tag{3}\] which is a _nonuniform oscillator_ equation for \(\phi(t)\). Equation (3) can be nondimensionalized by introducing \[\tau=At,\qquad\mu=\frac{\Omega-\omega}{A}. \tag{4}\] Then \[\phi^{\prime}=\mu-\sin\phi \tag{5}\] where \(\phi^{\prime}=d\phi/d\tau\). The dimensionless group \(\mu\) is a measure of the frequency difference, relative to the resetting strength. When \(\mu\) is small, the frequencies are relatively close together and we expect that entrainment should be possible. This is confirmed by Figure 4.5.1, where we plot the vector fields for (5), for different values of \(\mu\geq 0\). (The case \(\mu<0\) is similar.) When \(\mu=0\), all trajectories flow toward a stable fixed point at \(\phi^{*}=0\) (Figure 4.5.1a). Thus the firefly eventually entrains with _zero phase difference_ in the case \(\Omega=\omega\). In other words, the firefly and the stimulus flash _simultaneously_ if the firefly is driven at its natural frequency. Figure 4.5.1b shows that for \(0<\mu<1\), the curve in Figure 4.5.1a lifts up and the stable and unstable fixed points move closer together. All trajectories are still attracted to a stable fixed point, but now \(\phi^{*}>0\). Since the phase difference approaches a constant, one says that the firefly's rhythm is _phase-locked_ to the stimulus. Phase-locking means that the firefly and the stimulus run with the same instantaneous frequency, although they no longer flash in unison. The result \(\phi^{*}>0\) implies that the stimulus flashes _ahead_ of the firefly in each cycle.
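The locked lag itself is easy to compute: the stable fixed point of (5) satisfies \(\sin\phi^{*}=\mu\) with \(-\pi/2\leq\phi^{*}\leq\pi/2\), and integrating (5) from any nearby initial condition converges to it. A sketch with the arbitrary sample value \(\mu=0.5\):

```python
import math

# Entrained phase difference for φ' = μ - sin φ with 0 < μ < 1, Eq. (5).
# The stable fixed point is φ* = arcsin μ; μ = 0.5 is a sample value.
mu = 0.5
phi = 0.0                 # start with firefly and stimulus in phase
dt = 0.01
for _ in range(5000):     # Euler integration out to τ = 50
    phi += dt * (mu - math.sin(phi))

print(phi)                # ≈ 0.5236: settles at φ* = arcsin(0.5) = π/6
print(math.asin(mu))
```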
This makes sense--we assumed \(\mu>0\), which means that \(\Omega>\omega\); the stimulus is inherently faster than the firefly, and drives it faster than it wants to go. Thus the firefly falls behind. But it never gets lapped--it always lags in phase by a constant amount \(\phi^{*}\). If we continue to increase \(\mu\), the stable and unstable fixed points eventually coalesce in a saddle-node bifurcation at \(\mu=1\). For \(\mu>1\) both fixed points have disappeared and now phase-locking is lost; the phase difference \(\phi\) increases indefinitely, corresponding to _phase drift_ (Figure 4.5.1c). (Of course, once \(\phi\) reaches \(2\pi\) the oscillators are in phase again.) Notice that the phases don't separate at a uniform rate, in qualitative agreement with the experiments of Hanson (1978): \(\phi\) increases most slowly when it passes under the minimum of the sine wave in Figure 4.5.1c, at \(\phi=\pi/2\), and most rapidly when it passes under the maximum at \(\phi=-\pi/2\). The model makes a number of specific and testable predictions. Entrainment is predicted to be possible only within a symmetric interval of driving frequencies, specifically \(\omega-A\leq\Omega\leq\omega+A\). This interval is called the _range of entrainment_ (Figure 4.5.2). By measuring the range of entrainment experimentally, one can nail down the value of the parameter \(A\). Then the model makes a rigid prediction for the phase difference during entrainment, namely \[\sin\phi^{*}=\frac{\Omega-\omega}{A} \tag{6}\] where \(-\pi/2\leq\phi^{*}\leq\pi/2\) corresponds to the _stable_ fixed point of (3). Moreover, for \(\mu>1\), the period of phase drift may be predicted as follows.
The time required for \(\phi\) to change by \(2\pi\) is given by \[T_{\rm drift}=\int dt=\int_{0}^{2\pi}\frac{dt}{d\phi}\,d\phi=\int_{0}^{2\pi}\frac{d\phi}{\Omega-\omega-A\sin\phi}\;.\] To evaluate this integral, we invoke (2) of Section 4.3, which yields \[T_{\rm drift}=\frac{2\pi}{\sqrt{\left(\Omega-\omega\right)^{2}-A^{2}}}\,. \tag{7}\] Since \(A\) and \(\omega\) are presumably fixed properties of the firefly, the predictions (6) and (7) could be tested simply by varying the drive frequency \(\Omega\). Such experiments have yet to be done. Actually, the biological reality about synchronous fireflies is more complicated. The model presented here is reasonable for certain species, such as _Pteroptyx cribellata_, which behave as if \(A\) and \(\omega\) were fixed. However, the species that is best at synchronizing, _Pteroptyx malaccae_, is actually able to shift its frequency \(\omega\) toward the drive frequency \(\Omega\) (Hanson 1978). In this way it is able to achieve nearly zero phase difference, even when driven at periods that differ from its natural period by \(\pm 15\) percent! A model of this remarkable effect has been presented by Ermentrout (1991).

### 4.6 Superconducting Josephson Junctions

Josephson junctions are superconducting devices that are capable of generating voltage oscillations of extraordinarily high frequency, typically \(10^{10}-10^{11}\) cycles per second. They have great technological promise as amplifiers, voltage standards, detectors, mixers, and fast switching devices for digital circuits. Josephson junctions can detect electric potentials as small as one quadrillionth of a volt, and they have been used to detect far-infrared radiation from distant galaxies. For an introduction to Josephson junctions, as well as superconductivity more generally, see Van Duzer and Turner (1981).
Although quantum mechanics is required to explain the _origin_ of the Josephson effect, we can nevertheless describe the _dynamics_ of Josephson junctions in classical terms. Josephson junctions have been particularly useful for experimental studies of nonlinear dynamics, because the equation governing a single junction is the same as that for a pendulum! In this section we will study the dynamics of a single junction in the overdamped limit. In later sections we will discuss underdamped junctions, as well as arrays of enormous numbers of junctions coupled together.

### Physical Background

A Josephson junction consists of two closely spaced superconductors separated by a weak connection (Figure 4.6.1). This connection may be provided by an insulator, a normal metal, a semiconductor, a weakened superconductor, or some other material that weakly couples the two superconductors. The two superconducting regions may be characterized by quantum mechanical wave functions \(\psi_{1}e^{i\phi_{1}}\) and \(\psi_{2}e^{i\phi_{2}}\) respectively. Normally a much more complicated description would be necessary because there are \(\sim 10^{23}\) electrons to deal with, but in the superconducting ground state, these electrons form "Cooper pairs" that can be described by a _single_ macroscopic wave function. This implies an astonishing degree of coherence among the electrons. The Cooper pairs act like a miniature version of synchronous fireflies: they all adopt the same phase, because this turns out to minimize the energy of the superconductor. As a 22-year-old graduate student, Brian Josephson (1962) suggested that it should be possible for a current to pass between the two superconductors, even if there were no voltage difference between them. Although this behavior would be impossible classically, it could occur because of quantum mechanical _tunneling_ of Cooper pairs across the junction. An observation of this "Josephson effect" was made by Anderson and Rowell in 1963.
Incidentally, Josephson won the Nobel Prize in 1973, after which he lost interest in mainstream physics and was rarely heard from again. See Josephson (1982) for an interview in which he reminisces about his early work and discusses his more recent interests in transcendental meditation, consciousness, language, and even psychic spoon-bending and paranormal phenomena.

### The Josephson Relations

We now give a more quantitative discussion of the Josephson effect. Suppose that a Josephson junction is connected to a dc current source (Figure 4.6.2), so that a constant current \(I>0\) is driven through the junction. Using quantum mechanics, one can show that if this current is less than a certain _critical current_ \(I_{c}\), no voltage will be developed across the junction; that is, the junction acts as if it had zero resistance! However, the phases of the two superconductors will be driven apart to a constant phase difference \(\phi=\phi_{2}-\phi_{1}\), where \(\phi\) satisfies the _Josephson current-phase relation_ \[I=I_{c}\sin\phi. \tag{1}\] Equation (1) implies that the phase difference increases as the _bias current_ \(I\) increases. When \(I\) exceeds \(I_{c}\), a constant phase difference can no longer be maintained and a voltage develops across the junction. The phases on the two sides of the junction begin to slip with respect to each other, with the rate of slippage governed by the _Josephson voltage-phase relation_ \[V=\frac{\hbar}{2e}\dot{\phi}. \tag{2}\] Here \(V(t)\) is the instantaneous voltage across the junction, \(\hbar\) is Planck's constant divided by \(2\pi\), and \(e\) is the charge on the electron. For an elementary derivation of the Josephson relations (1) and (2), see Feynman's argument (Feynman et al. (1965), Vol. III), also reproduced in Van Duzer and Turner (1981).
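One handy consequence of (2): a junction with a voltage across it has its phase slipping at the rate \(\dot{\phi}=2eV/\hbar\), so it oscillates at the frequency \(f=(2e/h)V\). The conversion factor follows from the fundamental constants (the SI values below are exact by definition):

```python
# Josephson voltage-to-frequency conversion f = (2e/h) V, from relation (2).
e = 1.602176634e-19   # elementary charge, C (exact SI value)
h = 6.62607015e-34    # Planck's constant, J·s (exact SI value)

K_J = 2 * e / h       # Josephson constant, Hz per volt
print(K_J)            # ≈ 4.836e14 Hz/V
print(K_J * 1e-3)     # a junction at 1 mV oscillates near 4.8e11 Hz
```

This is the source of the "typical frequency on the order of \(10^{11}\) Hz" quoted for millivolt-scale junctions below.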
### Equivalent Circuit and Pendulum Analog

The relation (1) applies only to the _supercurrent_ carried by the electron pairs. In general, the total current passing through the junction will also contain contributions from a _displacement current_ and an _ordinary current_. Representing the displacement current by a capacitor, and the ordinary current by a resistor, we arrive at the equivalent circuit shown in Figure 4.6.3, first analyzed by Stewart (1968) and McCumber (1968). Now we apply Kirchhoff's voltage and current laws. For this parallel circuit, the voltage drop across each branch must be equal, and hence all the voltages are equal to \(V\), the voltage across the junction. Hence the current through the capacitor equals \(C\dot{V}\) and the current through the resistor equals \(V/R\). The sum of these currents and the supercurrent \(I_{c}\sin\phi\) must equal the bias current \(I\); hence \[C\dot{V}+\frac{V}{R}+I_{c}\sin\phi=I. \tag{3}\] Equation (3) may be rewritten solely in terms of the phase difference \(\phi\), thanks to (2). The result is \[\frac{\hbar C}{2e}\ddot{\phi}+\frac{\hbar}{2eR}\dot{\phi}+I_{c}\sin\phi=I, \tag{4}\] which is precisely analogous to the equation governing a damped pendulum driven by a constant torque! In the notation of Section 4.4, the pendulum equation is \[mL^{2}\ddot{\theta}+b\dot{\theta}+mgL\sin\theta=\Gamma.\] Hence the analogies are as follows: \[\begin{array}{ll}\mbox{{Pendulum}}&\mbox{{Josephson junction}}\\ \mbox{Angle}\ \theta&\mbox{Phase difference}\ \phi\\ \mbox{Angular velocity}\ \dot{\theta}&\mbox{Voltage}\ \frac{\hbar}{2e}\dot{\phi}\\ \mbox{Mass}\ m&\mbox{Capacitance}\ C\\ \mbox{Applied torque}\ \Gamma&\mbox{Bias current}\ I\\ \mbox{Damping constant}\ b&\mbox{Conductance}\ 1/R\\ \mbox{Maximum gravitational torque}\ mgL&\mbox{Critical current}\ I_{c}\end{array}\] This mechanical analog has often proved useful in visualizing the dynamics of Josephson junctions.
Sullivan and Zimmerman (1971) actually constructed such a mechanical analog, and measured the average rotation rate of the pendulum as a function of the applied torque; this is the analog of the physically important \(I\)-\(V\) curve (current-voltage curve) for the Josephson junction.

### Typical Parameter Values

Before analyzing (4), we mention some typical parameter values for Josephson junctions. The critical current is typically in the range \(I_{c}\approx 1\ \mu\)A to 1 mA, and a typical voltage is \(I_{c}R\approx 1\) mV. Since \(2e/h\approx 4.83\times 10^{14}\) Hz/V, a typical frequency is on the order of \(10^{11}\) Hz. Finally, a typical length scale for Josephson junctions is around \(1\ \mu\)m, but this depends on the geometry and the type of coupling used.

### Dimensionless Formulation

If we divide (4) by \(I_{c}\) and define a dimensionless time \[\tau=\frac{2eI_{c}R}{\hbar}t, \tag{5}\] we obtain the dimensionless equation \[\beta\phi^{\prime\prime}+\phi^{\prime}+\sin\phi=\frac{I}{I_{c}} \tag{6}\] where \(\phi^{\prime}=d\phi/d\tau\). The dimensionless group \(\beta\), defined by \[\beta=\frac{2eI_{c}R^{2}C}{\hbar}\,,\] is called the _McCumber parameter_. It may be thought of as a dimensionless capacitance. Depending on the size, the geometry, and the type of coupling used in the Josephson junction, the value of \(\beta\) can range from \(\beta\approx 10^{-6}\) to much larger values (\(\beta\approx 10^{6}\)). We are not yet prepared to analyze (6) in general. For now, let's restrict ourselves to the _overdamped limit_ \(\beta\ll 1\). Then the term \(\beta\phi^{\prime\prime}\) may be neglected after a rapid initial transient, as discussed in Section 3.5, and so (6) reduces to a nonuniform oscillator: \[\phi^{\prime}=\frac{I}{I_{c}}-\sin\phi. \tag{7}\] As we know from Section 4.3, the solutions of (7) tend to a stable fixed point when \(I<I_{c}\), and vary periodically when \(I>I_{c}\).
**Example 4.6.1:** Find the _current-voltage curve_ analytically in the overdamped limit. In other words, find the average value of the voltage \(\langle V\rangle\) as a function of the constant applied current \(I\), assuming that all transients have decayed and the system has reached steady-state operation. Then plot \(\langle V\rangle\) vs. \(I\).

_Solution:_ It is sufficient to find \(\langle\phi^{\prime}\rangle\), since \(\langle V\rangle=(\hbar/2e)\langle\dot{\phi}\rangle\) from the voltage-phase relation (2), and \[\big{\langle}\dot{\phi}\big{\rangle}=\left\langle\frac{d\phi}{dt}\right\rangle=\left\langle\frac{d\tau}{dt}\,\frac{d\phi}{d\tau}\right\rangle=\frac{2eI_{c}R}{\hbar}\big{\langle}\phi^{\prime}\big{\rangle},\] from the definition of \(\tau\) in (5); hence \[\big{\langle}V\big{\rangle}=I_{c}R\big{\langle}\phi^{\prime}\big{\rangle}. \tag{8}\] There are two cases to consider. When \(I\leq I_{c}\), all solutions of (7) approach a fixed point \(\phi^{*}=\sin^{-1}(I/I_{c})\), where \(-\pi/2\leq\phi^{*}\leq\pi/2\). Thus \(\phi^{\prime}=0\) in steady state, and so \(\langle V\rangle=0\) for \(I\leq I_{c}\). When \(I>I_{c}\), all solutions of (7) are periodic with period \[T=\frac{2\pi}{\sqrt{\big{(}I/I_{c}\big{)}^{2}-1}}\,, \tag{9}\] where the period is obtained from (2) of Section 4.3, and time is measured in units of \(\tau\). We compute \(\langle\phi^{\prime}\rangle\) by taking the average over one cycle: \[\langle\phi^{\prime}\rangle=\frac{1}{T}\int_{0}^{T}\frac{d\phi}{d\tau}d\tau=\frac{1}{T}\int_{0}^{2\pi}d\phi=\frac{2\pi}{T}. \tag{10}\] Combining (8)-(10) yields \[\langle V\rangle=I_{c}R\sqrt{\big{(}I/I_{c}\big{)}^{2}-1}\qquad\text{for }I>I_{c}.\] In summary, we have found \[\langle V\rangle=\begin{cases}0&\text{for }I\leq I_{c}\\ I_{c}R\sqrt{\big{(}I/I_{c}\big{)}^{2}-1}&\text{for }I>I_{c}.\end{cases} \tag{11}\] The \(I\)-\(V\) curve (11) is shown in Figure 4.6.4.
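The piecewise formula (11) is simple enough to evaluate directly. The sketch below uses \(I_{c}=1\) mA and \(R=1\ \Omega\), sample values drawn from the typical ranges quoted earlier:

```python
import math

# Averaged I-V curve of an overdamped junction, Eq. (11).
# I_c and R are sample values from the typical ranges in the text.
I_c = 1e-3          # critical current, 1 mA
R = 1.0             # junction resistance, 1 Ω

def v_avg(I):
    if I <= I_c:
        return 0.0
    return I_c * R * math.sqrt((I / I_c)**2 - 1)

print(v_avg(0.5e-3))   # 0.0: zero voltage below I_c
print(v_avg(2e-3))     # ≈ 1.73e-3 V, i.e. I_c R √3
print(v_avg(50e-3))    # ≈ 0.05 V, close to the Ohmic value IR
```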
As \(I\) increases, the voltage remains zero until \(I>I_{c}\); then \(\langle V\rangle\) rises sharply and eventually asymptotes to the Ohmic behavior \(\langle V\rangle\approx IR\) for \(I\gg I_{c}\). The analysis given in Example 4.6.1 applies only to the overdamped limit \(\beta\ll 1\). The behavior of the system becomes much more interesting if \(\beta\) is not negligible. In particular, the \(I\)-\(V\) curve can be _hysteretic,_ as shown in Figure 4.6.5. As the bias current is increased slowly from \(I=0\), the voltage remains at \(V=0\) until \(I>I_{c}\). Then the voltage jumps up to a nonzero value, as shown by the upward arrow in Figure 4.6.5. The voltage increases with further increases of \(I\). However, if we now slowly _decrease_ \(I\), the voltage doesn't drop back to zero at \(I_{c}\)--we have to go _below_ \(I_{c}\) before the voltage returns to zero. The hysteresis comes about because the system has _inertia_ when \(\beta\neq 0\). We can make sense of this by thinking in terms of the pendulum analog. The critical current \(I_{c}\) is analogous to the critical torque \(\Gamma_{c}\) needed to get the pendulum overturning. Once the pendulum has started whirling, its inertia keeps it going so that even if the torque is lowered _below_ \(\Gamma_{c}\), the rotation continues. The torque has to be lowered even further before the pendulum will fail to make it over the top. In more mathematical terms, we'll show in Section 8.5 that this hysteresis occurs because a _stable fixed point coexists with a stable periodic solution._ We have never seen anything like _this_ before! For vector fields on the line, only fixed points can exist; for vector fields on the circle, both fixed points and periodic solutions can exist, _but not simultaneously._ Here we see just one example of the new kinds of phenomena that can occur in two-dimensional systems. It's time to take the plunge.
### 4.1 Examples and Definitions

**4.1.1**: For which real values of \(a\) does the equation \(\dot{\theta}=\sin(a\theta)\) give a well-defined vector field on the circle?

For each of the following vector fields, find and classify all the fixed points, and sketch the phase portrait on the circle.

**4.1.2**: \(\dot{\theta}=1+2\cos\theta\)

**4.1.3**: \(\dot{\theta}=\sin 2\theta\)

**4.1.4**: \(\dot{\theta}=\sin^{3}\theta\)

**4.1.5**: \(\dot{\theta}=\sin\theta+\cos\theta\)

**4.1.6**: \(\dot{\theta}=3+\cos 2\theta\)

**4.1.7**: \(\dot{\theta}=\sin k\theta\) where \(k\) is a positive integer.

**4.1.8**: (Potentials for vector fields on the circle)
**a)**: Consider the vector field on the circle given by \(\dot{\theta}=\cos\theta\). Show that this system has a single-valued potential \(V(\theta)\), i.e., for each point on the circle, there is a well-defined value of \(V\) such that \(\dot{\theta}=-dV/d\theta\). (As usual, \(\theta\) and \(\theta+2\pi k\) are to be regarded as the same point on the circle, for each integer \(k\).)
**b)**: Now consider \(\dot{\theta}=1\). Show that there is no single-valued potential \(V(\theta)\) for this vector field on the circle.
**c)**: What's the general rule? When does \(\dot{\theta}=f(\theta)\) have a single-valued potential?

**4.1.9**: In Exercises 2.6.2 and 2.7.7, you were asked to give two analytical proofs that periodic solutions are impossible for vector fields on the line. Review these arguments and explain why they don't carry over to vector fields on the circle. Specifically, which parts of the argument fail?

### 4.2 Uniform Oscillator

**4.2.1**: (Church bells) The bells of two different churches are ringing. One bell rings every 3 seconds, and the other rings every 4 seconds. Assume that the bells have just rung at the same time. How long will it be until the next time they ring together? Answer the question in two ways: using common sense, and using the method of Example 4.2.1.
**4.2.2**: (Beats arising from linear superpositions) Graph \(x(t)=\sin 8t+\sin 9t\) for \(-20<t<20\). You should find that the amplitude of the oscillations is _modulated_--it grows and decays periodically.
**a)**: What is the period of the amplitude modulations?
**b)**: Solve this problem analytically, using a trigonometric identity that converts sums of sines and cosines to products of sines and cosines.
(In the old days, this beat phenomenon was used to tune musical instruments. You would strike a tuning fork at the same time as you played the desired note on the instrument. The combined sound \(A_{1}\sin\omega_{1}t+A_{2}\sin\omega_{2}t\) would get louder and softer as the two vibrations went in and out of phase. Each maximum of total amplitude is called a beat. When the time between beats is long, the instrument is nearly in tune.)

**4.2.3**: (The clock problem) Here's an old chestnut from high school algebra: At 12:00, the hour hand and minute hand of a clock are perfectly aligned. When is the _next_ time they will be aligned? (Solve the problem by the methods of this section, and also by some alternative approach of your choosing.)

### 4.3 Nonuniform Oscillator

**4.3.1**: As shown in the text, the time required to pass through a saddle-node bottleneck is approximately \(T_{\text{bottleneck}}=\int_{-\infty}^{\infty}\frac{dx}{r+x^{2}}\). To evaluate this integral, let \(x=\sqrt{r}\tan\theta\), use the identity \(1+\tan^{2}\theta=\sec^{2}\theta\), and change the limits of integration appropriately. Thereby show that \(T_{\text{bottleneck}}=\pi/\sqrt{r}\).

**4.3.2**: The oscillation period for the nonuniform oscillator is given by the integral \(T=\int_{-\pi}^{\pi}\frac{d\theta}{\omega-a\sin\theta}\), where \(\omega>a>0\). Evaluate this integral as follows.
**a)**: Let \(u=\tan\frac{\theta}{2}\). Solve for \(\theta\) and then express \(d\theta\) in terms of \(u\) and \(du\).
**b)**: Show that \(\sin\theta=2u/(1+u^{2})\). (Hint: Draw a right triangle with base 1 and height \(u\).
Then \(\frac{\theta}{2}\) is the angle opposite the side of length \(u\), since \(u=\tan\frac{\theta}{2}\) by definition. Finally, invoke the half-angle formula \(\sin\theta=2\sin\frac{\theta}{2}\cos\frac{\theta}{2}\).)
* Show that \(u\rightarrow\pm\infty\) as \(\theta\rightarrow\pm\pi\), and use that fact to rewrite the limits of integration.
* Express \(T\) as an integral with respect to \(u\).
* Finally, complete the square in the denominator of the integrand of (d), and reduce the integral to the one studied in Exercise 4.3.1, for a suitable choice of \(x\) and \(r\).

For each of the following questions, draw the phase portrait as a function of the control parameter \(\mu\). Classify the bifurcations that occur as \(\mu\) varies, and find all the bifurcation values of \(\mu\).

**4.3.3**: \(\dot{\theta}=\mu\sin\theta-\sin 2\theta\)

**4.3.4**: \(\dot{\theta}=\frac{\sin\theta}{\mu+\cos\theta}\)

**4.3.5**: \(\dot{\theta}=\mu+\cos\theta+\cos 2\theta\)

**4.3.6**: \(\dot{\theta}=\mu+\sin\theta+\cos 2\theta\)

**4.3.7**: \(\dot{\theta}=\frac{\sin\theta}{\mu+\sin\theta}\)

**4.3.8**: \(\dot{\theta}=\frac{\sin 2\theta}{1+\mu\sin\theta}\)

**4.3.9**: (Alternative derivation of scaling law) For systems close to a saddle-node bifurcation, the scaling law \(T_{\text{bottleneck}}\sim O(r^{-1/2})\) can also be derived as follows.

a) Suppose that \(x\) has a characteristic scale \(O(r^{a})\), where \(a\) is unknown for now. Then \(x=r^{a}u\), where \(u\sim O(1)\). Similarly, suppose \(t=r^{b}\tau\), with \(\tau\sim O(1)\). Show that \(\dot{x}=r+x^{2}\) is thereby transformed to \(r^{a-b}\frac{du}{d\tau}=r+r^{2a}u^{2}\).

b) Assume that all terms in the equation have the same order with respect to \(r\), and thereby derive \(a=\frac{1}{2}\), \(b=-\frac{1}{2}\).

**4.3.10**: (Nongeneric scaling laws) In deriving the square-root scaling law for the time spent passing through a bottleneck, we assumed that \(\dot{x}\) had a quadratic minimum.
This is the generic case, but what if the minimum were of higher order? Suppose that the bottleneck is governed by \(\dot{x}=r+x^{2n}\), where \(n>1\) is an integer. Using the method of Exercise 4.3.9, show that \(T_{\text{bottleneck}}\approx cr^{b}\), and determine \(b\) and \(c\). (It's acceptable to leave \(c\) in the form of a definite integral. If you know complex variables and residue theory, you should be able to evaluate \(c\) exactly by integrating around the boundary of the pie-slice \(\left\{z=re^{i\theta}:0\leq\theta\leq\pi/n,\ 0\leq r\leq R\right\}\) and letting \(R\to\infty\).)

### 4.4 Overdamped Pendulum

**4.4.1**: (Validity of overdamped limit) Find the conditions under which it is valid to approximate the equation \(mL^{2}\ddot{\theta}+b\dot{\theta}+mgL\sin\theta=\Gamma\) by its overdamped limit \(b\dot{\theta}+mgL\sin\theta=\Gamma\).

**4.4.2**: (Understanding \(\sin\theta(t)\)) By imagining the rotational motion of an overdamped pendulum, sketch \(\sin\theta(t)\) vs. \(t\) for a typical solution of \(\theta^{\prime}=\gamma-\sin\theta\). How does the shape of the waveform depend on \(\gamma\)? Make a series of graphs for different \(\gamma\), including the limiting cases \(\gamma\approx 1\) and \(\gamma\gg 1\). For the pendulum, what physical quantity is proportional to \(\sin\theta(t)\)?

**4.4.3**: (Understanding \(\dot{\theta}(t)\)) Redo Exercise 4.4.2, but now for \(\dot{\theta}(t)\) instead of \(\sin\theta(t)\).

**4.4.4**: (Torsional spring) Suppose that our overdamped pendulum is connected to a torsional spring. As the pendulum rotates, the spring winds up and generates an opposing torque \(-k\theta\). Then the equation of motion becomes \(b\dot{\theta}+mgL\sin\theta=\Gamma-k\theta\).

* Does this equation give a well-defined vector field on the circle?
* Nondimensionalize the equation.
* What does the pendulum do in the long run?
* Show that many bifurcations occur as \(k\) is varied from 0 to \(\infty\). What kind of bifurcations are they?
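The rotation period asked about in Exercise 4.4.2 can be checked numerically against the closed-form result of Exercise 4.3.2, which for \(\theta^{\prime}=\gamma-\sin\theta\) (i.e., \(\omega=\gamma\), \(a=1\)) works out to \(T=2\pi/\sqrt{\gamma^{2}-1}\) when \(\gamma>1\). A minimal sketch, with \(\gamma=1.5\) as an illustrative choice:

```python
from math import sin, pi, sqrt

gamma = 1.5            # drive strength; gamma > 1, so the pendulum rotates
dt = 1e-5              # Euler time step

# Integrate theta' = gamma - sin(theta) through one full rotation
# and record how long it takes.
theta, t = 0.0, 0.0
while theta < 2 * pi:
    theta += dt * (gamma - sin(theta))
    t += dt

T_exact = 2 * pi / sqrt(gamma**2 - 1)  # period from Exercise 4.3.2
print(t, T_exact)                      # the two values should agree closely
```

Repeating the run for \(\gamma\) closer to 1 makes the slow crawl past \(\theta=\pi/2\) (the bottleneck) increasingly dominant, which is the waveform behavior Exercise 4.4.2 asks you to sketch.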
### 4.5 Fireflies

**4.5.1**: (Triangle wave) In the firefly model, the sinusoidal form of the firefly's response function was chosen somewhat arbitrarily. Consider the alternative model \(\dot{\Theta}=\Omega,\ \dot{\theta}=\omega+Af(\Theta-\theta)\), where \(f\) is given now by a triangle wave, not a sine wave. Specifically, let \[f(\phi)=\begin{cases}\phi,&-\frac{\pi}{2}\leq\phi\leq\frac{\pi}{2}\\ \pi-\phi,&\frac{\pi}{2}\leq\phi\leq\frac{3\pi}{2}\end{cases}\] on the interval \(-\frac{\pi}{2}\leq\phi\leq\frac{3\pi}{2}\), and extend \(f\) periodically outside this interval.

* Graph \(f(\phi)\).
* Find the range of entrainment.
* Assuming that the firefly is phase-locked to the stimulus, find a formula for the phase difference \(\phi^{*}\).
* Find a formula for \(T_{\rm drift}\).

**4.5.2**: (General response function) Redo as much of the previous exercise as possible, assuming only that \(f(\phi)\) is a smooth, \(2\pi\)-periodic function with a single maximum and minimum on the interval \(-\pi\leq\phi\leq\pi\).

**4.5.3**: (Excitable systems) Suppose you stimulate a neuron by injecting it with a pulse of current. If the stimulus is small, nothing dramatic happens: the neuron increases its membrane potential slightly, and then relaxes back to its resting potential. However, if the stimulus exceeds a certain threshold, the neuron will "fire" and produce a large voltage spike before returning to rest. Surprisingly, the size of the spike doesn't depend much on the size of the stimulus--anything above threshold will elicit essentially the same response. Similar phenomena are found in other types of cells and even in some chemical reactions (Winfree 1980, Rinzel and Ermentrout 1989, Murray 2002). These systems are called _excitable_.
The term is hard to define precisely, but roughly speaking, an excitable system is characterized by two properties: (1) it has a unique, globally attracting rest state, and (2) a large enough stimulus can send the system on a long excursion through phase space before it returns to the resting state. This exercise deals with the simplest caricature of an excitable system. Let \(\dot{\theta}=\mu+\sin\theta\), where \(\mu\) is slightly less than 1.

1. Show that the system satisfies the two properties mentioned above. What object plays the role of the "rest state"? And the "threshold"?
2. Let \(V(t)=\cos\theta(t)\). Sketch \(V(t)\) for various initial conditions. (Here \(V\) is analogous to the neuron's membrane potential, and the initial conditions correspond to different perturbations from the rest state.)

### 4.6 Superconducting Josephson Junctions

**4.6.1**: (Current and voltage oscillations) Consider a Josephson junction in the overdamped limit \(\beta=0\).

1. Sketch the supercurrent \(I_{c}\sin\phi(t)\) as a function of \(t\), assuming first that \(I/I_{c}\) is slightly greater than 1, and then assuming that \(I/I_{c}\gg 1\). (Hint: In each case, visualize the flow on the circle, as given by Equation (4.6.7).)
2. Sketch the instantaneous voltage \(V(t)\) for the two cases considered in (a).

**4.6.2**: (Computer work) Check your qualitative solution to Exercise 4.6.1 by integrating Equation (4.6.7) numerically, and plotting the graphs of \(I_{c}\sin\phi(t)\) and \(V(t)\).

**4.6.3**: (Washboard potential) Here's another way to visualize the dynamics of an overdamped Josephson junction. As in Section 2.7, imagine a particle sliding down a suitable potential.

1. Find the potential function corresponding to Equation (4.6.7). Show that it is _not_ a single-valued function on the circle.
2. Graph the potential as a function of \(\phi\), for various values of \(I/I_{c}\). Here \(\phi\) is to be regarded as a real number, not an angle.
3. What is the effect of increasing \(I\)?
The potential in (b) is often called the "washboard potential" (Van Duzer and Turner 1981, p. 179) because its shape is reminiscent of a tilted, corrugated washboard.

**4.6.4**: (Resistively loaded array) _Arrays_ of coupled Josephson junctions raise many fascinating questions. Their dynamics are not yet understood in detail. The questions are technologically important because arrays can produce much greater power output than a single junction, and also because arrays provide a reasonable model of the (still mysterious) high-temperature superconductors. For an introduction to some of the dynamical questions of interest, see Tsang et al. (1991) and Strogatz and Mirollo (1993).

Figure 1 shows an array of two identical overdamped Josephson junctions. The junctions are in series with each other, and in parallel with a resistive "load" \(R\). The goal of this exercise is to derive the governing equations for this circuit. In particular, we want to find differential equations for \(\phi_{1}\) and \(\phi_{2}\).

* Write an equation relating the dc bias current \(I_{b}\) to the current \(I_{a}\) flowing through the array and the current \(I_{R}\) flowing through the load resistor.
* Let \(V_{1}\) and \(V_{2}\) denote the voltages across the first and second Josephson junctions. Show that \(I_{a}=I_{c}\sin\phi_{1}+V_{1}/r\) and \(I_{a}=I_{c}\sin\phi_{2}+V_{2}/r\).
* Let \(k=1,2\). Express \(V_{k}\) in terms of \(\dot{\phi}_{k}\).
* Using the results above, along with Kirchhoff's voltage law, show that \[I_{b}=I_{c}\sin\phi_{k}+\frac{\hbar}{2er}\dot{\phi}_{k}+\frac{\hbar}{2eR}\big{(}\dot{\phi}_{1}+\dot{\phi}_{2}\big{)}\quad\text{for}\;\;\;k=1,\;2.\]
* The equations in part (d) can be written in more standard form as equations for \(\dot{\phi}_{k}\), as follows. Add the equations for \(k=1,2\), and use the result to eliminate the term \(\big{(}\dot{\phi}_{1}+\dot{\phi}_{2}\big{)}\).
Show that the resulting equations take the form \[\dot{\phi}_{k}=\Omega+a\sin\phi_{k}+K\sum_{j=1}^{2}\sin\phi_{j},\] and write down explicit expressions for the parameters \(\Omega\), \(a\), and \(K\).

**4.6.5**: (\(N\) junctions, resistive load) Generalize Exercise 4.6.4 as follows. Instead of the two Josephson junctions in Figure 1, consider an array of \(N\) junctions in series. As before, assume the array is in parallel with a resistive load \(R\), and that the junctions are identical, overdamped, and driven by a constant bias current \(I_{b}\). Show that the governing equations can be written in dimensionless form as \[\frac{d\phi_{k}}{d\tau}=\Omega+a\sin\phi_{k}+\frac{1}{N}\sum_{j=1}^{N}\sin\phi_{j},\quad\text{for}\;k=1,\ldots,N,\] and write down explicit expressions for the dimensionless groups \(\Omega\) and \(a\) and the dimensionless time \(\tau\). (See Example 8.7.4 and Tsang et al. (1991) for further discussion.)

**4.6.6**: (\(N\) junctions, \(RLC\) load) Generalize Exercise 4.6.4 to the case where there are \(N\) junctions in series, and where the load is a resistor \(R\) in series with a capacitor \(C\) and an inductor \(L\). Write differential equations for \(\phi_{k}\) and for \(Q\), where \(Q\) is the charge on the load capacitor. (See Strogatz and Mirollo 1993.)
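The phase-locking claims in the firefly exercises can be explored numerically. For the sinusoidal model, the phase difference \(\phi=\Theta-\theta\) obeys \(\dot{\phi}=\Omega-\omega-A\sin\phi\), and locking requires \(|\Omega-\omega|\leq A\), with stable locked phase satisfying \(\sin\phi^{*}=(\Omega-\omega)/A\). A minimal sketch (the parameter values are illustrative, not from the text):

```python
from math import sin, asin

# Sinusoidal firefly model: phi = Theta - theta obeys
#   phi' = (Omega - omega) - A*sin(phi).
Omega, omega, A = 1.2, 1.0, 0.5   # illustrative values; |Omega - omega| < A

phi, dt = 0.0, 1e-3
for _ in range(100_000):          # Euler integration for 100 time units
    phi += dt * ((Omega - omega) - A * sin(phi))

phi_star = asin((Omega - omega) / A)  # predicted stable locked phase
print(phi, phi_star)                  # the two should agree closely
```

Choosing \(|\Omega-\omega|>A\) instead makes \(\phi\) grow without bound: the firefly drifts rather than entrains, which is the \(T_{\rm drift}\) regime of Exercise 4.5.1.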
## Chapter 5 Linear Systems

### 5.1 Definitions and Examples

A _two-dimensional linear system_ is a system of the form \[\dot{x} = ax+by\] \[\dot{y} = cx+dy\] where \(a\), \(b\), \(c\), \(d\) are parameters. If we use boldface to denote vectors, this system can be written more compactly in matrix form as \[\dot{\mathbf{x}}=A\mathbf{x},\] where \[A= \begin{pmatrix}a&b\\ c&d\end{pmatrix}\text{ and }\mathbf{x}=\begin{pmatrix}x\\ y\end{pmatrix}.\] Such a system is _linear_ in the sense that if \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) are solutions, then so is any linear combination \(c_{1}\mathbf{x}_{1}+c_{2}\mathbf{x}_{2}\). Notice that \(\dot{\mathbf{x}}=\mathbf{0}\) when \(\mathbf{x}=\mathbf{0}\), so \(\mathbf{x}^{*}=\mathbf{0}\) is always a fixed point for any choice of \(A\).

The solutions of \(\dot{\mathbf{x}}=A\mathbf{x}\) can be visualized as trajectories moving on the \((x,y)\) plane, in this context called the _phase plane_. Our first example presents the phase plane analysis of a familiar system.

**Example 5.1.1:** As discussed in elementary physics courses, the vibrations of a mass hanging from a linear spring are governed by the linear differential equation \[m\ddot{x}+kx=0 \tag{1}\] where \(m\) is the mass, \(k\) is the spring constant, and \(x\) is the displacement of the mass from equilibrium (Figure 5.1.1). Give a phase plane analysis of this _simple harmonic oscillator_.

_Solution:_ As you probably recall, it's easy to solve (1) analytically in terms of sines and cosines. But that's precisely what makes linear equations so special! For the _nonlinear_ equations of ultimate interest to us, it's usually impossible to find an analytical solution. We want to develop methods for deducing the behavior of equations like (1) _without actually solving them_. The motion in the phase plane is determined by a vector field that comes from the differential equation (1).
To find this vector field, we note that the _state_ of the system is characterized by its current position \(x\) and velocity \(v\); if we know the values of _both_ \(x\) and \(v\), then (1) uniquely determines the future states of the system. Therefore we rewrite (1) in terms of \(x\) and \(v\), as follows: \[\dot{x}=v \tag{2a}\] \[\dot{v}=-\frac{k}{m}x. \tag{2b}\] Equation (2a) is just the definition of velocity, and (2b) is the differential equation (1) rewritten in terms of \(v\). To simplify the notation, let \(\omega^{2}=k/m\). Then (2) becomes \[\dot{x}=v \tag{3a}\] \[\dot{v}=-\omega^{2}x. \tag{3b}\]

The system (3) assigns a vector \((\dot{x},\dot{v})=(v,-\omega^{2}x)\) at each point \((x,v)\), and therefore represents a _vector field_ on the phase plane. For example, let's see what the vector field looks like when we're on the \(x\)-axis. Then \(v=0\) and so \(\left(\dot{x},\dot{v}\right)=(0,-\omega^{2}x)\). Hence the vectors point vertically downward for positive \(x\) and vertically upward for negative \(x\) (Figure 5.1.2). As \(x\) gets larger in magnitude, the vectors \((0,-\omega^{2}x)\) get longer. Similarly, on the \(v\)-axis, the vector field is \(\left(\dot{x},\dot{v}\right)=(v,0)\), which points to the right when \(v>0\) and to the left when \(v<0\). As we move around in phase space, the vectors change direction as shown in Figure 5.1.2.

Just as in Chapter 2, it is helpful to visualize the vector field in terms of the motion of an imaginary fluid. In the present case, we imagine that a fluid is flowing steadily on the phase plane with a local velocity given by \((\dot{x},\dot{v})=(v,-\omega^{2}x)\). Then, to find the trajectory starting at \((x_{0},v_{0})\), we place an imaginary particle or _phase point_ at \((x_{0},v_{0})\) and watch how it is carried around by the flow. The flow in Figure 5.1.2 swirls about the origin.
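This swirling flow can be confirmed by integrating system (3) numerically; a minimal sketch (taking \(\omega=1\) for illustration) using fourth-order Runge-Kutta and checking that the phase point returns to its starting point after one period \(2\pi/\omega\):

```python
from math import pi

omega = 1.0   # illustrative choice; omega**2 = k/m

def f(x, v):
    """Vector field of system (3): (x', v') = (v, -omega**2 * x)."""
    return v, -omega**2 * x

def rk4_step(x, v, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x, v)
    k2 = f(x + 0.5*dt*k1[0], v + 0.5*dt*k1[1])
    k3 = f(x + 0.5*dt*k2[0], v + 0.5*dt*k2[1])
    k4 = f(x + dt*k3[0], v + dt*k3[1])
    x += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    v += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return x, v

x, v = 1.0, 0.0                 # initial condition (x0, v0)
n, T = 10_000, 2*pi/omega       # one full period of the oscillation
for _ in range(n):
    x, v = rk4_step(x, v, T/n)
print(x, v)  # back near (1, 0): the orbit is closed
```

Tracking \(\omega^{2}x^{2}+v^{2}\) along the way shows it stays essentially constant, consistent with energy conservation.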
The origin is special, like the eye of a hurricane: a phase point placed there would remain motionless, because \((\dot{x},\dot{v})=(0,0)\) when \((x,v)=(0,0)\); hence the origin is a _fixed point_. But a phase point starting anywhere else would circulate around the origin and eventually return to its starting point. Such trajectories form _closed orbits_, as shown in Figure 5.1.3. Figure 5.1.3 is called the _phase portrait_ of the system--it shows the overall picture of trajectories in phase space. What do fixed points and closed orbits have to do with the original problem of a mass on a spring? The answers are beautifully simple. The fixed point \((x,v)=(0,0)\) corresponds to static equilibrium of the system: the mass is at rest at its equilibrium position and will remain there forever, since the spring is relaxed. The closed orbits have a more interesting interpretation: they correspond to periodic motions, i.e., oscillations of the mass. To see this, just look at some points on a closed orbit (Figure 5.1.4). When the displacement \(x\) is most negative, the velocity \(v\) is zero; this corresponds to one extreme of the oscillation, where the spring is most compressed (Figure 5.1.4). In the next instant as the phase point flows along the orbit, it is carried to points where \(x\) has increased and \(v\) is now positive; the mass is being pushed back toward its equilibrium position. But by the time the mass has reached \(x=0\), it has a large positive velocity (Figure 5.1.4b) and so it overshoots \(x=0\). The mass eventually comes to rest at the other end of its swing, where \(x\) is most positive and \(v\) is zero again (Figure 5.1.4c). Then the mass gets pulled up again and eventually completes the cycle (Figure 5.1.4d). The shape of the closed orbits also has an interesting physical interpretation. The orbits in Figures 5.1.3 and 5.1.4 are actually _ellipses_ given by the equation \(\omega^{2}x^{2}+v^{2}=C\), where \(C\geq 0\) is a constant. 
In Exercise 5.1.1, you are asked to derive this geometric result, and to show that it is equivalent to conservation of energy.

**Example 5.1.2**: Solve the linear system \(\dot{\mathbf{x}}=A\mathbf{x}\), where \(A=\begin{pmatrix}a&0\\ 0&-1\end{pmatrix}\). Graph the phase portrait as \(a\) varies from \(-\infty\) to \(+\infty\), showing the qualitatively different cases.

_Solution:_ The system is \[\begin{pmatrix}\dot{x}\\ \dot{y}\end{pmatrix}=\begin{pmatrix}a&0\\ 0&-1\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}.\] Matrix multiplication yields \[\dot{x} = ax\] \[\dot{y} = -y,\] which shows that the two equations are _uncoupled_; there's no \(x\) in the \(y\)-equation and vice versa. In this simple case, each equation may be solved separately. The solution is \[x(t) = x_{0}e^{at} \tag{1a}\] \[y(t) = y_{0}e^{-t}. \tag{1b}\]

The phase portraits for different values of \(a\) are shown in Figure 5.1.5. In each case, \(y(t)\) decays exponentially. When \(a<0\), \(x(t)\) also decays exponentially and so all trajectories approach the origin as \(t\rightarrow\infty\). However, the direction of approach depends on the size of \(a\) compared to \(-1\). In Figure 5.1.5a, we have \(a<-1\), which implies that \(x(t)\) decays more rapidly than \(y(t)\). The trajectories approach the origin tangent to the _slower_ direction (here, the \(y\)-direction). The intuitive explanation is that when \(a\) is very negative, the trajectory slams horizontally onto the \(y\)-axis, because the decay of \(x(t)\) is almost instantaneous. Then the trajectory dawdles along the \(y\)-axis toward the origin, and so the approach is tangent to the \(y\)-axis. On the other hand, if we look _backwards_ along a trajectory (\(t\to-\infty\)), the trajectories all become parallel to the faster decaying direction (here, the \(x\)-direction). These conclusions are easily proved by looking at the slope \(dy/dx = \dot{y}/\dot{x}\) along the trajectories; see Exercise 5.1.2.
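The tangency claim can also be checked directly from the explicit solution (1); a minimal sketch, taking \(a=-2<-1\) as an illustrative case, confirming that \(x(t)/y(t)\to 0\), i.e., the trajectory approaches the origin along the \(y\)-axis:

```python
from math import exp

a = -2.0                       # a < -1: x decays faster than y
x0, y0 = 1.0, 1.0              # illustrative initial condition

ratios = []
for t in (0.0, 2.0, 5.0, 10.0):
    x = x0 * exp(a * t)        # solution (1a)
    y = y0 * exp(-t)           # solution (1b)
    ratios.append(x / y)
    print(t, x / y)            # ratio x/y shrinks toward 0

# Approach tangent to the y-axis: the x-coordinate becomes negligible
# relative to the y-coordinate as t grows.
```

Running the same check with \(-1<a<0\) reverses the roles: then \(y/x\to 0\) and the approach is tangent to the \(x\)-axis instead.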
In Figure 5.1.5a, the fixed point \(\mathbf{x}^{*}=\mathbf{0}\) is called a _stable node_. Figure 5.1.5b shows the case \(a=-1\). Equation (1) shows that \(y(t)/x(t)=y_{0}/x_{0}\) for all \(t\), and so all trajectories are straight lines through the origin. This is a very special case--it occurs because the decay rates in the two directions are precisely equal. In this case, \(\mathbf{x}^{*}\) is called a _symmetrical node_ or _star_. When \(-1<a<0\), we again have a node, but now the trajectories approach \(\mathbf{x}^{*}\) along the \(x\)-direction, which is the more slowly decaying direction for this range of \(a\) (Figure 5.1.5c).

Something dramatic happens when \(a=0\) (Figure 5.1.5d). Now (1a) becomes \(x(t)\equiv x_{0}\) and so there's an entire _line of fixed points_ along the \(x\)-axis. All trajectories approach these fixed points along vertical lines.

Finally, when \(a>0\) (Figure 5.1.5e), \(\mathbf{x}^{*}\) becomes unstable, due to the exponential growth in the \(x\)-direction. Most trajectories veer away from \(\mathbf{x}^{*}\) and head out to infinity. An exception occurs if the trajectory starts on the \(y\)-axis; then it walks a tightrope to the origin. In forward time, the trajectories are asymptotic to the \(x\)-axis; in backward time, to the \(y\)-axis. Here \(\mathbf{x}^{*}=\mathbf{0}\) is called a _saddle point_. The \(y\)-axis is called the _stable manifold_ of the saddle point \(\mathbf{x}^{*}\), defined as the set of initial conditions \(\mathbf{x}_{0}\) such that \(\mathbf{x}(t)\to\mathbf{x}^{*}\) as \(t\to\infty\). Likewise, the _unstable manifold_ of \(\mathbf{x}^{*}\) is the set of initial conditions such that \(\mathbf{x}(t)\to\mathbf{x}^{*}\) as \(t\to-\infty\). Here the unstable manifold is the \(x\)-axis. Note that a typical trajectory asymptotically approaches the unstable manifold as \(t\to\infty\), and approaches the stable manifold as \(t\to-\infty\). This sounds backwards, but it's right!

### Stability Language

It's useful to introduce some language that allows us to discuss the stability of different types of fixed points.
This language will be especially useful when we analyze fixed points of _nonlinear_ systems. For now we'll be informal; precise definitions of the different types of stability will be given in Exercise 5.1.10.

We say that \(\mathbf{x}^{*}=\mathbf{0}\) is an _attracting_ fixed point in Figures 5.1.5a-c; all trajectories that start near \(\mathbf{x}^{*}\) approach it as \(t\to\infty\). That is, \(\mathbf{x}(t)\to\mathbf{x}^{*}\) as \(t\to\infty\). In fact \(\mathbf{x}^{*}\) attracts _all_ trajectories in the phase plane, so it could be called _globally attracting_.

There's a completely different notion of stability which relates to the behavior of trajectories for _all_ time, not just as \(t\to\infty\). We say that a fixed point \(\mathbf{x}^{*}\) is _Liapunov stable_ if all trajectories that start sufficiently close to \(\mathbf{x}^{*}\) remain close to it for all time. In Figures 5.1.5a-d, the origin is Liapunov stable.

Figure 5.1.5d shows that a fixed point can be Liapunov stable but not attracting. This situation comes up often enough that there is a special name for it. When a fixed point is Liapunov stable but not attracting, it is called _neutrally stable_. Nearby trajectories are neither attracted to nor repelled from a neutrally stable point. As a second example, the equilibrium point of the simple harmonic oscillator (Figure 5.1.3) is neutrally stable. Neutral stability is commonly encountered in mechanical systems in the absence of friction.

Conversely, it's possible for a fixed point to be attracting but not Liapunov stable; thus, neither notion of stability implies the other. An example is given by the following vector field on the circle: \(\dot{\theta}=1-\cos\theta\) (Figure 5.1.6). Here \(\theta^{*}=0\) attracts all trajectories as \(t\to\infty\), but it is not Liapunov stable; there are trajectories that start infinitesimally close to \(\theta^{*}\) but go on a very large excursion before returning to \(\theta^{*}\). However, in practice the two types of stability often occur together.
If a fixed point is _both_ Liapunov stable and attracting, we'll call it _stable_, or sometimes _asymptotically stable_. Finally, \(\mathbf{x}^{*}\) is _unstable_ in Figure 5.1.5e, because it is neither attracting nor Liapunov stable.

A graphical convention: we'll use open dots to denote unstable fixed points, and solid black dots to denote Liapunov stable fixed points. This convention is consistent with that used in previous chapters.

### 5.2 Classification of Linear Systems

The examples in the last section had the special feature that two of the entries in the matrix \(A\) were zero. Now we want to study the general case of an arbitrary \(2\times 2\) matrix, with the aim of classifying all the possible phase portraits that can occur.

Example 5.1.2 provides a clue about how to proceed. Recall that the \(x\) and \(y\) axes played a crucial geometric role. They determined the direction of the trajectories as \(t\to\pm\infty\). They also contained special _straight-line trajectories_: a trajectory starting on one of the coordinate axes stayed on that axis forever, and exhibited simple exponential growth or decay along it.

For the general case, we would like to find the analog of these straight-line trajectories. That is, we seek trajectories of the form \[\mathbf{x}(t)=e^{\lambda t}\mathbf{v}\,, \tag{2}\] where \(\mathbf{v}\neq\mathbf{0}\) is some fixed vector to be determined, and \(\lambda\) is a growth rate, also to be determined. If such solutions exist, they correspond to exponential motion along the line spanned by the vector \(\mathbf{v}\).

To find the conditions on \(\mathbf{v}\) and \(\lambda\), we substitute \(\mathbf{x}(t)=e^{\lambda t}\mathbf{v}\) into \(\dot{\mathbf{x}}=A\mathbf{x}\), and obtain \(\lambda e^{\lambda t}\mathbf{v}=e^{\lambda t}A\mathbf{v}\).
Canceling the nonzero scalar factor \(e^{\lambda t}\) yields \[A\mathbf{v}=\lambda\mathbf{v}\,, \tag{3}\] which says that the desired straight-line solutions exist if \(\mathbf{v}\) is an _eigenvector_ of \(A\) with corresponding _eigenvalue_ \(\lambda\). In this case we call the solution (2) an _eigensolution_.

Let's recall how to find eigenvalues and eigenvectors. (If your memory needs more refreshing, see any text on linear algebra.) In general, the eigenvalues of a matrix \(A\) are given by the _characteristic equation_ \(\det(A-\lambda I)=0\), where \(I\) is the identity matrix. For a \(2\times 2\) matrix \[A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},\] the characteristic equation becomes \[\det\begin{pmatrix} a - \lambda & b \\ c & d - \lambda \end{pmatrix} = 0.\] Expanding the determinant yields \[\lambda^{2} - \tau \lambda + \Delta = 0 \tag{4}\] where \[\begin{array}{l} {\tau = \text{trace}(A) = a + d,} \\ {\Delta = \det(A) = ad - bc.} \end{array}\] Then \[\lambda_{1} = \frac{\tau + \sqrt{\tau^{2} - 4\Delta}}{2},\qquad\lambda_{2} = \frac{\tau - \sqrt{\tau^{2} - 4\Delta}}{2} \tag{5}\] are the solutions of the quadratic equation (4). In other words, the eigenvalues depend only on the trace and determinant of the matrix \(A\).

The typical situation is for the eigenvalues to be distinct: \(\lambda_{1} \neq \lambda_{2}\). In this case, a theorem of linear algebra states that the corresponding eigenvectors \(\mathbf{v}_{1}\) and \(\mathbf{v}_{2}\) are linearly independent, and hence span the entire plane (Figure 5.2.1). In particular, any initial condition \(\mathbf{x}_{0}\) can be written as a linear combination of eigenvectors, say \(\mathbf{x}_{0} = c_{1}\mathbf{v}_{1} + c_{2}\mathbf{v}_{2}\). This observation allows us to write down the general solution for \(\mathbf{x}(t)\)--it is simply \[\mathbf{x}(t) = c_{1}e^{\lambda_{1}t}\mathbf{v}_{1} + c_{2}e^{\lambda_{2}t}\mathbf{v}_{2}. \tag{6}\] Why is this the general solution?
First of all, it is a linear combination of solutions to \(\dot{\mathbf{x}}=A\mathbf{x}\), and hence is itself a solution. Second, it satisfies the initial condition \(\mathbf{x}(0)=\mathbf{x}_{0}\), and so by the existence and uniqueness theorem, it is the _only_ solution. (See Section 6.2 for a general statement of the existence and uniqueness theorem.)

**Example 5.2.1:** Solve the initial value problem \(\dot{x}=x+y\), \(\dot{y}=4x-2y\), subject to the initial condition \((x_{0},y_{0})=(2,-3)\).

_Solution:_ The corresponding matrix equation is \[\begin{pmatrix}\dot{x}\\ \dot{y}\end{pmatrix}=\begin{pmatrix}1&1\\ 4&-2\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}.\] First we find the eigenvalues of the matrix \(A\). The matrix has \(\tau=-1\) and \(\Delta=-6\), so the characteristic equation is \(\lambda^{2}+\lambda-6=0\). Hence \[\lambda_{1}=2,\qquad\lambda_{2}=-3.\]

Next we find the eigenvectors. Given an eigenvalue \(\lambda\), the corresponding eigenvector \(\mathbf{v}=(v_{1},v_{2})\) satisfies \[\begin{pmatrix}1-\lambda&1\\ 4&-2-\lambda\end{pmatrix}\begin{pmatrix}v_{1}\\ v_{2}\end{pmatrix}=\begin{pmatrix}0\\ 0\end{pmatrix}.\] For \(\lambda_{1}=2\), this yields \(\begin{pmatrix}-1&1\\ 4&-4\end{pmatrix}\begin{pmatrix}v_{1}\\ v_{2}\end{pmatrix}=\begin{pmatrix}0\\ 0\end{pmatrix}\), which has a nontrivial solution \((v_{1},v_{2})=(1,1)\), or any scalar multiple thereof. (Of course, any multiple of an eigenvector is always an eigenvector; we try to pick the simplest multiple, but any one will do.) Similarly, for \(\lambda_{2}=-3\), the eigenvector equation becomes \(\begin{pmatrix}4&1\\ 4&1\end{pmatrix}\begin{pmatrix}v_{1}\\ v_{2}\end{pmatrix}=\begin{pmatrix}0\\ 0\end{pmatrix}\), which has a nontrivial solution \((v_{1},v_{2})=(1,-4)\). In summary, \[\mathbf{v}_{1}=\begin{pmatrix}1\\ 1\end{pmatrix},\qquad\mathbf{v}_{2}=\begin{pmatrix}1\\ -4\end{pmatrix}.\] Next we write the general solution as a linear combination of eigensolutions.
From (6), the general solution is \[\mathbf{x}(t)=c_{1}\begin{pmatrix}1\\ 1\end{pmatrix}e^{2t}+c_{2}\begin{pmatrix}1\\ -4\end{pmatrix}e^{-3t}. \tag{7}\]

Finally, we compute \(c_{1}\) and \(c_{2}\) to satisfy the initial condition \((x_{0},y_{0})=(2,-3)\). At \(t=0\), (7) becomes \[\begin{pmatrix}2\\ -3\end{pmatrix}=c_{1}\begin{pmatrix}1\\ 1\end{pmatrix}+c_{2}\begin{pmatrix}1\\ -4\end{pmatrix},\] which is equivalent to the algebraic system \[\begin{array}{l}2=c_{1}+c_{2},\\ -3=c_{1}-4c_{2}.\end{array}\] The solution is \(c_{1}=1\), \(c_{2}=1\). Substituting back into (7) yields \[\begin{array}{l}x(t)=e^{2t}+e^{-3t},\\ y(t)=e^{2t}-4e^{-3t}\end{array}\] for the solution to the initial value problem.

**Example 5.2.2**: Draw the phase portrait for the system of Example 5.2.1.

_Solution:_ The system has eigenvalues \(\lambda_{1}=2\), \(\lambda_{2}=-3\). Hence the first eigensolution grows exponentially, and the second eigensolution decays. This means the origin is a _saddle point_. Its stable manifold is the line spanned by the eigenvector \(\mathbf{v}_{2}=(1,-4)\), corresponding to the decaying eigensolution. Similarly, the unstable manifold is the line spanned by \(\mathbf{v}_{1}=(1,1)\). As with all saddle points, a typical trajectory approaches the unstable manifold as \(t\to\infty\), and the stable manifold as \(t\to-\infty\). Figure 5.2.2 shows the phase portrait.

**Example 5.2.3**: Sketch a typical phase portrait for the case \(\lambda_{2}<\lambda_{1}<0\).

_Solution:_ Suppose \(\lambda_{2}<\lambda_{1}<0\). Then both eigensolutions decay exponentially. The fixed point is a stable node, as in Figures 5.1.5a and 5.1.5c, except now the eigenvectors are not mutually perpendicular, in general. Trajectories typically approach the origin tangent to the _slow eigendirection_, defined as the direction spanned by the eigenvector with the smaller \(|\lambda|\).
In backwards time (\(t\to-\infty\)), the trajectories become parallel to the fast eigendirection. Figure 5.2.3 shows the phase portrait. (If we reverse all the arrows in Figure 5.2.3, we obtain a typical phase portrait for an _unstable node_.)

**Example 5.2.4**: What happens if the eigenvalues are _complex_ numbers?

_Solution:_ If the eigenvalues are complex, the fixed point is either a _center_ (Figure 5.2.4a) or a _spiral_ (Figure 5.2.4b). We've already seen an example of a center in the simple harmonic oscillator of Section 5.1; the origin is surrounded by a family of closed orbits. Note that centers are _neutrally stable_, since nearby trajectories are neither attracted to nor repelled from the fixed point. A spiral would occur if the harmonic oscillator were lightly damped. Then the trajectory would just fail to close, because the oscillator loses a bit of energy on each cycle.

To justify these statements, recall that the eigenvalues are \[\lambda_{1,2}=\tfrac{1}{2}\Big{(}\tau\pm\sqrt{\tau^{2}-4\Delta}\Big{)}.\] Thus complex eigenvalues occur when \[\tau^{2}-4\Delta<0.\] To simplify the notation, let's write the eigenvalues as \[\lambda_{1,2}=\alpha\pm i\omega\] where \[\alpha=\tau/2,\qquad\omega=\tfrac{1}{2}\sqrt{4\Delta-\tau^{2}}.\] By assumption, \(\omega\neq 0\). Then the eigenvalues are distinct and so the general solution is still given by \[\mathbf{x}(t)=c_{1}e^{\lambda_{1}t}\mathbf{v}_{1}+c_{2}e^{\lambda_{2}t}\mathbf{v}_{2}.\] But now the \(c\)'s and \(\mathbf{v}\)'s are _complex_, since the \(\lambda\)'s are. This means that \(\mathbf{x}(t)\) involves linear combinations of \(e^{(\alpha\pm i\omega)t}\). By Euler's formula, \(e^{i\omega t}=\cos\omega t+i\sin\omega t\). Hence \(\mathbf{x}(t)\) is a combination of terms involving \(e^{\alpha t}\cos\omega t\) and \(e^{\alpha t}\sin\omega t\).
Such terms represent exponentially _decaying oscillations_ if \(\alpha=\operatorname{Re}(\lambda)<0\) and _growing oscillations_ if \(\alpha>0\). The corresponding fixed points are _stable_ and _unstable spirals_, respectively. Figure 5.2.4b shows the stable case. If the eigenvalues are pure imaginary (\(\alpha=0\)), then all the solutions are periodic with period \(T=2\pi/\omega\). The oscillations have fixed amplitude and the fixed point is a center.

For both centers and spirals, it's easy to determine whether the rotation is clockwise or counterclockwise; just compute a few vectors in the vector field and the sense of rotation should be obvious.

**Example 5.2.5**: In our analysis of the general case, we have been assuming that the eigenvalues are distinct. What happens if the eigenvalues are _equal_?

_Solution:_ Suppose \(\lambda_{1}=\lambda_{2}=\lambda\). There are two possibilities: either there are two independent eigenvectors corresponding to \(\lambda\), or there's only one.

If there are two independent eigenvectors, then they span the plane and so _every vector is an eigenvector with this same eigenvalue_ \(\lambda\). To see this, write an arbitrary vector \(\mathbf{x}_{0}\) as a linear combination of the two eigenvectors: \(\mathbf{x}_{0}=c_{1}\mathbf{v}_{1}+c_{2}\mathbf{v}_{2}\). Then \[A\,\mathbf{x}_{0}=A\,(c_{1}\mathbf{v}_{1}+c_{2}\mathbf{v}_{2})=c_{1}\lambda\mathbf{v}_{1}+c_{2}\lambda\mathbf{v}_{2}=\lambda\mathbf{x}_{0},\] so \(\mathbf{x}_{0}\) is also an eigenvector with eigenvalue \(\lambda\). Since multiplication by \(A\) simply stretches every vector by a factor \(\lambda\), the matrix must be a multiple of the identity: \[A=\begin{pmatrix}\lambda&0\\ 0&\lambda\end{pmatrix}.\] Then if \(\lambda\neq 0\), all trajectories are straight lines through the origin (\(\mathbf{x}(t)=e^{\lambda t}\mathbf{x}_{0}\)) and the fixed point is a _star node_ (Figure 5.2.5).
On the other hand, if \(\lambda=0\), the whole plane is filled with fixed points! (No surprise--the system is \(\dot{\mathbf{x}}=\mathbf{0}\).) The other possibility is that there's only one eigenvector (more accurately, the eigenspace corresponding to \(\lambda\) is one-dimensional). For example, any matrix of the form \(A=\begin{pmatrix}\lambda&b\\ 0&\lambda\end{pmatrix}\), with \(b\neq 0\), has only a one-dimensional eigenspace (Exercise 5.2.11). When there's only one eigendirection, the fixed point is a _degenerate node._ A typical phase portrait is shown in Figure 5.2.6. As \(t\to+\infty\) and also as \(t\to-\infty\), all trajectories become parallel to the one available eigendirection. A good way to think about the degenerate node is to imagine that it has been created by deforming an ordinary node. The ordinary node has two independent eigendirections; all trajectories are parallel to the slow eigendirection as \(t\to\infty\), and to the fast eigendirection as \(t\to-\infty\) (Figure 5.2.7a). Now suppose we start changing the parameters of the system in such a way that the two eigendirections are scissored together. Then some of the trajectories will get squashed in the collapsing region between the two eigendirections, while the surviving trajectories get pulled around to form the degenerate node (Figure 5.2.7b). Another way to get intuition about this case is to realize that the degenerate node is on the _borderline between a spiral and a node_. The trajectories are trying to wind around in a spiral, but they don't quite make it.

### Classification of Fixed Points

By now you're probably tired of all the examples and ready for a simple classification scheme. Happily, there is one. We can show the type and stability of all the different fixed points on a single diagram (Figure 5.2.8). The axes are the trace \(\tau\) and the determinant \(\Delta\) of the matrix \(A\).
All of the information in the diagram is implied by the following formulas: \[\lambda_{1,2}=\tfrac{1}{2}\Big{(}\tau\pm\sqrt{\tau^{2}-4\Delta}\Big{)},\qquad\Delta=\lambda_{1}\lambda_{2},\qquad\tau=\lambda_{1}+\lambda_{2}.\] The first equation is just (5). The second and third can be obtained by writing the characteristic equation in the form \((\lambda-\lambda_{1})(\lambda-\lambda_{2})=\lambda^{2}-\tau\lambda+\Delta=0\). To arrive at Figure 5.2.8, we make the following observations: If \(\Delta<0\), the eigenvalues are real and have opposite signs; hence the fixed point is a _saddle point_. If \(\Delta>0\), the eigenvalues are either real with the same sign (_nodes_), or complex conjugates (_spirals_ and _centers_). Nodes satisfy \(\tau^{2}-4\Delta>0\) and spirals satisfy \(\tau^{2}-4\Delta<0\). The parabola \(\tau^{2}-4\Delta=0\) is the borderline between nodes and spirals; star nodes and degenerate nodes live on this parabola. The stability of the nodes and spirals is determined by \(\tau\). When \(\tau<0\), both eigenvalues have negative real parts, so the fixed point is stable. Unstable spirals and nodes have \(\tau>0\). Neutrally stable centers live on the borderline \(\tau=0\), where the eigenvalues are purely imaginary. If \(\Delta=0\), at least one of the eigenvalues is zero. Then the origin is not an isolated fixed point. There is either a whole line of fixed points, as in Figure 5.1.5d, or a plane of fixed points, if \(A=0\). Figure 5.2.8 shows that saddle points, nodes, and spirals are the major types of fixed points; they occur in large open regions of the \((\Delta,\tau)\) plane. Centers, stars, degenerate nodes, and non-isolated fixed points are _borderline cases_ that occur along curves in the \((\Delta,\tau)\) plane. Of these borderline cases, centers are by far the most important.
They occur very commonly in frictionless mechanical systems where energy is conserved. **Example 5.2.6:** Classify the fixed point \(\mathbf{x}^{\ast}=\mathbf{0}\) for the system \(\dot{\mathbf{x}}=A\mathbf{x}\), where \(A=\begin{pmatrix}1&2\\ 3&4\end{pmatrix}\). _Solution:_ The matrix has \(\Delta=-2\); hence the fixed point is a saddle point. **Example 5.2.7:** Redo Example 5.2.6 for \(A=\begin{pmatrix}2&1\\ 3&4\end{pmatrix}\). _Solution:_ Now \(\Delta=5\) and \(\tau=6\). Since \(\Delta>0\) and \(\tau^{2}-4\Delta=16>0\), the fixed point is a node. It is unstable, since \(\tau>0\). ### 5.3 Love Affairs To arouse your interest in the classification of linear systems, we now discuss a simple model for the dynamics of love affairs (Strogatz 1988). The following story illustrates the idea. Romeo is in love with Juliet, but in our version of this story, Juliet is a fickle lover. The more Romeo loves her, the more Juliet wants to run away and hide. But when Romeo gets discouraged and backs off, Juliet begins to find him strangely attractive. Romeo, on the other hand, tends to echo her: he warms up when she loves him, and grows cold when she hates him. Let \(R(t)=\) Romeo's love/hate for Juliet at time \(t\) \(J(t)=\) Juliet's love/hate for Romeo at time \(t\). Positive values of \(R\), \(J\) signify love, negative values signify hate. Then a model for their star-crossed romance is \[\begin{array}{l} {\dot{R} = aJ} \\ {\dot{J} = - bR} \\ \end{array}\] where the parameters \(a\) and \(b\) are positive, to be consistent with the story. The sad outcome of their affair is, of course, a neverending cycle of love and hate; the governing system has a center at (_R_,_J_) = (0,0). At least they manage to achieve simultaneous love one-quarter of the time (Figure 5.3.1). Now consider the forecast for lovers governed by the general linear system \[\begin{array}{l} {\dot{R} = aR + bJ} \\ {\dot{J} = cR + dJ} \\ \end{array}\] where the parameters _a, b, c, d_ may have either sign. 
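Forecasting any such linear affair is an instance of the trace-determinant classification summarized above. As one possible sketch in Python (NumPy assumed; the function name `classify` and the tolerance are ours, not part of the text), which also reproduces the results of Examples 5.2.6 and 5.2.7:

```python
import numpy as np

def classify(A, tol=1e-12):
    """Classify the fixed point at the origin of x' = A x
    using the trace-determinant scheme of Figure 5.2.8."""
    tau = np.trace(A)
    delta = np.linalg.det(A)
    disc = tau**2 - 4*delta
    if abs(delta) < tol:
        return "non-isolated fixed points"   # at least one zero eigenvalue
    if delta < 0:
        return "saddle point"                # real eigenvalues of opposite sign
    # delta > 0: node, spiral, or a borderline case
    if abs(tau) < tol:
        return "center"                      # pure imaginary eigenvalues
    kind = ("node" if disc > tol
            else "spiral" if disc < -tol
            else "star or degenerate node")  # on the parabola tau^2 = 4*delta
    return ("stable " if tau < 0 else "unstable ") + kind

# Examples 5.2.6 and 5.2.7:
print(classify(np.array([[1, 2], [3, 4]])))   # saddle point
print(classify(np.array([[2, 1], [3, 4]])))   # unstable node
```

For the love affair \(\dot{R}=aR+bJ\), \(\dot{J}=cR+dJ\), passing the matrix \(\begin{pmatrix}a&b\\ c&d\end{pmatrix}\) to `classify` forecasts the romance.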
A choice of signs specifies the romantic styles. As named by one of my students, the choice \(a\) > 0, \(b\) > 0 means that Romeo is an "eager beaver"--he gets excited by Juliet's love for him, and is further spurred on by his own affectionate feelings for her. It's entertaining to name the other three romantic styles, and to predict the outcomes for the various pairings. For example, can a "cautious lover" (_a_ < 0, \(b\) > 0) find true love with an eager beaver? These and other pressing questions will be considered in the exercises. ## Example 5.3.1: What happens when two identically cautious lovers get together? _Solution:_ The system is \[\begin{array}{l} {\dot{R} = aR + bJ} \\ {\dot{J} = bR + aJ} \\ \end{array}\] with \(a\) < 0, \(b\) > 0. Here \(a\) is a measure of cautiousness (they each try to avoid throwing themselves at the other) and \(b\) is a measure of responsiveness (they both get excited by the other's advances). We might suspect that the outcome depends on the relative size of \(a\) and \(b\). Let's see what happens. The corresponding matrix is \[A=\begin{pmatrix}a&b\\ b&a\end{pmatrix}\] which has \[\tau=2a<0\,\qquad\Delta=a^{2}-b^{2},\qquad\tau^{2}-4\Delta=4b^{2}>0\.\] Hence the fixed point \((R,J)=(0,0)\) is a saddle point if \(a^{2}<b^{2}\) and a stable node if \(a^{2}>b^{2}\). The eigenvalues and corresponding eigenvectors are \[\lambda_{1}=a+b\,\qquad\mathbf{v}_{1}=(1,1)\,\qquad\quad\lambda_{2}=a-b\, \qquad\mathbf{v}_{2}=(1,-1).\] Since \(a+b>a-b\), the eigenvector \((1,1)\) spans the unstable manifold when the origin is a saddle point, and it spans the slow eigendirection when the origin is a stable node. Figure 5.3.2 shows the phase portrait for the two cases. If \(a^{2}>b^{2}\), the relationship always fizzles out to mutual indifference. The lesson seems to be that excessive caution can lead to apathy. If \(a^{2}<b^{2}\), the lovers are more daring, or perhaps more sensitive to each other. Now the relationship is explosive. 
Depending on their feelings initially, their relationship either becomes a love fest or a war. In either case, all trajectories approach the line \(R=J\), so their feelings are eventually mutual.

### Definitions and Examples

**5.1.1** (Ellipses and energy conservation for the harmonic oscillator) Consider the harmonic oscillator \(\dot{x}=v\), \(\dot{v}=-\omega^{2}x\). * Show that the orbits are given by ellipses \(\omega^{2}x^{2}+v^{2}=C\), where \(C\) is any non-negative constant. (Hint: Divide the \(\dot{x}\) equation by the \(\dot{v}\) equation, separate the \(v\)'s from the \(x\)'s, and integrate the resulting separable equation.) * Show that this condition is equivalent to conservation of energy.

**5.1.2** Consider the system \(\dot{x}=ax\), \(\dot{y}=-y\), where \(a<-1\). Show that all trajectories become parallel to the \(y\)-direction as \(t\to\infty\), and parallel to the \(x\)-direction as \(t\to-\infty\). (Hint: Examine the slope \(dy/dx=\dot{y}/\dot{x}\).)

Write the following systems in matrix form. \[\begin{array}{llll}\mathbf{5.1.3}&\dot{x}=-y,\;\dot{y}=-x&\mathbf{5.1.4}&\dot {x}=3x-2y,\;\dot{y}=2y-x\\ \mathbf{5.1.5}&\dot{x}=0,\;\dot{y}=x+y&\mathbf{5.1.6}&\dot{x}=x,\;\dot{y}=5x+y \end{array}\] Sketch the vector field for the following systems. Indicate the length and direction of the vectors with reasonable accuracy. Sketch some typical trajectories. \[\begin{array}{llll}\mathbf{5.1.7}&\dot{x}=x,\;\dot{y}=x+y&\mathbf{5.1.8}&\dot {x}=-2y,\;\dot{y}=x\end{array}\]

**5.1.9** Consider the system \(\dot{x}=-y\), \(\dot{y}=-x\). * Sketch the vector field. * Show that the trajectories of the system are hyperbolas of the form \(x^{2}-y^{2}=C\). (Hint: Show that the governing equations imply \(x\dot{x}-y\dot{y}=0\) and then integrate both sides.) * The origin is a saddle point; find equations for its stable and unstable manifolds. * The system can be decoupled and solved as follows. Introduce new variables \(u\) and \(v\), where \(u=x+y\), \(v=x-y\).
Then rewrite the system in terms of \(u\) and \(v\). Solve for \(u(t)\) and \(v(t)\), starting from an arbitrary initial condition \((u_{0},v_{0})\). * What are the equations for the stable and unstable manifolds in terms of \(u\) and \(v\)? * Finally, using the answer to (d), write the general solution for \(x(t)\) and \(y(t)\), starting from an initial condition \((x_{0},y_{0})\).

**5.1.10** (Attracting and Liapunov stable) Here are the official definitions of the various types of stability. Consider a fixed point \(\mathbf{x}^{*}\) of a system \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\). We say that \(\mathbf{x}^{*}\) is _attracting_ if there is a \(\delta>0\) such that \(\lim\limits_{t\to\infty}\mathbf{x}(t)=\mathbf{x}^{*}\) whenever \(\|\mathbf{x}(0)-\mathbf{x}^{*}\|<\delta\). In other words, any trajectory that starts within a distance \(\delta\) of \(\mathbf{x}^{*}\) is guaranteed to converge to \(\mathbf{x}^{*}\) _eventually_. As shown schematically in Figure 1, trajectories that start nearby are allowed to stray from \(\mathbf{x}^{*}\) in the short run, but they must approach \(\mathbf{x}^{*}\) in the long run. In contrast, Liapunov stability requires that nearby trajectories remain close for _all_ time. We say that \(\mathbf{x}^{*}\) is _Liapunov stable_ if for each \(\varepsilon>0\), there is a \(\delta>0\) such that \(\left\|\mathbf{x}(t)-\mathbf{x}^{*}\right\|<\varepsilon\) whenever \(t\geq 0\) and \(\left\|\mathbf{x}(0)-\mathbf{x}^{*}\right\|<\delta\). Thus, trajectories that start within \(\delta\) of \(\mathbf{x}^{*}\) remain within \(\varepsilon\) of \(\mathbf{x}^{*}\) for all positive time (Figure 1). Finally, \(\mathbf{x}^{*}\) is _asymptotically stable_ if it is both attracting and Liapunov stable. For each of the following systems, decide whether the origin is attracting, Liapunov stable, asymptotically stable, or none of the above. a) \(\dot{x}=y\), \(\dot{y}=-4x\).
b) \(\dot{x}=2y\), \(\dot{y}=x\) c) \(\dot{x}=0\), \(\dot{y}=x\) d) \(\dot{x}=0\), \(\dot{y}=-y\) e) \(\dot{x}=-x\), \(\dot{y}=-5y\) f) \(\dot{x}=x\), \(\dot{y}=y\)

**5.1.11** (Stability proofs) Prove that your answers to 5.1.10 are correct, using the definitions of the different types of stability. (You must produce a suitable \(\delta\) to prove that the origin is attracting, or a suitable \(\delta(\varepsilon)\) to prove Liapunov stability.)

**5.1.12** (Closed orbits from symmetry arguments) Give a simple proof that orbits are closed for the simple harmonic oscillator \(\dot{x}=v\), \(\dot{v}=-x\), using _only_ the symmetry properties of the vector field. (Hint: Consider a trajectory that starts on the \(v\)-axis at \((0,-v_{0})\), and suppose that the trajectory intersects the \(x\)-axis at \((x,0)\). Then use symmetry arguments to find the subsequent intersections with the \(v\)-axis and \(x\)-axis.)

**5.1.13** Why do you think a "saddle point" is called by that name? What's the connection to real saddles (the kind used on horses)?

### Classification of Linear Systems

**5.2.1** Consider the system \(\dot{x}=4x-y\), \(\dot{y}=2x+y\). a) Write the system as \(\dot{\mathbf{x}}=A\mathbf{x}\). Show that the characteristic polynomial is \(\lambda^{2}-5\lambda+6\), and find the eigenvalues and eigenvectors of \(A\). b) Find the general solution of the system. c) Classify the fixed point at the origin. d) Solve the system subject to the initial condition \((x_{0},y_{0})=(3,4)\).

**5.2.2** (Complex eigenvalues) This exercise leads you through the solution of a linear system where the eigenvalues are complex. The system is \(\dot{x}=x-y\), \(\dot{y}=x+y\). a) Find \(A\) and show that it has eigenvalues \(\lambda_{1}=1+i\), \(\lambda_{2}=1-i\), with eigenvectors \(\mathbf{v}_{1}=(i,1)\), \(\mathbf{v}_{2}=(-i,1)\). (Note that the eigenvalues are complex conjugates, and so are the eigenvectors--this is always the case for real \(A\) with complex eigenvalues.) b)
The general solution is \(\mathbf{x}(t)=c_{1}e^{\lambda_{1}t}\mathbf{v}_{1}+c_{2}e^{\lambda_{2}t}\mathbf{ v}_{2}\). So in one sense we're done! But this way of writing \(\mathbf{x}(t)\) involves complex coefficients and looks unfamiliar. Express \(\mathbf{x}(t)\) purely in terms of real-valued functions. (Hint: Use \(e^{i\omega t}=\cos\omega t+i\sin\omega t\) to rewrite \(\mathbf{x}(t)\) in terms of sines and cosines, and then separate the terms that have a prefactor of \(i\) from those that don't.) Plot the phase portrait and classify the fixed point of the following linear systems. If the eigenvectors are real, indicate them in your sketch. (A project about random systems) Suppose we pick a linear system at random; what's the probability that the origin will be, say, an unstable spiral? To be more specific, consider the system \(\dot{\mathbf{x}}=A\mathbf{x}\), where \(A=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\). Suppose we pick the entries \(a\),\(b\),\(c\),\(d\) independently and at random from a uniform distribution on the interval \([-1,1]\). Find the probabilities of all the different kinds of fixed points. To check your answers (or if you hit an analytical roadblock), try the _Monte Carlo method_. Generate millions of random matrices on the computer and have the machine count the relative frequency of saddles, unstable spirals, etc. Are the answers the same if you use a normal distribution instead of a uniform distribution? ### 5.3 Love Affairs (Name-calling) Suggest names for the four romantic styles, determined by the signs of \(a\) and \(b\) in \(\dot{R}=aR+bJ\). Consider the affair described by \(\dot{R}=J\), \(\dot{J}=-R+J\). 1. Characterize the romantic styles of Romeo and Juliet. 2. Classify the fixed point at the origin. What does this imply for the affair? 3. Sketch \(R(t)\) and \(J(t)\) as functions of \(t\), assuming \(R(0)=1\), \(J(0)=0\). 
In each of the following problems, predict the course of the love affair, depending on the signs and relative sizes of \(a\) and \(b\).

(Out of touch with their own feelings) Suppose Romeo and Juliet react to each other, but not to themselves: \(\dot{R}=aJ\), \(\dot{J}=bR\). What happens?

(Fire and water) Do opposites attract? Analyze \(\dot{R}=aR+bJ\), \(\dot{J}=-bR-aJ\).

(Peas in a pod) If Romeo and Juliet are romantic clones (\(\dot{R}=aR+bJ\), \(\dot{J}=bR+aJ\)), should they expect boredom or bliss?

(Romeo the robot) Nothing could ever change the way Romeo feels about Juliet: \(\dot{R}=0\), \(\dot{J}=aR+bJ\). Does Juliet end up loving him or hating him?

## Chapter 6 Phase Plane

### 6.0 Introduction

This chapter begins our study of two-dimensional _nonlinear_ systems. First we consider some of their general properties. Then we classify the kinds of fixed points that can arise, building on our knowledge of linear systems (Chapter 5). The theory is further developed through a series of examples from biology (competition between two species) and physics (conservative systems, reversible systems, and the pendulum). The chapter concludes with a discussion of index theory, a topological method that provides global information about the phase portrait. This chapter is mainly about fixed points. The next two chapters will discuss closed orbits and bifurcations in two-dimensional systems.

### 6.1 Phase Portraits

The general form of a vector field on the phase plane is \[\begin{array}{l}\dot{x}_{1}=f_{1}(x_{1},x_{2})\\ \dot{x}_{2}=f_{2}(x_{1},x_{2})\end{array}\] where \(f_{1}\) and \(f_{2}\) are given functions. This system can be written more compactly in vector notation as \[\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\] where \(\mathbf{x}=(x_{1},x_{2})\) and \(\mathbf{f}(\mathbf{x})=(f_{1}(\mathbf{x}),f_{2}(\mathbf{x}))\). Here \(\mathbf{x}\) represents a point in the phase plane, and \(\dot{\mathbf{x}}\) is the velocity vector at that point.
By flowing along the vector field, a phase point traces out a solution \(\mathbf{x}(t)\), corresponding to a trajectory winding through the phase plane (Figure 6.1.1). Furthermore, the entire phase plane is filled with trajectories, since each point can play the role of an initial condition. For nonlinear systems, there's typically no hope of finding the trajectories analytically. Even when explicit formulas are available, they are often too complicated to provide much insight. Instead we will try to determine the _qualitative_ behavior of the solutions. Our goal is to find the system's phase portrait directly from the properties of \(\mathbf{f}(\mathbf{x})\). An enormous variety of phase portraits is possible; one example is shown in Figure 6.1.2. Some of the most salient features of any phase portrait are:

1. The _fixed points,_ like \(A\), \(B\), and \(C\) in Figure 6.1.2. Fixed points satisfy \(\mathbf{f}(\mathbf{x}^{*})=\mathbf{0}\), and correspond to steady states or equilibria of the system.
2. The _closed orbits,_ like \(D\) in Figure 6.1.2. These correspond to periodic solutions, i.e., solutions for which \(\mathbf{x}(t+T)=\mathbf{x}(t)\) for all \(t\), for some \(T>0\).
3. The arrangement of trajectories near the fixed points and closed orbits. For example, the flow pattern near \(A\) and \(C\) is similar, and different from that near \(B\).
4. The stability or instability of the fixed points and closed orbits. Here, the fixed points \(A\), \(B\), and \(C\) are unstable, because nearby trajectories tend to move away from them, whereas the closed orbit \(D\) is stable.

### Numerical Computation of Phase Portraits

Sometimes we are also interested in _quantitative_ aspects of the phase portrait. Fortunately, numerical integration of \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\) is not much harder than that of \(\dot{x}=f(x)\).
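As a concrete illustration of how little changes in the vector case, here is one possible sketch of a Runge-Kutta integrator in Python (NumPy assumed; the function names, step size, and test system are our choices, not part of the text):

```python
import numpy as np

def rk4_step(f, x, dt):
    """One fourth-order Runge-Kutta step for the vector ODE x' = f(x)."""
    k1 = f(x) * dt
    k2 = f(x + k1/2) * dt
    k3 = f(x + k2/2) * dt
    k4 = f(x + k3) * dt
    return x + (k1 + 2*k2 + 2*k3 + k4) / 6

def trajectory(f, x0, dt=0.01, steps=1000):
    """Integrate from x0, returning the sampled trajectory as an array."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        xs.append(rk4_step(f, xs[-1], dt))
    return np.array(xs)

# Simple harmonic oscillator x' = v, v' = -x: the orbit should close.
f = lambda x: np.array([x[1], -x[0]])
orbit = trajectory(f, [1.0, 0.0], dt=0.01, steps=628)  # about one period (2*pi)
print(orbit[-1])  # should land close to the starting point [1, 0]
```

Exactly the same `rk4_step` works for a scalar equation or for an \(n\)-dimensional system; only the shape of `x` changes.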
The numerical methods of Section 2.8 still work, as long as we replace the numbers \(x\) and \(f(x)\) by the vectors \(\mathbf{x}\) and \(\mathbf{f}(\mathbf{x})\). We will always use the Runge-Kutta method, which in vector form is \[\mathbf{x}_{n+1}=\mathbf{x}_{n}+\tfrac{1}{6}(\mathbf{k}_{1}+2\mathbf{k}_{2}+2\mathbf{k}_{3}+\mathbf{k}_{4})\] where \[\mathbf{k}_{1}=\mathbf{f}(\mathbf{x}_{n})\Delta t,\qquad\mathbf{k}_{2}=\mathbf{f}(\mathbf{x}_{n}+\tfrac{1}{2}\mathbf{k}_{1})\Delta t,\qquad\mathbf{k}_{3}=\mathbf{f}(\mathbf{x}_{n}+\tfrac{1}{2}\mathbf{k}_{2})\Delta t,\qquad\mathbf{k}_{4}=\mathbf{f}(\mathbf{x}_{n}+\mathbf{k}_{3})\Delta t.\] A stepsize \(\Delta t=0.1\) usually provides sufficient accuracy for our purposes. When plotting the phase portrait, it often helps to see a grid of representative vectors in the vector field. Unfortunately, the arrowheads and different lengths of the vectors tend to clutter such pictures. A plot of the _direction field_ is clearer: short line segments are used to indicate the local direction of flow.

**Example 6.1.1:** Consider the system \(\dot{x}=x+e^{-y}\), \(\dot{y}=-y\). First use qualitative arguments to obtain information about the phase portrait. Then, using a computer, plot the direction field. Finally, use the Runge-Kutta method to compute several trajectories, and plot them on the phase plane.

_Solution:_ First we find the fixed points by solving \(\dot{x}=0\), \(\dot{y}=0\) simultaneously. The only solution is \((x^{*},y^{*})=(-1,0)\). To determine its stability, note that \(y(t)\to 0\) as \(t\to\infty\), since the solution to \(\dot{y}=-y\) is \(y(t)=y_{0}e^{-t}\). Hence \(e^{-y}\to 1\) and so in the long run, the equation for \(x\) becomes \(\dot{x}\approx x+1\); this has exponentially growing solutions, which suggests that the fixed point is unstable. In fact, if we restrict our attention to initial conditions on the \(x\)-axis, then \(y_{0}=0\) and so \(y(t)=0\) for all time.
Hence the flow on the \(x\)-axis is governed _strictly_ by \(\dot{x}=x+1\). Therefore the fixed point is unstable. To sketch the phase portrait, it is helpful to plot the _nullclines_, defined as the curves where either \(\dot{x}=0\) or \(\dot{y}=0\). The nullclines indicate where the flow is purely horizontal or vertical (Figure 6.1.3). For example, the flow is horizontal where \(\dot{y}=0\), and since \(\dot{y}=-y\), this occurs on the line \(y=0\). Along this line, the flow is to the right where \(\dot{x}=x+1>0\), that is, where \(x>-1\). Similarly, the flow is vertical where \(\dot{x}=x+e^{-y}=0\), which occurs on the curve shown in Figure 6.1.3. On the upper part of the curve where \(y>0\), the flow is downward, since \(\dot{y}<0\). The nullclines also partition the plane into regions where \(\dot{x}\) and \(\dot{y}\) have various signs. Some of the typical vectors are sketched above in Figure 6.1.3. Even with the limited information obtained so far, Figure 6.1.3 gives a good sense of the overall flow pattern. Now we use the computer to finish the problem. The direction field is indicated by the line segments in Figure 6.1.4, and several trajectories are shown. Note how the trajectories always follow the local slope. The fixed point is now seen to be a nonlinear version of a saddle point.

### 6.2 Existence, Uniqueness, and Topological Consequences

We have been a bit optimistic so far--at this stage, we have no guarantee that the general nonlinear system \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\) even _has_ solutions! Fortunately the existence and uniqueness theorem given in Section 2.5 can be generalized to two-dimensional systems. We state the result for \(n\)-dimensional systems, since no extra effort is involved:

**Existence and Uniqueness Theorem:** Consider the initial value problem \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\), \(\mathbf{x}(0)=\mathbf{x}_{0}\).
Suppose that \(\mathbf{f}\) is continuous and that all its partial derivatives \(\partial f_{i}/\partial x_{j}\), \(i,j=1,\ldots,n\), are continuous for \(\mathbf{x}\) in some open connected set \(D\subset\mathbf{R}^{n}\). Then for \(\mathbf{x}_{0}\in D\), the initial value problem has a solution \(\mathbf{x}(t)\) on some time interval \((-\tau,\tau)\) about \(t=0\), and the solution is unique. In other words, existence and uniqueness of solutions are guaranteed if \(\mathbf{f}\) is continuously differentiable. The proof of the theorem is similar to that for the case \(n=1\), and can be found in most texts on differential equations. Stronger versions of the theorem are available, but this one suffices for most applications. From now on, we'll assume that all our vector fields are smooth enough to ensure the existence and uniqueness of solutions, starting from any point in phase space. The existence and uniqueness theorem has an important corollary: _different trajectories never intersect_. If two trajectories _did_ intersect, then there would be two solutions starting from the same point (the crossing point), and this would violate the uniqueness part of the theorem. In more intuitive language, a trajectory can't move in two directions at once. Because trajectories can't intersect, phase portraits always have a well-groomed look to them. Otherwise they might degenerate into a snarl of criss-crossed curves (Figure 6.2.1). The existence and uniqueness theorem prevents this from happening. In two-dimensional phase spaces (as opposed to higher-dimensional phase spaces), these results have especially strong topological consequences. For example, suppose there is a closed orbit \(C\) in the phase plane. Then any trajectory starting inside \(C\) is trapped in there forever (Figure 6.2.2). What is the fate of such a bounded trajectory? If there are fixed points inside \(C\), then of course the trajectory might eventually approach one of them.
But what if there _aren't_ any fixed points? Your intuition may tell you that the trajectory can't meander around forever--if so, you're right. For vector fields on the plane, the _Poincare-Bendixson theorem_ states that if a trajectory is confined to a closed, bounded region and there are no fixed points in the region, then the trajectory must eventually approach a closed orbit. We'll discuss this important theorem in Section 7.3. But that part of our story comes later. First we must become better acquainted with fixed points.

### 6.3 Fixed Points and Linearization

In this section we extend the _linearization_ technique developed earlier for one-dimensional systems (Section 2.4). The hope is that we can approximate the phase portrait near a fixed point by that of a corresponding linear system.

**Linearized System** Consider the system \[\dot{x}=f(x,y),\qquad\dot{y}=g(x,y)\] and suppose that \((x^{*},y^{*})\) is a fixed point, i.e., \[f(x^{*},y^{*})=0,\qquad g(x^{*},y^{*})=0.\] Let \[u=x-x^{*},\qquad v=y-y^{*}\] denote the components of a small disturbance from the fixed point. To see whether the disturbance grows or decays, we need to derive differential equations for \(u\) and \(v\). Let's do the \(u\)-equation first: \[\begin{array}{rll}\dot{u}&=\dot{x}&\text{(since }x^{*}\text{ is a constant)}\\ &=f(x^{*}+u,y^{*}+v)&\text{(by substitution)}\\ &=f(x^{*},y^{*})+u\,\dfrac{\partial f}{\partial x}+v\,\dfrac{\partial f}{\partial y}+O(u^{2},v^{2},uv)&\text{(Taylor series expansion)}\\ &=u\,\dfrac{\partial f}{\partial x}+v\,\dfrac{\partial f}{\partial y}+O(u^{2},v^{2},uv)&\text{(since }f(x^{*},y^{*})=0\text{)}\end{array}\] Here the partial derivatives are evaluated at the fixed point \((x^{*},y^{*})\), so they are numbers, not functions. Similarly we find \[\dot{v}=u\,\frac{\partial g}{\partial x}+v\,\frac{\partial g}{\partial y}+O(u^{2},v^{2},uv).\] Hence the disturbance \((u,v)\) evolves according to \[\begin{pmatrix}\dot{u}\\ \dot{v}\end{pmatrix}=\begin{pmatrix}\frac{\partial f}{\partial x}&\frac{\partial f}{\partial y}\\ \frac{\partial g}{\partial x}&\frac{\partial g}{\partial y}\end{pmatrix}\begin{pmatrix}u\\ v\end{pmatrix}+\text{quadratic terms}.
\tag{1}\] The matrix \[A=\begin{pmatrix}\frac{\partial f}{\partial x}&\frac{\partial f}{\partial y}\\ \frac{\partial g}{\partial x}&\frac{\partial g}{\partial y}\end{pmatrix}_{(x^{*},y^{*})}\] is called the _Jacobian matrix_ at the fixed point \((x^{*},y^{*})\). It is the multivariable analog of the derivative \(f^{\prime}(x^{*})\) seen in Section 2.4. Now since the quadratic terms in (1) are tiny, it's tempting to neglect them altogether. If we do that, we obtain the _linearized system_ \[\begin{pmatrix}\dot{u}\\ \dot{v}\end{pmatrix}=\begin{pmatrix}\frac{\partial f}{\partial x}&\frac{\partial f}{\partial y}\\ \frac{\partial g}{\partial x}&\frac{\partial g}{\partial y}\end{pmatrix}\begin{pmatrix}u\\ v\end{pmatrix} \tag{2}\] whose dynamics can be analyzed by the methods of Section 5.2.

### The Effect of Small Nonlinear Terms

Is it really safe to neglect the quadratic terms in (1)? In other words, does the linearized system give a qualitatively correct picture of the phase portrait near \((x^{*},y^{*})\)? The answer is _yes, as long as the fixed point for the linearized system is not one of the borderline cases_ discussed in Section 5.2. In other words, if the linearized system predicts a saddle, node, or a spiral, then the fixed point _really is_ a saddle, node, or spiral for the original nonlinear system. See Andronov et al. (1973) for a proof of this result, and Example 6.3.1 for a concrete illustration. The borderline cases (centers, degenerate nodes, stars, or non-isolated fixed points) are much more delicate. They can be altered by small nonlinear terms, as we'll see in Example 6.3.2 and in Exercise 6.3.11.

**Example 6.3.1:** Find all the fixed points of the system \(\dot{x}=-x+x^{3}\), \(\dot{y}=-2y\), and use linearization to classify them. Then check your conclusions by deriving the phase portrait for the full nonlinear system.
_Solution:_ Fixed points occur where \(\dot{x}=0\) and \(\dot{y}=0\) simultaneously. Hence we need \(x=0\) or \(x=\pm 1\), and \(y=0\). Thus, there are three fixed points: \((0,0)\), \((1,0)\), and \((-1,0)\). The Jacobian matrix at a general point \((x,y)\) is \[A=\begin{pmatrix}\frac{\partial\dot{x}}{\partial x}&\frac{\partial\dot{x}}{ \partial y}\\ \frac{\partial\dot{y}}{\partial x}&\frac{\partial\dot{y}}{\partial y}\end{pmatrix} =\begin{pmatrix}-1+3x^{2}&0\\ 0&-2\end{pmatrix}.\] Next we evaluate \(A\) at the fixed points. At \((0,0)\), we find \(A=\begin{pmatrix}-1&0\\ 0&-2\end{pmatrix}\), so \((0,0)\) is a stable node. At \((\pm 1,0)\), \(A=\begin{pmatrix}2&0\\ 0&-2\end{pmatrix}\), so both \((1,0)\) and \((-1,0)\) are saddle points. Now because stable nodes and saddle points are not borderline cases, we can be certain that the fixed points for the full nonlinear system have been predicted correctly. This conclusion can be checked explicitly for the nonlinear system, since the \(x\) and \(y\) equations are _uncoupled_; the system is essentially two independent first-order systems at right angles to each other. In the \(y\)-direction, all trajectories decay exponentially to \(y=0\). In the \(x\)-direction, the trajectories are attracted to \(x=0\) and repelled from \(x=\pm 1\). The vertical lines \(x=0\) and \(x=\pm 1\) are _invariant,_ because \(\dot{x}=0\) on them; hence any trajectory that starts on these lines stays on them forever. Similarly, \(y=0\) is an invariant horizontal line. As a final observation, we note that the phase portrait must be symmetric in both the \(x\) and \(y\) axes, since the equations are invariant under the transformations \(x\rightarrow-x\) and \(y\rightarrow-y\). Putting all this information together, we arrive at the phase portrait shown in Figure 6.3.1. This picture confirms that \((0,0)\) is a stable node, and \((\pm 1,0)\) are saddles, as expected from the linearization. 
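The linearize-and-classify procedure of this example is easy to automate. Here is one possible sketch in Python (NumPy assumed; the helper names and the finite-difference Jacobian, standing in for the hand computation, are our choices):

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Numerical Jacobian of f at x by central differences --
    a generic stand-in for computing the partial derivatives by hand."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

# The system of Example 6.3.1: x' = -x + x^3, y' = -2y.
f = lambda p: np.array([-p[0] + p[0]**3, -2*p[1]])

for fp in [(0, 0), (1, 0), (-1, 0)]:
    eigs = np.linalg.eigvals(jacobian(f, np.array(fp, dtype=float)))
    print(fp, np.sort(eigs.real))
# (0,0): eigenvalues -2, -1    -> stable node
# (1,0), (-1,0): eigenvalues -2, 2 -> saddle points
```

Because stable nodes and saddles are hyperbolic, these eigenvalues settle the classification for the full nonlinear system as well.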
The next example shows that small nonlinear terms can change a center into a spiral.

**Example 6.3.2:** Consider the system \[\begin{array}{l}\dot{x}=-y+ax(x^{2}+y^{2})\\ \dot{y}=\ x+ay(x^{2}+y^{2})\end{array}\] where \(a\) is a parameter. Show that the linearized system _incorrectly_ predicts that the origin is a center for all values of \(a\), whereas in fact the origin is a stable spiral if \(a<0\) and an unstable spiral if \(a>0\).

_Solution:_ To obtain the linearization about \((x^{*},y^{*})=(0,0)\), we can either compute the Jacobian matrix directly from the definition, or we can take the following shortcut. For any system with a fixed point at the origin, \(x\) and \(y\) represent deviations from the fixed point, since \(u=x-x^{*}=x\) and \(v=y-y^{*}=y\); hence we can linearize by simply omitting nonlinear terms in \(x\) and \(y\). Thus the linearized system is \(\dot{x}=-y\), \(\dot{y}=x\). The Jacobian is \[A=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}\] which has \(\tau=0\), \(\Delta=1>0\), so the origin is always a center, according to the linearization. To analyze the nonlinear system, we change variables to _polar coordinates_. Let \(x=r\cos\theta\), \(y=r\sin\theta\). To derive a differential equation for \(r\), we note \(x^{2}+y^{2}=r^{2}\), so \(x\dot{x}+y\dot{y}=r\dot{r}\). Substituting for \(\dot{x}\) and \(\dot{y}\) yields \[\begin{array}{rl}r\dot{r}&=x\left(-y+ax(x^{2}+y^{2})\right)+y\left(x+ay(x^{2}+y^{2})\right)\\ &=a(x^{2}+y^{2})^{2}\\ &=ar^{4}.\end{array}\] Hence \(\dot{r}=ar^{3}\). In Exercise 6.3.12, you are asked to derive the following differential equation for \(\theta\): \[\dot{\theta}=\frac{x\dot{y}-y\dot{x}}{r^{2}}.\] After substituting for \(\dot{x}\) and \(\dot{y}\) we find \(\dot{\theta}=1\).
Thus in polar coordinates the original system becomes \[\begin{array}{l}\dot{r}=ar^{3}\\ \dot{\theta}=1.\end{array}\] The system is easy to analyze in this form, because the radial and angular motions are independent. All trajectories rotate about the origin with constant angular velocity \(\dot{\theta}=1\). The radial motion depends on \(a\), as shown in Figure 6.3.2. If \(a<0\), then \(r(t)\to 0\) monotonically as \(t\rightarrow\infty\). In this case, the origin is a stable spiral. (However, note that the decay is extremely slow, as suggested by the computer-generated trajectories shown in Figure 6.3.2. Indeed, separating variables in \(\dot{r}=ar^{3}\) gives \(r(t)=r_{0}/\sqrt{1-2ar_{0}^{2}t}\), so \(r\) decays only algebraically, like \(t^{-1/2}\), rather than exponentially.) If \(a=0\), then \(r(t)=r_{0}\) for all \(t\) and the origin is a center. Finally, if \(a>0\), then \(r(t)\rightarrow\infty\) monotonically and the origin is an unstable spiral. We can see now why centers are so delicate: all trajectories are required to close _perfectly_ after one cycle. The slightest miss converts the center into a spiral. Similarly, stars and degenerate nodes can be altered by small nonlinearities, but unlike centers, _their stability doesn't change._ For example, a stable star may be changed into a stable spiral (Exercise 6.3.11) but not into an unstable spiral. This is plausible, given the classification of linear systems in Figure 5.2.8: stars and degenerate nodes live squarely in the stable or unstable region, whereas centers live on the razor's edge between stability and instability. If we're only interested in _stability,_ and not in the detailed geometry of the trajectories, then we can classify fixed points more coarsely as follows: **Robust cases:** _Repellers_ (also called _sources_): both eigenvalues have positive real part. _Attractors_ (also called _sinks_): both eigenvalues have negative real part. _Saddles_: one eigenvalue is positive and one is negative. **Marginal cases:** _Centers_: both eigenvalues are pure imaginary. _Higher-order and non-isolated fixed points:_ at least one eigenvalue is zero.
Thus, from the point of view of stability, the marginal cases are those where at least one eigenvalue satisfies \(\text{Re}(\lambda)=0\). **Hyperbolic Fixed Points, Topological Equivalence, and Structural Stability** If \(\text{Re}(\lambda)\neq 0\) for both eigenvalues, the fixed point is often called _hyperbolic._ (This is an unfortunate name--it sounds like it should mean "saddle point"--but it has become standard.) Hyperbolic fixed points are sturdy; their stability type is unaffected by small nonlinear terms. Nonhyperbolic fixed points are the fragile ones. We've already seen a simple instance of hyperbolicity in the context of vector fields on the line. In Section 2.4 we saw that the stability of a fixed point was accurately predicted by the linearization, _as long as_ \(f^{\prime}(x*)\neq 0\). This condition is the exact analog of \(\text{Re}(\lambda)\neq 0\). These ideas also generalize neatly to higher-order systems. A fixed point of an \(n\)th-order system is _hyperbolic_ if all the eigenvalues of the linearization lie off the imaginary axis, i.e., \(\text{Re}(\lambda_{i})\neq 0\) for \(i=1,\ldots,n\). The important _Hartman-Grobman theorem_ states that the local phase portrait near a hyperbolic fixed point is "topologically equivalent" to the phase portrait of the linearization; in particular, the stability type of the fixed point is faithfully captured by the linearization. Here _topologically equivalent_ means that there is a _homeomorphism_ (a continuous deformation with a continuous inverse) that maps one local phase portrait onto the other, such that trajectories map onto trajectories and the sense of time (the direction of the arrows) is preserved. Intuitively, two phase portraits are topologically equivalent if one is a distorted version of the other. Bending and warping are allowed, but not ripping, so closed orbits must remain closed, trajectories connecting saddle points must not be broken, etc.
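The hyperbolicity test is easy to mechanize: a fixed point is hyperbolic exactly when every eigenvalue of its linearization has nonzero real part. A minimal sketch, assuming NumPy (the two test matrices are illustrative):

```python
import numpy as np

# Hyperbolic: every eigenvalue of the linearization lies off the imaginary axis.
def is_hyperbolic(A, tol=1e-12):
    lam = np.linalg.eigvals(np.asarray(A, dtype=float))
    return bool(np.all(np.abs(lam.real) > tol))

print(is_hyperbolic([[2, 0], [0, -2]]))   # saddle: hyperbolic
print(is_hyperbolic([[0, -1], [1, 0]]))   # linear center: not hyperbolic
```

The tolerance is a practical concession: in floating point, "exactly zero real part" is only meaningful up to roundoff.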
Hyperbolic fixed points also illustrate the important general notion of structural stability. A phase portrait is _structurally stable_ if its topology cannot be changed by an arbitrarily small perturbation to the vector field. For instance, the phase portrait of a saddle point is structurally stable, but that of a center is not: an arbitrarily small amount of damping converts the center to a spiral. ### 6.4 Rabbits versus Sheep In the next few sections we'll consider some simple examples of phase plane analysis. We begin with the classic _Lotka-Volterra model of competition_ between two species, here imagined to be rabbits and sheep. Suppose that both species are competing for the same food supply (grass) and the amount available is limited. Furthermore, ignore all other complications, like predators, seasonal effects, and other sources of food. Then there are two main effects we should consider: 1. Each species would grow to its carrying capacity in the absence of the other. This can be modeled by assuming logistic growth for each species (recall Section 2.3). Rabbits have a legendary ability to reproduce, so perhaps we should assign them a higher intrinsic growth rate. 2. When rabbits and sheep encounter each other, trouble starts. Sometimes the rabbit gets to eat, but more usually the sheep nudges the rabbit aside and starts nibbling (on the grass, that is). We'll assume that these conflicts occur at a rate proportional to the size of each population. (If there were twice as many sheep, the odds of a rabbit encountering a sheep would be twice as great.) Furthermore, we assume that the conflicts reduce the growth rate for each species, but the effect is more severe for the rabbits. A specific model that incorporates these assumptions is \[\begin{array}{l}\dot{x}=x(3-x-2y)\\ \dot{y}=y(2-x-y)\end{array}\] where \[\begin{array}{l}x(t)=\text{population of rabbits,}\\ y(t)=\text{population of sheep}\end{array}\] and \(x\), \(y\geq 0\). 
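Before doing any analysis, we can integrate the model numerically to get a feel for its behavior. A minimal sketch assuming NumPy, with a hand-rolled RK4 step; the two initial conditions are illustrative choices, not from the text:

```python
import numpy as np

# Rabbits-versus-sheep model: x' = x*(3 - x - 2y), y' = y*(2 - x - y)
def rhs(s):
    x, y = s
    return np.array([x * (3 - x - 2 * y), y * (2 - x - y)])

def evolve(s, dt=0.01, steps=5000):
    """Integrate to t = steps*dt with a classical RK4 step."""
    for _ in range(steps):
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2)
        k4 = rhs(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return s

print(evolve(np.array([2.0, 0.5])))   # approaches (3, 0): the sheep die out
print(evolve(np.array([0.5, 2.0])))   # approaches (0, 2): the rabbits die out
```

Either way, one species ends up extinct; the phase plane analysis below explains why.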
The coefficients have been chosen to reflect this scenario, but are otherwise arbitrary. In the exercises, you'll be asked to study what happens if the coefficients are changed. To find the fixed points for the system, we solve \(\dot{x}=0\) and \(\dot{y}=0\) simultaneously. Four fixed points are obtained: \((0,0)\), \((0,2)\), \((3,0)\), and \((1,1)\). To classify them, we compute the Jacobian: \[A=\begin{pmatrix}\frac{\partial\dot{x}}{\partial x}&\frac{\partial\dot{x}}{\partial y}\\ \frac{\partial\dot{y}}{\partial x}&\frac{\partial\dot{y}}{\partial y}\end{pmatrix}=\begin{pmatrix}3-2x-2y&-2x\\ -y&2-x-2y\end{pmatrix}.\] Now consider the four fixed points in turn: \[(0,0):\text{Then }A=\begin{pmatrix}3&0\\ 0&2\end{pmatrix}.\] The eigenvalues are \(\lambda=3\), \(2\), so \((0,0)\) is an _unstable node_. Trajectories leave the origin parallel to the eigenvector for \(\lambda=2\), i.e. tangential to \(\mathbf{v}=(0,1)\), which spans the \(y\)-axis. (Recall the general rule: at a node, trajectories are tangential to the slow eigendirection, which is the eigendirection with the smallest \(|\lambda|\).) Thus, the phase portrait near \((0,0)\) looks like Figure 6.4.1.
Figure 6.4.1:
\[(0,2)\text{: Then }\ A=\begin{pmatrix}-1&0\\ -2&-2\end{pmatrix}.\] This matrix has eigenvalues \(\lambda=-1,-2\), as can be seen by inspection, since the matrix is triangular. Hence the fixed point is a _stable node_. Trajectories approach along the eigendirection associated with \(\lambda=-1\); you can check that this direction is spanned by \(\mathbf{v}=(1,-2)\). Figure 6.4.2 shows the phase portrait near the fixed point \((0,2)\). \[(3,0)\text{: Then }\ A=\begin{pmatrix}-3&-6\\ 0&-1\end{pmatrix}\text{ and }\lambda=-3,-1.\] This is also a _stable node_. The trajectories approach along the slow eigendirection spanned by \(\mathbf{v}=(3,-1)\), as shown in Figure 6.4.3.
\[(1,1)\text{: Then }\ A=\begin{pmatrix}-1&-2\\ -1&-1\end{pmatrix}\text{, which has }\tau=-2\text{, }\Delta=-1\text{, and }\lambda=-1\pm\sqrt{2}.\] Hence this is a _saddle point_. As you can check, the phase portrait near \((1,1)\) is as shown in Figure 6.4.4. Combining Figures 6.4.1-6.4.4, we get Figure 6.4.5, which already conveys a good sense of the entire phase portrait. Furthermore, notice that the \(x\) and \(y\) axes contain straight-line trajectories, since \(\dot{x}=0\) when \(x=0\), and \(\dot{y}=0\) when \(y=0\). Now we use common sense to fill in the rest of the phase portrait (Figure 6.4.6). For example, some of the trajectories starting near the origin must go to the stable node on the \(x\)-axis, while others must go to the stable node on the \(y\)-axis. In between, there must be a special trajectory that can't decide which way to turn, and so it dives into the saddle point. This trajectory is part of the _stable manifold_ of the saddle, drawn with a heavy line in Figure 6.4.6. The other branch of the stable manifold consists of a trajectory coming in "from infinity." A computer-generated phase portrait (Figure 6.4.7) confirms our sketch.
Figure 6.4.2: Figure 6.4.4: Figure 6.4.5:
The phase portrait has an interesting biological interpretation. It shows that one species generally drives the other to extinction. Trajectories starting below the stable manifold lead to eventual extinction of the sheep, while those starting above lead to eventual extinction of the rabbits. This dichotomy occurs in other models of competition and has led biologists to formulate the _principle of competitive exclusion_, which states that two species competing for the same limited resource typically cannot coexist. See Pianka (1981) for a biological discussion, and Pielou (1969), Edelstein-Keshet (1988), or Murray (2002) for additional references and analysis. Our example also illustrates some general mathematical concepts.
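Before moving on, all four classifications can be double-checked numerically; a minimal sketch, assuming NumPy:

```python
import numpy as np

# Jacobian of the model x' = x*(3 - x - 2y), y' = y*(2 - x - y)
def jac(x, y):
    return np.array([[3 - 2 * x - 2 * y, -2 * x],
                     [-y, 2 - x - 2 * y]])

def eigs(pt):
    """Sorted real parts of the eigenvalues at a fixed point."""
    return np.sort(np.linalg.eigvals(jac(*pt)).real)

print(eigs((0, 0)))   # both positive: unstable node
print(eigs((0, 2)))   # both negative: stable node
print(eigs((3, 0)))   # both negative: stable node
print(eigs((1, 1)))   # opposite signs, -1 +/- sqrt(2): saddle
```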
Given an attracting fixed point **x***, we define its _basin of attraction_ to be the set of initial conditions \(\mathbf{x}_{0}\) such that \(\mathbf{x}(t)\rightarrow\mathbf{x}^{*}\) as \(t\rightarrow\infty\). For instance, the basin of attraction for the node at \((3,0)\) consists of all the points lying below the stable manifold of the saddle. This basin is shown as the shaded region in Figure 6.4.8. Because the stable manifold separates the basins for the two nodes, it is called the _basin boundary_. For the same reason, the two trajectories that comprise the stable manifold are traditionally called _separatrices_. Basins and their boundaries are important because they partition the phase space into regions of different long-term behavior.
Figure 6.4.7: Figure 6.4.8:
### 6.5 Conservative Systems
Newton's law \(F=ma\) is the source of many important second-order systems. For example, consider a particle of mass \(m\) moving along the \(x\)-axis, subject to a nonlinear force \(F(x)\). Then the equation of motion is \[m\ddot{x}=F(x).\] Notice that we are assuming that \(F\) is independent of both \(\dot{x}\) and \(t\); hence there is no damping or friction of any kind, and there is no time-dependent driving force. Under these assumptions, we can show that _energy is conserved,_ as follows. Let \(V(x)\) denote the _potential energy_, defined by \(F(x)=-dV/dx\). Then \[m\ddot{x}+\frac{dV}{dx}=0. \tag{1}\] Now comes a trick worth remembering: multiply both sides by \(\dot{x}\) and notice that the left-hand side becomes an exact time-derivative! \[m\dot{x}\ddot{x}+\frac{dV}{dx}\dot{x}=0\Rightarrow\frac{d}{dt}\left[\tfrac{1}{2}\,m\dot{x}^{2}+V(x)\right]=0\] where we've used the chain rule \[\frac{d}{dt}V(x(t))=\frac{dV}{dx}\frac{dx}{dt}\] in reverse. Hence, for a given solution \(x(t)\), the total _energy_ \[E=\tfrac{1}{2}\,m\dot{x}^{2}+V(x)\] is constant as a function of time.
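The conservation law is easy to watch in action: integrate \(m\ddot{x}=-dV/dx\) with a decent stepper and \(E\) barely budges. A minimal sketch assuming NumPy, with the illustrative choice \(m=1\), \(V(x)=\tfrac{1}{2}x^{2}\) (not from the text):

```python
import numpy as np

# m*x'' = -dV/dx with m = 1, V(x) = 0.5*x**2, as the system x' = v, v' = -x
def rhs(s):
    x, v = s
    return np.array([v, -x])

def rk4_step(s, dt):
    k1 = rhs(s)
    k2 = rhs(s + 0.5 * dt * k1)
    k3 = rhs(s + 0.5 * dt * k2)
    k4 = rhs(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(s):
    x, v = s
    return 0.5 * v**2 + 0.5 * x**2   # E = (1/2) m v^2 + V(x)

s = np.array([1.0, 0.0])
E0 = energy(s)
for _ in range(10_000):              # integrate to t = 100
    s = rk4_step(s, 0.01)
print(abs(energy(s) - E0))           # tiny: E is conserved up to roundoff-level drift
```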
The energy is often called a conserved quantity, a constant of motion, or a first integral. Systems for which a conserved quantity exists are called _conservative systems_. Let's be a bit more general and precise. Given a system \(\dot{\mathbf{x}}=\mathbf{f(x)}\), a _conserved quantity_ is a real-valued continuous function \(E(\mathbf{x})\) that is constant on trajectories, i.e. \(dE/dt=0\). To avoid trivial examples, we also require that \(E(\mathbf{x})\) be nonconstant on every open set. Otherwise a constant function like \(E(\mathbf{x})\equiv 0\) would qualify as a conserved quantity for every system, and so _every_ system would be conservative! Our caveat rules out this silliness. The first example points out a basic fact about conservative systems. **Example 6.5.1:** Show that _a conservative system cannot have any attracting fixed points._ _Solution:_ Suppose **x*** were an attracting fixed point. Then all points in its basin of attraction would have to be at the same energy \(E(\textbf{x*})\) (because energy is constant on trajectories and all trajectories in the basin flow to **x***). Hence \(E(\textbf{x})\) must be a _constant function_ for **x** in the basin. But this contradicts our definition of a conservative system, in which we required that \(E(\textbf{x})\) be nonconstant on all open sets. If attracting fixed points can't occur, then what kind of fixed points _can_ occur? One generally finds saddles and centers, as in the next example. **Example 6.5.2:** Consider a particle of mass \(m=1\) moving in a double-well potential \(V(x)=-\frac{1}{2}x^{2}+\frac{1}{4}x^{4}\). Find and classify all the equilibrium points for the system. Then plot the phase portrait and interpret the results physically. _Solution:_ The force is \(-dV/dx=x-x^{3}\), so the equation of motion is \[\ddot{x}=x-x^{3}\,.\] This can be rewritten as the vector field \[\dot{x}=y\] \[\dot{y}=x-x^{3}\] where \(y\) represents the particle's velocity. 
Equilibrium points occur where \(\left(\dot{x},\dot{y}\right)=(0,0)\). Hence the equilibria are \((x^{\ast},y^{\ast})=(0,0)\) and \((\pm 1,0)\). To classify these fixed points we compute the Jacobian: \[A=\begin{pmatrix}0&1\\ 1-3x^{2}&0\end{pmatrix}.\] At \((0,0)\), we have \(\Delta=-1\), so the origin is a saddle point. But when \((x^{\ast},y^{\ast})=(\pm 1,0)\), we find \(\tau=0\), \(\Delta=2\); hence these equilibria are predicted to be centers. At this point you should be hearing warning bells--in Section 6.3 we saw that small nonlinear terms can easily destroy a center predicted by the linear approximation. But that's not the case here, because of energy conservation. The trajectories are closed curves defined by the _contours_ of constant energy, i.e., \[E=\tfrac{1}{2}\,y^{2}-\tfrac{1}{2}\,x^{2}+\tfrac{1}{4}\,x^{4}=\text{constant}.\] Figure 6.5.1 shows the trajectories corresponding to different values of \(E\). To decide which way the arrows point along the trajectories, we simply compute the vector \((\dot{x},\dot{y})\) at a few convenient locations. For example, \(\dot{x}>0\) and \(\dot{y}=0\) on the positive \(y\)-axis, so the motion is to the right. The orientation of neighboring trajectories follows by continuity. As expected, the system has a saddle point at \((0,0)\) and centers at \((1,0)\) and \((-1,0)\). Each of the neutrally stable centers is surrounded by a family of small closed orbits. There are also large closed orbits that encircle all three fixed points. Thus solutions of the system are typically _periodic_, except for the equilibrium solutions and two very special trajectories: these are the trajectories that appear to start and end at the origin. More precisely, these trajectories approach the origin as \(t\to\pm\infty\). Trajectories that start and end at the same fixed point are called _homoclinic orbits_. They are common in conservative systems, but are rare otherwise.
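Energy conservation can also be seen numerically in this example; a sketch assuming NumPy. A trajectory launched inside the right-hand well (so that \(E<0\)) stays on its energy contour and never crosses \(x=0\):

```python
import numpy as np

# Double-well system of Example 6.5.2: x' = y, y' = x - x**3,
# with energy E = 0.5*y**2 - 0.5*x**2 + 0.25*x**4
def rhs(s):
    x, y = s
    return np.array([y, x - x**3])

def rk4_step(s, dt):
    k1 = rhs(s)
    k2 = rhs(s + 0.5 * dt * k1)
    k3 = rhs(s + 0.5 * dt * k2)
    k4 = rhs(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(s):
    x, y = s
    return 0.5 * y**2 - 0.5 * x**2 + 0.25 * x**4

s = np.array([0.5, 0.0])       # E < 0: trapped in the right-hand well
E0 = energy(s)
min_x, drift = s[0], 0.0
for _ in range(20_000):        # integrate to t = 100
    s = rk4_step(s, 0.005)
    min_x = min(min_x, s[0])
    drift = max(drift, abs(energy(s) - E0))
print(min_x, drift)            # x stays positive; the energy drift is tiny
```

Crossing \(x=0\) would require \(E=\tfrac{1}{2}y^{2}\geq 0\) there, which a trajectory with \(E<0\) cannot manage.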
Notice that a homoclinic orbit does _not_ correspond to a periodic solution, because the trajectory takes forever trying to reach the fixed point. Finally, let's connect the phase portrait to the motion of an undamped particle in a double-well potential (Figure 6.5.2). The neutrally stable equilibria correspond to the particle at rest at the bottom of one of the wells, and the small closed orbits represent small oscillations about these equilibria. The large orbits represent more energetic oscillations that repeatedly take the particle back and forth over the hump. Do you see what the saddle point and the homoclinic orbits mean physically?
**Example 6.5.3:** Sketch the graph of the energy function \(E(x,y)\) for Example 6.5.2.
_Solution:_ The graph of \(E(x,y)\) is shown in Figure 6.5.3. The energy \(E\) is plotted above each point \((x,y)\) of the phase plane. The resulting surface is often called the _energy surface_ for the system.
Figure 6.5.1:
Figure 6.5.3 shows that the local minima of \(E\) project down to centers in the phase plane. Contours of slightly higher energy correspond to the small orbits surrounding the centers. The saddle point and its homoclinic orbits lie at even higher energy, and the large orbits that encircle all three fixed points are the most energetic of all. It's sometimes helpful to think of the flow as occurring on the energy surface itself, rather than in the phase plane. But notice--the trajectories must maintain a constant height \(E\), so they would run _around_ the surface, not down it.
### Nonlinear Centers
Centers are ordinarily very delicate but, as the examples above suggest, they are much more robust when the system is conservative. We now present a theorem about nonlinear centers in second-order conservative systems. The theorem says that centers occur at the local minima of the energy function.
This is physically plausible--one expects neutrally stable equilibria and small oscillations to occur at the bottom of _any_ potential well, no matter what its shape. **Theorem 6.5.1:** (Nonlinear centers for conservative systems) Consider the system \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\), where \(\mathbf{x}=(x,\,y)\in\mathbf{R}^{2}\), and \(\mathbf{f}\) is continuously differentiable. Suppose there exists a conserved quantity \(E(\mathbf{x})\) and suppose that \(\mathbf{x}^{*}\) is an isolated fixed point (i.e., there are no other fixed points in a small neighborhood surrounding \(\mathbf{x}^{*}\)). If \(\mathbf{x}^{*}\) is a local minimum of \(E\), then all trajectories sufficiently close to \(\mathbf{x}^{*}\) are closed. **Ideas behind the proof:** Since \(E\) is constant on trajectories, each trajectory is contained in some contour of \(E\). Near a local maximum or minimum, the contours are _closed._ (We won't prove this, but Figure 6.5.3 should make it seem obvious.) The only remaining question is whether the trajectory actually goes all the way around the contour or whether it stops at a fixed point on the contour. But because we're assuming that \(\mathbf{x}^{*}\) is an _isolated_ fixed point, there cannot be any fixed points on contours sufficiently close to **x***. Hence all trajectories in a sufficiently small neighborhood of **x*** are closed orbits, and therefore **x*** is a center.
Figure 6.5.3:
Two remarks about this result: 1. The theorem is valid for local _maxima_ of \(E\) also. Just replace the function \(E\) by \(-E\), and maxima get converted to minima; then Theorem 6.5.1 applies. 2. We need to assume that **x*** is isolated. Otherwise there are counterexamples due to fixed points on the energy contour--see Exercise 6.5.12. Another theorem about nonlinear centers will be presented in the next section.
### 6.6 Reversible Systems
Many mechanical systems have _time-reversal symmetry_.
This means that their dynamics look the same whether time runs forward or backward. For example, if you were watching a movie of an undamped pendulum swinging back and forth, you wouldn't see any physical absurdities if the movie were run backward. In fact, any mechanical system of the form \(m\ddot{x}=F(x)\) is symmetric under time reversal. If we make the change of variables \(t\to-t\), the second derivative \(\ddot{x}\) stays the same and so the equation is unchanged. Of course, the velocity \(\dot{x}\) would be reversed. Let's see what this means in the phase plane. The equivalent system is \[\begin{array}{l}\dot{x}=y\\ \dot{y}=\frac{1}{m}F(x)\end{array}\] where \(y\) is the velocity. If we make the change of variables \(t\to-t\) and \(y\to-y\), both equations stay the same. Hence if \((x(t),y(t))\) is a solution, then so is \((x(-t),-y(-t))\). Therefore every trajectory has a twin: they differ only by time-reversal and a reflection in the \(x\)-axis (Figure 6.6.1). The trajectory above the \(x\)-axis looks just like the one below the \(x\)-axis, except the arrows are reversed.
Figure 6.6.1:
More generally, let's define a _reversible system_ to be _any_ second-order system that is invariant under \(t\to-t\) and \(y\to-y\). For example, any system of the form \[\dot{x} =f(x,y)\] \[\dot{y} =g(x,y),\] where \(f\) is _odd_ in \(y\) and \(g\) is _even_ in \(y\) (i.e., \(f(x,-y)=-f(x,y)\) and \(g(x,-y)=g(x,y)\)) is reversible. Reversible systems are different from conservative systems, but they have many of the same properties. For instance, the next theorem shows that centers are robust in reversible systems as well. **Theorem 6.6.1:** (Nonlinear centers for reversible systems) Suppose the origin \(\mathbf{x}^{*}=\mathbf{0}\) is a linear center for the continuously differentiable system \[\dot{x} =f(x,y)\] \[\dot{y} =g(x,y),\] and suppose that the system is reversible. Then sufficiently close to the origin, all trajectories are closed curves.
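The twin-trajectory property described above can be demonstrated numerically: run forward for a time \(T\), reflect in the \(x\)-axis, run forward for \(T\) again, and you land on the reflection of your starting point. A sketch assuming NumPy, using the mechanical form \(\dot{x}=y\), \(\dot{y}=F(x)\) with the illustrative choice \(F(x)=-\sin x\):

```python
import numpy as np

# A reversible system of the form x' = y, y' = F(x); here F(x) = -sin(x)
def rhs(s):
    x, y = s
    return np.array([y, -np.sin(x)])

def flow(s, dt, steps):
    """Advance the state with RK4 steps."""
    for _ in range(steps):
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2)
        k4 = rhs(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return s

s0 = np.array([0.3, 0.1])
s1 = flow(s0, 1e-3, 2000)                            # forward for time T = 2
twin = flow(np.array([s1[0], -s1[1]]), 1e-3, 2000)   # reflect, then forward again
print(twin, np.array([s0[0], -s0[1]]))               # the two agree
```

In symbols: reversibility says the flow satisfies \(\phi_{T}(R(\phi_{T}(\mathbf{x})))=R(\mathbf{x})\) with \(R(x,y)=(x,-y)\), and that is exactly what the computation checks.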
**Figure 6.6.2** **Figure 6.6.3**
**Example 6.6.1:** Show that the system \[\dot{x} =y-y^{3}\] \[\dot{y} =-x-y^{2}\] has a nonlinear center at the origin, and plot the phase portrait.
_Solution:_ We'll show that the hypotheses of the theorem are satisfied. The Jacobian at the origin is \[A = \begin{pmatrix}0 &1\\ -1 &0\end{pmatrix}.\] This has \(\tau=0\), \(\Delta>0\), so the origin is a linear center. Furthermore, the system is reversible, since the equations are invariant under the transformation \(t\to-t\), \(y\to-y\). By Theorem 6.6.1, the origin is a _nonlinear_ center. The other fixed points of the system are \((-1,1)\) and \((-1,-1)\). They are saddle points, as is easily checked by computing the linearization. A computer-generated phase portrait is shown in Figure 6.6.4. It looks like some exotic sea creature, perhaps a manta ray. The reversibility symmetry is apparent. The trajectories above the \(x\)-axis have twins below the \(x\)-axis, with arrows reversed. Notice that the twin saddle points are joined by a pair of trajectories. They are called _heteroclinic trajectories_ or _saddle connections_. Like homoclinic orbits, heteroclinic trajectories are much more common in reversible or conservative systems than in other types of systems. Although we have relied on the computer to plot Figure 6.6.4, it can be sketched on the basis of qualitative reasoning alone. For example, the existence of the heteroclinic trajectories can be deduced rigorously using reversibility arguments (Exercise 6.6.6). The next example illustrates the spirit of such arguments.
Figure 6.6.4:
**Example 6.6.2**: _Using reversibility arguments alone, show that the system_ \[\dot{x} = y\] \[\dot{y} = x-x^{2}\] _has a homoclinic orbit in the half-plane \(x\geq 0\)._ _Solution:_ Consider the unstable manifold of the saddle point at the origin.
This manifold leaves the origin along the vector \((1,1)\), since this is the unstable eigendirection for the linearization. Hence, close to the origin, part of the unstable manifold lies in the first quadrant \(x\), \(y>0\). Now imagine a phase point with coordinates \((x(t),y(t))\) moving along the unstable manifold, starting from \(x\), \(y\) small and positive. At first, \(x(t)\) must increase since \(\dot{x}=y>0\). Also, \(y(t)\) increases initially, since \(\dot{y}=x-x^{2}>0\) for small \(x\). Thus the phase point moves up and to the right. Its horizontal velocity is continually increasing, so at some time it must cross the vertical line \(x=1\). Then \(\dot{y}<0\) so \(y(t)\) decreases, eventually reaching \(y=0\). Figure 6.6.5 shows the situation. Now, _by reversibility_, there must be a twin trajectory with the same endpoints but with arrow reversed (Figure 6.6.6). Together the two trajectories form the desired homoclinic orbit. There is a more general definition of reversibility which extends nicely to higher-order systems. Consider any mapping \(R(\mathbf{x})\) of the phase space to itself that satisfies \(R^{2}(\mathbf{x})=\mathbf{x}\). In other words, if the mapping is applied _twice_, all points go back to where they started. In our two-dimensional examples, a reflection about the \(x\)-axis (or any axis through the origin) has this property. Then the system \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\) is _reversible_ if it is invariant under the change of variables \(t\to-t\), \(\mathbf{x}\to R(\mathbf{x})\). Our next example illustrates this more general notion of reversibility, and also highlights the main difference between reversible and conservative systems.
Figure 6.6.5: Figure 6.6.6:
**Example 6.6.3**: _Show that the system_ \[\dot{x} = -2\cos x-\cos y\] \[\dot{y} = -2\cos y-\cos x\] _is reversible, but not conservative. Then plot the phase portrait._
_Solution:_ The system is invariant under the change of variables \(t\to-t\), \(x\to-x\), and \(y\to-y\). Hence the system is reversible, with \(R(x,y)=(-x,-y)\) in the preceding notation. To show that the system is not conservative, it suffices to show that it has an attracting fixed point. (Recall that a conservative system can never have an attracting fixed point--see Example 6.5.1.) The fixed points satisfy \(2\cos\,x=-\cos\,y\) and \(2\cos\,y=-\cos\,x\). Solving these equations simultaneously yields \(\cos\,x*=\cos\,y*=0\). Hence there are four fixed points, given by \((x*,y*)=(\pm\frac{\pi}{2},\pm\frac{\pi}{2})\). We claim that \((x*,y*)=(-\frac{\pi}{2},-\frac{\pi}{2})\) is an attracting fixed point. The Jacobian there is \[A=\begin{pmatrix}2\sin x*&\sin y*\\ \sin x*&2\sin y*\end{pmatrix}=\begin{pmatrix}-2&-1\\ -1&-2\end{pmatrix},\] which has \(\tau=-4\), \(\Delta=3\), \(\tau^{2}-4\Delta=4\). Therefore the fixed point is a stable node. This shows that the system is not conservative. The other three fixed points can be shown to be an unstable node and two saddles. A computer-generated phase portrait is shown in Figure 6.6.7. To see the reversibility symmetry, compare the dynamics at any two points \((x,\,y)\) and \(R(x,\,y)=(-x,\,-y)\). The trajectories look the same, but the arrows are reversed. In particular, the stable node at \((-\frac{\pi}{2},-\frac{\pi}{2})\) is the twin of the unstable node at \((\frac{\pi}{2},\frac{\pi}{2})\). The system in Example 6.6.3 is closely related to a model of two superconducting Josephson junctions coupled through a resistive load (Tsang et al. 1991). For further discussion, see Exercise 6.6.9 and Example 8.7.4. Reversible, nonconservative systems also arise in the context of lasers (Politi et al. 1986) and fluid flows (Stone, Nadim, and Strogatz 1991 and Exercise 6.6.8).
### 6.7 Pendulum
Do you remember the first nonlinear system you ever studied in school? It was probably the pendulum.
Figure 6.6.7:
But in elementary courses, the pendulum's essential nonlinearity is sidestepped by the small-angle approximation \(\sin\,\theta\approx\theta\). Enough of that! In this section we use phase plane methods to analyze the pendulum, even in the dreaded large-angle regime where the pendulum whirls over the top. In the absence of damping and external driving, the motion of a pendulum is governed by \[\frac{d^{2}\theta}{dt^{2}}+\frac{g}{L}\sin\theta=0 \tag{1}\] where \(\theta\) is the angle from the downward vertical, \(g\) is the acceleration due to gravity, and \(L\) is the length of the pendulum (Figure 6.7.1). We nondimensionalize (1) by introducing a frequency \(\omega=\sqrt{g/L}\) and a dimensionless time \(\tau=\omega t\). Then the equation becomes \[\ddot{\theta}+\sin\theta=0 \tag{2}\] where the overdot denotes differentiation with respect to \(\tau\). The corresponding system in the phase plane is \[\dot{\theta}=v \tag{3a}\] \[\dot{v}=-\sin\theta \tag{3b}\] where \(v\) is the (dimensionless) angular velocity. The fixed points are \((\theta^{*},v^{*})=(k\pi,0)\), where \(k\) is any integer. There's no physical difference between angles that differ by \(2\pi\), so we'll concentrate on the two fixed points \((0,0)\) and \((\pi,0)\). At \((0,0)\), the Jacobian is \[A=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\] so the origin is a linear center. In fact, the origin is a _nonlinear_ center, for two reasons. First, the system (3) is _reversible_: the equations are invariant under the transformation \(\tau\rightarrow-\tau\), \(v\rightarrow-v\). Then Theorem 6.6.1 implies that the origin is a nonlinear center. Second, the system is also _conservative_.
Multiplying (2) by \(\dot{\theta}\) and integrating yields \[\dot{\theta}(\ddot{\theta}+\sin\theta)=0\Rightarrow\tfrac{1}{2}\dot{\theta}^{2}-\cos\theta=\text{constant}.\] The energy function \[E(\theta,v)=\tfrac{1}{2}v^{2}-\cos\theta \tag{4}\] has a local minimum at \((0,0)\), since \(E \approx \frac{1}{2}(v^{2} + \theta^{2}) - 1\) for small \((\theta,v)\). Hence Theorem 6.5.1 provides a second proof that the origin is a nonlinear center. (This argument also shows that the closed orbits are approximately _circular_, with \(\theta^{2} + v^{2} \approx 2(E + 1)\).) Now that we've beaten the origin to death, consider the fixed point at \((\pi,0)\). The Jacobian is \[A = \begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}.\] The characteristic equation is \(\lambda^{2} - 1 = 0\). Therefore \(\lambda_{1} = - 1\), \(\lambda_{2} = 1\); the fixed point is a saddle. The corresponding eigenvectors are \(\mathbf{v}_{1} = (1, - 1)\) and \(\mathbf{v}_{2} = (1,1)\). The phase portrait near the fixed points can be sketched from the information obtained so far (Figure 6.7.2). To fill in the picture, we include the energy contours \(E = \frac{1}{2}v^{2} - \cos\theta\) for different values of \(E\). The resulting phase portrait is shown in Figure 6.7.3. The picture is periodic in the \(\theta\)-direction, as we'd expect. Now for the physical interpretation. The center corresponds to a state of neutrally stable equilibrium, with the pendulum at rest and hanging straight down. This is the lowest possible energy state (\(E = - 1\)). The small orbits surrounding the center represent small oscillations about equilibrium, traditionally called _librations_. As \(E\) increases, the orbits grow. The critical case is \(E=1\), corresponding to the heteroclinic trajectories joining the saddles in Figure 6.7.3. The saddles represent an _inverted_ pendulum at rest; hence the heteroclinic trajectories represent delicate motions in which the pendulum slows to a halt precisely as it approaches the inverted position.
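The role of \(E=1\) as the borderline is easy to confirm numerically; a sketch assuming NumPy, with illustrative initial conditions. Below the borderline, \(\theta\) turns around where \(\cos\theta=-E\); above it, \(\theta\) grows without bound:

```python
import numpy as np

# Pendulum system (3): theta' = v, v' = -sin(theta); E = 0.5*v**2 - cos(theta)
def rhs(s):
    theta, v = s
    return np.array([v, -np.sin(theta)])

def peak_angle(s, dt=0.01, steps=5000):
    """Integrate with RK4 and return the largest |theta| encountered."""
    peak = abs(s[0])
    for _ in range(steps):
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2)
        k4 = rhs(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        peak = max(peak, abs(s[0]))
    return peak

# E = -0.5 < 1: libration; theta turns around at cos(theta) = 0.5, i.e. pi/3
print(peak_angle(np.array([0.0, 1.0])))
# E = 2.125 > 1: rotation; theta just keeps growing
print(peak_angle(np.array([0.0, 2.5])))
```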
For \(E>1\), the pendulum whirls repeatedly over the top. These _rotations_ should also be regarded as periodic solutions, since \(\theta=-\pi\) and \(\theta=+\pi\) are the same physical position.
### Cylindrical Phase Space
The phase portrait for the pendulum is more illuminating when wrapped onto the surface of a cylinder (Figure 6.7.4). In fact, a cylinder is the _natural_ phase space for the pendulum, because it incorporates the fundamental geometric difference between \(v\) and \(\theta\): the angular velocity \(v\) is a real number, whereas \(\theta\) is an _angle_.
Figure 6.7.4:
There are several advantages to the cylindrical representation. Now the periodic whirling motions _look_ periodic--they are the closed orbits that encircle the cylinder for \(E>1\). Also, it becomes obvious that the saddle points in Figure 6.7.3 are all the same physical state (an inverted pendulum at rest). The heteroclinic trajectories of Figure 6.7.3 become homoclinic orbits on the cylinder. There is an obvious symmetry between the top and bottom half of Figure 6.7.4. For example, both homoclinic orbits have the same energy and shape. To highlight this symmetry, it is interesting (if a bit mind-boggling at first) to plot the _energy_ vertically instead of the angular velocity \(v\) (Figure 6.7.5). Then the orbits on the cylinder remain at constant height, while the cylinder gets bent into a _U-tube_. The two arms of the tube are distinguished by the sense of rotation of the pendulum, either clockwise or counterclockwise. At low energies, this distinction no longer exists; the pendulum oscillates to and fro. The homoclinic orbits lie at \(E=1\), the borderline between rotations and librations. At first you might think that the trajectories are drawn incorrectly on one of the arms of the U-tube. It might seem that the arrows for clockwise and counterclockwise motions should go in _opposite_ directions.
But if you think about the coordinate system shown in Figure 6.7.6, you'll see that the picture is correct. The point is that the direction of increasing \(\theta\) has reversed when the bottom of the cylinder is bent around to form the U-tube. (Please understand that Figure 6.7.6 shows the coordinate system, not the actual trajectories; the trajectories were shown in Figure 6.7.5.) ### Damping Now let's return to the phase plane, and suppose that we add a small amount of linear damping to the pendulum. The governing equation becomes \[\ddot{\theta}+b\dot{\theta}+\sin\theta=0\] where \(b>0\) is the damping strength. Then centers become stable spirals while saddles remain saddles. A computer-generated phase portrait is shown in Figure 6.7.7. The picture on the U-tube is clearer. _All trajectories continually lose altitude,_ except for the fixed points (Figure 6.7.8). We can see this explicitly by computing the change in energy along a trajectory: \[\frac{dE}{dt}=\frac{d}{dt}\bigl{(}\tfrac{1}{2}\dot{\theta}^{2}-\cos \theta\bigr{)}=\dot{\theta}(\ddot{\theta}+\sin\theta)=-b\dot{\theta}^{2}\leq 0.\] Hence \(E\) decreases monotonically along trajectories, except at fixed points where \(\dot{\theta}\equiv 0\). The trajectory shown in Figure 6.7.8 has the following physical interpretation: the pendulum is initially whirling clockwise. As it loses energy, it has a harder time rotating over the top. The corresponding trajectory spirals down the arm of the U-tube until \(E<1\); then the pendulum doesn't have enough energy to whirl, and so it settles down into a small oscillation about the bottom. Eventually the motion damps and the pendulum comes to rest at its stable equilibrium. This example shows how far we can go with pictures--without invoking any difficult formulas, we were able to extract all the important features of the pendulum's dynamics.
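The monotonic energy decay \(dE/dt=-b\dot{\theta}^{2}\leq 0\) can also be confirmed numerically. Here is a minimal sketch using a hand-rolled RK4 integrator; the damping value, step size, and initial condition are illustrative choices, not from the text:

```python
import math

b = 0.25  # damping strength (illustrative)

def rk4_step(state, dt):
    """One RK4 step for theta'' + b*theta' + sin(theta) = 0 as a first-order system."""
    def f(s):
        theta, v = s
        return (v, -b*v - math.sin(theta))
    th, v = state
    k1 = f(state)
    k2 = f((th + dt/2*k1[0], v + dt/2*k1[1]))
    k3 = f((th + dt/2*k2[0], v + dt/2*k2[1]))
    k4 = f((th + dt*k3[0], v + dt*k3[1]))
    return (th + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            v + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def energy(state):
    theta, v = state
    return 0.5*v*v - math.cos(theta)

state = (0.1, 3.0)  # E > 1: the pendulum starts out whirling
energies = [energy(state)]
for _ in range(4000):
    state = rk4_step(state, 0.01)
    energies.append(energy(state))
# energies decreases monotonically (up to integration error) toward E = -1
```

Plotting `energies` against time shows the spiral down the U-tube of Figure 6.7.8 in quantitative form: rapid loss while whirling, slower decay once the motion becomes a libration.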
It would be much more difficult to obtain these results analytically, and much more confusing to interpret the formulas, even if we _could_ find them. ### 6.8 Index Theory In Section 6.3 we learned how to linearize a system about a fixed point. Linearization is a prime example of a _local_ method: it gives us a detailed microscopic view of the trajectories near a fixed point, but it can't tell us what happens to the trajectories after they leave that tiny neighborhood. Furthermore, if the vector field starts with quadratic or higher-order terms, the linearization tells us nothing. In this section we discuss index theory, a method that provides _global_ information about the phase portrait. It enables us to answer such questions as: Must a closed trajectory always encircle a fixed point? If so, what types of fixed points are permitted? What types of fixed points can coalesce in bifurcations? The method also yields information about the trajectories near higher-order fixed points. Finally, we can sometimes use index arguments to rule out the possibility of closed orbits in certain parts of the phase plane. ### The Index of a Closed Curve The index of a closed curve \(C\) is an integer that measures the winding of the vector field on \(C\). The index also provides information about any fixed points that might happen to lie inside the curve, as we'll see. This idea may remind you of a concept in electrostatics. In that subject, one often introduces a hypothetical closed surface (a "Gaussian surface") to probe a configuration of electric charges. By studying the behavior of the electric field on the surface, one can determine the total amount of charge _inside_ the surface. Amazingly, the behavior _on_ the surface tells us what's happening far away _inside_ the surface! In the present context, the electric field is analogous to our vector field, the Gaussian surface is analogous to the curve \(C\), and the total charge is analogous to the index. 
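The winding count described here is easy to estimate numerically: sample the vector field at many points along a loop, accumulate the (unwrapped) changes in its direction angle, and divide by \(2\pi\). A sketch for fields on the unit circle follows; the saddle-like and node-like test fields are our own examples:

```python
import math

def index_of_circle(field, n=2000):
    """Net counterclockwise turns of `field` over one counterclockwise
    circuit of the unit circle: sum of angle increments divided by 2*pi."""
    total, prev = 0.0, None
    for k in range(n + 1):
        t = 2*math.pi*k/n
        fx, fy = field(math.cos(t), math.sin(t))
        phi = math.atan2(fy, fx)
        if prev is not None:
            d = phi - prev
            # unwrap: bring each increment into (-pi, pi]
            while d <= -math.pi:
                d += 2*math.pi
            while d > math.pi:
                d -= 2*math.pi
            total += d
        prev = phi
    return round(total / (2*math.pi))

saddle = lambda x, y: (x, -y)  # field winds once clockwise: index -1
node = lambda x, y: (x, y)     # field winds once counterclockwise: index +1
```

The sampling must be fine enough that consecutive field directions differ by less than \(\pi\); otherwise the unwrapping step is ambiguous.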
Now let's make these notions precise. Suppose that \(\,\dot{\mathbf{x}}=\mathbf{f}\left(\mathbf{x}\right)\) is a smooth vector field on the phase plane. Consider a closed curve \(\,C\) (Figure 6.8.1). This curve is _not_ necessarily a trajectory--it's simply a loop that we're putting in the phase plane to probe the behavior of the vector field. We also assume that \(\,C\) is a "simple closed curve" (i.e., it doesn't intersect itself) and that it doesn't pass through any fixed points of the system. Then at each point \(\,\mathbf{x}\) on \(\,C\), the vector field \(\,\dot{\mathbf{x}}=\left(\dot{x},\dot{y}\right)\) makes a well-defined angle \(\,\phi=\tan^{-1}\left(\dot{y}/\dot{x}\right)\,\) with the positive \(x\)-axis (Figure 6.8.1). As \(\,\mathbf{x}\) moves counterclockwise around \(\,C\), the angle \(\,\phi\) changes _continuously_ since the vector field is smooth. Also, when \(\,\mathbf{x}\) returns to its starting place, \(\,\phi\) returns to its original direction. Hence, over one circuit, \(\,\phi\) has changed by an _integer_ multiple of \(2\pi\). Let \(\left[\phi\right]_{C}\) be the net change in \(\,\phi\) over one circuit. Then the _index of the closed curve_\(\,C\) with respect to the vector field \(\,\mathbf{f}\) is defined as \[I_{\,C}=\tfrac{1}{2\pi}\left[\phi\right]_{C}.\] Thus, \(\,I_{\,C}\) is the net number of counterclockwise revolutions made by the vector field as \(\,\mathbf{x}\) moves once counterclockwise around \(\,C\). To compute the index, we do not need to know the vector field everywhere; we only need to know it along \(\,C\). The first two examples illustrate this point. **Example 6.8.1:** Given that the vector field varies along \(\,C\) as shown in Figure 6.8.2, find \(\,I_{\,C}\). _Solution:_ As we traverse \(\,C\) once counterclockwise, the vectors rotate through one full turn in the same sense. Hence \(\,I_{\,C}=+1\). If you have trouble visualizing this, here's a foolproof method. 
Number the vectors in counterclockwise order, starting anywhere on \(\,C\) (Figure 6.8.3a). Then transport these vectors (_without rotation!_) such that their tails lie at a common origin (Figure 6.8.3b). The index equals the net number of counterclockwise revolutions made by the numbered vectors. As Figure 6.8.3b shows, the vectors rotate once counterclockwise as we go in increasing order from vector #1 to vector #8. Hence \(I_{C}=+1\). **Example 6.8.2**: Given the vector field on the closed curve shown in Figure 6.8.4a, compute \(I_{C}\). _Solution:_ We use the same construction as in Example 6.8.1. As we make one circuit around \(C\), the vectors rotate through one full turn, but now in the _opposite_ sense. In other words, the vectors on \(C\) rotate _clockwise_ as we go around \(C\) counterclockwise. This is clear from Figure 6.8.4b; the vectors rotate clockwise as we go in increasing order from vector #1 to vector #8. Therefore \(I_{C}=-1\). In many cases, we are given equations for the vector field, rather than a picture of it. Then we have to draw the picture ourselves, and repeat the steps above. Sometimes this can be confusing, as in the next example. **Example 6.8.3**: Given the vector field \(\dot{x}=x^{2}y\), \(\dot{y}=x^{2}-y^{2}\), find \(I_{C}\), where \(C\) is the unit circle \(x^{2}+y^{2}=1\). _Solution:_ To get a clear picture of the vector field, it is sufficient to consider a few conveniently chosen points on \(C.\) For instance, at \((x,\,y)=(1,0),\) the vector is \((\dot{x},\dot{y})=(x^{2}y,\,x^{2}-y^{2})=(0,1).\) This vector is labeled #1 in Figure 6.8.5a. Now we move counterclockwise around \(C,\) computing vectors as we go. At \((x,y)=\frac{1}{\sqrt{2}}(1,1),\) we have \((\dot{x},\dot{y})=\frac{1}{2\sqrt{2}}(1,0),\) labeled #2. The remaining vectors are found similarly.
Notice that different points on the circle may be associated with the same vector; for example, vectors #3 and #7 are both \((0,-1).\) Now we translate the vectors over to Figure 6.8.5b. As we move from #1 to #9 in order, the vectors rotate 180° clockwise between #1 and #3, then swing back 360° counterclockwise between #3 and #7, and finally rotate 180° clockwise again between #7 and #9 as we complete the circuit of \(C.\) Thus \(\left[\phi\right]_{C}=-\pi+2\pi-\pi=0\) and therefore \(I_{C}=0.\) We plotted nine vectors in this example, but you may want to plot more to see the variation of the vector field in finer detail. **Properties of the Index** Now we list some of the most important properties of the index. 1. Suppose that \(C\) can be continuously deformed into \(C^{\prime}\) without passing through a fixed point. Then \(I_{C}=I_{C^{\prime}}.\) This property has an elegant proof: Our assumptions imply that as we deform \(C\) into \(C^{\prime},\) the index \(I_{C}\) varies _continuously_. But \(I_{C}\) is an integer--it could only change by jumping, and a jump would violate continuity! (To put it more formally, if an integer-valued function is continuous, it must be _constant_.) As you think about this argument, try to see where we used the assumption that the intermediate curves don't pass through any fixed points. 2. If \(C\) doesn't enclose any fixed points, then \(I_{C}=0.\) Proof: By property (1), we can shrink \(C\) to a tiny circle without changing the index. But \(\phi\) is essentially constant on such a circle, because all the vectors point in nearly the same direction, thanks to the assumed smoothness of the vector field (Figure 6.8.6). Hence \([\phi]_{C}=0\) and therefore \(I_{C}=0\). 3. If we reverse all the arrows in the vector field by changing \(t\rightarrow-t\), the index is unchanged. Proof: All angles change from \(\phi\) to \(\phi+\pi\). Hence \([\phi]_{C}\) stays the same. 4.
Suppose that the closed curve \(C\) is actually a _trajectory_ for the system, i.e., \(C\) is a closed orbit. Then \(I_{C}=+1\). We won't prove this, but it should be clear from geometric intuition (Figure 6.8.7). Notice that the vector field is everywhere tangent to \(C\), because \(C\) is a trajectory. Hence, as \(\mathbf{x}\) winds around \(C\) once, the tangent vector also rotates once in the same sense. **Index of a Point** The properties above are useful in several ways. Perhaps most importantly, they allow us to define the index of a fixed point, as follows. Suppose \(\mathbf{x}^{*}\) is an isolated fixed point. Then the _index_ \(I\) of \(\mathbf{x}^{*}\) is defined as \(I_{C}\), where \(C\) is _any_ closed curve that encloses \(\mathbf{x}^{*}\) and no other fixed points. By property (1) above, \(I_{C}\) is independent of \(C\) and is therefore a property of \(\mathbf{x}^{*}\) alone. Therefore we may drop the subscript \(C\) and use the notation \(I\) for the index of a point. **Example 6.8.4:** Find the index of a stable node, an unstable node, and a saddle point. _Solution:_ The vector field near a stable node looks like the vector field of Example 6.8.1. Hence \(I=+1\). The index is also \(+1\) for an unstable node, because the only difference is that all the arrows are reversed; by property (3), this doesn't change the index! (This observation shows that _the index is not related to stability_, per se.) Finally, \(I=-1\) for a saddle point, because the vector field resembles that discussed in Example 6.8.2. In Exercise 6.8.1, you are asked to show that spirals, centers, degenerate nodes and stars all have \(I=+1\). Thus, a saddle point is truly a different animal from all the other familiar types of isolated fixed points. The index of a curve is related in a beautifully simple way to the indices of the fixed points inside it. This is the content of the following theorem.
**Theorem 6.8.1:** If a closed curve \(C\) surrounds \(n\) isolated fixed points \({\bf x}_{1}^{*},\ldots,\)\({\bf x}_{n}^{*}\), then \[I_{C}=I_{1}+I_{2}+\ldots+I_{n}\] where \(I_{k}\) is the index of \({\bf x}_{k}^{*}\), for \(k=1,\ldots,n\). **Ideas behind the proof:** The argument is a familiar one, and comes up in multivariable calculus, complex variables, electrostatics, and various other subjects. We think of \(C\) as a balloon and suck most of the air out of it, being careful not to hit any of the fixed points. The result of this deformation is a new closed curve \(\Gamma\), consisting of \(n\) small circles \(\gamma_{1},\ldots,\gamma_{n}\) about the fixed points, and two-way bridges connecting these circles (Figure 6.8.8). Note that \(I_{\Gamma}=I_{C}\), by property (1), since we didn't cross any fixed points during the deformation. Now let's compute \(I_{\Gamma}\) by considering \([\phi]_{\Gamma}\). There are contributions to \([\phi]_{\Gamma}\) from the small circles and from the two-way bridges. The key point is that _the contributions from the bridges cancel out:_ as we move around \(\Gamma\), each bridge is traversed once in one direction, and later in the opposite direction. Thus we only need to consider the contributions from the small circles. On \(\gamma_{k}\), the angle \(\phi\) changes by \([\phi]_{\gamma_{k}}=2\pi\,I_{k}\), by definition of \(I_{k}\). Hence \[I_{\Gamma}=\frac{1}{2\pi}[\phi]_{\Gamma}=\frac{1}{2\pi}\sum_{k=1}^{n}[\phi]_{ \gamma_{k}}=\sum_{k=1}^{n}I_{k}\] and since \(I_{\Gamma}=I_{C}\), we're done. This theorem is reminiscent of Gauss's law in electrostatics, namely that the electric flux through a surface is proportional to the total charge enclosed. See Exercise 6.8.12 for a further exploration of this analogy between index and charge. **Theorem 6.8.2**: Any closed orbit in the phase plane must enclose fixed points whose indices sum to \(+1\). _Proof:_ Let \(C\) denote the closed orbit.
From property (4) above, \(I_{C}=+1\). Then Theorem 6.8.1 implies \(\sum_{k=1}^{n}I_{k}=+1\). Theorem 6.8.2 has many practical consequences. For instance, it implies that there is always at least one fixed point inside any closed orbit in the phase plane (as you may have noticed on your own). If there is _only_ one fixed point inside, it cannot be a saddle point. Furthermore, Theorem 6.8.2 can sometimes be used to rule out the possible occurrence of closed trajectories, as seen in the following examples. **Example 6.8.5**: Show that closed orbits are impossible for the "rabbit vs. sheep" system \[\begin{array}{l}\dot{x}=x(3-x-2y)\\ \dot{y}=y(2-x-y)\end{array}\] studied in Section 6.4. Here \(x\), \(y\geq 0\). _Solution:_ As shown previously, the system has four fixed points: \((0,0)=\) unstable node; \((0,2)\) and \((3,0)=\) stable nodes; and \((1,1)=\) saddle point. The index at each of these points is shown in Figure 6.8.9. Now suppose that the system had a closed trajectory. Where could it lie? There are three qualitatively different locations, indicated by the dotted curves \(C_{1}\), \(C_{2}\), \(C_{3}\). They can be ruled out as follows: orbits like \(C_{1}\) are impossible because they don't enclose any fixed points, and orbits like \(C_{2}\) violate the requirement that the indices inside must sum to \(+1\). But what is wrong with orbits like \(C_{3}\), which satisfy the index requirement? The trouble is that such orbits always cross the \(x\)-axis or the \(y\)-axis, and these axes contain straight-line trajectories. Hence \(C_{3}\) violates the rule that trajectories can't cross (recall Section 6.2). **Example 6.8.6**: Show that the system \(\dot{x}=xe^{-x},\ \dot{y}=1+x+y^{2}\) has no closed orbits. _Solution:_ This system has no fixed points: if \(\dot{x}=0\), then \(x=0\) and so \(\dot{y}=1+y^{2}\geq 1>0\). By Theorem 6.8.2, closed orbits cannot exist. manifold is a curve that is harder to find.
The goal of this exercise is to approximate this unknown curve. 1. Let \((x,\,y)\) be a point on the stable manifold, and assume that \((x,\,y)\) is close to \((-1,0)\). Introduce a new variable \(u=x+1\), and write the stable manifold as \(y=a_{1}u+a_{2}u^{2}+O(u^{3})\). To determine the coefficients, derive two expressions for \(dy/du\) and equate them. 2. Check that your analytical result produces a curve with the same shape as the stable manifold shown in Figure 6.1.4. ### 6.2 Existence, Uniqueness, and Topological Consequences We claimed that different trajectories can never intersect. But in many phase portraits, different trajectories appear to intersect at a fixed point. Is there a contradiction here? Consider the system \(\dot{x}=y,\,\,\,\dot{y}=-x+(1-x^{2}-y^{2})y\). 1. Let \(D\) be the open disk \(x^{2}+y^{2}<4\). Verify that the system satisfies the hypotheses of the existence and uniqueness theorem throughout the domain \(D\). 2. By substitution, show that \(x(t)=\sin\,t\), \(y(t)=\cos\,t\) is an exact solution of the system. 3. Now consider a different solution, in this case starting from the initial condition \(x(0)=\frac{1}{2},\,\,\,y(0)=0\). Without doing any calculations, explain why this solution _must_ satisfy \(x(t)^{2}+y(t)^{2}<1\) for all \(t<\infty\). ### 6.3 Fixed Points and Linearization For each of the following systems, find the fixed points, classify them, sketch the neighboring trajectories, and try to fill in the rest of the phase portrait. 6.3.1 \(\dot{x}=x-y,\,\,\,\dot{y}=x^{2}-4\) 6.3.2 \(\dot{x}=\sin y,\,\,\,\dot{y}=x-x^{3}\) 6.3.3 \(\dot{x}=1+y-e^{-x},\,\,\,\dot{y}=x^{3}-y\) 6.3.4 \(\dot{x}=y+x-x^{3},\,\,\,\dot{y}=-y\) 6.3.5 \(\dot{x}=\sin y,\,\,\,\dot{y}=\cos x\) 6.3.6 \(\dot{x}=xy-1,\,\,\,\dot{y}=x-y^{3}\) 6.3.7 For each of the nonlinear systems above, plot a computer-generated phase portrait and compare to your approximate sketch.
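For the computer-generated portraits requested above, a bare-bones integrator is all that's needed. Here's a sketch that integrates system 6.3.4 with classical RK4 from a handful of seed points; the step size, duration, and seed points are arbitrary choices:

```python
import math

# System 6.3.4: x' = y + x - x^3, y' = -y
def f(x, y):
    return (y + x - x**3, -y)

def trajectory(x0, y0, dt=0.01, steps=3000):
    """Integrate forward in time with RK4; returns the list of points visited."""
    pts = [(x0, y0)]
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + dt/2*k1[0], y + dt/2*k1[1])
        k3 = f(x + dt/2*k2[0], y + dt/2*k2[1])
        k4 = f(x + dt*k3[0], y + dt*k3[1])
        x += dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y += dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        pts.append((x, y))
    return pts

# Seed a few initial conditions; plotting each curve (with matplotlib, say)
# and overlaying arrows of f builds up the phase portrait.
starts = [(a, s) for a in (-2, -1, 0.5, 2) for s in (-1, 1)]
curves = {p: trajectory(*p) for p in starts}
```

Swapping in another right-hand side `f` handles the other systems in the list; strongly nonlinear ones may need a smaller `dt`.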
6.3.8 (Gravitational equilibrium) A particle moves along a line joining two stationary masses, \(m_{1}\) and \(m_{2}\), which are separated by a fixed distance \(a\). Let \(x\) denote the distance of the particle from \(m_{1}\). 1. Show that \(\ddot{x}=\frac{Gm_{2}}{\left(x-a\right)^{2}}-\frac{Gm_{1}}{x^{2}}\), where \(G\) is the gravitational constant. 2. Find the particle's equilibrium position. Is it stable or unstable? 6.3.9 Consider the system \(\dot{x}=y^{3}-4x,\,\,\,\dot{y}=y^{3}-y-3x\). 1. Find all the fixed points and classify them. 2. Show that the line \(x=y\) is invariant, i.e., any trajectory that starts on it stays on it. 3. Show that \(\left|x(t)-y(t)\right|\to 0\) as \(t\rightarrow\infty\) for all other trajectories. (Hint: Form a differential equation for \(x-y\).) 4. Sketch the phase portrait. 5. If you have access to a computer, plot an accurate phase portrait on the square domain \(-20\leq x\), \(y\leq 20\). (To avoid numerical instability, you'll need to use a fairly small step size, because of the strong cubic nonlinearity.) Notice that the trajectories seem to approach a certain curve as \(t\rightarrow-\infty\); can you explain this behavior intuitively, and perhaps find an approximate equation for this curve? 6.3.10 (Dealing with a fixed point for which linearization is inconclusive) The goal of this exercise is to sketch the phase portrait for \(\dot{x}=xy\), \(\dot{y}=x^{2}-y\). 1. Show that the linearization predicts that the origin is a non-isolated fixed point. 2. Show that the origin is in fact an isolated fixed point. 3. Is the origin repelling, attracting, a saddle, or what? Sketch the vector field along the nullclines and at other points in the phase plane. Use this information to sketch the phase portrait. 4. Plot a computer-generated phase portrait to check your answer to (c). (Note: This problem can also be solved by a method called _center manifold theory_, as explained in Wiggins (1990) and Guckenheimer and Holmes (1983).)
6.3.11 (Nonlinear terms can change a star into a spiral) Here's another example that shows that borderline fixed points are sensitive to nonlinear terms. Consider the system in polar coordinates given by \(\dot{r}=-r\), \(\dot{\theta}=1/\ln r\). 1. Find \(r(t)\) and \(\theta(t)\) explicitly, given an initial condition \((r_{0},\theta_{0})\). 2. Show that \(r(t)\to 0\) and \(|\theta(t)|\rightarrow\infty\) as \(t\rightarrow\infty\). Therefore the origin is a stable spiral for the nonlinear system. 3. Write the system in \(x\), \(y\) coordinates. 4. Show that the linearized system about the origin is \(\dot{x}=-x\), \(\dot{y}=-y\). Thus the origin is a stable star for the linearized system. 6.3.12 (Polar coordinates) Using the identity \(\theta=\tan^{-1}(y/x)\), show that \(\dot{\theta}=(x\dot{y}-y\dot{x})/r^{2}\). 6.3.13 (Another linear center that's actually a nonlinear spiral) Consider the system \(\dot{x}=-y-x^{3}\), \(\dot{y}=x\). Show that the origin is a spiral, although the linearization predicts a center. 6.3.14 Classify the fixed point at the origin for the system \(\dot{x}=-y+ax^{3}\), \(\dot{y}=x+ay^{3}\), for all real values of the parameter \(a\). 6.3.15 Consider the system \(\dot{r}=r(1-r^{2})\), \(\dot{\theta}=1-\cos\theta\) where \(r\), \(\theta\) represent polar coordinates. Sketch the phase portrait and thereby show that the fixed point \(r^{*}=1\), \(\theta^{*}=0\) is attracting but not Liapunov stable. **6.3.16**: (Saddle switching and structural stability) Consider the system \(\dot{x}=a+x^{2}-xy,\ \ \dot{y}=y^{2}-x^{2}-1\), where \(a\) is a parameter. * Sketch the phase portrait for \(a=0.\) Show that there is a trajectory connecting two saddle points. (Such a trajectory is called a _saddle connection._) * With the aid of a computer if necessary, sketch the phase portrait for \(a<0\) and \(a>0.\) Notice that for \(a\neq 0,\) the phase portrait has a different topological character: the saddles are no longer connected by a trajectory.
The point of this exercise is that the phase portrait in (a) is _not structurally stable_, since its topology can be changed by an arbitrarily small perturbation \(a.\) **6.3.17**: (Nasty fixed point) The system \(\dot{x}=xy-x^{2}y+y^{3},\ \ \dot{y}=y^{2}+x^{3}-xy^{2}\) has a nasty higher-order fixed point at the origin. Using polar coordinates or otherwise, sketch the phase portrait. ### 6.4 Rabbits versus Sheep Consider the following "rabbits vs. sheep" problems, where \(x,y\geq 0.\) Find the fixed points, investigate their stability, draw the nullclines, and sketch plausible phase portraits. Indicate the basins of attraction of any stable fixed points. **6.4.1**: \(\dot{x}=x(3-x-y),\ \ \dot{y}=y(2-x-y)\) **6.4.2**: \(\dot{x}=x(3-2x-y),\ \ \dot{y}=y(2-x-y)\) **6.4.3**: \(\dot{x}=x(3-2x-2y),\ \ \dot{y}=y(2-x-y)\) The next three exercises deal with competition models of increasing complexity. We assume \(N_{1},\)\(N_{2}\geq 0\) in all cases. **6.4.4**: The simplest model is \(\dot{N}_{1}=r_{1}N_{1}-b_{1}N_{1}N_{2},\ \ \dot{N}_{2}=r_{2}N_{2}-b_{2}N_{1}N_{2}.\) * In what way is this model less realistic than the one considered in the text? * Show that by suitable rescalings of \(N_{1},\)\(N_{2},\) and \(t,\) the model can be nondimensionalized to \(x^{\prime}=x(1-y),\)\(y^{\prime}=y(\rho-x).\) Find a formula for the dimensionless group \(\rho.\) * Sketch the nullclines and vector field for the system in (b). * Draw the phase portrait, and comment on the biological implications. * Show that (almost) all trajectories are curves of the form \(\rho\ln x-x=\ln y-y+C.\) (Hint: Derive a differential equation for \(dx/dy,\) and separate the variables.) Which trajectories are not of the stated form? **6.4.5**: Now suppose that species #1 has a finite carrying capacity \(K_{1}.\) Thus \[\dot{N}_{1}=r_{1}N_{1}(1-N_{1}/K_{1})-b_{1}N_{1}N_{2}\] \[\dot{N}_{2}=r_{2}N_{2}-b_{2}N_{1}N_{2}.\] Nondimensionalize the model and analyze it.
Show that there are two qualitatively different kinds of phase portrait, depending on the size of \(K_{1}\). (Hint: Draw the nullclines.) Describe the long-term behavior in each case. **6.4.6**: Finally, suppose that both species have finite carrying capacities: \[\dot{N}_{1} = r_{1}N_{1}(1-N_{1}/K_{1})-b_{1}N_{1}N_{2}\] \[\dot{N}_{2} = r_{2}N_{2}(1-N_{2}/K_{2})-b_{2}N_{1}N_{2}.\] 1. Nondimensionalize the model. How many dimensionless groups are needed? 2. Show that there are four qualitatively different phase portraits, as far as long-term behavior is concerned. 3. Find conditions under which the two species can stably coexist. Explain the biological meaning of these conditions. (Hint: The carrying capacities reflect the competition _within_ a species, whereas the \(b\)'s reflect the competition _between_ species.) **6.4.7**: (Two-mode laser) According to Haken (1983, p. 129), a two-mode laser produces two different kinds of photons with numbers \(n_{1}\) and \(n_{2}\). By analogy with the simple laser model discussed in Section 3.3, the rate equations are \[\dot{n}_{1} = G_{1}Nn_{1}-K_{1}n_{1}\] \[\dot{n}_{2} = G_{2}Nn_{2}-K_{2}n_{2}\] where \(N(t)=N_{0}-\alpha_{1}n_{1}-\alpha_{2}n_{2}\) is the number of excited atoms. The parameters \(G_{1}\), \(G_{2}\), \(K_{1}\), \(K_{2}\), \(\alpha_{1}\), \(\alpha_{2}\), \(N_{0}\) are all positive. 1. Discuss the stability of the fixed point \(n_{1}^{*}=n_{2}^{*}=0\). 2. Find and classify any other fixed points that may exist. 3. Depending on the values of the various parameters, how many qualitatively different phase portraits can occur? For each case, what does the model predict about the long-term behavior of the laser? **6.4.8**: The system \(\dot{x}=ax^{c}-\phi x\), \(\dot{y}=by^{c}-\phi y\), where \(\phi\equiv ax^{c}+by^{c}\), has been used to model the evolutionary dynamics of two interacting species (Nowak 2006). Here \(x\) and \(y\) denote the relative abundances of the species, and \(a\), \(b\), \(c>0\) are parameters.
The unusual feature of this model is the exponent \(c\) in the growth law for each species. The goal of this question is to explore how the value of \(c\) affects the system's long-term dynamics. 1. Show that if the initial conditions satisfy \(x_{0}+y_{0}=1\), then \(x(t)+y(t)=1\) for all \(t\); hence, any trajectory that starts on this line stays on it forever. (Assume here and in the other parts of this problem that \(x_{0}\) and \(y_{0}\) are non-negative.) 2. Show that all trajectories starting in the positive quadrant are attracted to the invariant line \(x+y=1\) found in part (a). Thus, the analysis of the system's long-term dynamics boils down to figuring out how trajectories flow along this invariant line. 3. Draw the phase portrait in the \((x,y)\) plane for the simple case \(c=1\). How does the long-term behavior of the system depend on the relative sizes of \(a\) and \(b\)? 4. How does the phase portrait change when \(c>1\)? 5. Finally, what happens when \(c<1\)? **6.4.9**: (Model of a national economy) The following exercise is adapted from Exercise 2.24 in Jordan and Smith (1987). A simple model of a national economy, based on what economists call the "Keynesian cross," is given by \(\dot{I}=I-\alpha C\), \(\dot{C}=\beta(I-C-G)\), where \(I\geq 0\) is the national income, \(C\geq 0\) is the rate of consumer spending, and \(G\geq 0\) is the rate of government spending. The parameters \(\alpha\) and \(\beta\) satisfy \(1<\alpha<\infty\) and \(1\leq\beta<\infty\). * Show that if the rate of government spending \(G\) is _constant_, there is a fixed point for the model, and hence an equilibrium state for the economy. Classify this fixed point as a function of \(\alpha\) and \(\beta\). In the limiting case where \(\beta=1\), show that the economy is predicted to oscillate. * Next, assume that government spending increases _linearly_ with the national income: \(G=G_{0}+kI\), where \(k>0\).
Determine under what conditions there is an economically sensible equilibrium, meaning one in the first quadrant \(I\geq 0\), \(C\geq 0\). Show that this kind of equilibrium ceases to exist if \(k\) exceeds a critical value \(k_{c}\), to be determined. How is the economy predicted to behave when \(k>k_{c}\)? * Finally, suppose government expenditures grow _quadratically_ with the national income: \(G=G_{0}+kI^{2}\). Show that the system can have two, one, or no fixed points in the first quadrant, depending on how big \(G_{0}\) is. Discuss the implications for the economy in the various cases by interpreting the phase portraits. #### 6.4.10 (Hypercycle equation) In Eigen and Schuster's (1978) model of pre-biotic evolution, a group of \(n\geq 2\) RNA molecules or other self-reproducing chemical units are imagined to catalyze each other's replication in a closed feedback loop, with one molecule serving as the catalyst for the next. Eigen and Schuster considered a variety of hypothetical reaction schemes, the simplest of which, in dimensionless form, is \[\dot{x}_{i}=x_{i}\left[x_{i-1}-\sum_{j=1}^{n}x_{j}x_{j-1}\right]\,\quad i=1,2,\ldots,n\,\] where the indices are reduced modulo \(n\), so that \(x_{0}=x_{n}\). Here \(x_{i}\) denotes the relative frequency of molecule \(i\). From now on, let's focus on the case where \(n=2\), and assume \(x_{i}>0\) for all \(i\). 1. Show that for \(n=2\), the system reduces to \(\dot{x}_{1}=x_{1}(x_{2}-2x_{1}x_{2})\), \(\dot{x}_{2}=x_{2}(x_{1}-2x_{1}x_{2})\). 2. Find and classify all the fixed points. (Remember we're assuming \(x_{i}>0\) for all \(i\).) 3. Let \(u=x_{1}+x_{2}\). By deriving and analyzing a differential equation for \(\dot{u}\) in terms of \(u\) and \(x_{1}x_{2}\), prove that \(u(t)\to 1\) as \(t\to\infty\). 4. Let \(v=x_{1}-x_{2}\). Prove that \(v(t)\to 0\) as \(t\to\infty\). 5. By combining the results of parts (c) and (d), prove that \((x_{1}(t),x_{2}(t))\to(\frac{1}{2},\frac{1}{2})\). 6.
Using a computer, draw the phase portrait. You'll see something striking, something that cries out for explanation. Explain it analytically. For larger values of \(n\), the dynamics of the hypercycle equation become much richer--see Chapter 12 in Hofbauer and Sigmund (1998). **6.4.11**: (Leftists, rightists, centrists) Vasquez and Redner (2004, p. 8489) mention a highly simplified model of political opinion dynamics consisting of a population of leftists, rightists, and centrists. The leftists and rightists never talk to each other; they are too far apart politically to even begin a dialogue. But they do talk to the centrists--this is how opinion change occurs. Whenever an extremist of either type talks with a centrist, one of them convinces the other to change his or her mind, with the winner depending on the sign of a parameter \(r\). If \(r>0\), the extremist always wins and persuades the centrist to move to that end of the spectrum. If \(r<0\), the centrist always wins and pulls the extremist to the middle. The model's governing equations are \[\dot{x} = rxz\] \[\dot{y} = ryz\] \[\dot{z} = - rxz-ryz\] where \(x\), \(y\), and \(z\) are the relative fractions of rightists, leftists, and centrists, respectively, in the population. 1. Show that the set \(x+y+z=1\) is invariant. 2. Analyze the long-term behavior predicted by the model for both positive and negative values of \(r\). 3. Interpret the results in political terms. ### 6.5 Conservative Systems 6.5.1 Consider the system \(\ddot{x}=x^{3}-x\). 1. Find all the equilibrium points and classify them. 2. Find a conserved quantity. 3. Sketch the phase portrait. Hamiltonian systems are fundamental to classical mechanics; they provide an equivalent but more geometric version of Newton's laws. They are also central to celestial mechanics and plasma physics, where dissipation can sometimes be neglected on the time scales of interest.
The theory of Hamiltonian systems is deep and beautiful, but perhaps too specialized and subtle for a first course on nonlinear dynamics. See Arnold (1978), Lichtenberg and Lieberman (1992), Tabor (1989), or Hénon (1983) for introductions. Here's the simplest instance of a Hamiltonian system. Let \(H(p,\,q)\) be a smooth, real-valued function of two variables. The variable \(q\) is the "generalized coordinate" and \(p\) is the "conjugate momentum." (In some physical settings, \(H\) could also depend explicitly on time \(t\), but we'll ignore that possibility.) Then a system of the form \[\dot{q}=\partial H/\partial p,\qquad\dot{p}=-\partial H/\partial q\] is called a _Hamiltonian system_ and the function \(H\) is called the _Hamiltonian_. The equations for \(\dot{q}\) and \(\dot{p}\) are called Hamilton's equations. The next three exercises concern Hamiltonian systems. #### 6.5.8 (Harmonic oscillator) For a simple harmonic oscillator of mass \(m\), spring constant \(k\), displacement \(x\), and momentum \(p\), the Hamiltonian is \(H=\frac{p^{2}}{2m}+\frac{kx^{2}}{2}\). Write out Hamilton's equations explicitly. Show that one equation gives the usual definition of momentum and the other is equivalent to \(F=ma\). Verify that \(H\) is the total energy. #### 6.5.9 Show that for any Hamiltonian system, \(H(x,\,p)\) is a conserved quantity. (Hint: Show \(\dot{H}=0\) by applying the chain rule and invoking Hamilton's equations.) Hence the trajectories lie on the contour curves \(H(x,\,p)=C\). #### 6.5.10 (Inverse-square law) A particle moves in a plane under the influence of an inverse-square force. It is governed by the Hamiltonian \(H(p,r)=\frac{p^{2}}{2}+\frac{h^{2}}{2r^{2}}-\frac{k}{r}\) where \(r>0\) is the distance from the origin and \(p\) is the radial momentum. The parameters \(h\) and \(k\) are the angular momentum and the force constant, respectively. a) Suppose \(k>0\), corresponding to an attractive force like gravity.
Sketch the phase portrait in the \((r,\,p)\) plane. (Hint: Graph the "effective potential" \(V(r)=h^{2}/2r^{2}-k/r\) and then look for intersections with horizontal lines of height \(E\). Use this information to sketch the contour curves \(H(p,\,r)=E\) for various positive and negative values of \(E\).) b) Show that the trajectories are closed if \(-k^{2}/2h^{2}<E<0\), in which case the particle is "captured" by the force. What happens if \(E>0\)? What about \(E=0\)? c) If \(k<0\) (as in electric repulsion), show that there are no periodic orbits.

#### 6.5.11 (Basins for damped double-well oscillator)

Suppose we add a small amount of damping to the double-well oscillator of Example 6.5.2. The new system is \(\dot{x}=y\), \(\dot{y}=-by+x-x^{3}\), where \(0<b\ll 1\). Sketch the basin of attraction for the stable fixed point \((x^{*},y^{*})=(1,0)\). Make the picture large enough so that the global structure of the basin is clearly indicated.

#### 6.5.12 (Why we need to assume _isolated_ minima in Theorem 6.5.1)

Consider the system \(\dot{x}=xy\), \(\dot{y}=-x^{2}\). 1. Show that \(E=x^{2}+y^{2}\) is conserved. 2. Show that the origin is a fixed point, but not an isolated fixed point. 3. Since \(E\) has a local minimum at the origin, one might have thought that the origin has to be a center. But that would be a misuse of Theorem 6.5.1; the theorem does not apply here because the origin is _not_ an isolated fixed point. Show that in fact the origin is not surrounded by closed orbits, and sketch the actual phase portrait.

#### 6.5.13 (Nonlinear centers)

1. Show that the Duffing equation \(\ddot{x}+x+\varepsilon x^{3}=0\) has a nonlinear center at the origin for all \(\varepsilon>0\). 2. If \(\varepsilon<0\), show that all trajectories near the origin are closed. What about trajectories that are far from the origin?

#### 6.5.14 (Glider)

Consider a glider flying at speed \(v\) at an angle \(\theta\) to the horizontal.
Its motion is governed approximately by the dimensionless equations \[\dot{v}=-\sin\theta-Dv^{2}\] \[v\dot{\theta}=-\cos\theta+v^{2}\] where the trigonometric terms represent the effects of gravity and the \(v^{2}\) terms represent the effects of drag and lift. 1. Suppose there is no drag (\(D=0\)). Show that \(v^{3}-3v\cos\theta\) is a conserved quantity. Sketch the phase portrait in this case. Interpret your results physically--what does the flight path of the glider look like? 2. Investigate the case of positive drag (\(D>0\)).

In the next four exercises, we return to the problem of a bead on a rotating hoop, discussed in Section 3.5. Recall that the bead's motion is governed by \[mr\ddot{\phi}=-b\dot{\phi}-mg\sin\phi+mr\omega^{2}\sin\phi\cos\phi.\] Previously, we could only treat the overdamped limit. The next four exercises deal with the dynamics more generally.

#### 6.5.15 (Frictionless bead)

Consider the undamped case \(b=0\). 1. Show that the equation can be nondimensionalized to \(\phi^{\prime\prime}=\sin\phi\,(\cos\phi-\gamma^{-1})\), where \(\gamma=r\omega^{2}/g\) as before, and prime denotes differentiation with respect to dimensionless time \(\tau=\omega t\). 2. Draw all the qualitatively different phase portraits as \(\gamma\) varies. 3. What do the phase portraits imply about the physical motion of the bead?

#### 6.5.16 (Small oscillations of the bead)

Return to the original dimensional variables. Show that when \(b=0\) and \(\omega\) is sufficiently large, the system has a symmetric pair of stable equilibria. Find the approximate frequency of small oscillations about these equilibria. (Please express your answer with respect to \(t\), not \(\tau\).)

#### 6.5.17 (A puzzling constant of motion for the bead)

Find a conserved quantity when \(b=0\). You might think that it's essentially the bead's total energy, but it isn't! Show explicitly that the bead's kinetic plus potential energy is _not_ conserved. Does this make sense physically?
Can you find a physical interpretation for the conserved quantity? (Hint: Think about reference frames and moving constraints.)

#### 6.5.18 (General case for the bead)

Finally, allow the damping \(b\) to be arbitrary. Define an appropriate dimensionless version of \(b\), and plot all the qualitatively different phase portraits that occur as \(b\) and \(\gamma\) vary.

#### 6.5.19 (Rabbits vs. foxes)

The model \(\dot{R}=aR-bRF\), \(\dot{F}=-cF+dRF\) is the _Lotka-Volterra predator-prey model_. Here \(R(t)\) is the number of rabbits, \(F(t)\) is the number of foxes, and \(a\), \(b\), \(c\), \(d>0\) are parameters. 1. Discuss the biological meaning of each of the terms in the model. Comment on any unrealistic assumptions. 2. Show that the model can be recast in dimensionless form as \(x^{\prime}=x(1-y)\), \(y^{\prime}=\mu y(x-1)\). 3. Find a conserved quantity in terms of the dimensionless variables. 4. Show that the model predicts _cycles_ in the populations of both species, for almost all initial conditions.

This model is popular with many textbook writers because it's simple, but some are beguiled into taking it too seriously. Mathematical biologists dismiss the Lotka-Volterra model because it is not structurally stable, and because real predator-prey cycles typically have a characteristic amplitude. In other words, realistic models should predict a _single_ closed orbit, or perhaps finitely many, but not a continuous family of neutrally stable cycles. See the discussions in May (1972), Edelstein-Keshet (1988), or Murray (2002).

#### 6.5.20 (Rock-paper-scissors)

In the children's hand game of rock-paper-scissors, rock beats scissors (by smashing it); scissors beats paper (by cutting it); and paper beats rock (by covering it). In a biological setting, analogs of this non-transitive competition occur among certain types of bacteria (Kirkup and Riley 2004) and lizards (Sinervo and Lively 1996).
Consider the following idealized model for three competing species locked in a life-and-death game of rock-paper-scissors: \[\dot{P}=P(R-S)\] \[\dot{R}=R(S-P)\] \[\dot{S}=S(P-R),\] where \(P\), \(R\), and \(S\) (all positive, of course) are the sizes of the paper, rock, and scissors populations. * Write a few sentences explaining the various terms in these equations. Be sure to comment on why a given term has a plus or minus sign in front of it. You don't have to write much--just enough to demonstrate that you understand how the form of the equations reflects the rock-paper-scissors story. Also, state some of the biological assumptions being made here implicitly. * Show that \(P+R+S\) is a conserved quantity. * Show that \(PRS\) is also conserved. * How does the system behave as \(t\to\infty\)? Prove that your answer is correct. (Hint: Visualize the level sets of the functions \(E_{1}(P,R,S)=P+R+S\) and \(E_{2}(P,R,S)=PRS\) in the three-dimensional \((P,R,S)\) space. What can you infer from the fact that all trajectories must simultaneously lie in level sets of _both_ functions?)

### 6.6 Reversible Systems

Show that each of the following systems is reversible, and sketch the phase portrait.

#### 6.6.1 \(\dot{x}=y(1-x^{2})\), \(\dot{y}=1-y^{2}\)

#### 6.6.2 \(\dot{x}=y\), \(\dot{y}=x\cos y\)

#### 6.6.3 (Wallpaper)

Consider the system \(\dot{x}=\sin y\), \(\dot{y}=\sin x\). * Show that the system is reversible. * Find and classify all the fixed points. * Show that the lines \(y=\pm x\) are invariant (any trajectory that starts on them stays on them forever). * Sketch the phase portrait.

#### 6.6.4 (Computer explorations)

For each of the following reversible systems, try to sketch the phase portrait by hand. Then use a computer to check your sketch. If the computer reveals patterns you hadn't anticipated, try to explain them.
* y^{3}\), \(\dot{y} = x \cos y\) c) \(\dot{x} = \sin y\), \(\dot{y} = y^{2}\)

#### 6.6.5

Consider equations of the form \(\ddot{x}+f(\dot{x})+g(x)=0\), where \(f\) is an even function, and both \(f\) and \(g\) are smooth. a) Show that the equation is invariant under the pure time-reversal symmetry \(t\to-t\). b) Show that the equilibrium points cannot be stable nodes or spirals.

#### 6.6.6 (Manta ray)

Use qualitative arguments to deduce the "manta ray" phase portrait of Example 6.6.1. a) Plot the nullclines \(\dot{x}=0\) and \(\dot{y}=0\). b) Find the sign of \(\dot{x},\dot{y}\) in different regions of the plane. c) Calculate the eigenvalues and eigenvectors of the saddle points at \((-1,\,\pm 1)\). d) Consider the unstable manifold of \((-1,\,-1)\). By making an argument about the signs of \(\dot{x},\dot{y}\), prove that this unstable manifold intersects the negative \(x\)-axis. Then use reversibility to prove the existence of a heteroclinic trajectory connecting \((-1,\,-1)\) to \((-1,\,1)\). e) Using similar arguments, prove that another heteroclinic trajectory exists, and sketch several other trajectories to fill in the phase portrait.

#### 6.6.7 (Oscillator with both positive and negative damping)

Show that the system \(\ddot{x}+x\dot{x}+x=0\) is reversible and plot the phase portrait.

#### 6.6.8 (Reversible system on a cylinder)

While studying chaotic streamlines inside a drop immersed in a steady Stokes flow, Stone et al. (1991) encountered the system \[\dot{x}=\tfrac{\sqrt{2}}{4}\,x(x-1)\sin\phi,\qquad\dot{\phi}=\tfrac{1}{2}\Big{[}\beta-\tfrac{1}{\sqrt{2}}\cos\phi-\tfrac{1}{8\sqrt{2}}\,x\cos\phi\Big{]}\] where \(0\leq x\leq 1\) and \(-\pi\leq\phi\leq\pi\). Since the system is \(2\pi\)-periodic in \(\phi\), it may be considered as a vector field on a _cylinder_. (See Section 6.7 for another vector field on a cylinder.) The \(x\)-axis runs along the cylinder, and the \(\phi\)-axis wraps around it. Note that the cylindrical phase space is finite, with edges given by the circles \(x=0\) and \(x=1\).
a) Show that the system is reversible. b) Verify that for \(\tfrac{9}{8\sqrt{2}}>\beta>\tfrac{1}{\sqrt{2}}\), the system has three fixed points on the cylinder, one of which is a saddle. Show that this saddle is connected to itself by a homoclinic orbit that winds around the waist of the cylinder. Using reversibility, prove that there is a _band of closed orbits_ sandwiched between the circle \(x=0\) and the homoclinic orbit. Sketch the phase portrait on the cylinder, and check your results by numerical integration. c) Show that as \(\beta\to\tfrac{1}{\sqrt{2}}\) from above, the saddle point moves toward the circle \(x=0\), and the homoclinic orbit tightens like a noose. Show that all the closed orbits disappear when \(\beta=\tfrac{1}{\sqrt{2}}\). d) For \(0<\beta<\tfrac{1}{\sqrt{2}}\), show that there are two saddle points on the edge \(x=0\). Plot the phase portrait on the cylinder.

#### 6.6.9 (Josephson junction array)

As discussed in Exercises 4.6.4 and 4.6.5, the equations \[\frac{d\phi_{k}}{d\tau}=\Omega+a\sin\phi_{k}+\frac{1}{N}\sum_{j=1}^{N}\sin\phi_{j}\,,\quad\mbox{for }k=1,2,\] arise as the dimensionless circuit equations for a resistively loaded array of Josephson junctions. * Let \(\theta_{k}=\phi_{k}-\frac{\pi}{2}\), and show that the resulting system for \(\theta_{k}\) is reversible. * Show that there are four fixed points (mod \(2\pi\)) when \(\left|\Omega/(a+1)\right|<1\), and none when \(\left|\Omega/(a+1)\right|>1\). * Using the computer, explore the various phase portraits that occur for \(a=1\), as \(\Omega\) varies over the interval \(0\leq\Omega\leq 3\). For more about this system, see Tsang et al. (1991).

#### 6.6.10

Is the origin a nonlinear center for the system \(\dot{x}=-y-x^{2}\), \(\dot{y}=x\)?
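One way to build intuition for the last question: the system is reversible under \(t\to-t\), \(x\to-x\), which suggests a center, and then any small-amplitude trajectory should close on itself. The sketch below (our own helper names, pure-Python RK4) launches a trajectory from \((0.05,\,0)\) and measures where it next crosses the positive \(x\)-axis going upward; for a closed orbit the crossing should land back at the starting point:

```python
def rk4_step(f, state, dt):
    # one classical fourth-order Runge-Kutta step
    k1 = f(state)
    k2 = f([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = f([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def f(state):
    # x' = -y - x^2,  y' = x
    x, y = state
    return [-y - x * x, x]

x0 = 0.05
state, dt = [x0, 0.0], 0.001
x_return = None
for step in range(100_000):
    nxt = rk4_step(f, state, dt)
    # after leaving the axis, detect the next upward crossing of y = 0
    if step > 100 and state[1] < 0.0 <= nxt[1]:
        frac = -state[1] / (nxt[1] - state[1])   # linear interpolation
        x_return = state[0] + frac * (nxt[0] - state[0])
        break
    state = nxt

gap = abs(x_return - x0)   # near zero if the orbit closes
```

A numerical check like this is suggestive, not a proof; the analytical argument goes through reversibility.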
#### 6.6.11 (Rotational dynamics and a phase portrait on a sphere)

The rotational dynamics of an object in a shear flow are governed by \[\dot{\theta}=\cot\phi\,\cos\theta,\qquad\dot{\phi}=(\cos^{2}\phi+A\sin^{2}\phi)\sin\theta,\] where \(\theta\) and \(\phi\) are spherical coordinates that describe the orientation of the object. Our convention here is that \(-\pi<\theta\leq\pi\) is the "longitude," i.e., the angle around the \(z\)-axis, and \(-\frac{\pi}{2}\leq\phi\leq\frac{\pi}{2}\) is the "latitude," i.e., the angle measured northward from the equator. The parameter \(A\) depends on the shape of the object. a) Show that the equations are reversible in two ways: under \(t\rightarrow-t\), \(\theta\rightarrow-\theta\) and under \(t\rightarrow-t\), \(\phi\rightarrow-\phi\). b) Investigate the phase portraits when \(A\) is positive, zero, and negative. You may sketch the phase portraits as Mercator projections (treating \(\theta\) and \(\phi\) as rectangular coordinates), but it's better to visualize the motion on the sphere, if you can. c) Relate your results to the tumbling motion of an object in a shear flow. What happens to the orientation of the object as \(t\rightarrow\infty\)?

### 6.7 Pendulum

#### 6.7.1 (Damped pendulum)

Find and classify the fixed points of \(\ddot{\theta}+b\dot{\theta}+\sin\theta=0\) for all \(b>0\), and plot the phase portraits for the qualitatively different cases.

#### 6.7.2 (Pendulum driven by constant torque)

The equation \(\ddot{\theta}+\sin\theta=\gamma\) describes the dynamics of an undamped pendulum driven by a constant torque, or an undamped Josephson junction driven by a constant bias current. a) Find all the equilibrium points and classify them as \(\gamma\) varies. b) Sketch the nullclines and the vector field. c) Is the system conservative? If so, find a conserved quantity. Is the system reversible? d) Sketch the phase portrait on the plane as \(\gamma\) varies.
e) Find the approximate frequency of small oscillations about any centers in the phase portrait.

#### 6.7.3 (Nonlinear damping)

Analyze \(\ddot{\theta}+(1+a\cos\theta)\dot{\theta}+\sin\theta=0\), for all \(a\geq 0\).

#### 6.7.4 (Period of the pendulum)

Suppose a pendulum governed by \(\ddot{\theta}+\sin\theta=0\) is swinging with an amplitude \(\alpha\). Using some tricky manipulations, we are going to derive a formula for \(T(\alpha)\), the period of the pendulum. a) Using conservation of energy, show that \(\dot{\theta}^{2}=2(\cos\theta-\cos\alpha)\) and hence that \[T=4\int_{0}^{\alpha}\frac{d\theta}{\left[2(\cos\theta-\cos\alpha)\right]^{1/2}}.\] b) Using the half-angle formula, show that \[T=4\int_{0}^{\alpha}\frac{d\theta}{\left[4(\sin^{2}\frac{1}{2}\alpha-\sin^{2}\frac{1}{2}\theta)\right]^{1/2}}.\] c) The formulas in parts (a) and (b) have the disadvantage that \(\alpha\) appears in both the integrand and the upper limit of integration. To remove the \(\alpha\)-dependence from the limits of integration, we introduce a new angle \(\phi\) that runs from 0 to \(\frac{\pi}{2}\) when \(\theta\) runs from 0 to \(\alpha\). Specifically, let \((\sin\frac{1}{2}\alpha)\sin\phi=\sin\frac{1}{2}\theta\). Using this substitution, rewrite (b) as an integral with respect to \(\phi\). Thereby derive the exact result \[T=4\int_{0}^{\pi/2}\frac{d\phi}{\cos\frac{1}{2}\theta}=4K(\sin^{2}\tfrac{1}{2}\alpha),\] where the _complete elliptic integral of the first kind_ is defined as \[K(m)=\int_{0}^{\pi/2}\frac{d\phi}{(1-m\sin^{2}\phi)^{1/2}},\ \mbox{for}\ 0\leq m\leq 1.\] d) By expanding the elliptic integral using the binomial series and integrating term-by-term, show that \[T(\alpha)=2\pi\left[1+\frac{1}{16}\alpha^{2}+O(\alpha^{4})\right]\ \mbox{for}\ \alpha\ll 1.\] Note that larger swings take longer.

#### 6.7.5 (Numerical solution for the period)

Redo Exercise 6.7.4 using either numerical integration of the differential equation, or numerical evaluation of the elliptic integral.
Specifically, compute the period \(T(\alpha)\), where \(\alpha\) runs from 0 to 180\({}^{\circ}\) in steps of 10\({}^{\circ}\).

### 6.8 Index Theory

**6.8.1** Show that each of the following fixed points has an index equal to \(+1\). a) stable spiral b) unstable spiral c) center d) star e) degenerate node

(Unusual fixed points) For each of the following systems, locate the fixed points and calculate the index. (Hint: Draw a small closed curve \(C\) around the fixed point and examine the variation of the vector field on \(C\).)

**6.8.2** \(\dot{x}=x^{2}\), \(\dot{y}=y\)

**6.8.3** \(\dot{x}=y-x\), \(\dot{y}=x^{2}\)

**6.8.4** \(\dot{x}=y^{3}\), \(\dot{y}=x\)

**6.8.5** \(\dot{x}=xy\), \(\dot{y}=x+y\)

**6.8.6** A closed orbit in the phase plane encircles \(S\) saddles, \(N\) nodes, \(F\) spirals, and \(C\) centers, all of the usual type. Show that \(N+F+C=1+S\).

**6.8.7** (Ruling out closed orbits) Use index theory to show that the system \(\dot{x}=x(4-y-x^{2})\), \(\dot{y}=y(x-1)\) has no closed orbits.

**6.8.8** A smooth vector field on the phase plane is known to have exactly three closed orbits. Two of the cycles, say \(C_{1}\) and \(C_{2}\), lie inside the third cycle \(C_{3}\). However, \(C_{1}\) does not lie inside \(C_{2}\), nor vice-versa. a) Sketch the arrangement of the three cycles. b) Show that there must be at least one fixed point in the region bounded by \(C_{1}\), \(C_{2}\), \(C_{3}\).

**6.8.9** A smooth vector field on the phase plane is known to have exactly two closed trajectories, one of which lies inside the other. The inner cycle runs clockwise, and the outer one runs counterclockwise. True or False: There must be at least one fixed point in the region between the cycles. If true, prove it. If false, provide a simple counterexample.

**6.8.10** (Open-ended question for the topologically minded) Does Theorem 6.8.2 hold for surfaces other than the plane? Check its validity for various types of closed orbits on a torus, cylinder, and sphere.

**6.8.11** (Complex vector fields) Let \(z=x+iy\).
Explore the complex vector fields \(\dot{z}=z^{k}\) and \(\dot{z}=(\overline{z})^{k}\), where \(k>0\) is an integer and \(\overline{z}=x-iy\) is the complex conjugate of \(z\). a) Write the vector fields in both Cartesian and polar coordinates, for the cases \(k=1\), \(2\), \(3\). b) Show that the origin is the only fixed point, and compute its index. c) Generalize your results to arbitrary integer \(k>0\).

**6.8.12** ("Matter and antimatter") There's an intriguing analogy between bifurcations of fixed points and collisions of particles and antiparticles. Let's explore this in the context of index theory. For example, a two-dimensional version of the saddle-node bifurcation is given by \(\dot{x}=a+x^{2}\), \(\dot{y}=-y\), where \(a\) is a parameter. a) Find and classify all the fixed points as \(a\) varies from \(-\infty\) to \(+\infty\). b) Show that the sum of the indices of all the fixed points is conserved as \(a\) varies. c) State and prove a generalization of this result, for systems of the form \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x},a)\), where \(\mathbf{x}\in\mathbf{R}^{2}\) and \(a\) is a parameter.

**6.8.13** (Integral formula for the index of a curve) Consider a smooth vector field \(\dot{x}=f(x,y)\), \(\dot{y}=g(x,y)\) on the plane, and let \(C\) be a simple closed curve that does not pass through any fixed points. As usual, let \(\phi=\tan^{-1}(\dot{y}/\dot{x})\) as in Figure 6.8.1. a) Show that \(d\phi=(f\,dg-g\,df)/(f^{2}+g^{2})\). b) Derive the integral formula \[I_{C}=\frac{1}{2\pi}\oint\limits_{C}\frac{f\,dg-g\,df}{f^{2}+g^{2}}.\]

**6.8.14** Consider the family of linear systems \(\dot{x}=x\cos\alpha-y\sin\alpha\), \(\dot{y}=x\sin\alpha+y\cos\alpha\), where \(\alpha\) is a parameter that runs over the range \(0\leq\alpha\leq\pi\). Let \(C\) be a simple closed curve that does not pass through the origin. a) Classify the fixed point at the origin as a function of \(\alpha\). b) Using the integral derived in Exercise 6.8.13, show that \(I_{C}\) is _independent_ of \(\alpha\).
c) Let \(C\) be a circle centered at the origin. Compute \(I_{C}\) explicitly by evaluating the integral for any convenient choice of \(\alpha\).

### 7.0 Introduction

A _limit cycle_ is an isolated closed trajectory. _Isolated_ means that neighboring trajectories are not closed; they spiral either toward or away from the limit cycle (Figure 7.0.1). If all neighboring trajectories approach the limit cycle, we say the limit cycle is _stable_ or _attracting_. Otherwise the limit cycle is _unstable_, or in exceptional cases, _half-stable_.

Stable limit cycles are very important scientifically--they model systems that exhibit self-sustained oscillations. In other words, these systems oscillate even in the absence of external periodic forcing. Of the countless examples that could be given, we mention only a few: the beating of a heart; the periodic firing of a pacemaker neuron; daily rhythms in human body temperature and hormone secretion; chemical reactions that oscillate spontaneously; and dangerous self-excited vibrations in bridges and airplane wings. In each case, there is a standard oscillation of some preferred period, waveform, and amplitude. If the system is perturbed slightly, it always returns to the standard cycle.

Limit cycles are inherently nonlinear phenomena; they can't occur in linear systems. Of course, a linear system \(\dot{\mathbf{x}}=A\mathbf{x}\) can have closed orbits, but they won't be _isolated_: if \(\mathbf{x}(t)\) is a periodic solution, then so is \(c\mathbf{x}(t)\) for any constant \(c\neq 0\). Hence \(\mathbf{x}(t)\) is surrounded by a one-parameter family of closed orbits (Figure 7.0.2). Consequently, the amplitude of a linear oscillation is set entirely by its initial conditions; any slight disturbance to the amplitude will persist forever. In contrast, limit cycle oscillations are determined by the structure of the system itself.

The next section presents two examples of systems with limit cycles.
In the first case, the limit cycle is obvious by inspection, but normally it's difficult to tell whether a given system has a limit cycle, or indeed any closed orbits, from the governing equations alone. Sections 7.2-7.4 present some techniques for ruling out closed orbits or for proving their existence. The remainder of the chapter discusses analytical methods for approximating the shape and period of a closed orbit and for studying its stability.

### 7.1 Examples

It's straightforward to construct examples of limit cycles if we use polar coordinates.

**Example 7.1.1: A simple limit cycle** Consider the system \[\dot{r}=r(1-r^{2}),\qquad\dot{\theta}=1 \tag{1}\] where \(r\geq 0\). The radial and angular dynamics are uncoupled and so can be analyzed separately. Treating \(\dot{r}=r(1-r^{2})\) as a vector field on the line, we see that \(r^{*}=0\) is an unstable fixed point and \(r^{*}=1\) is stable (Figure 7.1.1).

**Example 7.1.2: Van der Pol oscillator** A less transparent example, but one that played a central role in the development of nonlinear dynamics, is given by the _van der Pol equation_ \[\ddot{x}+\mu(x^{2}-1)\dot{x}+x=0 \tag{2}\] where \(\mu\geq 0\) is a parameter. Historically, this equation arose in connection with the nonlinear electrical circuits used in the first radios (see Exercise 7.1.6 for the circuit). Equation (2) looks like a simple harmonic oscillator, but with a _nonlinear damping_ term \(\mu(x^{2}-1)\dot{x}\). This term acts like ordinary positive damping for \(|x|>1\), but like _negative_ damping for \(|x|<1\). In other words, it causes large-amplitude oscillations to decay, but it pumps them back up if they become too small. As you might guess, the system eventually settles into a self-sustained oscillation where the energy dissipated over one cycle balances the energy pumped in.
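This settling behavior is easy to watch numerically. The sketch below is our own (a pure-Python RK4 helper and the parameter value \(\mu=1.5\)); it integrates the van der Pol equation from a small and a large initial displacement and compares the late-time amplitudes. Both trajectories should approach the same limit cycle, whose amplitude is close to 2:

```python
def rk4_step(f, state, dt):
    # one classical fourth-order Runge-Kutta step
    k1 = f(state)
    k2 = f([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = f([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def van_der_pol(mu):
    # x'' + mu*(x^2 - 1)*x' + x = 0 as a first-order system (x, v)
    def f(state):
        x, v = state
        return [v, mu * (1.0 - x * x) * v - x]
    return f

def late_amplitude(x0, mu=1.5, dt=0.001, t_total=80.0):
    # integrate, then record max |x| over the second half (transient discarded)
    f = van_der_pol(mu)
    state = [x0, 0.0]
    n = int(t_total / dt)
    peak = 0.0
    for i in range(n):
        state = rk4_step(f, state, dt)
        if i > n // 2:
            peak = max(peak, abs(state[0]))
    return peak

a_small = late_amplitude(0.5)   # start inside the cycle
a_large = late_amplitude(4.0)   # start far outside the cycle
```

Regardless of the starting point, both amplitudes converge to a common value, unlike the linear oscillators of Section 7.0 whose amplitude is fixed by the initial condition.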
This idea can be made rigorous, and with quite a bit of work, one can prove that _the van der Pol equation has a unique, stable limit cycle for each \(\mu>0\)_. This result follows from a more general theorem discussed in Section 7.4. To give a concrete illustration, suppose we numerically integrate (2) for \(\mu=1.5\), starting from \((x,\dot{x})=(0.5,\,0)\) at \(t=0\). Figure 7.1.4 plots the solution in the phase plane and Figure 7.1.5 shows the graph of \(x(t)\). Now, in contrast to Example 7.1.1, the limit cycle is not a circle and the stable waveform is not a sine wave.

### 7.2 Ruling Out Closed Orbits

Suppose we have a strong suspicion, based on numerical evidence or otherwise, that a particular system has no periodic solutions. How could we prove this? In the last chapter we mentioned one method, based on index theory (see Examples 6.8.5 and 6.8.6). Now we present three other ways of ruling out closed orbits. They are of limited applicability, but they're worth knowing about, in case you get lucky.

**Gradient Systems**

Suppose the system can be written in the form \(\dot{\mathbf{x}}=-\nabla V\), for some continuously differentiable, single-valued scalar function \(V(\mathbf{x})\). Such a system is called a _gradient system_ with _potential function_ \(V\).

**Theorem 7.2.1:** Closed orbits are impossible in gradient systems.

**Proof:** Suppose there were a closed orbit. We obtain a contradiction by considering the change in \(V\) after one circuit. On the one hand, \(\Delta V=0\) since \(V\) is single-valued. But on the other hand, \[\Delta V=\int_{0}^{T}\frac{dV}{dt}\,dt=\int_{0}^{T}(\nabla V\cdot\dot{\mathbf{x}})\,dt=-\int_{0}^{T}\left\|\dot{\mathbf{x}}\right\|^{2}dt<0\] (unless \(\dot{\mathbf{x}}\equiv\mathbf{0}\), in which case the trajectory is a fixed point, not a closed orbit). This contradiction shows that closed orbits can't exist in gradient systems.
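The identity used in the proof, \(\Delta V=-\int_{0}^{T}\|\dot{\mathbf{x}}\|^{2}dt\), can be checked numerically for a concrete gradient system. The sketch below uses our own choices (the potential \(V=x^{2}+y^{4}\), so \(\dot{x}=-2x\), \(\dot{y}=-4y^{3}\), and a hand-rolled RK4 helper); it compares the measured drop in \(V\) against an accumulated Riemann sum of \(\|\dot{\mathbf{x}}\|^{2}\):

```python
def rk4_step(f, state, dt):
    # one classical fourth-order Runge-Kutta step
    k1 = f(state)
    k2 = f([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = f([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def gradient_flow(state):
    # x' = -dV/dx = -2x,  y' = -dV/dy = -4y^3  for V = x^2 + y^4
    x, y = state
    return [-2.0 * x, -4.0 * y**3]

def V(state):
    x, y = state
    return x**2 + y**4

state, dt = [1.0, 1.0], 0.001
V_start = V(state)
speed_integral = 0.0
for _ in range(5000):                        # integrate to t = 5
    vx, vy = gradient_flow(state)
    speed_integral += (vx * vx + vy * vy) * dt   # accumulate the speed-squared integral
    state = rk4_step(gradient_flow, state, dt)
V_end = V(state)
mismatch = abs((V_end - V_start) + speed_integral)   # small if the identity holds
```

The mismatch is dominated by the first-order Riemann sum, not by the identity itself; along a genuinely closed orbit the same bookkeeping would force the contradiction in the proof.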
The trouble with Theorem 7.2.1 is that most two-dimensional systems are _not_ gradient systems. (Although, curiously, all vector fields _on the line_ are gradient systems; this gives another explanation for the absence of oscillations noted in Sections 2.6 and 2.7.)

**Example 7.2.1:** Show that there are no closed orbits for the system \(\dot{x}=\sin y\), \(\dot{y}=x\cos y\).

_Solution:_ The system is a gradient system with potential function \(V(x,y)=-x\sin y\), since \(\dot{x}=-\partial V/\partial x\) and \(\dot{y}=-\partial V/\partial y\). By Theorem 7.2.1, there are no closed orbits.

How can you tell whether a system is a gradient system? And if it is, how do you find its potential function \(V\)? See Exercises 7.2.5 and 7.2.6.

Even if the system is not a gradient system, similar techniques may still work, as in the following example. We examine the change in an energy-like function after one circuit around the putative closed orbit, and derive a contradiction.

**Example 7.2.2:** Show that the nonlinearly damped oscillator \(\ddot{x}+(\dot{x})^{3}+x=0\) has no periodic solutions.

_Solution:_ Suppose that there were a periodic solution \(x(t)\) of period \(T\). Consider the energy function \(E(x,\dot{x})=\frac{1}{2}(x^{2}+\dot{x}^{2})\). After one cycle, \(x\) and \(\dot{x}\) return to their starting values, and therefore \(\Delta E=0\) around any closed orbit. On the other hand, \(\Delta E=\int_{0}^{T}\dot{E}\,dt\). If we can show this integral is nonzero, we've reached a contradiction. Note that \(\dot{E}=\dot{x}\big{(}x+\ddot{x}\big{)}=\dot{x}(-\dot{x}^{3})=-\dot{x}^{4}\leq 0\). Therefore \(\Delta E=-\int_{0}^{T}(\dot{x})^{4}\,dt\leq 0\), with equality only if \(\dot{x}\equiv 0\). But \(\dot{x}\equiv 0\) would mean the trajectory is a fixed point, contrary to the original assumption that it's a closed orbit. Thus \(\Delta E\) is _strictly_ negative, which contradicts \(\Delta E=0\). Hence there are no periodic solutions.
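The energy argument of Example 7.2.2 can also be watched in action: along any numerically generated trajectory of \(\ddot{x}+\dot{x}^{3}+x=0\), the energy \(E=\frac{1}{2}(x^{2}+\dot{x}^{2})\) should never increase. A minimal sketch (the helper names are ours):

```python
def rk4_step(f, state, dt):
    # one classical fourth-order Runge-Kutta step
    k1 = f(state)
    k2 = f([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = f([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def damped(state):
    # x'' = -x'^3 - x as a first-order system (x, v)
    x, v = state
    return [v, -v**3 - x]

def E(state):
    x, v = state
    return 0.5 * (x * x + v * v)

state, dt = [1.0, 0.0], 0.001
energies = [E(state)]
for _ in range(20_000):          # integrate to t = 20
    state = rk4_step(damped, state, dt)
    energies.append(E(state))

# E should be (weakly) monotonically decreasing, up to roundoff
monotone = all(b <= a + 1e-9 for a, b in zip(energies, energies[1:]))
decayed = energies[-1] < energies[0]
```

Since \(\dot{E}=-\dot{x}^{4}\le 0\) with equality only at the turning points, the trajectory spirals in and no closed orbit can form, exactly as the analytical argument concludes.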
**Liapunov Functions**

Even for systems that have nothing to do with mechanics, it is occasionally possible to construct an energy-like function that decreases along trajectories. Such a function is called a Liapunov function. If a Liapunov function exists, then closed orbits are forbidden, by the same reasoning as in Example 7.2.2.

To be more precise, consider a system \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\) with a fixed point at \(\mathbf{x}^{*}\). Suppose that we can find a _Liapunov function_, i.e., a continuously differentiable, real-valued function \(V(\mathbf{x})\) with the following properties: 1. \(V(\mathbf{x})>0\) for all \(\mathbf{x}\neq\mathbf{x}^{*}\), and \(V(\mathbf{x}^{*})=0\). (We say that \(V\) is _positive definite_.) 2. \(\dot{V}<0\) for all \(\mathbf{x}\neq\mathbf{x}^{*}\). (All trajectories flow "downhill" toward \(\mathbf{x}^{*}\).) Then \(\mathbf{x}^{*}\) is globally asymptotically stable: for all initial conditions, \(\mathbf{x}(t)\rightarrow\mathbf{x}^{*}\) as \(t\rightarrow\infty\). In particular the system has no closed orbits. (For a proof, see Jordan and Smith 1987.)

The intuition is that all trajectories move monotonically down the graph of \(V(\mathbf{x})\) toward \(\mathbf{x}^{*}\) (Figure 7.2.1). The solutions can't get stuck anywhere else because if they did, \(V\) would stop changing, but by assumption, \(\dot{V}<0\) everywhere except at \(\mathbf{x}^{*}\).

Unfortunately, there is no systematic way to construct Liapunov functions. Divine inspiration is usually required, although sometimes one can work backwards. Sums of squares occasionally work, as in the following example.

**Example 7.2.3:** By constructing a Liapunov function, show that the system \(\dot{x}=-x+4y\), \(\dot{y}=-x-y^{3}\) has no closed orbits.

_Solution:_ Consider \(V(x,y)=x^{2}+ay^{2}\), where \(a\) is a parameter to be chosen later. Then \(\dot{V}=2x\dot{x}+2ay\dot{y}=2x(-x+4y)+2ay(-x-y^{3})=-2x^{2}+(8-2a)xy-2ay^{4}\).
If we choose \(a=4\), the \(xy\) term disappears and \(\dot{V}=-2x^{2}-8y^{4}\). By inspection, \(V>0\) and \(\dot{V}<0\) for all \((x,y)\neq(0,0)\). Hence \(V=x^{2}+4y^{2}\) is a Liapunov function and so there are no closed orbits. In fact, all trajectories approach the origin as \(t\to\infty\).

**Dulac's Criterion**

The third method for ruling out closed orbits is based on Green's theorem, and is known as Dulac's criterion.

**Dulac's Criterion:** Let \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\) be a continuously differentiable vector field defined on a simply connected subset \(R\) of the plane. If there exists a continuously differentiable, real-valued function \(g(\mathbf{x})\) such that \(\nabla\cdot(g\dot{\mathbf{x}})\) has one sign throughout \(R\), then there are no closed orbits lying entirely in \(R\).

**Proof:** Suppose there were a closed orbit \(C\) lying entirely in the region \(R\). Let \(A\) denote the region inside \(C\) (Figure 7.2.2). Then Green's theorem yields \[\iint\limits_{A}\nabla\cdot(g\dot{\mathbf{x}})\,dA=\oint\limits_{C}g\dot{\mathbf{x}}\cdot\mathbf{n}\,d\ell\] where \(\mathbf{n}\) is the outward normal and \(d\ell\) is the element of arc length along \(C\). Look first at the double integral on the left: it must be _nonzero_, since \(\nabla\cdot(g\dot{\mathbf{x}})\) has one sign in \(R\). On the other hand, the line integral on the right equals _zero_ since \(\dot{\mathbf{x}}\cdot\mathbf{n}=0\) everywhere, by the assumption that \(C\) is a trajectory (the tangent vector \(\dot{\mathbf{x}}\) is orthogonal to \(\mathbf{n}\)). This contradiction implies that no such \(C\) can exist.

Dulac's criterion suffers from the same drawback as Liapunov's method: there is no algorithm for finding \(g(\mathbf{x})\). Candidates that occasionally work are \(g=1\), \(1/x^{a}y^{b}\), \(e^{ax}\), and \(e^{ay}\).
**Example 7.2.4:** Show that the system \(\dot{x}=x(2-x-y)\), \(\dot{y}=y(4x-x^{2}-3)\) has no closed orbits in the positive quadrant \(x,y>0\).

_Solution:_ A hunch tells us to pick \(g=1/xy\). Then \[\nabla\cdot(g\dot{\mathbf{x}})=\frac{\partial}{\partial x}(g\dot{x})+\frac{\partial}{\partial y}(g\dot{y})=\frac{\partial}{\partial x}\Bigg(\frac{2-x-y}{y}\Bigg)+\frac{\partial}{\partial y}\Bigg(\frac{4x-x^{2}-3}{x}\Bigg)=-1/y<0.\] Since the region \(x,y>0\) is simply connected and \(g\) and \(\mathbf{f}\) satisfy the required smoothness conditions, Dulac's criterion implies there are no closed orbits in the positive quadrant.

**Example 7.2.5:** Show that the system \(\dot{x}=y\), \(\dot{y}=-x-y+x^{2}+y^{2}\) has no closed orbits.

_Solution:_ Let \(g=e^{-2x}\). Then \(\nabla\cdot(g\dot{\mathbf{x}})=-2e^{-2x}y+e^{-2x}(-1+2y)=-e^{-2x}<0\). By Dulac's criterion, there are no closed orbits.

### 7.3 Poincare-Bendixson Theorem

Now that we know how to rule out closed orbits, we turn to the opposite task: finding methods to _establish that closed orbits exist_ in particular systems. The following theorem is one of the few results in this direction. It is also one of the key theoretical results in nonlinear dynamics, because it implies that chaos can't occur in the phase plane, as discussed briefly at the end of this section.

**Poincare-Bendixson Theorem:** Suppose that: 1. \(R\) is a closed, bounded subset of the plane; 2. \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\) is a continuously differentiable vector field on an open set containing \(R\); 3. \(R\) does not contain any fixed points; and 4. There exists a trajectory \(C\) that is "confined" in \(R\), in the sense that it starts in \(R\) and stays in \(R\) for all future time (Figure 7.3.1). Then either \(C\) is a closed orbit, or it spirals toward a closed orbit as \(t\to\infty\).
In either case, \(R\) _contains a closed orbit_ (shown as a heavy curve in Figure 7.3.1). The proof of this theorem is subtle, and requires some advanced ideas from topology. For details, see Perko (1991), Coddington and Levinson (1955), Hurewicz (1958), or Cesari (1963). In Figure 7.3.1, we have drawn \(R\) as a ring-shaped region, because any closed orbit must encircle a fixed point (_P_ in Figure 7.3.1) and no fixed points are allowed in \(R\).

Figure 7.3.1

When applying the Poincare-Bendixson theorem, it's easy to satisfy conditions (1)-(3); condition (4) is the tough one. How can we be sure that a confined trajectory \(C\) exists? The standard trick is to construct a _trapping region_ \(R\), i.e., a closed connected set such that the vector field points "inward" everywhere on the boundary of \(R\) (Figure 7.3.2). Then _all_ trajectories in \(R\) are confined. If we can also arrange that there are no fixed points in \(R\), then the Poincare-Bendixson theorem ensures that \(R\) contains a closed orbit.

The Poincare-Bendixson theorem can be difficult to apply in practice. One convenient case occurs when the system has a simple representation in polar coordinates, as in the following example.

**Example 7.3.1:** Consider the system \[\begin{array}{l}\dot{r}=r(1-r^{2})+\mu r\cos\theta\\ \dot{\theta}=1.\end{array}\] When \(\mu=0\), there's a stable limit cycle at \(r\) = 1, as discussed in Example 7.1.1. Show that a closed orbit still exists for \(\mu>0\), as long as \(\mu\) is sufficiently small.

_Solution:_ We seek two concentric circles with radii \(r_{\min}\) and \(r_{\max}\), such that \(\dot{r}<0\) on the outer circle and \(\dot{r}>0\) on the inner circle. Then the annulus \(0<r_{\min}\leq r\leq r_{\max}\) will be our desired trapping region. Note that there are no fixed points in the annulus, since \(\dot{\theta}>0\); hence if \(r_{\min}\) and \(r_{\max}\) can be found, the Poincare-Bendixson theorem will imply the existence of a closed orbit.
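Before deriving \(r_{\min}\) and \(r_{\max}\) analytically, a brute-force scan can confirm that suitable radii exist, say for \(\mu = 0.5\). This is only an illustrative sketch; the candidate radii and the grid of \(\theta\) values are arbitrary choices of ours.

```python
# Scan for a trapping annulus of the system r' = r(1 - r^2) + mu*r*cos(theta).
import math

mu = 0.5

def r_dot(r, theta):
    return r*(1 - r*r) + mu*r*math.cos(theta)

thetas = [2*math.pi*k/720 for k in range(720)]

def flow_inward(r_inner, r_outer):
    # trapping annulus: r_dot > 0 everywhere on the inner circle,
    # r_dot < 0 everywhere on the outer circle
    return (all(r_dot(r_inner, th) > 0 for th in thetas) and
            all(r_dot(r_outer, th) < 0 for th in thetas))

found = [(ri, ro) for ri in (0.3, 0.5, 0.7) for ro in (1.2, 1.3, 1.5)
         if flow_inward(ri, ro)]
```

Several of the candidate pairs work, consistent with the analytical bounds derived next (any \(r_{\min} < \sqrt{1-\mu} \approx 0.707\) and \(r_{\max} > \sqrt{1+\mu} \approx 1.225\) for \(\mu = 0.5\)).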
To find \(r_{\min}\), we require \(\dot{r}=r(1-r^{2})+\mu r\cos\theta>0\) for all \(\theta\). Since \(\cos\theta\geq-1\), a sufficient condition for \(r_{\min}\) is \(1-r^{2}-\mu>0\). Hence any \(r_{\min}<\sqrt{1-\mu}\) will work, as long as \(\mu<1\) so that the square root makes sense. We should choose \(r_{\min}\) as large as possible, to hem in the limit cycle as tightly as we can. For instance, we could pick \(r_{\min}=0.999\sqrt{1-\mu}\). (Even \(r_{\min}=\sqrt{1-\mu}\) works, but more careful reasoning is required.) By a similar argument, the flow is inward on the outer circle if \(r_{\max}=1.001\sqrt{1+\mu}\).

Figure 7.3.2

Therefore a closed orbit exists for all \(\mu<1\), and it lies somewhere in the annulus \(0.999\sqrt{1-\mu}<r<1.001\sqrt{1+\mu}\).

The estimates used in Example 7.3.1 are conservative. In fact, the closed orbit can exist even if \(\mu\geq 1\). Figure 7.3.3 shows a computer-generated phase portrait of the system for \(\mu=1\). In Exercise 7.3.8, you're asked to explore what happens for larger \(\mu\), and in particular, whether there's a critical \(\mu\) beyond which the closed orbit disappears. It's also possible to obtain some analytical insight about the closed orbit for small \(\mu\) (Exercise 7.3.9).

When polar coordinates are inconvenient, we may still be able to find an appropriate trapping region by examining the system's nullclines, as in the next example.

**Example 7.3.2:** In the fundamental biochemical process called _glycolysis_, living cells obtain energy by breaking down sugar. In intact yeast cells as well as in yeast or muscle extracts, glycolysis can proceed in an _oscillatory_ fashion, with the concentrations of various intermediates waxing and waning with a period of several minutes. For reviews, see Chance et al. (1973) or Goldbeter (1980). A simple model of these oscillations has been proposed by Sel'kov (1968).
In dimensionless form, the equations are \[\dot{x} = -x+ay+x^{2}y\] \[\dot{y} = b-ay-x^{2}y\] where \(x\) and \(y\) are the concentrations of ADP (adenosine diphosphate) and F6P (fructose-6-phosphate), and \(a\),\(b>0\) are kinetic parameters. Construct a trapping region for this system.

Figure 7.3.3

_Solution:_ First we find the nullclines. The first equation shows that \(\dot{x}=0\) on the curve \(y=x/(a+x^{2})\), and the second equation shows that \(\dot{y}=0\) on the curve \(y=b/(a+x^{2})\). These nullclines are sketched in Figure 7.3.4, along with some representative vectors.

How did we know how to sketch these vectors? By definition, the arrows are vertical on the \(\dot{x}=0\) nullcline, and horizontal on the \(\dot{y}=0\) nullcline. The direction of flow is determined by the signs of \(\dot{x}\) and \(\dot{y}\). For instance, in the region above both nullclines, the governing equations imply \(\dot{x}>0\) and \(\dot{y}<0\), so the arrows point down and to the right, as shown in Figure 7.3.4.

Figure 7.3.4

Now consider the region bounded by the dashed line shown in Figure 7.3.5. _We claim that it's a trapping region._ To verify this, we have to show that all the vectors on the boundary point into the box. On the horizontal and vertical sides, there's no problem: the claim follows from Figure 7.3.4. The tricky part of the construction is the diagonal line of slope -1 extending from the point (\(b\), \(b/a\)) to the nullcline \(y=x/(a+x^{2})\). Where did this come from?

To get the right intuition, consider \(\dot{x}\) and \(\dot{y}\) in the limit of very large \(x\). Then \(\dot{x}\approx x^{2}y\) and \(\dot{y}\approx-x^{2}y\), so \(\dot{y}/\dot{x}=dy/dx\approx-1\) along trajectories. Hence the vector field at large \(x\) is roughly parallel to the diagonal line. This suggests that in a more precise calculation, we should compare the sizes of \(\dot{x}\) and \(-\dot{y}\), for some sufficiently large \(x\).
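As a preview of that comparison, one can tabulate \(\dot{x}\) and \(-\dot{y}\) numerically. The sketch below (with arbitrary parameter values \(a = 0.08\), \(b = 0.6\) of our choosing) suggests that \(-\dot{y}\) exceeds \(\dot{x}\) precisely when \(x > b\), which the calculation that follows confirms.

```python
# Compare xdot and -ydot for the Sel'kov model at points with x > b.
a, b = 0.08, 0.6   # arbitrary positive kinetic parameters (our choice)

def selkov(x, y):
    xdot = -x + a*y + x*x*y
    ydot = b - a*y - x*x*y
    return xdot, ydot

results = []
for k in range(1, 40):
    x = b + 0.05*k                 # sample points with x > b
    for y in (0.2, 1.0, 5.0):      # the comparison turns out not to depend on y
        xdot, ydot = selkov(x, y)
        results.append(-ydot > xdot)
```

The \(y\)-independence is no accident: the cubic and linear-in-\(y\) terms cancel in the sum \(\dot{x} + \dot{y}\), which is exactly the point of the next calculation.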
In particular, consider \(\dot{x}-(-\dot{y})\). We find \[\begin{array}{l}\dot{x}-(-\dot{y})=-x+ay+x^{2}y+(b-ay-x^{2}y)\\ =b-x.\end{array}\] Hence \[-\dot{y}>\dot{x}\;\;\mbox{if}\;x>b.\] This inequality implies that the vector field points inward on the diagonal line in Figure 7.3.5, because \(dy/dx\) is more negative than -1, and therefore the vectors are steeper than the diagonal line. Thus the region is a trapping region, as claimed.

Figure 7.3.5

Can we conclude that there is a closed orbit inside the trapping region? No! There is a fixed point in the region (at the intersection of the nullclines), and so the conditions of the Poincare-Bendixson theorem are not satisfied. But if this fixed point is a _repeller_, then we _can_ prove the existence of a closed orbit by considering the modified "punctured" region shown in Figure 7.3.6. (The hole is infinitesimal, but drawn larger for clarity.) The repeller drives all neighboring trajectories into the shaded region, and since this region _is_ free of fixed points, the Poincare-Bendixson theorem applies. Now we find conditions under which the fixed point is a repeller.

**Example 7.3.3:** Once again, consider the glycolytic oscillator \(\dot{x} = - x + ay + x^{2}y\), \(\dot{y} = b - ay - x^{2}y\) of Example 7.3.2. Prove that a closed orbit exists if \(a\) and \(b\) satisfy an appropriate condition, to be determined. (As before, \(a\),\(b > 0\).)

_Solution:_ By the argument above, it suffices to find conditions under which the fixed point is a repeller, i.e., an unstable node or spiral.
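The repeller condition about to be derived can also be checked numerically. The following sketch evaluates the Jacobian at the fixed point for the parameter values \(a = 0.08\), \(b = 0.6\) of Figure 7.3.8 and compares the result against the closed-form trace and determinant obtained in the next paragraph.

```python
# Jacobian trace and determinant at the fixed point of the glycolytic oscillator.
a, b = 0.08, 0.6
x_s, y_s = b, b/(a + b*b)          # fixed point (x*, y*)

# Jacobian of (xdot, ydot) = (-x + a*y + x^2*y, b - a*y - x^2*y) at (x*, y*)
j11, j12 = -1 + 2*x_s*y_s, a + x_s*x_s
j21, j22 = -2*x_s*y_s, -(a + x_s*x_s)

trace = j11 + j22
det = j11*j22 - j12*j21

# closed-form expressions from the text
tau_formula = -(b**4 + (2*a - 1)*b**2 + (a + a*a)) / (a + b*b)
delta_formula = a + b*b
```

With these parameters the trace is positive (about 0.196) and the determinant is 0.44, so the fixed point is indeed a repeller, consistent with the stable limit cycle visible in Figure 7.3.8.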
In general, the Jacobian is \[A = \begin{pmatrix} - 1 + 2xy & a + x^{2} \\ - 2xy & - (a + x^{2}) \\ \end{pmatrix}.\] After some algebra, we find that at the fixed point \[x^{*} = b,\;\;\;\;y^{*} = \frac{b}{a + b^{2}},\] the Jacobian has determinant \(\Delta = a + b^{2} > 0\) and trace \[\tau=-\frac{b^{4}+(2a-1)b^{2}+(a+a^{2})}{a+b^{2}}.\] Hence the fixed point is unstable for \(\tau>0\), and stable for \(\tau<0\). The dividing line \(\tau=0\) occurs when \[b^{2}=\tfrac{1}{2}\bigl{(}1-2a\pm\sqrt{1-8a}\bigr{)}.\] This defines a curve in (\(a\),\(b\)) space, as shown in Figure 7.3.7.

Figure 7.3.6

Figure 7.3.7

For parameters in the region corresponding to \(\tau>0\), we are guaranteed that the system has a closed orbit--numerical integration shows that it is actually a stable limit cycle. Figure 7.3.8 shows a computer-generated phase portrait for the typical case \(a=0.08\), \(b=0.6\).

Figure 7.3.8

### No Chaos in the Phase Plane

The Poincare-Bendixson theorem is one of the central results of nonlinear dynamics. It says that the dynamical possibilities in the phase plane are very limited: if a trajectory is confined to a closed, bounded region that contains no fixed points, then the trajectory must eventually approach a closed orbit. Nothing more complicated is possible.

This result depends crucially on the two-dimensionality of the plane. In higher-dimensional systems (\(n \geq 3\)), the Poincare-Bendixson theorem no longer applies, and something radically new can happen: trajectories may wander around forever in a bounded region without settling down to a fixed point or a closed orbit. In some cases, the trajectories are attracted to a complex geometric object called a _strange attractor_, a fractal set on which the motion is aperiodic and sensitive to tiny changes in the initial conditions. This sensitivity makes the motion unpredictable in the long run. We are now face to face with _chaos_.
We'll discuss this fascinating topic soon enough, but for now you should appreciate that the Poincare-Bendixson theorem implies that chaos can never occur in the phase plane.

### Lienard Systems

In the early days of nonlinear dynamics, say from about 1920 to 1950, there was a great deal of research on nonlinear oscillations. The work was initially motivated by the development of radio and vacuum tube technology, and later it took on a mathematical life of its own. It was found that many oscillating circuits could be modeled by second-order differential equations of the form \[\ddot{x}+f(x)\dot{x}+g(x)=0, \tag{1}\] now known as _Lienard's equation._ This equation is a generalization of the van der Pol oscillator \(\ddot{x}+\mu(x^{2}-1)\dot{x}+x=0\) mentioned in Section 7.1. It can also be interpreted mechanically as the equation of motion for a unit mass subject to a nonlinear damping force \(-f(x)\dot{x}\) and a nonlinear restoring force \(-g(x)\).

Lienard's equation is equivalent to the system \[\begin{array}{l}\dot{x}=y\\ \dot{y}=-g(x)-f(x)y.\end{array} \tag{2}\] The following theorem states that this system has a unique, stable limit cycle under appropriate hypotheses on \(f\) and \(g\). For a proof, see Jordan and Smith (1987), Grimshaw (1990), or Perko (1991).

**Lienard's Theorem:** Suppose that \(f(x)\) and \(g(x)\) satisfy the following conditions:

1. \(f(x)\) and \(g(x)\) are continuously differentiable for all \(x\);
2. \(g(-x)=-g(x)\) for all \(x\) (i.e., \(g(x)\) is an _odd_ function);
3. \(g(x)>0\) for \(x>0\);
4. \(f(-x)=f(x)\) for all \(x\) (i.e., \(f(x)\) is an _even_ function);
5. The odd function \(F(x)=\int_{0}^{x}f(u)du\) has exactly one positive zero at \(x=a\), is negative for \(0<x<a\), is positive and nondecreasing for \(x>a\), and \(F(x)\to\infty\) as \(x\to\infty\).

Then the system (2) has a unique, stable limit cycle surrounding the origin in the phase plane.

This result should seem plausible.
The assumptions on \(g(x)\) mean that the restoring force acts like an ordinary spring, and tends to reduce any displacement, whereas the assumptions on \(f(x)\) imply that the damping is negative at small \(|x|\) and positive at large \(|x|\). Since small oscillations are pumped up and large oscillations are damped down, it is not surprising that the system tends to settle into a self-sustained oscillation of some intermediate amplitude.

**Example 7.4.1:** Show that the van der Pol equation has a unique, stable limit cycle.

_Solution:_ The van der Pol equation \(\ddot{x}+\mu(x^{2}-1)\dot{x}+x=0\) has \(f(x)=\mu(x^{2}-1)\) and \(g(x)=x\), so conditions (1)-(4) of Lienard's theorem are clearly satisfied. To check condition (5), notice that \[F(x)=\mu\left(\tfrac{1}{3}x^{3}-x\right)=\tfrac{1}{3}\mu x(x^{2}-3).\] Hence condition (5) is satisfied for \(a=\sqrt{3}\). Thus the van der Pol equation has a unique, stable limit cycle.

There are several other classical results about the existence of periodic solutions for Lienard's equation and its relatives. See Stoker (1950), Minorsky (1962), Andronov et al. (1973), and Jordan and Smith (1987).

### Relaxation Oscillations

It's time to change gears. So far in this chapter, we have focused on a qualitative question: Given a particular two-dimensional system, does it have any periodic solutions? Now we ask a quantitative question: Given that a closed orbit exists, what can we say about its shape and period? In general, such problems can't be solved exactly, but we can still obtain useful approximations if some parameter is large or small.

We begin by considering the van der Pol equation \[\ddot{x}+\mu(x^{2}-1)\dot{x}+x=0\] for \(\mu\gg 1\). In this _strongly nonlinear_ limit, we'll see that the limit cycle consists of an extremely slow buildup followed by a sudden discharge, followed by another slow buildup, and so on.
Oscillations of this type are often called _relaxation oscillations_, because the "stress" accumulated during the slow buildup is "relaxed" during the sudden discharge. Relaxation oscillations occur in many other scientific contexts, from the stick-slip oscillations of a bowed violin string to the periodic firing of nerve cells driven by a constant current (Edelstein-Keshet 1988, Murray 2002, Rinzel and Ermentrout 1989).

**Example 7.5.1:** Give a phase plane analysis of the van der Pol equation for \(\mu\gg 1\).

_Solution:_ It proves convenient to introduce different phase plane variables from the usual "\(\dot{x}=y\), \(\dot{y}=\ldots\)". To motivate the new variables, notice that \[\ddot{x}+\mu\dot{x}(x^{2}-1)=\frac{d}{dt}\Big{(}\dot{x}+\mu\big{[}\tfrac{1}{3}x^{3}-x\big{]}\Big{)}.\] So if we let \[F(x)=\tfrac{1}{3}x^{3}-x,\qquad w=\dot{x}+\mu F(x), \tag{1}\] the van der Pol equation implies that \[\dot{w}=\ddot{x}+\mu\dot{x}(x^{2}-1)=-x. \tag{2}\] Hence the van der Pol equation is equivalent to (1), (2), which may be rewritten as \[\begin{array}{l}\dot{x}=w-\mu F(x)\\ \dot{w}=-x.\end{array} \tag{3}\] One further change of variables is helpful. If we let \[y=\frac{w}{\mu}\] then (3) becomes \[\begin{array}{l}\dot{x}=\mu\big{[}y-F(x)\big{]}\\ \dot{y}=-\tfrac{1}{\mu}x.\end{array} \tag{4}\]

Now consider a typical trajectory in the (_x_,_y_) phase plane. The nullclines are the key to understanding the motion. We claim that all trajectories behave like that shown in Figure 7.5.1; starting from any point except the origin, the trajectory zaps horizontally onto the _cubic nullcline_ \(y=F(x)\). Then it crawls down the nullcline until it comes to the knee (point B in Figure 7.5.1), after which it zaps over to the other branch of the cubic at C. This is followed by another crawl along the cubic until the trajectory reaches the next jumping-off point at D, and the motion continues periodically after that.
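This claimed picture can be tested by integrating (4) directly. Below is a minimal sketch (a plain RK4 integrator with \(\mu = 10\) and initial condition (2, 0), as in Figure 7.5.2; the step size and thresholds are ad hoc choices of ours). It confirms the disparity between the horizontal and vertical velocities on the cycle, and measures the period for comparison with the leading-order estimate \(\mu(3 - 2\ln 2)\) derived in Example 7.5.2 below.

```python
# Integrate the van der Pol oscillator in the Lienard plane, eq. (4).
import math

mu = 10.0

def deriv(x, y):
    # x' = mu*(y - F(x)) with F(x) = x^3/3 - x;  y' = -x/mu
    return mu*(y - (x**3/3 - x)), -x/mu

def rk4(x, y, dt):
    k1x, k1y = deriv(x, y)
    k2x, k2y = deriv(x + dt/2*k1x, y + dt/2*k1y)
    k3x, k3y = deriv(x + dt/2*k2x, y + dt/2*k2y)
    k4x, k4y = deriv(x + dt*k3x, y + dt*k3y)
    return (x + dt/6*(k1x + 2*k2x + 2*k3x + k4x),
            y + dt/6*(k1y + 2*k2y + 2*k3y + k4y))

x, y, t, dt = 2.0, 0.0, 0.0, 0.001
max_x = max_xdot = max_ydot = 0.0
crossings = []
while t < 100.0:
    xn, yn = rk4(x, y, dt)
    if t > 40.0 and x <= 0.0 < xn:            # upward zero crossings give the period
        crossings.append(t - x/(xn - x)*dt)   # linear interpolation
    xd, yd = deriv(x, y)
    if t > 40.0:                              # statistics on the settled limit cycle
        max_x = max(max_x, abs(x))
        max_xdot = max(max_xdot, abs(xd))
        max_ydot = max(max_ydot, abs(yd))
    x, y, t = xn, yn, t + dt

period = crossings[1] - crossings[0]
T_est = mu*(3 - 2*math.log(2))    # leading-order period estimate, ~16.1 for mu = 10
```

The measured \(|\dot{x}|\) peaks are an order of magnitude larger than the \(|\dot{y}|\) peaks, as the \(O(\mu)\) versus \(O(\mu^{-1})\) scaling predicts, and the measured period exceeds the leading-order estimate, consistent with the positive \(O(\mu^{-1/3})\) correction quoted later.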
To justify this picture, suppose that the initial condition is not too close to the cubic nullcline, i.e., suppose \(y-F(x)\sim O(1)\). Then (4) implies \(|\dot{x}|\sim O(\mu)\gg 1\) whereas \(|\dot{y}|\sim O(\mu^{-1})\ll 1\); hence the velocity is enormous in the horizontal direction and tiny in the vertical direction, so trajectories move practically horizontally. If the initial condition is _above_ the nullcline, then \(y-F(x)>0\) and therefore \(\dot{x}>0\); thus the trajectory moves sideways _toward_ the nullcline. However, once the trajectory gets so close that \(y-F(x)\sim O(\mu^{-2})\), then \(\dot{x}\) and \(\dot{y}\) become comparable, both being \(O(\mu^{-1})\). What happens then? The trajectory crosses the nullcline vertically, as shown in Figure 7.5.1, and then moves slowly along the backside of the branch, with a velocity of size \(O(\mu^{-1})\), until it reaches the knee and can jump sideways again.

Figure 7.5.1

This analysis shows that the limit cycle has two _widely separated time scales_: the crawls require \(\Delta t\sim O(\mu)\) and the jumps require \(\Delta t\sim O(\mu^{-1})\). Both time scales are apparent in the waveform of \(x(t)\) shown in Figure 7.5.2, obtained by numerical integration of the van der Pol equation for \(\mu=10\) and initial condition \((x_{0},y_{0})=(2,0)\).

**Example 7.5.2:** Estimate the period of the limit cycle for the van der Pol equation for \(\mu\gg 1\).

_Solution:_ The period \(T\) is essentially the time required to travel along the two _slow branches_, since the time spent in the jumps is negligible for large \(\mu\). By symmetry, the time spent on each branch is the same. Hence \(T \approx 2\int_{t_{A}}^{t_{B}}dt\). To derive an expression for \(dt\), note that on the slow branches, \(y \approx F(x)\) and thus \[\frac{dy}{dt} \approx F^{\prime}(x)\frac{dx}{dt} = (x^{2} - 1)\frac{dx}{dt}.\] But since \(dy/dt = -x/\mu\) from (4), we find \(dx/dt = -x/\bigl[\mu(x^{2} - 1)\bigr]\).
Therefore \[dt \approx - \frac{\mu(x^{2} - 1)}{x}dx\] on a slow branch. As you can check (Exercise 7.5.1), the positive branch begins at \(x_{A} = 2\) and ends at \(x_{B} = 1\). Hence \[T \approx 2\int_{2}^{1}\frac{-\mu}{x}(x^{2} - 1)dx = 2\mu\left[ \frac{x^{2}}{2} - \ln x \right]_{1}^{2} = \mu[3 - 2\ln 2],\] which is \(O(\mu)\) as expected.

This formula can be refined. With much more work, one can show that \(T \approx \mu[3 - 2\ln 2] + 2\alpha\mu^{- 1/3} + \ldots\), where \(\alpha \approx 2.338\) is the smallest root of Ai(\(- \alpha) = 0\). Here Ai(\(x\)) is a special function called the Airy function. This correction term comes from an estimate of the time required to turn the corner between the jumps and the crawls. See Grimshaw (1990, pp. 161-163) for a readable derivation of this wonderful formula, discovered by Mary Cartwright (1952). See also Stoker (1950) for more about relaxation oscillations.

Figure 7.5.2

One last remark: We have seen that a relaxation oscillation has two time scales that operate _sequentially_--a slow buildup is followed by a fast discharge. In the next section we will encounter problems where two time scales operate _concurrently_, and that makes the problems a bit more subtle.

### 7.6 Weakly Nonlinear Oscillators

This section deals with equations of the form \[\ddot{x}+x+\varepsilon h(x,\dot{x})=0 \tag{1}\] where \(0\leq\varepsilon\ll 1\) and \(h\left(x,\dot{x}\right)\) is an arbitrary smooth function. Such equations represent small perturbations of the linear oscillator \(\ddot{x}+x=0\) and are therefore called _weakly nonlinear oscillators_. Two fundamental examples are the van der Pol equation \[\ddot{x}+x+\varepsilon(x^{2}-1)\dot{x}=0, \tag{2}\] (now in the limit of small nonlinearity), and the _Duffing equation_ \[\ddot{x}+x+\varepsilon x^{3}=0.
\tag{3}\]

To illustrate the kinds of phenomena that can arise, Figure 7.6.1 shows a computer-generated solution of the van der Pol equation in the \((x,\dot{x})\) phase plane, for \(\varepsilon=0.1\) and an initial condition close to the origin. The trajectory is a slowly winding spiral; it takes many cycles for the amplitude to grow substantially. Eventually the trajectory asymptotes to an approximately circular limit cycle whose radius is close to 2.

Figure 7.6.1

We'd like to be able to predict the shape, period, and radius of this limit cycle. Our analysis will exploit the fact that the oscillator is "close to" a simple harmonic oscillator, which we understand completely.

### Regular Perturbation Theory and Its Failure

As a first approach, we seek solutions of (1) in the form of a power series in \(\varepsilon\). Thus if \(x(t,\varepsilon)\) is a solution, we expand it as \[x(t,\varepsilon)=x_{0}(t)+\varepsilon x_{1}(t)+\varepsilon^{2}x_{2}(t)+\cdots \tag{4}\] where the unknown functions \(x_{k}(t)\) are to be determined from the governing equation and the initial conditions. The hope is that all the important information is captured by the first few terms--ideally, the first _two_--and that the higher-order terms represent only tiny corrections. This technique is called _regular perturbation theory_. It works well on certain classes of problems (for instance, Exercise 7.3.9), but as we'll see, it runs into trouble here.

To expose the source of the difficulties, we begin with a practice problem that can be solved exactly. Consider the weakly damped linear oscillator \[\ddot{x}+2\varepsilon\dot{x}+x=0 \tag{5}\] with initial conditions \[x(0)=0,\qquad\dot{x}(0)=1. \tag{6}\] Using the techniques of Chapter 5, we find the exact solution \[x(t,\varepsilon)=(1-\varepsilon^{2})^{-1/2}e^{-\varepsilon t}\sin\Bigl{[}(1-\varepsilon^{2})^{1/2}t\Bigr{]}. \tag{7}\] Now let's solve the same problem using perturbation theory. Substitution of (4) into (5) yields \[\frac{d^{2}}{dt^{2}}(x_{0}+\varepsilon x_{1}+\cdots)+2\varepsilon\frac{d}{dt}(x_{0}+\varepsilon x_{1}+\cdots)+(x_{0}+\varepsilon x_{1}+\cdots)=0. \tag{8}\] If we group the terms according to powers of \(\varepsilon\), we get \[\bigl{[}\ddot{x}_{0}+x_{0}\bigr{]}+\varepsilon\bigl{[}\ddot{x}_{1}+2\dot{x}_{0}+x_{1}\bigr{]}+O(\varepsilon^{2})=0. \tag{9}\] Since (9) is supposed to hold for _all_ sufficiently small \(\varepsilon\), the coefficients of each power of \(\varepsilon\) must vanish separately.
Thus we find \[\ddot{x}_{0}+x_{0}=0 \tag{10}\] \[\ddot{x}_{1}+2\dot{x}_{0}+x_{1}=0. \tag{11}\] (We're ignoring the \(O(\varepsilon^{2})\) and higher equations, in the optimistic spirit mentioned earlier.)

The appropriate initial conditions for these equations come from (6). At \(t=0\), (4) implies that \(0=x_{0}(0)+\varepsilon x_{1}(0)+\ldots\); this holds for all \(\varepsilon\), so \[x_{0}(0)=0,\ x_{1}(0)=0. \tag{12}\] By applying a similar argument to \(\dot{x}(0)\) we obtain \[\dot{x}_{0}(0)=1,\ \ \dot{x}_{1}(0)=0. \tag{13}\] Now we solve the initial-value problems one by one; they fall like dominoes. The solution of (10), subject to the initial conditions \(x_{0}(0)=0,\ \dot{x}_{0}(0)=1\), is \[x_{0}(t)=\sin\,t. \tag{14}\] Plugging this solution into (11) gives \[\ddot{x}_{1}+x_{1}=-2\cos\,t. \tag{15}\] Here's the first sign of trouble: the right-hand side of (15) is a _resonant_ forcing. The solution of (15) subject to \(x_{1}(0)=0\), \(\dot{x}_{1}(0)=0\) is \[x_{1}(t)=-t\sin\,t, \tag{16}\] which is a _secular_ term, i.e., a term that _grows_ without bound as \(t\to\infty\).

In summary, the solution of (5), (6) according to perturbation theory is \[x(t,\varepsilon)=\sin\,t-\varepsilon t\sin\,t+\,O(\varepsilon^{2}). \tag{17}\] How does this compare with the exact solution (7)? In Exercise 7.6.1, you are asked to show that the two formulas agree in the following sense: If (7) is expanded as a power series in \(\varepsilon\), the first two terms are given by (17). In fact, (17) is the beginning of a _convergent_ series expansion for the true solution. For any fixed \(t\), (17) provides a good approximation as long as \(\varepsilon\) is small enough--specifically, we need \(\varepsilon t\ll 1\) so that the correction term (which is actually \(O(\varepsilon^{2}t^{2})\)) is negligible. But normally we are interested in the behavior for _fixed_ \(\varepsilon\), not fixed \(t\).
In that case we can only expect the perturbation approximation to work for times \(t\ll O(1/\varepsilon)\). To illustrate this limitation, Figure 7.6.2 plots the exact solution (7) and the perturbation series (17) for \(\varepsilon=0.1\). As expected, the perturbation series works reasonably well if \(t\ll 1/\varepsilon=10\), but it breaks down after that.

In many situations we'd like our approximation to capture the true solution's qualitative behavior for all \(t\), or at least for large \(t\). By this criterion, (17) is a failure, as Figure 7.6.2 makes obvious. There are two major problems:

1. The true solution (7) exhibits _two time scales_: a _fast time_ \(t\sim O(1)\) for the sinusoidal oscillations and a _slow time_ \(t\sim 1/\varepsilon\) over which the amplitude decays. Equation (17) completely misrepresents the slow time scale behavior. In particular, because of the secular term \(t\sin t\), (17) falsely suggests that the solution grows with time, whereas we know from (7) that the amplitude \(A=(1-\varepsilon^{2})^{-1/2}e^{-\varepsilon t}\) decays exponentially. The discrepancy occurs because \(e^{-\varepsilon t}=1-\varepsilon t+O(\varepsilon^{2}t^{2})\), so to this order in \(\varepsilon\), it appears (incorrectly) that the amplitude increases with \(t\). To get the correct result, we'd need to calculate an _infinite_ number of terms in the series. That's worthless; we want series approximations that work well with just one or two terms.

2. The frequency of the oscillations in (7) is \(\omega=(1-\varepsilon^{2})^{1/2}\approx 1-\frac{1}{2}\varepsilon^{2}\), which is shifted slightly from the frequency \(\omega=1\) of (17). After a _very_ long time \(t\sim O(1/\varepsilon^{2})\), this frequency error will have a significant cumulative effect. Note that this is a third, _super-slow_ time scale!

### Two-Timing

The elementary example above reveals a more general truth: There are going to be (at least) two time scales in weakly nonlinear oscillators.
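The two-time-scale structure, and the failure of (17), are easy to exhibit numerically. A minimal sketch comparing the exact solution (7) with the two-term series (17) at \(\varepsilon = 0.1\):

```python
# Exact solution (7) vs. regular perturbation series (17) for the
# weakly damped linear oscillator, eps = 0.1.
import math

eps = 0.1
w = math.sqrt(1 - eps*eps)

def exact(t):       # equation (7)
    return math.exp(-eps*t) * math.sin(w*t) / w

def pert(t):        # the two-term series (17)
    return (1 - eps*t) * math.sin(t)

# worst-case discrepancy on an early window t in [0, 5] and a late one t in [25, 30]
err_early = max(abs(exact(t) - pert(t)) for t in [0.01*k for k in range(501)])
err_late  = max(abs(exact(t) - pert(t)) for t in [25 + 0.01*k for k in range(501)])
```

On the early window the two curves agree to within roughly \(O(\varepsilon^{2}t^{2})\); on the late window the secular term has made (17) grow while (7) has nearly decayed to zero, so the discrepancy is of order one, as Figure 7.6.2 shows.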
We've already met this phenomenon in Figure 7.6.1, where the amplitude of the spiral grew very slowly compared to the cycle time. An analytical method called _two-timing_ builds in the fact of two time scales from the start, and produces better approximations than regular perturbation theory. In fact, more than two times can be used, but we'll stick to the simplest case.

Figure 7.6.2

To apply two-timing to (1), let \(\tau=t\) denote the fast \(O(1)\) time, and let \(T=\varepsilon t\) denote the slow time. We'll treat these two times as if they were _independent_ variables. In particular, functions of the slow time \(T\) will be regarded as _constants_ on the fast time scale \(\tau\). It's hard to justify this idea rigorously, but it works! (Here's an analogy: it's like saying that your height is constant on the time scale of a day. Of course, over many months or years your height can change dramatically, especially if you're an infant or a pubescent teenager, but over one day your height stays constant, to a good approximation.)

Now we turn to the mechanics of the method. We expand the solution of (1) as a series \[x(t,\varepsilon)=x_{0}(\tau,T)+\varepsilon x_{1}(\tau,T)+O(\varepsilon^{2}). \tag{18}\] The time derivatives in (1) are transformed using the chain rule: \[\dot{x}=\frac{dx}{dt}=\frac{\partial x}{\partial\tau}\frac{\partial\tau}{\partial t}+\frac{\partial x}{\partial T}\frac{\partial T}{\partial t}=\frac{\partial x}{\partial\tau}+\varepsilon\frac{\partial x}{\partial T}. \tag{19}\] A subscript notation for differentiation is more compact; thus we write (19) as \[\dot{x}=\partial_{\tau}x+\varepsilon\partial_{T}x. \tag{20}\] After substituting (18) into (20) and collecting powers of \(\varepsilon\), we find \[\dot{x}=\partial_{\tau}x_{0}+\varepsilon\big{(}\partial_{T}x_{0}+\partial_{\tau}x_{1}\big{)}+O(\varepsilon^{2}). \tag{21}\] Similarly, \[\ddot{x}=\partial_{\tau\tau}x_{0}+\varepsilon(\partial_{\tau\tau}x_{1}+2\partial_{\tau T}x_{0})+O(\varepsilon^{2}).
\tag{22}\]

To illustrate the method, let's apply it to our earlier test problem.

**Example 7.6.1:** Use two-timing to approximate the solution to the damped linear oscillator \(\ddot{x}+2\varepsilon\dot{x}+x=0\), with initial conditions \(x(0)=0\), \(\dot{x}(0)=1\).

_Solution:_ After substituting (21) and (22) for \(\dot{x}\) and \(\ddot{x}\), we get \[\partial_{\tau\tau}x_{0}+\varepsilon(\partial_{\tau\tau}x_{1}+2\partial_{\tau T}x_{0})+2\varepsilon\partial_{\tau}x_{0}+x_{0}+\varepsilon x_{1}+O(\varepsilon^{2})=0. \tag{23}\] Collecting powers of \(\varepsilon\) yields a pair of differential equations: \[O(1):\quad \partial_{\tau\tau}x_{0}+x_{0}=0 \tag{24}\] \[O(\varepsilon):\quad \partial_{\tau\tau}x_{1}+2\partial_{\tau T}x_{0}+2\partial_{\tau}x_{0}+x_{1}=0. \tag{25}\]

Equation (24) is just a simple harmonic oscillator. Its general solution is \[x_{0} = A\sin\tau + B\cos\tau, \tag{26}\] but now comes the interesting part: _The "constants" A and B are actually functions of the slow time T._ Here we are invoking the above-mentioned idea that \(\tau\) and \(T\) should be regarded as independent variables, with functions of \(T\) behaving like constants on the fast time scale \(\tau\).

To determine \(A(T)\) and \(B(T)\), we need to go to the next order in \(\varepsilon\). Substituting (26) into (25) gives \[\begin{array}{l} \partial_{\tau\tau}x_{1} + x_{1} = - 2(\partial_{\tau T}x_{0} + \partial_{\tau}x_{0}) \\ = - 2(A^{\prime} + A)\cos\tau + 2(B^{\prime} + B)\sin\tau \\ \end{array} \tag{27}\] where the prime denotes differentiation with respect to \(T\). Now we face the same predicament that ruined us after (15). As in that case, the right-hand side of (27) is a resonant forcing that will produce _secular terms_ like \(\tau\sin\tau\) and \(\tau\cos\tau\) in the solution for \(x_{1}\). These terms would lead to a convergent but useless series expansion for \(x\).
Since we want an approximation free from secular terms, _we set the coefficients of the resonant terms to zero_--this maneuver is characteristic of all two-timing calculations. Here it yields \[A^{\prime} + A = 0 \tag{28}\] \[B^{\prime} + B = 0. \tag{29}\] The solutions of (28) and (29) are \[A(T) = A(0)e^{- T},\qquad B(T) = B(0)e^{- T}.\] The last step is to find the initial values \(A(0)\) and \(B(0)\). They are determined by (18), (26), and the given initial conditions \(x(0) = 0\), \(\dot{x}(0) = 1\), as follows. Equation (18) gives \(0 = x(0) = x_{0}(0,0) + \varepsilon x_{1}(0,0) + O(\varepsilon^{2})\). To satisfy this equation for _all_ sufficiently small \(\varepsilon\), we must have \[x_{0}(0,0) = 0 \tag{30}\] and \(x_{1}(0,0) = 0\). Similarly, \[1 = \dot{x}(0) = \partial_{\tau}x_{0}(0,0) + \varepsilon\left( \partial_{T}x_{0}(0,0) + \partial_{\tau}x_{1}(0,0) \right) + O(\varepsilon^{2})\] so \[\partial_{\tau}x_{0}(0,0) = 1 \tag{31}\] and \(\partial_{T}x_{0}(0,0)+\partial_{\tau}x_{1}(0,0)=0\). Combining (26) and (30) we find \(B(0)=0\); hence \(B(T)\equiv 0\). Similarly, (26) and (31) imply \(A(0)=1\), so \(A(T)=e^{-T}\). Thus (26) becomes \[x_{0}(\tau,T)=e^{-T}\sin\tau. \tag{32}\] Hence \[x = e^{-T}\sin\tau+O(\varepsilon) = e^{-\varepsilon t}\sin t+O(\varepsilon) \tag{33}\] is the approximate solution predicted by two-timing.

Figure 7.6.3 compares the two-timing solution (33) to the exact solution (7) for \(\varepsilon=0.1\). The two curves are almost indistinguishable, even though \(\varepsilon\) is not terribly small. This is a characteristic feature of the method--it often works better than it has any right to.

If we wanted to go further with Example 7.6.1, we could either solve for \(x_{1}\) and higher-order corrections, or introduce a super-slow time \(\mathcal{T}=\varepsilon^{2}t\) to investigate the long-term phase shift caused by the \(O(\varepsilon^{2})\) error in frequency.
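The accuracy claimed for (33) can be quantified in a few lines of code. The sketch below measures the worst-case error of the two-timing approximation against the exact solution (7) over \(0 \leq t \leq 20\) for \(\varepsilon = 0.1\), with the regular perturbation series (17) included for contrast; the window and tolerance are our own choices.

```python
# Worst-case error of two-timing (33) vs. exact (7), compared with the
# regular perturbation series (17), for eps = 0.1.
import math

eps = 0.1
w = math.sqrt(1 - eps*eps)

exact      = lambda t: math.exp(-eps*t) * math.sin(w*t) / w   # equation (7)
two_timing = lambda t: math.exp(-eps*t) * math.sin(t)         # equation (33)
pert       = lambda t: (1 - eps*t) * math.sin(t)              # equation (17)

ts = [0.02*k for k in range(1001)]   # t in [0, 20]
err_tt   = max(abs(exact(t) - two_timing(t)) for t in ts)
err_pert = max(abs(exact(t) - pert(t)) for t in ts)
```

The two-timing error stays of order \(\varepsilon\) uniformly on the window, while the perturbation-series error grows to order one, exactly the contrast between Figures 7.6.2 and 7.6.3.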
But Figure 7.6.3 shows that we already have a good approximation. OK, enough practice problems! Now that we have calibrated the method, let's unleash it on a genuine nonlinear problem.

**Example 7.6.2:** Use two-timing to show that the van der Pol equation \(\ddot{x}+x+\varepsilon(x^{2}-1)\dot{x}=0\) has a stable limit cycle that is nearly circular, with a radius of \(2+O(\varepsilon)\) and a frequency of \(1+O(\varepsilon^{2})\).

_Solution:_ Using (21) and (22) and collecting powers of \(\varepsilon\), we find the following equations: \[O(1):\quad\partial_{\tau\tau}x_{0}+x_{0}=0 \tag{34}\] \[O(\varepsilon):\quad\partial_{\tau\tau}x_{1}+x_{1}=-2\partial_{\tau T}x_{0}-(x_{0}^{2}-1)\partial_{\tau}x_{0}. \tag{35}\] As always, the \(O(1)\) equation is a simple harmonic oscillator. Its general solution can be written as (26), or alternatively, as \[x_{0}=r(T)\cos(\tau+\phi(T)) \tag{36}\] where \(r(T)\) and \(\phi(T)\) are the _slowly varying amplitude and phase_ of \(x_{0}\). To find equations governing \(r\) and \(\phi\), we insert (36) into (35). This yields \[\begin{array}{l}\partial_{\tau\tau}x_{1}+x_{1}=-2(r^{\prime}\sin(\tau+\phi)+r\phi^{\prime}\cos(\tau+\phi))\\ \quad-r\sin(\tau+\phi)[r^{2}\cos^{2}(\tau+\phi)-1].\end{array} \tag{37}\] As before, we need to avoid resonant terms on the right-hand side. These are terms proportional to \(\cos(\tau+\phi)\) and \(\sin(\tau+\phi)\). Some terms of this form already appear explicitly in (37). But--and this is the important point--there is also a resonant term lurking in \(\sin(\tau+\phi)\cos^{2}(\tau+\phi)\), because of the trigonometric identity \[\sin(\tau+\phi)\cos^{2}(\tau+\phi)={\frac{1}{4}}[\sin(\tau+\phi)+\sin 3(\tau+\phi)]. \tag{38}\] (Exercise 7.6.10 reminds you how to derive such identities, but usually we won't need them--shortcuts are available, as we'll see.)
After substituting (38) into (37), we get \[\partial_{\tau\tau}x_{1}+x_{1}=\left[-2r^{\prime}+r-\tfrac{1}{4}r^{3}\right]\sin(\tau+\phi)+\left[-2r\phi^{\prime}\right]\cos(\tau+\phi)-\tfrac{1}{4}r^{3}\sin 3(\tau+\phi). \tag{39}\] To avoid secular terms, we require \[-2r^{\prime}+r-\tfrac{1}{4}r^{3}=0 \tag{40}\] \[-2r\phi^{\prime}=0. \tag{41}\] First consider (40). It may be rewritten as a vector field \[r^{\prime}=\tfrac{1}{8}r(4-r^{2}) \tag{42}\] on the half-line \(r\geq 0\). Following the methods of Chapter 2 or Example 7.1.1, we see that \(r^{*}=0\) is an unstable fixed point and \(r^{*}=2\) is a stable fixed point. Hence \(r(T)\to 2\) as \(T\to\infty\). Second, (41) implies \(\phi^{\prime}=0\), so \(\phi(T)=\phi_{0}\) for some constant \(\phi_{0}\). Hence \(x_{0}(\tau,T)\to 2\cos(\tau+\phi_{0})\), and therefore \[x(t)\to 2\cos(t+\phi_{0})+O(\varepsilon) \tag{43}\] as \(t\to\infty\). Thus \(x(t)\) approaches a stable limit cycle of radius \(2+O(\varepsilon)\). To find the frequency implied by (43), let \(\theta=t+\phi(T)\) denote the argument of the cosine. Then the angular frequency \(\omega\) is given by \[\omega=\frac{d\theta}{dt}=1+\frac{d\phi}{dT}\frac{dT}{dt}=1+\varepsilon\phi^{\prime}=1, \tag{44}\] through first order in \(\varepsilon\). Hence \(\omega=1+O(\varepsilon^{2})\); if we wanted an explicit formula for this \(O(\varepsilon^{2})\) correction term, we'd need to introduce a super-slow time \(\mathcal{T}=\varepsilon^{2}t\), or we could use the Poincare-Lindstedt method, as discussed in the exercises. ## Averaged Equations The same steps occur again and again in problems about weakly nonlinear oscillators. We can save time by deriving some general formulas. Consider the equation for a general weakly nonlinear oscillator: \[\ddot{x}+x+\varepsilon h\bigl{(}x,\dot{x}\bigr{)}=0. 
\tag{45}\] The usual two-timing substitutions give \[O(1):\quad\partial_{\tau\tau}x_{0}+x_{0}=0 \tag{46}\] \[O(\varepsilon):\quad\partial_{\tau\tau}x_{1}+x_{1}=-2\partial_{T\tau}x_{0}-h \tag{47}\] where now \(h=h(x_{0},\partial_{\tau}x_{0})\). As in Example 7.6.2, the solution of the \(O(1)\) equation is \[x_{0}=r(T)\cos(\tau+\phi(T)). \tag{48}\] Our goal is to derive differential equations for \(r^{\prime}\) and \(\phi^{\prime}\), analogous to (40) and (41). We'll find these equations by insisting, as usual, that there be no terms proportional to \(\cos(\tau+\phi)\) and \(\sin(\tau+\phi)\) on the right-hand side of (47). Substituting (48) into (47), we see that this right-hand side is \[2\left[r^{\prime}\sin(\tau+\phi)+r\phi^{\prime}\cos(\tau+\phi)\right]-h \tag{49}\] where now \(h=h(r\cos(\tau+\phi),-r\sin(\tau+\phi))\). To extract the terms in \(h\) proportional to \(\cos(\tau+\phi)\) and \(\sin(\tau+\phi)\), we borrow some ideas from Fourier analysis. (If you're unfamiliar with Fourier analysis, don't worry--we'll derive all that we need in Exercise 7.6.12.) Notice that \(h\) is a \(2\pi\)-periodic function of \(\tau+\phi\). Let \[\theta=\tau+\phi.\] Fourier analysis tells us that \(h(\theta)\) can be written as a _Fourier series_ \[h(\theta)=\sum_{k=0}^{\infty}a_{k}\cos k\theta+\sum_{k=1}^{\infty}b_{k}\sin k\theta \tag{50}\] where the _Fourier coefficients_ are given by \[\begin{array}{l}a_{0}=\frac{1}{2\pi}\int_{0}^{2\pi}h(\theta)\,d\theta\\ a_{k}=\frac{1}{\pi}\int_{0}^{2\pi}h(\theta)\cos k\theta\,d\theta,\quad k\geq 1\\ b_{k}=\frac{1}{\pi}\int_{0}^{2\pi}h(\theta)\sin k\theta\,d\theta,\quad k\geq 1.\end{array} \tag{51}\] Hence (49) becomes \[2\left[r^{\prime}\sin\theta+r\phi^{\prime}\cos\theta\right]-\sum_{k=0}^{\infty}a_{k}\cos k\theta-\sum_{k=1}^{\infty}b_{k}\sin k\theta. \tag{52}\] The only resonant terms in (52) are \(\left[2r^{\prime}-b_{1}\right]\sin\theta\) and \(\left[2r\phi^{\prime}-a_{1}\right]\cos\theta\). 
Therefore, to avoid secular terms we need \(r^{\prime}=b_{1}/2\) and \(r\phi^{\prime}=a_{1}/2\). Using the expressions in (51) for \(a_{1}\) and \(b_{1}\), we obtain \[\begin{array}{l}r^{\prime}=\frac{1}{2\pi}\int_{0}^{2\pi}h(\theta)\sin\theta\,d\theta\equiv\langle h\sin\theta\rangle\\ r\phi^{\prime}=\frac{1}{2\pi}\int_{0}^{2\pi}h(\theta)\cos\theta\,d\theta\equiv\langle h\cos\theta\rangle\end{array} \tag{53}\] where the angle brackets \(\langle\cdot\rangle\) denote an average over one cycle of \(\theta\). The equations in (53) are called the _averaged_ or _slow-time equations_. To use them, we write out \(h=h(r\cos(\tau+\phi),-r\sin(\tau+\phi))=h(r\cos\theta,-r\sin\theta)\) explicitly, and then compute the relevant averages over the fast variable \(\theta\), treating the slow variable \(r\) as constant. Here are some averages that appear often: \[\begin{array}{l}\langle\cos\rangle=\langle\sin\rangle=0,\quad\langle\sin\cos\rangle=0,\quad\langle\cos^{3}\rangle=\langle\sin^{3}\rangle=0,\quad\langle\cos^{2n+1}\rangle=\langle\sin^{2n+1}\rangle=0,\\ \langle\cos^{2}\rangle=\langle\sin^{2}\rangle=\frac{1}{2},\quad\langle\cos^{4}\rangle=\langle\sin^{4}\rangle=\frac{3}{8},\quad\langle\cos^{2}\sin^{2}\rangle=\frac{1}{8},\\ \langle\cos^{2n}\rangle=\langle\sin^{2n}\rangle=\frac{1\cdot 3\cdot 5\cdots(2n-1)}{2\cdot 4\cdot 6\cdots(2n)},\quad n\geq 1.\end{array} \tag{54}\] Other averages can either be derived from these, or found by direct integration. For instance, \[\left\langle\cos^{2}\sin^{4}\right\rangle=\left\langle(1-\sin^{2})\sin^{4}\right\rangle=\left\langle\sin^{4}\right\rangle-\left\langle\sin^{6}\right\rangle=\frac{3}{8}-\frac{15}{48}=\frac{1}{16}\] and \[\left\langle\cos^{3}\sin\right\rangle=\frac{1}{2\pi}\int_{0}^{2\pi}\cos^{3}\theta\,\sin\theta\,d\theta=-\frac{1}{2\pi}\left[\frac{\cos^{4}\theta}{4}\right]_{0}^{2\pi}=0.\]

**Example 7.6.3:** Consider the van der Pol equation \(\ddot{x}+x+\varepsilon(x^{2}-1)\dot{x}=0\), subject to the initial conditions \(x(0)=1,\ \dot{x}(0)=0\). 
Find the averaged equations, and then solve them to obtain an approximate formula for \(x(t,\varepsilon)\). Compare your result to a numerical solution of the full equation, for \(\varepsilon=0.1\).

_Solution:_ The van der Pol equation has \(h=(x^{2}-1)\dot{x}=(r^{2}\cos^{2}\theta-1)(-r\sin\theta)\). Hence (53) becomes \[r^{\prime}=\left\langle h\sin\theta\right\rangle=\left\langle(r^{2}\cos^{2}\theta-1)(-r\sin\theta)\sin\theta\right\rangle=r\left\langle\sin^{2}\theta\right\rangle-r^{3}\left\langle\cos^{2}\theta\,\sin^{2}\theta\right\rangle=\frac{1}{2}r-\frac{1}{8}r^{3}\] and \[r\phi^{\prime}=\left\langle h\cos\theta\right\rangle=\left\langle(r^{2}\cos^{2}\theta-1)(-r\sin\theta)\cos\theta\right\rangle=r\left\langle\sin\theta\,\cos\theta\right\rangle-r^{3}\left\langle\cos^{3}\theta\,\sin\theta\right\rangle=0-0=0.\] These equations match those found in Example 7.6.2, as they should. The initial conditions \(x(0)=1\) and \(\dot{x}(0)=0\) imply \(r(0)\approx\sqrt{x(0)^{2}+\dot{x}(0)^{2}}=1\) and \(\phi(0)\approx-\tan^{-1}\left(\dot{x}(0)/x(0)\right)=0\). Since \(\phi^{\prime}=0\), we find \(\phi(T)\equiv 0\). To find \(r(T)\), we solve \(r^{\prime}=\frac{1}{2}r-\frac{1}{8}r^{3}\) subject to \(r(0)=1\). The differential equation separates to \[\int\frac{8\,dr}{r(4-r^{2})}=\int dT.\] After integrating by partial fractions and using \(r(0)=1\), we find \[r(T)=2(1+3e^{-T})^{-1/2}. \tag{55}\] Hence \[x(t,\varepsilon)=x_{0}(\tau,T)+O(\varepsilon)=\frac{2}{\sqrt{1+3e^{-\varepsilon t}}}\cos t+O(\varepsilon). \tag{56}\] Equation (56) describes the transient dynamics of the oscillator as it spirals out to its limit cycle. Notice that \(r(T)\to 2\) as \(T\to\infty\), as in Example 7.6.2. In Figure 7.6.4 we plot the "exact" solution of the van der Pol equation, obtained by numerical integration for \(\varepsilon=0.1\) and initial conditions \(x(0)=1,\ \dot{x}(0)=0\). 
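This comparison is easy to reproduce. The sketch below integrates the full van der Pol equation with a fixed-step RK4 scheme (the step size and the error bound are my own choices, not the text's) and checks that the peaks of \(x(t)\) track the slowly varying amplitude \(r(T)=2(1+3e^{-T})^{-1/2}\) from (55):

```python
import math

def vdp_rk4(eps, x0, v0, dt, t_end):
    # Integrate x'' + x + eps*(x^2 - 1)*x' = 0 with classical fixed-step RK4.
    f = lambda x, v: (v, -x - eps * (x * x - 1.0) * v)
    t, x, v = 0.0, x0, v0
    ts, xs = [t], [x]
    for _ in range(round(t_end / dt)):
        k1x, k1v = f(x, v)
        k2x, k2v = f(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
        k3x, k3v = f(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
        k4x, k4v = f(x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        t += dt
        ts.append(t)
        xs.append(x)
    return ts, xs

eps = 0.1
ts, xs = vdp_rk4(eps, 1.0, 0.0, 0.01, 60.0)

# Envelope predicted by the averaged equations: r(T) = 2/sqrt(1 + 3e^{-T}), T = eps*t
envelope = lambda t: 2.0 / math.sqrt(1.0 + 3.0 * math.exp(-eps * t))

# Local maxima of x(t) should lie close to the envelope (agreement is O(eps)).
peaks = [(ts[i], xs[i]) for i in range(1, len(xs) - 1)
         if xs[i] > xs[i - 1] and xs[i] > xs[i + 1]]
worst = max(abs(x - envelope(t)) for t, x in peaks)
print(f"worst peak-vs-envelope discrepancy: {worst:.3f}")
assert worst < 0.15
```

Plotting `xs` against `ts` together with `envelope` reproduces the picture described here: the oscillation spirals out and saturates at amplitude 2.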
For comparison, the slowly-varying amplitude \(r(T)\) predicted by (55) is also shown. The agreement is striking. Alternatively, we could have plotted the whole solution (56) instead of just its envelope; then the two curves would be virtually indistinguishable, like those in Figure 7.6.3. Now we consider an example in which the frequency of an oscillator depends on its amplitude. This is a common phenomenon, and one that is intrinsically _nonlinear_--it cannot occur for linear oscillators.

**Example 7.6.4:** Find an approximate relation between the amplitude and frequency of the Duffing oscillator \(\ddot{x}+x+\varepsilon x^{3}=0\), where \(\varepsilon\) can have either sign. Interpret the results physically.

_Solution:_ Here \(h=x^{3}=r^{3}\cos^{3}\theta\). Equation (53) becomes \[r^{\prime}=\left\langle h\sin\theta\right\rangle=r^{3}\left\langle\cos^{3}\theta\,\sin\theta\right\rangle=0\] and \[r\phi^{\prime}=\left\langle h\cos\theta\right\rangle=r^{3}\left\langle\cos^{4}\theta\right\rangle=\tfrac{3}{8}r^{3}.\] Hence \(r(T)\equiv a\), for some constant \(a\), and \(\phi^{\prime}=\tfrac{3}{8}a^{2}\). As in Example 7.6.2, the frequency \(\omega\) is given by \[\omega=1+\varepsilon\phi^{\prime}=1+\tfrac{3}{8}\varepsilon a^{2}+O(\varepsilon^{2}). \tag{57}\] Now for the physical interpretation. The Duffing equation describes the undamped motion of a unit mass attached to a nonlinear spring with restoring force \(F(x)=-x-\varepsilon x^{3}\). We can use our intuition about ordinary linear springs if we write \(F(x)=-kx\), where the spring stiffness is now dependent on \(x\): \[k=k(x)=1+\varepsilon x^{2}.\] Suppose \(\varepsilon>0\). Then the spring gets _stiffer_ as the displacement \(x\) increases--this is called a _hardening spring._ On physical grounds we'd expect it to _increase_ the frequency of the oscillations, consistent with (57). For \(\varepsilon<0\) we have a _softening spring_, exemplified by the pendulum (Exercise 7.6.15). 
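Prediction (57) is itself easy to test. The sketch below measures the Duffing period by direct RK4 integration and compares it with \(2\pi/(1+\tfrac{3}{8}\varepsilon a^{2})\); it also spot-checks the table entry \(\langle\cos^{4}\rangle=\tfrac{3}{8}\) used in the derivation. (The step size and tolerances are my own choices.)

```python
import math

def duffing_period(eps, a, dt=0.0005):
    # Period of x'' + x + eps*x^3 = 0 with x(0)=a, x'(0)=0, measured by
    # RK4 integration until v = x' first crosses from + to - (one full
    # period after the initial maximum), with linear interpolation.
    acc = lambda x: -x - eps * x ** 3
    x, v, t = a, 0.0, 0.0
    while True:
        prev_v = v
        k1x, k1v = v, acc(x)
        k2x, k2v = v + 0.5 * dt * k1v, acc(x + 0.5 * dt * k1x)
        k3x, k3v = v + 0.5 * dt * k2v, acc(x + 0.5 * dt * k2x)
        k4x, k4v = v + dt * k3v, acc(x + dt * k3x)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        t += dt
        if prev_v > 0.0 >= v:
            return t - dt * (-v) / (prev_v - v)

# Spot-check the average <cos^4> = 3/8 from the table (54).
N = 10000
avg_cos4 = sum(math.cos(2 * math.pi * k / N) ** 4 for k in range(N)) / N
assert abs(avg_cos4 - 3.0 / 8.0) < 1e-9

# Hardening spring: frequency increases with amplitude, per (57).
eps, a = 0.1, 1.0
T_pred = 2 * math.pi / (1 + 0.375 * eps * a ** 2)
T_num = duffing_period(eps, a)
print(f"measured period {T_num:.4f} vs predicted {T_pred:.4f}")
assert abs(T_num - T_pred) < 0.05    # leftover discrepancy is O(eps^2)
assert T_num < 2 * math.pi           # faster than the linear oscillator
```

The measured period falls short of the linear value \(2\pi\), exactly as the hardening-spring picture predicts.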
It also makes sense that \(r^{\prime}=0\). The Duffing equation is a conservative system and for all \(\varepsilon\) sufficiently small, it has a _nonlinear center_ at the origin (Exercise 6.5.13). Since all orbits close to the origin are periodic, there can be no long-term change in amplitude, consistent with \(r^{\prime}=0\). ### Validity of Two-Timing We conclude with a few comments about the validity of the two-timing method. The rule of thumb is that the one-term approximation \(x_{0}\) will be within \(O(\varepsilon)\) of the true solution \(x\) for all times up to and including \(t\sim O(1/\varepsilon)\), assuming that both \(x\) and \(x_{0}\) start from the same initial condition. If \(x\) is a periodic solution, the situation is even better: \(x_{0}\) remains within \(O(\varepsilon)\) of \(x\) for _all_ \(t\). But for precise statements and rigorous results about these matters, and for discussions of the subtleties that can occur, you should consult more advanced treatments, such as Guckenheimer and Holmes (1983) or Grimshaw (1990). Those authors use the _method of averaging_, an alternative approach that yields the same results as two-timing. See Exercise 7.6.25 for an introduction to this powerful technique. Also, we have been very loose about the sense in which our formulas approximate the true solutions. The relevant notion is that of _asymptotic_ approximation. For introductions to asymptotics, see Lin and Segel (1988) or Bender and Orszag (1978). ### Examples Sketch the phase portrait for each of the following systems. (As usual, \(r\), \(\theta\) denote polar coordinates.) 
**7.1.1** \(\dot{r}=r^{3}-4r,\ \dot{\theta}=1\)

**7.1.2** \(\dot{r}=r(1-r^{2})(9-r^{2}),\ \dot{\theta}=1\)

**7.1.3** \(\dot{r}=r(1-r^{2})(4-r^{2}),\ \dot{\theta}=2-r^{2}\)

**7.1.4** \(\dot{r}=r\sin r,\ \dot{\theta}=1\)

**7.1.5** (From polar to Cartesian coordinates) Show that the system \(\dot{r}=r(1-r^{2})\), \(\dot{\theta}=1\) is equivalent to \[\dot{x}=x-y-x(x^{2}+y^{2}),\qquad\dot{y}=x+y-y(x^{2}+y^{2}),\] where \(x=r\cos\theta\), \(y=r\sin\theta\). (Hint: \(\dot{x}=\frac{d}{dt}(r\cos\theta)=\dot{r}\cos\theta-r\dot{\theta}\sin\theta\).)

#### 7.1.6 (Circuit for van der Pol oscillator)

Figure 1 shows the "tetrode multivibrator" circuit used in the earliest commercial radios and analyzed by van der Pol. In van der Pol's day, the active element was a vacuum tube; today it would be a semiconductor device. It acts like an ordinary resistor when \(I\) is high, but like a negative resistor (energy source) when \(I\) is low. Its current-voltage characteristic \(V=f(I)\) resembles a cubic function, as discussed below. Suppose a source of current is attached to the circuit and then withdrawn. What equations govern the subsequent evolution of the current and the various voltages?

* Let \(V=V_{32}=-V_{23}\) denote the voltage drop from point 3 to point 2 in the circuit. Show that \(\dot{V}=-I/C\) and \(V=L\dot{I}+f(I)\).
* Show that the equations in (a) are equivalent to \[\frac{dw}{d\tau}=-x,\qquad\frac{dx}{d\tau}=w-\mu F(x)\] where \(x=L^{1/2}I\), \(w=C^{1/2}V\), \(\tau=(LC)^{-1/2}t\), and \(F(x)=f(L^{-1/2}x)\).

In Section 7.5, we'll see that this system for \((w,x)\) is equivalent to the van der Pol equation, if \(F(x)=\frac{1}{3}x^{3}-x\). Thus the circuit produces self-sustained oscillations.

#### 7.1.7 (Waveform)

Consider the system \(\dot{r}=r(4-r^{2})\), \(\dot{\theta}=1\), and let \(x(t)=r(t)\cos\theta(t)\). 
Given the initial condition \(x(0)=0.1\), \(y(0)=0\), sketch the approximate waveform of \(x(t)\), _without_ obtaining an explicit expression for it.

Figure 1

**7.1.8** (A circular limit cycle) Consider \(\ddot{x}+a\dot{x}(x^{2}+\dot{x}^{2}-1)+x=0\), where \(a>0\).

* Find and classify all the fixed points.
* Show that the system has a circular limit cycle, and find its amplitude and period.
* Determine the stability of the limit cycle.
* Give an argument which shows that the limit cycle is unique, i.e., there are no other periodic trajectories.

**7.1.9** (Circular pursuit problem) A dog at the center of a circular pond sees a duck swimming along the edge. The dog chases the duck by always swimming straight toward it. In other words, the dog's velocity vector always lies along the line connecting it to the duck. Meanwhile, the duck takes evasive action by swimming around the circumference as fast as it can, always moving counterclockwise.

* Assuming the pond has unit radius and both animals swim at the same constant speed, derive a pair of differential equations for the path of the dog. (Hint: Use the coordinate system shown in Figure 2 and find equations for \(dR/d\theta\) and \(d\phi/d\theta\).)
* Analyze the system. Can you solve it explicitly? Does the dog ever catch the duck?
* Now suppose the dog swims \(k\) times faster than the duck. Derive the differential equations for the dog's path.
* If \(k=\frac{1}{2}\), what does the dog end up doing in the long run?

Note: This problem has a long and intriguing history, dating back to the mid-1800s at least. It is much more difficult than similar _pursuit problems_--there is no known solution for the path of the dog in part (a), in terms of elementary functions. See Davis (1962, pp. 113-125) and Nahin (2007) for nice analyses and guides to the literature.

### Ruling Out Closed Orbits

Plot the phase portraits of the following gradient systems \(\dot{\mathbf{x}}=-\nabla V\). 
**7.2.1** \(V=x^{2}+y^{2}\)

**7.2.2** \(V=x^{2}-y^{2}\)

**7.2.3** \(V=e^{x}\sin y\)

**7.2.4** Show that all vector fields on the line are gradient systems. Is the same true of vector fields on the circle?

**7.2.5** Let \(\dot{x}=f(x,y)\), \(\dot{y}=g(x,y)\) be a smooth vector field defined on the phase plane.

* Show that if this is a gradient system, then \(\partial f/\partial y=\partial g/\partial x\).
* Is the condition in (a) also sufficient?

Figure 2

**7.2.6** Given that a system is a gradient system, here's how to find its potential function \(V\). Suppose that \(\dot{x}=f(x,y)\), \(\dot{y}=g(x,y)\). Then \(\dot{\mathbf{x}}=-\nabla V\) implies \(f(x,y)=-\partial V/\partial x\) and \(g(x,y)=-\partial V/\partial y\). These two equations may be "partially integrated" to find \(V\). Use this procedure to find \(V\) for the following gradient systems.

* \(\dot{x}=y^{2}+y\cos x\), \(\dot{y}=2xy+\sin x\)
* \(\dot{x}=3x^{2}-1-e^{2y}\), \(\dot{y}=-2xe^{2y}\)

**7.2.7** Consider the system \(\dot{x}=y+2xy\), \(\dot{y}=x+x^{2}-y^{2}\).

* Show that \(\partial f/\partial y=\partial g/\partial x\). (Then Exercise 7.2.5(a) implies this is a gradient system.)
* Find \(V\).
* Sketch the phase portrait.

**7.2.8** Show that the trajectories of a gradient system always cross the equipotentials at right angles (except at fixed points).

**7.2.9** For each of the following systems, decide whether it is a gradient system. If so, find \(V\) and sketch the phase portrait. On a separate graph, sketch the equipotentials \(V=\) constant. (If the system is not a gradient system, go on to the next question.)

* \(\dot{x}=y+x^{2}y\), \(\dot{y}=-x+2xy\)
* \(\dot{x}=2x\), \(\dot{y}=8y\)
* \(\dot{x}=-2xe^{x^{2}+y^{2}}\), \(\dot{y}=-2ye^{x^{2}+y^{2}}\)

**7.2.10** Show that the system \(\dot{x}=y-x^{3}\), \(\dot{y}=-x-y^{3}\) has no closed orbits, by constructing a Liapunov function \(V=ax^{2}+by^{2}\) with suitable \(a\), \(b\).

**7.2.11** Show that \(V=ax^{2}+2bxy+cy^{2}\) is positive definite if and only if \(a>0\) and \(ac-b^{2}>0\). 
(This is a useful criterion that allows us to test for positive definiteness when the quadratic form \(V\) includes a "cross term" \(2bxy\).)

**7.2.12** Show that \(\dot{x}=-x+2y^{3}-2y^{4}\), \(\dot{y}=-x-y+xy\) has no periodic solutions. (Hint: Choose \(a\), \(m\), and \(n\) such that \(V=x^{m}+ay^{n}\) is a Liapunov function.)

**7.2.13** Recall the competition model \[\dot{N}_{1}=r_{1}N_{1}(1-N_{1}/K_{1})-b_{1}N_{1}N_{2},\qquad\dot{N}_{2}=r_{2}N_{2}(1-N_{2}/K_{2})-b_{2}N_{1}N_{2},\] of Exercise 6.4.6. Using Dulac's criterion with the weighting function \(g=(N_{1}N_{2})^{-1}\), show that the system has no periodic orbits in the first quadrant \(N_{1},N_{2}>0\).

**7.2.14** Consider \(\dot{x}=x^{2}-y-1\), \(\dot{y}=y\big{(}x-2\big{)}\).

* Show that there are three fixed points and classify them.
* By considering the three straight lines through pairs of fixed points, show that there are no closed orbits.
* Sketch the phase portrait.

**7.2.15** Consider the system \(\dot{x}=x(2-x-y)\), \(\dot{y}=y(4x-x^{2}-3)\). We know from Example 7.2.4 that this system has no closed orbits. Find the three fixed points and classify them. Sketch the phase portrait.

**7.2.16** If \(R\) is not simply connected, then the conclusion of Dulac's criterion is no longer valid. Find a counterexample.

**7.2.17** Assume the hypotheses of Dulac's criterion, except now suppose that \(R\) is topologically equivalent to an annulus, i.e., it has exactly one hole in it. Using Green's theorem, show that there exists _at most_ one closed orbit in \(R\). (This result can be useful sometimes as a way of proving that a closed orbit is unique.)

**7.2.18** Consider the predator-prey model \[\dot{x}=rx\left(1-\frac{x}{2}\right)-\frac{2x}{1+x}y\,,\qquad\dot{y}=-y+\frac{2x}{1+x}y\] where \(r>0\) and \(x,y\geq 0\). Prove this system has no closed orbits by invoking Dulac's criterion with the function \(g(x,y)=\frac{1+x}{x}y^{\alpha-1}\) for a suitable choice of \(\alpha\) (Hofbauer and Sigmund 1998).

**7.2.19** (Modeling the love story in "Gone with the Wind") Rinaldi et al. 
(2013) have modeled the stormy love affair between Scarlett O'Hara and Rhett Butler with the system \[\dot{R}=-R+A_{S}+kSe^{-S}\,,\qquad\dot{S}=-S+A_{R}+kRe^{-R}\,.\] Here \(R\) denotes Rhett's love for Scarlett, and \(S\) denotes Scarlett's love for Rhett. The parameters \(A_{R}\), \(A_{S}\), and \(k\) are all positive.

* Interpret the three terms on the right hand side of each equation. What do they mean, romantically speaking? In particular, what does the functional form of the third terms, \(kSe^{-S}\) and \(kRe^{-R}\), signify about how Rhett and Scarlett react to each other's endearments?
* Show that all trajectories that begin in the first quadrant \(R,S\geq 0\) stay in the first quadrant forever, and interpret that result psychologically.
* Using Dulac's criterion, prove that the model has no periodic solutions. (Hint: The simplest \(g\) you can think of will work.)
* Using a computer, plot the phase portrait for the system, assuming parameter values \(A_{S}=1.2\), \(A_{R}=1\), and \(k=15\). Assuming that Rhett and Scarlett are indifferent when they meet, so that \(R(0)=S(0)=0\), plot the predicted trajectory for what happens in the first stage of their relationship.

Check out Rinaldi et al. (2013)--and the movie itself--if you're curious about the later twists and turns in this epic romance.

### Poincare-Bendixson Theorem

**7.3.1** Consider \(\dot{x}=x-y-x(x^{2}+5y^{2})\), \(\dot{y}=x+y-y(x^{2}+y^{2})\).

* Classify the fixed point at the origin.
* Rewrite the system in polar coordinates, using \(r\dot{r}=x\dot{x}+y\dot{y}\) and \(\dot{\theta}=\left(x\dot{y}-y\dot{x}\right)/r^{2}\).
* Determine the circle of maximum radius, \(r_{1}\), centered on the origin such that all trajectories have a radially _outward_ component on it.
* Determine the circle of minimum radius, \(r_{2}\), centered on the origin such that all trajectories have a radially _inward_ component on it.
* Prove that the system has a limit cycle somewhere in the trapping region \(r_{1}\leq r\leq r_{2}\). 
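The following sketch illustrates the sort of numerical check that goes with a trapping-region argument like this one: integrate the system and confirm that the trajectory settles onto a closed orbit in some annulus around the origin. (The integrator, step size, and the deliberately generous test bounds are my own choices, not derived answers to the exercise.)

```python
import math

def flow(x, y):
    # The system of the exercise above
    return x - y - x * (x * x + 5 * y * y), x + y - y * (x * x + y * y)

# Fixed-step RK4, starting near the origin
x, y, dt = 0.1, 0.0, 0.002
radii = []
for step in range(50000):                    # integrate out to t = 100
    k1x, k1y = flow(x, y)
    k2x, k2y = flow(x + 0.5 * dt * k1x, y + 0.5 * dt * k1y)
    k3x, k3y = flow(x + 0.5 * dt * k2x, y + 0.5 * dt * k2y)
    k4x, k4y = flow(x + dt * k3x, y + dt * k3y)
    x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
    y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
    if step * dt > 50.0:                     # discard the transient
        radii.append(math.hypot(x, y))

print(f"late-time radius range: [{min(radii):.3f}, {max(radii):.3f}]")
# The trajectory neither collapses to the origin nor escapes to infinity:
assert 0.5 < min(radii) and max(radii) < 1.1
```

Comparing the printed radius range with the bounds \(r_{1}\) and \(r_{2}\) you derive analytically is exactly the verification requested in the next exercise.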
**7.3.2** Using numerical integration, compute the limit cycle of Exercise 7.3.1 and verify that it lies in the trapping region you constructed.

**7.3.3** Show that the system \(\dot{x}=x-y-x^{3}\), \(\dot{y}=x+y-y^{3}\) has a periodic solution.

**7.3.4** Consider the system \[\dot{x}=x(1-4x^{2}-y^{2})-\tfrac{1}{2}y\big{(}1+x\big{)},\qquad\dot{y}=y(1-4x^{2}-y^{2})+2x\big{(}1+x\big{)}.\]

* Show that the origin is an unstable fixed point.
* By considering \(\dot{V}\), where \(V=(1-4x^{2}-y^{2})^{2}\), show that all trajectories approach the ellipse \(4x^{2}+y^{2}=1\) as \(t\to\infty\).

**7.3.5** Show that the system \(\dot{x}=-x-y+x(x^{2}+2y^{2})\), \(\dot{y}=x-y+y(x^{2}+2y^{2})\) has at least one periodic solution.

**7.3.6** Consider the oscillator equation \(\ddot{x}+F(x,\dot{x})\dot{x}+x=0\), where \(F(x,\dot{x})<0\) if \(r\leq a\) and \(F(x,\dot{x})>0\) if \(r\geq b\), where \(r^{2}=x^{2}+\dot{x}^{2}\).

* Give a physical interpretation of the assumptions on \(F\).
* Show that there is at least one closed orbit in the region \(a<r<b\).

**7.3.7** Consider \(\dot{x}=y+ax(1-2b-r^{2})\), \(\dot{y}=-x+ay(1-r^{2})\), where \(a\) and \(b\) are parameters (\(0<a\leq 1\), \(0\leq b<\tfrac{1}{2}\)) and \(r^{2}=x^{2}+y^{2}\).

* Rewrite the system in polar coordinates.
* Prove that there is at least one limit cycle, and that if there are several, they all have the same period \(T(a,b)\).
* Prove that for \(b=0\) there is only one limit cycle.

**7.3.8** Recall the system \(\dot{r}=r(1-r^{2})+\mu r\cos\theta\), \(\dot{\theta}=1\) of Example 7.3.1. Using the computer, plot the phase portrait for various values of \(\mu>0\). Is there a critical value \(\mu_{c}\) at which the closed orbit ceases to exist? If so, estimate it. If not, prove that a closed orbit exists for _all_ \(\mu>0\).

**7.3.9** (Series approximation for a closed orbit) In Example 7.3.1, we used the Poincare-Bendixson Theorem to prove that the system \(\dot{r}=r(1-r^{2})+\mu r\cos\theta\), \(\dot{\theta}=1\) has a closed orbit in the annulus \(\sqrt{1-\mu}<r<\sqrt{1+\mu}\) for all \(\mu<1\).

1. 
To approximate the shape \(r(\theta)\) of the orbit for \(\mu<<1\), assume a power series solution of the form \(r(\theta)=1+\mu r_{1}(\theta)+O(\mu^{2})\). Substitute the series into a differential equation for \(dr/d\theta\). Neglect all \(O(\mu^{2})\) terms, and thereby derive a simple differential equation for \(r_{1}(\theta)\). Solve this equation explicitly for \(r_{1}(\theta)\). (The approximation technique used here is called regular perturbation theory; see Section 7.6.)
2. Find the maximum and minimum \(r\) on your approximate orbit, and hence show that it lies in the annulus \(\sqrt{1-\mu}<r<\sqrt{1+\mu}\), as expected.
3. Use a computer to calculate \(r(\theta)\) numerically for various small \(\mu\), and plot the results on the same graph as your analytical approximation for \(r(\theta)\). How does the maximum error depend on \(\mu\)?

**7.3.10** Consider the two-dimensional system \(\dot{\mathbf{x}}=A\mathbf{x}-r^{2}\mathbf{x}\), where \(r=\|\mathbf{x}\|\) and \(A\) is a \(2\times 2\) constant real matrix with complex eigenvalues \(\alpha\pm i\omega\). Prove that there exists at least one limit cycle for \(\alpha>0\) and that there are none for \(\alpha<0\).

**7.3.11** (Cycle graphs) Suppose \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\) is a smooth vector field on \(\mathbf{R}^{2}\). An improved version of the Poincare-Bendixson theorem states that if a trajectory is trapped in a compact region, then it must approach a fixed point, a closed orbit, or something exotic called a _cycle graph_ (an invariant set containing a finite number of fixed points connected by a finite number of trajectories, all oriented either clockwise or counterclockwise). Cycle graphs are rare in practice; here's a contrived but simple example.

1. Plot the phase portrait for the system \[\dot{r}=r(1-r^{2})\left[r^{2}\sin^{2}\theta+(r^{2}\cos^{2}\theta-1)^{2}\right]\] \[\dot{\theta}=r^{2}\sin^{2}\theta+(r^{2}\cos^{2}\theta-1)^{2}\] where \(r,\theta\) are polar coordinates. 
(Hint: Note the common factor in the two equations; examine where it vanishes.)
2. Sketch \(x\) vs. \(t\) for a trajectory starting away from the unit circle. What happens as \(t\to\infty\)?

**7.3.12** (A heteroclinic cycle in rock-paper-scissors) The three-dimensional system \[\dot{P}=P\left[(aR-S)-(a-1)(PR+RS+PS)\right]\] \[\dot{R}=R\left[(aS-P)-(a-1)(PR+RS+PS)\right]\] \[\dot{S}=S\left[(aP-R)-(a-1)(PR+RS+PS)\right],\] where the parameter \(a>0\), is a generalization of the rock-paper-scissors model studied in Exercise 6.5.20. Previously, we studied the special case \(a=1\) and showed that the system had two conserved quantities, \[E_{1}(P,R,S)=P+R+S\,,\qquad E_{2}(P,R,S)=PRS\,.\] For \(a\neq 1\) it turns out that the system above has a cycle graph, or what is more commonly known nowadays as a heteroclinic cycle. With a few deft strokes, we can use the functions \(E_{1}\) and \(E_{2}\) to prove that a heteroclinic cycle exists (Sigmund 2010, p. 42). The point of this exercise is to provide a more natural instance of a cycle graph than that in Exercise 7.3.11.

* Show that for the system above, \(\dot{E}_{1}=(1-E_{1})(a-1)(PR+RS+PS)\). Hence, \(E_{1}\) is no longer conserved everywhere, but it _is_ conserved if we restrict attention to the set where \(E_{1}=1\). This set, defined by all ordered triples of real numbers \((P,R,S)\) such that \(P+R+S=1\), is invariant; any trajectory that starts on it stays on it forever. Describe this set geometrically; what simple shape is it?
* Consider a subset of the set in (a), defined by the condition that \(P,R,S\geq 0\) in addition to \(P+R+S=1\). Show that this subset, which we'll call \(T\), is also invariant. What simple shape is it? From now on, we'll restrict attention to the dynamics on the set \(T\).
* Show that the boundary of \(T\) consists of three fixed points connected by three trajectories, all oriented in the same sense, and hence is a cycle graph (heteroclinic cycle). 
* Show that \(\dot{E}_{2}=\frac{(a-1)E_{2}}{2}\left[(P-R)^{2}+(R-S)^{2}+(S-P)^{2}\right]\).
* Using the results of parts (b)-(d), show that \(\dot{E}_{2}\) vanishes at the boundary of \(T\) and at the interior fixed point \((P^{*},R^{*},S^{*})=\frac{1}{3}(1,1,1)\).
* Explain why the previous results imply that for \(a>1\), the interior fixed point attracts all trajectories on the interior of \(T\).
* Finally, show that for \(a<1\), the heteroclinic cycle attracts all trajectories that start in the interior of \(T\) (except, of course, for the interior fixed point itself).

### Lienard Systems

**7.4.1** Show that the equation \(\ddot{x}+\mu(x^{2}-1)\dot{x}+\tanh x=0\), for \(\mu>0\), has exactly one periodic solution, and classify its stability.

**7.4.2** Consider the equation \(\ddot{x}+\mu(x^{4}-1)\dot{x}+x=0\).

* Prove that the system has a unique stable limit cycle if \(\mu>0\).
* Using a computer, plot the phase portrait for the case \(\mu=1\).
* If \(\mu<0\), does the system still have a limit cycle? If so, is it stable or unstable?

### Relaxation Oscillations

**7.5.1** For the van der Pol oscillator with \(\mu>>1\), show that the positive branch of the cubic nullcline begins at \(x_{A}=2\) and ends at \(x_{B}=1\).

**7.5.2** In Example 7.5.1, we used a tricky phase plane (often called the _Lienard plane_) to analyze the van der Pol oscillator for \(\mu>>1\). Try to redo the analysis in the standard phase plane where \(\dot{x}=y\), \(\dot{y}=-x-\mu(x^{2}-1)y\). What is the advantage of the Lienard plane?

**7.5.3** Estimate the period of the limit cycle of \(\ddot{x}+k\bigl{(}x^{2}-4\bigr{)}\dot{x}+x=1\) for \(k>>1\).

**7.5.4** (Piecewise-linear nullclines) Consider the equation \(\ddot{x}+\mu f(x)\dot{x}+x=0\), where \(f(x)=-1\) for \(|x|<1\) and \(f(x)=1\) for \(|x|\geq 1\).

* Show that the system is equivalent to \(\dot{x}=\mu(y-F(x))\), \(\dot{y}=-x/\mu\), where \(F(x)\) is the piecewise-linear function \[F(x)=\begin{cases}x+2,&x\leq-1\\ -x,&|x|\leq 1\\ x-2,&x\geq 1.\end{cases}\]
* Graph the nullclines. 
* Show that the system exhibits relaxation oscillations for \(\mu>>1\), and plot the limit cycle in the \((x,y)\) plane.
* Estimate the period of the limit cycle for \(\mu>>1\).

**7.5.5** Consider the equation \(\ddot{x}+\mu\bigl{(}|x|-1\bigr{)}\dot{x}+x=0\). Find the approximate period of the limit cycle for \(\mu>>1\).

**7.5.6** (Biased van der Pol) Suppose the van der Pol oscillator is biased by a constant force: \(\ddot{x}+\mu(x^{2}-1)\dot{x}+x=a\), where \(a\) can be positive, negative, or zero. (Assume \(\mu>0\) as usual.)

* Find and classify all the fixed points.
* Plot the nullclines in the Lienard plane. Show that if they intersect on the _middle_ branch of the cubic nullcline, the corresponding fixed point is unstable.
* For \(\mu>>1\), show that the system has a stable limit cycle if and only if \(|a|<a_{c}\), where \(a_{c}\) is to be determined. (Hint: Use the Lienard plane.)
* Sketch the phase portrait for \(a\) slightly greater than \(a_{c}\). Show that the system is _excitable_ (it has a globally attracting fixed point, but certain disturbances can send the system on a long excursion through phase space before returning to the fixed point; compare Exercise 4.5.3).

This system is closely related to the Fitzhugh-Nagumo model of neural activity; see Murray (2002) or Edelstein-Keshet (1988) for an introduction.

**7.5.7** (Cell cycle) Tyson (1991) proposed an elegant model of the cell division cycle, based on interactions between the proteins cdc2 and cyclin. He showed that the model's mathematical essence is contained in the following set of dimensionless equations: \[\dot{u}=b(v-u)(\alpha+u^{2})-u,\qquad\dot{v}=c-u,\] where \(u\) is proportional to the concentration of the active form of a cdc2-cyclin complex, and \(v\) is proportional to the total cyclin concentration (monomers and dimers). The parameters \(b>>1\) and \(\alpha<<1\) are fixed and satisfy \(8\alpha b<1\), and \(c\) is adjustable.

* Sketch the nullclines. 
* Show that the system exhibits relaxation oscillations for \(c_{1}<c<c_{2}\), where \(c_{1}\) and \(c_{2}\) are to be determined approximately. (It is too hard to find \(c_{1}\) and \(c_{2}\) exactly, but a good approximation can be achieved if you assume \(8\alpha b<<1\).)
* Show that the system is excitable if \(c\) is slightly less than \(c_{1}\).

### Weakly Nonlinear Oscillators

**7.6.1** Show that if (7.6.7) is expanded as a power series in \(\varepsilon\), we recover (7.6.17).

**7.6.2** (Calibrating regular perturbation theory) Consider the initial value problem \(\ddot{x}+x+\varepsilon x=0\), with \(x(0)=1,\ \dot{x}(0)=0\).

* Obtain the exact solution to the problem.
* Using regular perturbation theory, find \(x_{0}\), \(x_{1}\), and \(x_{2}\) in the series expansion \(x\big{(}t,\varepsilon\big{)}=x_{0}(t)+\varepsilon x_{1}(t)+\varepsilon^{2}x_{2}(t)+O(\varepsilon^{3})\).
* Does the perturbation solution contain secular terms? Did you expect to see any? Why?

**7.6.3** (More calibration) Consider the initial value problem \(\ddot{x}+x=\varepsilon\), with \(x(0)=1,\ \dot{x}(0)=0\).

* Solve the problem exactly.
* Using regular perturbation theory, find \(x_{0}\), \(x_{1}\), and \(x_{2}\) in the series expansion \(x(t,\varepsilon)=x_{0}(t)+\varepsilon x_{1}(t)+\varepsilon^{2}x_{2}(t)+O(\varepsilon^{3})\).
* Explain why the perturbation solution does or doesn't contain secular terms.

For each of the following systems \(\ddot{x}+x+\varepsilon h\big{(}x,\dot{x}\big{)}=0\), with \(0<\varepsilon<<1\), calculate the averaged equations (7.6.53) and analyze the long-term behavior of the system. Find the amplitude and frequency of any limit cycles for the original system. If possible, solve the averaged equations explicitly for \(x(t,\varepsilon)\), given the initial conditions \(x(0)=a,\ \dot{x}(0)=0\). 
\[\mathbf{7.6.4}\quad h\big{(}x,\dot{x}\big{)}=x\qquad\qquad\qquad\qquad\mathbf{7.6.5}\quad h\big{(}x,\dot{x}\big{)}=x\dot{x}^{2}\] \[\mathbf{7.6.6}\quad h\big{(}x,\dot{x}\big{)}=x\dot{x}\qquad\qquad\qquad\quad\;\mathbf{7.6.7}\quad h\big{(}x,\dot{x}\big{)}=(x^{4}-1)\dot{x}\] \[\mathbf{7.6.8}\quad h\big{(}x,\dot{x}\big{)}=\big{(}|x|-1\big{)}\dot{x}\qquad\qquad\mathbf{7.6.9}\quad h\big{(}x,\dot{x}\big{)}=(x^{2}-1)\dot{x}^{3}\] Derive the identity \(\sin\theta\cos^{2}\theta=\frac{1}{4}[\sin\theta+\sin 3\theta]\) as follows: Use the complex representations \[\cos\theta=\frac{e^{i\theta}+e^{-i\theta}}{2},\qquad\qquad\sin\theta=\frac{e^{i\theta}-e^{-i\theta}}{2i},\] multiply everything out, and then collect terms. This is always the most straightforward method of deriving such identities, and you don't have to remember any others. (Higher harmonics) Notice the third harmonic \(\sin 3(\tau+\phi)\) in Equation (7.6.39). The generation of _higher harmonics_ is a characteristic feature of nonlinear systems. To find the effect of such terms, return to Example 7.6.2 and solve for \(x_{1}\), assuming that the original system had initial conditions \(x(0)=2,\ \dot{x}(0)=0\). (Deriving the Fourier coefficients) This exercise leads you through the derivation of the formulas (7.6.51) for the Fourier coefficients. For convenience, let brackets denote the average of a function: \(\left\langle f(\theta)\right\rangle\equiv\frac{1}{2\pi}\int_{0}^{2\pi}f(\theta)d\theta\) for any \(2\pi\)-periodic function \(f\). Let \(k\) and \(m\) be arbitrary integers. 
a) Using integration by parts, complex exponentials, trig identities, or otherwise, derive the _orthogonality relations_ \[\left\langle\cos k\theta\,\sin m\theta\right\rangle=0,\,\text{for all}\,k,m;\] \[\left\langle\cos k\theta\,\cos m\theta\right\rangle=\left\langle\sin k\theta\,\sin m\theta\right\rangle=0,\,\text{ for all}\,k\neq m;\] \[\left\langle\cos^{2}k\theta\right\rangle=\left\langle\sin^{2}k\theta\right\rangle=\frac{1}{2},\,\text{for}\,k\neq 0.\] b) To find \(a_{k}\) for \(k\neq 0\), multiply both sides of (7.6.50) by \(\cos m\theta\) and average both sides term by term over the interval \([0,\,2\pi]\). Now using the orthogonality relations from part (a), show that _all the terms on the right-hand side cancel out, except the \(k=m\) term_! Deduce that \(\left\langle h(\theta)\cos k\theta\right\rangle=\frac{1}{2}a_{k},\) which is equivalent to the formula for \(a_{k}\) in (7.6.51). c) Similarly, derive the formulas for \(b_{k}\) and \(a_{0}\). (Exact period of a conservative oscillator) Consider the Duffing oscillator \(\ddot{x}+x+\varepsilon x^{3}=0,\text{ where }0<\varepsilon<<1,\,x(0)=a,\text{ and }\,\dot{x}(0)=0\). a) Using conservation of energy, express the oscillation period \(T(\varepsilon)\) as a certain integral. b) Expand the integrand as a power series in \(\varepsilon\), and integrate term by term to obtain an approximate formula \(T(\varepsilon)=c_{0}+c_{1}\varepsilon+c_{2}\varepsilon^{2}+O(\varepsilon^{3}).\) Find \(c_{0}\), \(c_{1}\), \(c_{2}\) and check that \(c_{0}\), \(c_{1}\) are consistent with (7.6.57). #### 7.6.14 (Computer test of two-timing) Consider the equation \(\ddot{x}+\varepsilon x^{3}+x=0\). 1. Derive the averaged equations. 2. Given the initial conditions \(x(0)=a,\ \dot{x}(0)=0\), solve the averaged equations and thereby find an approximate formula for \(x(t,\varepsilon)\). 3. 
Solve \(\ddot{x}+\varepsilon x^{3}+x=0\) numerically for \(a=1\), \(\varepsilon=2\), \(0\leq t\leq 50\), and plot the result on the same graph as your answer to part (b). Notice the impressive agreement, even though \(\varepsilon\) is not small! #### 7.6.15 (Pendulum) Consider the pendulum equation \(\ddot{x}+\sin x=0\). 1. Using the method of Example 7.6.4, show that the frequency of small oscillations of amplitude \(a<<1\) is given by \(\omega\approx 1-\frac{1}{16}a^{2}\). (Hint: \(\sin x\approx x-\frac{1}{6}x^{3}\), where \(\frac{1}{6}x^{3}\) is a "small" perturbation.) 2. Is this formula for \(\omega\) consistent with the exact results obtained in Exercise 6.7.4? #### 7.6.16 (Amplitude of the van der Pol oscillator via Green's theorem) Here's another way to determine the radius of the nearly circular limit cycle of the van der Pol oscillator \(\ddot{x}+\varepsilon\dot{x}(x^{2}-1)+x=0\), in the limit \(\varepsilon<<1\). Assume that the limit cycle is a circle of unknown radius \(a\) about the origin, and invoke the normal form of Green's theorem (i.e., the 2-D divergence theorem): \[\oint\limits_{C}\mathbf{v}\cdot\mathbf{n}\,dl=\iint\limits_{A}\nabla\cdot\mathbf{v}\,dA\] where \(C\) is the cycle and \(A\) is the region enclosed. By substituting \(\mathbf{v}=\dot{\mathbf{x}}=(\dot{x},\dot{y})\) and evaluating the integrals, show that \(a\approx 2\). #### 7.6.17 (Playing on a swing) A simple model for a child playing on a swing is \[\ddot{x}+(1+\varepsilon\gamma+\varepsilon\cos 2t)\sin x=0\] where \(\varepsilon\) and \(\gamma\) are parameters, and \(0<\varepsilon<<1\). The variable \(x\) measures the angle between the swing and the downward vertical. The term \(1+\varepsilon\gamma+\varepsilon\cos 2t\) models the effects of gravity and the periodic pumping of the child's legs at approximately twice the natural frequency of the swing. 
The question is: Starting near the fixed point \(x=0\), \(\dot{x}=0\), can the child get the swing going by pumping her legs this way, or does she need a push? 1. For small \(x\), the equation may be replaced by \(\ddot{x}+(1+\varepsilon\gamma+\varepsilon\cos 2t)x=0\). Show that the averaged equations (7.6.53) become \[r^{\prime}=\frac{1}{4}r\sin 2\phi,\ \ \ \ \ \phi^{\prime}=\frac{1}{2}(\gamma+\frac{1}{2}\cos 2\phi),\] where \(x=r\cos\theta=r(T)\cos(t+\phi(T))\), \(\dot{x}=-r\sin\theta=-r(T)\sin(t+\phi(T))\), and prime denotes differentiation with respect to slow time \(T=\varepsilon t\). Hint: To average terms like \(\cos 2t\cos\theta\sin\theta\) over one cycle of \(\theta\), recall that \(t=\theta-\phi\) and use trig identities:\[\bigl{\langle}\cos 2t\cos\theta\sin\theta\bigr{\rangle} =\tfrac{1}{2}\bigl{\langle}\cos(2\theta-2\phi)\sin 2\theta \bigr{\rangle}\] \[=\tfrac{1}{2}\bigl{\langle}(\cos 2\theta\cos 2\phi+\sin 2\theta\sin 2 \phi)\sin 2\theta\bigr{\rangle}\] \[=\tfrac{1}{4}\sin 2\phi.\] 2. Show that the fixed point \(r=0\) is unstable to exponentially growing oscillations, i.e., \(r(T)=r_{0}e^{kT}\) with \(k>0\), if \(|\gamma|<\gamma_{c}\) where \(\gamma_{c}\) is to be determined. (Hint: For \(r\) near \(0\), \(\phi^{\prime}>>r^{\prime}\) so \(\phi\) equilibrates relatively rapidly.) 3. For \(|\gamma|<\gamma_{c}\), write a formula for the growth rate \(k\) in terms of \(\gamma\). 4. How do the solutions to the averaged equations behave if \(|\gamma|>\gamma_{c}\)? 5. Interpret the results physically. (Mathieu equation and a super-slow time scale) Consider the _Mathieu equation_\(\ddot{x}+(a+\varepsilon\cos t)x=0\) with \(a\approx 1\). Using two-timing with a slow time \(T=\varepsilon^{2}t\), show that the solution becomes unbounded as \(t\to\infty\) if \(1-\tfrac{1}{12}\varepsilon^{2}+O(\varepsilon^{4})\leq a\leq 1+\tfrac{5}{12}\varepsilon^{2}+O(\varepsilon^{4})\). 
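The parametric instability of the swing can also be checked by integrating the linearized equation \(\ddot{x}+(1+\varepsilon\gamma+\varepsilon\cos 2t)x=0\) directly. Below is a minimal sketch in Python (using SciPy's `solve_ivp`); the values \(\varepsilon=0.2\), the integration time, and the helper name `max_amplitude` are illustrative choices introduced here, not part of the exercise.

```python
# Numerical check of parametric resonance in the linearized swing equation
#   x'' + (1 + eps*gamma + eps*cos(2t)) x = 0.
# Averaging predicts instability for |gamma| < 1/2, with slow-time growth rate
# k = (1/4)*sqrt(1 - 4*gamma^2). Here eps = 0.2 and t_max = 200 are test choices.
import numpy as np
from scipy.integrate import solve_ivp

def swing(t, state, eps, gamma):
    x, v = state
    return [v, -(1.0 + eps * gamma + eps * np.cos(2.0 * t)) * x]

def max_amplitude(gamma, eps=0.2, t_max=200.0):
    """Largest |x(t)| reached from the initial condition x = 1, xdot = 0."""
    sol = solve_ivp(swing, (0.0, t_max), [1.0, 0.0], args=(eps, gamma),
                    rtol=1e-9, atol=1e-12, dense_output=True)
    t = np.linspace(0.0, t_max, 4000)
    return np.max(np.abs(sol.sol(t)[0]))

print(max_amplitude(0.0))  # gamma = 0: resonant pumping, amplitude grows hugely
print(max_amplitude(1.0))  # gamma = 1 > 1/2: off resonance, amplitude stays bounded
```

For \(\gamma=0\) the amplitude grows by orders of magnitude, consistent with the averaged growth rate; for \(\gamma=1\) the solution merely beats within a bounded envelope.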
(Poincare-Lindstedt method) This exercise guides you through an improved version of perturbation theory known as the _Poincare-Lindstedt method_. Consider the Duffing equation \(\ddot{x}+x+\varepsilon x^{3}=0\), where \(0<\varepsilon<<1\), \(x(0)=a\), and \(\dot{x}(0)=0\). We know from phase plane analysis that the true solution \(x(t,\varepsilon)\) is periodic; our goal is to find an approximate formula for \(x(t,\varepsilon)\) that is valid for all \(t\). The key idea is to regard the frequency \(\omega\) as _unknown_ in advance, and to solve for it by demanding that \(x(t,\varepsilon)\) contains no secular terms. 1. Define a new time \(\tau=\omega t\) such that the solution has period \(2\pi\) with respect to \(\tau\). Show that the equation transforms to \(\omega^{2}x^{\prime\prime}+x+\varepsilon x^{3}=0\). 2. Let \(x(\tau,\varepsilon)=x_{0}(\tau)+\varepsilon x_{1}(\tau)+\varepsilon^{2}x_{2} (\tau)+O(\varepsilon^{3})\) and \(\omega=1+\varepsilon\omega_{1}+\varepsilon^{2}\omega_{2}+O(\varepsilon^{3})\). (We know already that \(\omega_{0}=1\) since the solution has frequency \(\omega=1\) when \(\varepsilon=0\).) Substitute these series into the differential equation and collect powers of \(\varepsilon\). Show that \[O(1) :\;x_{0}^{\prime\prime}+x_{0}=0\] \[O(\varepsilon) :\;x_{1}^{\prime\prime}+x_{1}=-2\omega_{1}x_{0}^{\prime\prime}-x _{0}^{3}.\] 3. Show that the initial conditions become \(x_{0}(0)=a\), \(\dot{x}_{0}(0)=0\); \(x_{k}(0)=\dot{x}_{k}(0)=0\) for all \(k>0\). 4. Solve the \(O(1)\) equation for \(x_{0}\). 5. Show that after substitution of \(x_{0}\) and the use of a trigonometric identity, the \(O(\varepsilon)\) equation becomes \(x_{1}^{\prime\prime}+x_{1}=(2\omega_{1}a-\tfrac{3}{4}a^{3})\cos\tau-\tfrac{1}{ 4}a^{3}\cos 3\tau\). Hence, _to avoid secular terms_, we need \(\omega_{1}=\tfrac{3}{8}a^{2}\). 6. Solve for \(x_{1}\). 
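The Poincare-Lindstedt prediction \(\omega=1+\tfrac{3}{8}\varepsilon a^{2}\) can be tested by measuring the Duffing period numerically. A sketch (Python with SciPy; \(\varepsilon=0.1\) and \(a=1\) are arbitrary test choices): starting from \((x,\dot{x})=(a,0)\), the velocity next vanishes at \(t=T/2\), which an integration event can catch.

```python
# Measure the period of the Duffing oscillator x'' + x + eps*x^3 = 0 and
# compare with the Poincare-Lindstedt prediction omega = 1 + (3/8)*eps*a^2.
import numpy as np
from scipy.integrate import solve_ivp

eps, a = 0.1, 1.0  # illustrative values; eps should be small

def duffing(t, state):
    x, v = state
    return [v, -x - eps * x**3]

# Starting from (x, v) = (a, 0), the velocity first returns to zero at t = T/2,
# at the opposite turning point x = -a, crossing zero from below.
def vel_zero(t, state):
    return state[1]
vel_zero.terminal = True
vel_zero.direction = 1

sol = solve_ivp(duffing, (0.0, 20.0), [a, 0.0], events=vel_zero,
                rtol=1e-10, atol=1e-12)
T_numeric = 2.0 * sol.t_events[0][0]
T_predicted = 2.0 * np.pi / (1.0 + 0.375 * eps * a**2)
print(T_numeric, T_predicted)  # should agree to O(eps^2)
```

Since the cubic term stiffens the spring, both periods come out shorter than \(2\pi\), and they differ only at \(O(\varepsilon^{2})\).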
Two comments: (1) This exercise shows that the Duffing oscillator has a frequency that depends on amplitude: \(\omega=1+\tfrac{3}{8}\varepsilon a^{2}+O(\varepsilon^{2})\), in agreement with (7.6.57). (2) The Poincare-Lindstedt method is good for approximating periodic solutions, but that's _all_ it can do; if you want to explore transients or non-periodic solutions, you can't use this method. Use two-timing or averaging theory instead. Show that if we had used regular perturbation to solve Exercise 7.6.19, we would have obtained \(x(t,\varepsilon) = a\cos t + \varepsilon a^{3}[ - \tfrac{3}{8}t\sin t + \tfrac{1}{32}(\cos 3t - \cos t)] + O(\varepsilon^{2})\). Why is this solution inferior? Using the Poincare-Lindstedt method, show that the frequency of the limit cycle for the van der Pol oscillator \(\ddot{x} + \varepsilon(x^{2} - 1)\dot{x} + x = 0\) is given by \(\omega = 1 - \tfrac{1}{16}\varepsilon^{2} + O(\varepsilon^{3})\). (Asymmetric spring) Use the Poincare-Lindstedt method to find the first few terms in the expansion for the solution of \(\ddot{x} + x + \varepsilon x^{2} = 0\), with \(x(0) = a\), \(\dot{x}(0) = 0\). Show that the center of oscillation is at \(x \approx -\tfrac{1}{2}\varepsilon a^{2}\), approximately. Find the approximate relation between amplitude and frequency for the periodic solutions of \(\ddot{x} - \varepsilon x\dot{x} + x = 0\). (Computer algebra) Using Mathematica, Maple, or some other computer algebra package, apply the Poincare-Lindstedt method to the problem \(\ddot{x} + x - \varepsilon x^{3} = 0\), with \(x(0) = a\), and \(\dot{x}(0) = 0\). Find the frequency \(\omega\) of periodic solutions, up to and including the \(O(\varepsilon^{3})\) term. (The method of averaging) Consider the weakly nonlinear oscillator \(\ddot{x} + x + \varepsilon h(x,\dot{x},t) = 0\). Let \(x(t) = r(t)\cos(t + \phi(t))\), \(\dot{x} = - r(t)\sin(t + \phi(t))\). This change of variables should be regarded as a definition of \(r(t)\) and \(\phi(t)\). 
* Show that \(\dot{r} = \varepsilon h\sin(t + \phi)\), \(r\dot{\phi} = \varepsilon h\cos(t + \phi)\). (Hence \(r\) and \(\phi\) are slowly varying for \(0 < \varepsilon << 1\), and thus \(x(t)\) is a sinusoidal oscillation modulated by a slowly drifting amplitude and phase.) * Let \(\big{\langle}r\big{\rangle}(t) = \overline{r}(t) = \tfrac{1}{2\pi}\int_{t - \pi}^{t + \pi}r(\tau)d\tau\) denote the running average of \(r\) over one cycle of the sinusoidal oscillation. Show that \(d\big{\langle}r\big{\rangle}/dt = \big{\langle}dr/dt\big{\rangle}\), i.e., it doesn't matter whether we differentiate or time-average first. * Show that \(d\overline{r}/dt = \varepsilon\big{\langle}h[r\cos(t + \phi), - r\sin(t + \phi), t]\sin(t + \phi)\big{\rangle}\). * Show that \[d\overline{r}/dt = \varepsilon\big{\langle}h[\overline{r}\cos(t + \overline{\phi}), - \overline{r}\sin(t + \overline{\phi}), t]\sin(t + \overline{\phi})\big{\rangle} + O(\varepsilon^{2})\] \[\overline{r}d\overline{\phi}/dt = \varepsilon\big{\langle}h[ \overline{r}\cos(t + \overline{\phi}), - \overline{r}\sin(t + \overline{\phi}), t]\cos(t + \overline{\phi})\big{\rangle} + O(\varepsilon^{2})\] where the barred quantities are to be treated as constants inside the averages. These equations are just the _averaged equations_ (7.6.53), derived by a different approach in the text. It is customary to drop the overbars; one usually doesn't distinguish between slowly varying quantities and their averages. (Calibrating the method of averaging) Consider the equation \(\dot{x}=-\varepsilon x\sin^{2}t,\) with \(0\leq\varepsilon<<1\) and \(x=x_{0}\) at \(t=0\). a) Find the _exact_ solution to the equation. b) Let \(\overline{x}(t)=\frac{1}{2\pi}\int_{t-\pi}^{t+\pi}x(\tau)\,d\tau\). Show that \(x(t)=\overline{x}(t)+O(\varepsilon)\). Use the method of averaging to find an approximate differential equation satisfied by \(\overline{x}\), and solve it. c) Compare the results of parts (a) and (b); how large is the error incurred by averaging? ## 8.0 Introduction This chapter extends our earlier work on bifurcations (Chapter 3). 
As we move up from one-dimensional to two-dimensional systems, we still find that fixed points can be created or destroyed or destabilized as parameters are varied--but now the same is true of closed orbits as well. Thus we can begin to _describe the ways in which oscillations can be turned on or off._ In this broader context, what exactly do we mean by a bifurcation? The usual definition involves the concept of "topological equivalence" (Section 6.3): if the phase portrait changes its topological structure as a parameter is varied, we say that a _bifurcation_ has occurred. Examples include changes in the number or stability of fixed points, closed orbits, or saddle connections as a parameter is varied. This chapter is organized as follows: for each bifurcation, we start with a simple prototypical example, and then graduate to more challenging examples, either briefly or in separate sections. Models of genetic switches, chemical oscillators, driven pendula and Josephson junctions are used to illustrate the theory. ### 8.1 Saddle-Node, Transcritical, and Pitchfork Bifurcations The bifurcations of fixed points discussed in Chapter 3 have analogs in two dimensions (and indeed, in _all_ dimensions). Yet it turns out that nothing really new happens when more dimensions are added--all the action is confined to a one-dimensional subspace along which the bifurcations occur, while in the extra dimensions the flow is either simple attraction or repulsion from that subspace, as we'll see below. ### Saddle-Node Bifurcation The saddle-node bifurcation is the basic mechanism for the creation and destruction of fixed points. Here's the prototypical example in two dimensions: \[\begin{array}{l}\dot{x}=\mu-x^{2}\\ \dot{y}=-y.\end{array}\] In the \(x\)-direction we see the bifurcation behavior discussed in Section 3.1, while in the \(y\)-direction the motion is exponentially damped. Consider the phase portrait as \(\mu\) varies. 
For \(\mu>0\), Figure 8.1.1 shows that there are two fixed points, a stable node at \((x*,y*)=(\sqrt{\mu},0)\) and a saddle at \((-\sqrt{\mu},0)\). As \(\mu\) decreases, the saddle and node approach each other, then collide when \(\mu=0\), and finally disappear when \(\mu<0\). Even after the fixed points have annihilated each other, they continue to influence the flow--as in Section 4.3, they leave a _ghost_, a bottleneck region that sucks trajectories in and delays them before allowing passage out the other side. For the same reasons as in Section 4.3, the time spent in the bottleneck generically increases as \((\mu-\mu_{c})^{-1/2}\), where \(\mu_{c}\) is the value at which the saddle-node bifurcation occurs. Some applications of this scaling law in condensed-matter physics are discussed by Strogatz and Westervelt (1989). Figure 8.1.1 is representative of the following more general situation. Consider a two-dimensional system \(\dot{x}=f(x,y)\), \(\dot{y}=g(x,\,y)\) that depends on a parameter \(\mu\). Suppose that for some value of \(\mu\) the nullclines intersect as shown in Figure 8.1.2. Notice that each intersection corresponds to a fixed point, since \(\dot{x}=0\) and \(\dot{y}=0\) simultaneously. Thus, to see how the fixed points move as \(\mu\) changes, we just have to watch the intersections. Now suppose that the nullclines pull away from each other as \(\mu\) varies, becoming _tangent_ at \(\mu=\mu_{c}\). Then the fixed points approach each other and collide when \(\mu=\mu_{c}\); after the nullclines pull apart, there are no intersections and the fixed points disappear with a bang. The point is that _all_ saddle-node bifurcations have this character locally. **EXAMPLE 8.1.1:** The following system has been discussed by Griffith (1971) as a model for a genetic control system. The activity of a certain gene is assumed to be directly induced by two copies of the protein for which it codes. 
In other words, the gene is stimulated by its own product, potentially leading to an autocatalytic feedback process. In dimensionless form, the equations are \[\begin{array}{l}\dot{x}=-ax+y\\ \dot{y}=\frac{x^{2}}{1+x^{2}}-by\end{array}\] where \(x\) and \(y\) are proportional to the concentrations of the protein and the messenger RNA from which it is translated, respectively, and \(a\), \(b>0\) are parameters that govern the rate of degradation of \(x\) and \(y\). Show that the system has three fixed points when \(a<a_{c}\), where \(a_{c}\) is to be determined. Show that two of these fixed points coalesce in a saddle-node bifurcation when \(a=a_{c}\). Then sketch the phase portrait for \(a<a_{c}\), and give a biological interpretation. _Solution:_ The nullclines are given by the line \(y=ax\) and the sigmoidal curve \[y=\frac{x^{2}}{b(1+x^{2})}\] as sketched in Figure 8.1.3. Now suppose we vary \(a\) while holding \(b\) fixed. This is simple to visualize, since \(a\) is the slope of the line. For small \(a\) there are three intersections, as in Figure 8.1.3. As \(a\) increases, the top two intersections approach each other and collide when the line intersects the curve tangentially. For larger values of \(a\), those fixed points disappear, leaving the origin as the only fixed point. To find \(a_{c}\), we compute the fixed points directly and find where they coalesce. The nullclines intersect when \[ax=\frac{x^{2}}{b(1+x^{2})}.\] One solution is \(x^{\ast}=0\), in which case \(y^{\ast}=0\). The other intersections satisfy the quadratic equation \[ab(1+x^{2})=x \tag{2}\] which has two solutions \[x^{\ast}=\frac{1\pm\sqrt{1-4a^{2}b^{2}}}{2ab}\] if \(1-4a^{2}b^{2}>0\), i.e., \(2ab<1\). These solutions coalesce when \(2ab=1\). Hence \[a_{c}=1/(2b).\] For future reference, note that \(x^{\ast}=1\) at the bifurcation. 
The nullclines (Figure 8.1.4) provide a lot of information about the phase portrait for \(a<a_{c}\). The vector field is vertical on the line \(y=ax\) and horizontal on the sigmoidal curve. Other arrows can be sketched by noting the signs of \(\dot{x}\) and \(\dot{y}\). It appears that the middle fixed point is a saddle and the other two are sinks. To confirm this, we turn now to the classification of the fixed points. The Jacobian matrix at (\(x\), \(y\)) is \[A=\begin{pmatrix}-a&1\\ \frac{2x}{(1+x^{2})^{2}}&-b\end{pmatrix}.\] \(A\) has trace \(\tau=-(a+b)<0\) so all the fixed points are either sinks or saddles, depending on the value of the determinant \(\Delta\). At (0,0), \(\Delta=ab>0\), so the origin is always a stable fixed point. In fact, it is a _stable node_, since \(\tau^{2}-4\Delta=(a-b)^{2}>0\) (except in the degenerate case \(a=b\), which we disregard). At the other two fixed points, \(\Delta\) looks messy but it can be simplified using (2). We find \[\Delta=ab-\frac{2x*}{\left(1+(x*)^{2}\right)^{2}}=ab\left[1-\frac{2}{1+(x*)^{ 2}}\right]=ab\left[\frac{(x*)^{2}-1}{1+(x*)^{2}}\right].\] So \(\Delta<0\) for the "middle" fixed point, which has \(0<x*<1\); this is a _saddle point_. The fixed point with \(x*>1\) is always a _stable node_, since \(\Delta<ab\) and therefore \(\tau^{2}-4\Delta>(a-b)^{2}>0\). The phase portrait is plotted in Figure 8.1.5. By looking back at Figure 8.1.4, we can see that the unstable manifold of the saddle is necessarily trapped in the narrow channel between the two nullclines. More importantly, the _stable_ manifold separates the plane into two regions, each a basin of attraction for a sink. The biological interpretation is that the system can act like a _biochemical switch_, but only if the mRNA and protein degrade slowly enough--specifically, their decay rates must satisfy \(ab<1/2\). 
In this case, there are two stable steady states: one at the origin, meaning that the gene is silent and there is no protein around to turn it on; and one where \(x\) and \(y\) are large, meaning that the gene is active and sustained by the high level of protein. The stable manifold of the saddle acts like a threshold; it determines whether the gene turns on or off, depending on the initial values of \(x\) and \(y\). As advertised, the flow in Figure 8.1.5 is qualitatively similar to that in the idealized Figure 8.1.1. All trajectories relax rapidly onto the unstable manifold of the saddle, which plays a completely analogous role to the \(x\)-axis in Figure 8.1.1. Thus, in many respects, the bifurcation is a fundamentally one-dimensional event, with the fixed points sliding toward each other along the unstable manifold, like beads on a string. _This is why we spent so much time looking at bifurcations in one-dimensional systems_--they're the building blocks of analogous bifurcations in higher dimensions. (The fundamental role of one-dimensional systems can be justified rigorously by "center manifold theory"--see Wiggins (1990) for an introduction.) **Transcritical and Pitchfork Bifurcations** Using the same idea as above, we can also construct prototypical examples of transcritical and pitchfork bifurcations at a stable fixed point. In the \(x\)-direction the dynamics are given by the normal forms discussed in Chapter 3, and in the \(y\)-direction the motion is exponentially damped. This yields the following examples: \[\begin{array}{ll}\dot{x}=\mu x-x^{2},&\dot{y}=-y&\mbox{(transcritical)}\\ \dot{x}=\mu x-x^{3},&\dot{y}=-y&\mbox{(supercritical pitchfork)}\\ \dot{x}=\mu x+x^{3},&\dot{y}=-y&\mbox{(subcritical pitchfork)}\end{array}\] The analysis in each case follows the same pattern, so we'll discuss only the supercritical pitchfork, and leave the other two cases as exercises. 
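Returning to Example 8.1.1, the saddle-node can be confirmed by counting fixed points on either side of \(a_{c}=1/(2b)\). A minimal sketch in Python (the choice \(b=1\), hence \(a_{c}=0.5\), and the helper name `fixed_points` are arbitrary, introduced only for illustration):

```python
# Fixed points of the Griffith gene model (Example 8.1.1):
#   x' = -a x + y,   y' = x^2/(1 + x^2) - b y.
# The nonzero fixed points satisfy ab(1 + x^2) = x, i.e., ab*x^2 - x + ab = 0.
# Counting real roots locates the saddle-node at a_c = 1/(2b).
import numpy as np

def fixed_points(a, b):
    """All fixed-point x-values: x = 0 plus the real roots of ab*x^2 - x + ab = 0."""
    roots = np.roots([a * b, -1.0, a * b])
    # A loose imaginary-part tolerance guards against round-off near the double root.
    real = [r.real for r in roots if abs(r.imag) < 1e-6]
    return sorted([0.0] + real)

b = 1.0
print(len(fixed_points(0.3, b)))  # a < a_c = 0.5: three fixed points
print(len(fixed_points(0.7, b)))  # a > a_c: only the origin remains
print(fixed_points(0.5, b))       # at a = a_c the two nonzero roots merge near x = 1
```

The root count drops from three to one as \(a\) passes through \(a_{c}\), with the merging pair located at \(x^{\ast}=1\), exactly as computed in the example.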
**Example 8.1.2**: _Plot the phase portraits for the supercritical pitchfork system \(\dot{x}=\mu x-x^{3}\), \(\dot{y}=-y\), for \(\mu<0\), \(\mu=0\), and \(\mu>0\)._ _Solution: For \(\mu<0\), the only fixed point is a stable node at the origin. For \(\mu=0\), the origin is still stable, but now we have very slow (algebraic) decay along the \(x\)-direction instead of exponential decay; this is the phenomenon of "critical slowing down" discussed in Section 3.4 and Exercise 2.4.9. For \(\mu>0\), the origin loses stability and gives birth to two new stable fixed points symmetrically located at \((x*,y*)=(\pm\sqrt{\mu},0)\). By computing the Jacobian at each point, you can check that the origin is a saddle and the other two fixed points are stable nodes. The phase portraits are shown in Figure 8.1.6._ As mentioned in Chapter 3, pitchfork bifurcations are common in systems that have a symmetry. Here's an example. **Example 8.1.3:** Show that a supercritical pitchfork bifurcation occurs at the origin in the system \[\begin{array}{l}\dot{x}=\mu x+y+\sin x\\ \dot{y}=x-y\end{array}\] and determine the bifurcation value \(\mu_{c}\). Plot the phase portrait near the origin for \(\mu\) slightly greater than \(\mu_{c}\). _Solution:_ The system is invariant under the change of variables \(x\rightarrow-x\), \(y\rightarrow-y\), so the phase portrait must be symmetric under reflection through the origin. The origin is a fixed point for all \(\mu\), and its Jacobian is \[A=\begin{pmatrix}\mu+1&1\\ 1&-1\end{pmatrix}\] which has \(\tau=\mu\) and \(\Delta=-(\mu+2)\). Hence the origin is a stable fixed point if \(\mu<-2\) and a saddle if \(\mu>-2\). This suggests that a pitchfork bifurcation occurs at \(\mu_{c}=-2\). To confirm this, we seek a symmetric pair of fixed points close to the origin for \(\mu\) close to \(\mu_{c}\). (Note that at this stage we don't know whether the bifurcation is sub- or supercritical.) 
The fixed points satisfy \(y=x\) and hence \((\mu+1)x+\sin x=0\). One solution is \(x=0\), but we've found that already. Now suppose \(x\) is small and nonzero, and expand the sine as a power series. Then \[(\mu+1)x+x-\frac{x^{3}}{3!}+O(x^{5})=0.\] The nonzero solutions therefore satisfy \(x^{2}\approx 6(\mu+2)\); a symmetric pair of fixed points \(x^{\ast}\approx\pm\sqrt{6(\mu+2)}\) exists only for \(\mu>-2\), i.e., only after the origin has lost stability, so the bifurcation is a supercritical pitchfork. Note that because of the approximations we've made, this picture is only valid _locally_ in both parameter and phase space--if we're not near the origin and if \(\mu\) is not close to \(\mu_{c}\), all bets are off. In all of the examples above, the bifurcation occurs when \(\Delta=0\), or equivalently, when one of the eigenvalues equals zero. More generally, the saddle-node, transcritical, and pitchfork bifurcations are all examples of _zero-eigenvalue bifurcations_. (There are other examples, but these are the most common.) Such bifurcations always involve the collision of two or more fixed points. In the next section we'll consider a fundamentally new kind of bifurcation, one that has no counterpart in one-dimensional systems. It provides a way for a fixed point to lose stability without colliding with any other fixed points. ### 8.2 Hopf Bifurcations Suppose a two-dimensional system has a stable fixed point. What are all the possible ways it could lose stability as a parameter \(\mu\) varies? The eigenvalues of the Jacobian are the key. If the fixed point is stable, the eigenvalues \(\lambda_{1}\), \(\lambda_{2}\) must both lie in the left half-plane \(\mathrm{Re}\ \lambda<0\). Since the \(\lambda\)'s satisfy a quadratic equation with real coefficients, there are two possible pictures: either the eigenvalues are both real and negative (Figure 8.2.1a) or they are complex conjugates (Figure 8.2.1b). To destabilize the fixed point, we need one or both of the eigenvalues to cross into the right half-plane as \(\mu\) varies. In Section 8.1 we explored the cases in which a real eigenvalue passes through \(\lambda=0\). 
These were just our old friends from Chapter 3, namely the saddle-node, transcritical, and pitchfork bifurcations. Now we consider the other possible scenario, in which two complex conjugate eigenvalues simultaneously cross the imaginary axis into the right half-plane. **Supercritical Hopf Bifurcation** Suppose we have a physical system that settles down to equilibrium through exponentially damped oscillations. In other words, small disturbances decay after "ringing" for a while (Figure 8.2.2a). Now suppose that the decay rate depends on a control parameter \(\mu\). If the decay becomes slower and slower and finally changes to _growth_ at a critical value \(\mu_{c}\), the equilibrium state will lose stability. In many cases the resulting motion is a small-amplitude, sinusoidal, limit cycle oscillation about the former steady state (Figure 8.2.2b). Then we say that the system has undergone a _supercritical Hopf bifurcation_. In terms of the flow in phase space, a supercritical Hopf bifurcation occurs when a stable spiral changes into an unstable spiral surrounded by a small, nearly elliptical limit cycle. Hopf bifurcations can occur in phase spaces of any dimension \(n\geq 2\), but as in the rest of this chapter, we'll restrict ourselves to two dimensions. A simple example of a supercritical Hopf bifurcation is given by the following system: \[\dot{r} =\mu r-r^{3}\] \[\dot{\theta} =\omega+br^{2}.\] There are three parameters: \(\mu\) controls the stability of the fixed point at the origin, \(\omega\) gives the frequency of infinitesimal oscillations, and \(b\) determines the dependence of frequency on amplitude for larger amplitude oscillations. Figure 8.2.3 plots the phase portraits for \(\mu\) above and below the bifurcation. When \(\mu<0\) the origin \(r=0\) is a stable spiral whose sense of rotation depends on the sign of \(\omega\). For \(\mu=0\) the origin is still a stable spiral, though a very weak one: the decay is only algebraically fast. 
(This case was shown in Figure 6.3.2. Recall that the linearization wrongly predicts a center at the origin.) Finally, for \(\mu>0\) there is an unstable spiral at the origin and a stable circular limit cycle at \(r=\sqrt{\mu}\). To see how the eigenvalues behave during the bifurcation, we rewrite the system in Cartesian coordinates; this makes it easier to find the Jacobian. We write \(x=r\cos\theta\), \(y=r\sin\theta\). Then \[\dot{x} =\dot{r}\cos\theta-r\dot{\theta}\sin\theta\] \[=(\mu r-r^{3})\cos\theta-r(\omega+br^{2})\sin\theta\] \[=\bigl{(}\mu-[x^{2}+y^{2}]\bigr{)}x-\bigl{(}\omega+b[x^{2}+y^{2}] \bigr{)}y\] \[=\mu x-\omega y+\text{cubic terms}\] and similarly \[\dot{y}=\omega x+\mu y+\text{cubic terms}.\] So the Jacobian at the origin is \[A=\begin{pmatrix}\mu&-\omega\\ \omega&\mu\end{pmatrix}\text{,}\] which has eigenvalues \[\lambda=\mu\pm i\omega\text{.}\] As expected, the eigenvalues cross the imaginary axis from left to right as \(\mu\) increases from negative to positive values. ### Rules of Thumb Our idealized case illustrates two rules that hold _generically_ for supercritical Hopf bifurcations: 1. The size of the limit cycle grows continuously from zero, and increases proportional to \(\sqrt{\mu-\mu_{c}}\), for \(\mu\) close to \(\mu_{c}\). 2. The frequency of the limit cycle is given approximately by \(\omega=\text{Im }\lambda\), evaluated at \(\mu=\mu_{c}\). This formula is exact at the birth of the limit cycle, and correct within \(O(\mu-\mu_{c})\) for \(\mu\) close to \(\mu_{c}\). The period is therefore \(T=(2\pi/\text{Im }\lambda)+O(\mu-\mu_{c})\). But our idealized example also has some artifactual properties. First, in Hopf bifurcations encountered in practice, the limit cycle is elliptical, not circular, and its shape becomes distorted as \(\mu\) moves away from the bifurcation point. Our example is only typical topologically, not geometrically. 
Second, in our idealized case the eigenvalues move on horizontal lines as \(\mu\) varies, i.e., Im \(\lambda\) is strictly independent of \(\mu\). Normally, the eigenvalues would follow a curvy path and cross the imaginary axis with nonzero slope (Figure 8.2.4). Figure 8.2.4 **Subcritical Hopf Bifurcation** Like pitchfork bifurcations, Hopf bifurcations come in both super- and subcritical varieties. The subcritical case is always much more dramatic, and potentially dangerous in engineering applications. After the bifurcation, the trajectories must _jump_ to a _distant_ attractor, which may be a fixed point, another limit cycle, infinity, or--in three and higher dimensions--a chaotic attractor. We'll see a concrete example of this last, most interesting case when we study the Lorenz equations (Chapter 9). But for now, consider the two-dimensional example \[\dot{r} = \mu r+r^{3}-r^{5}\] \[\dot{\theta} = \omega+br^{2}.\] The important difference from the earlier supercritical case is that the cubic term \(r^{3}\) is now _destabilizing_; it helps to drive trajectories away from the origin. The phase portraits are shown in Figure 8.2.5. For \(\mu<0\) there are two attractors, a stable limit cycle and a stable fixed point at the origin. Between them lies an unstable cycle, shown as a dashed curve in Figure 8.2.5; it's the player to watch in this scenario. As \(\mu\) increases, the unstable cycle tightens like a noose around the fixed point. A _subcritical Hopf bifurcation_ occurs at \(\mu=0\), where the unstable cycle shrinks to zero amplitude and engulfs the origin, rendering it unstable. For \(\mu>0\), the large-amplitude limit cycle is suddenly the only attractor in town. Solutions that used to remain near the origin are now forced to grow into large-amplitude oscillations. Note that the system exhibits _hysteresis_: once large-amplitude oscillations have begun, they cannot be turned off by bringing \(\mu\) back to zero. 
In fact, the large oscillations will persist until \(\mu=-1/4\), where the stable and unstable cycles collide and annihilate. This destruction of the large-amplitude cycle occurs via another type of bifurcation, to be discussed in Section 8.4. Subcritical Hopf bifurcations occur in the dynamics of nerve cells (Rinzel and Ermentrout 1989), in aeroelastic flutter and other vibrations of airplane wings (Dowell and Ilgamova 1988, Thompson and Stewart 1986), and in instabilities of fluid flows (Drazin and Reid 1981).

## Subcritical, Supercritical, or Degenerate Bifurcation?

Given that a Hopf bifurcation occurs, how can we tell if it's sub- or supercritical? The linearization doesn't provide a distinction: in both cases, a pair of eigenvalues moves from the left to the right half-plane. An analytical criterion exists, but it can be difficult to use (see Exercises 8.2.12-15 for some tractable cases). A quick and dirty approach is to use the computer. If a small, attracting limit cycle appears immediately after the fixed point goes unstable, and if its amplitude shrinks back to zero as the parameter is reversed, the bifurcation is supercritical; otherwise, it's probably subcritical, in which case the nearest attractor might be far from the fixed point, and the system may exhibit hysteresis as the parameter is reversed. Of course, computer experiments are not proofs and you should check the numerics carefully before making any firm conclusions. Finally, you should also be aware of a _degenerate Hopf bifurcation_. An example is given by the damped pendulum \(\ddot{x}+\mu\dot{x}+\sin x=0\). As we change the damping \(\mu\) from positive to negative, the fixed point at the origin changes from a stable to an unstable spiral. However at \(\mu=0\) we do _not_ have a true Hopf bifurcation because there are no limit cycles on either side of the bifurcation. Instead, at \(\mu=0\) we have a continuous band of closed orbits surrounding the origin.
These are not limit cycles! (Recall that a limit cycle is an _isolated_ closed orbit.) This degenerate case typically arises when a nonconservative system suddenly becomes conservative at the bifurcation point. Then the fixed point becomes a nonlinear center, rather than the weak spiral required by a Hopf bifurcation. See Exercise 8.2.11 for another example. ## Example 8.2.1: Consider the system \(\dot{x}=\mu x-y+xy^{2}\), \(\dot{y}=x+\mu y+y^{3}\). Show that a Hopf bifurcation occurs at the origin as \(\mu\) varies. Is the bifurcation subcritical, supercritical, or degenerate? _Solution:_ The Jacobian at the origin is \(A=\begin{pmatrix}\mu&-1\\ 1&\mu\end{pmatrix}\), which has \(\tau=2\mu\), \(\Delta=\mu^{2}+1>0\), and \(\lambda=\mu\pm i\). Hence, as \(\mu\) increases through zero, the origin changes from a stable spiral to an unstable spiral. This suggests that some kind of Hopf bifurcation takes place at \(\mu=0\). To decide whether the bifurcation is subcritical, supercritical, or degenerate, we use simple reasoning and numerical integration. If we transform the system to polar coordinates, we find that \[\dot{r}=\mu r+ry^{2},\] as you should check. Hence \(\dot{r}\geq\mu r\). This implies that for \(\mu>0\), \(r(t)\) grows _at least_ as fast as \(r_{0}\,e^{\mu t}\). In other words, all trajectories are repelled out to infinity! So there are certainly no closed orbits for \(\mu>0\). In particular, the unstable spiral is not surrounded by a stable limit cycle; hence the bifurcation cannot be supercritical. Could the bifurcation be degenerate? That would require that the origin be a nonlinear center when \(\mu=0\). But \(\dot{r}\) is strictly positive away from the \(x\)-axis, so closed orbits are still impossible. By process of elimination, we expect that the bifurcation is _subcritical_. This is confirmed by Figure 8.2.6, which is a computer-generated phase portrait for \(\mu=-0.2\). 
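The same behavior can be reproduced with a quick simulation (a sketch; the initial points, step size, and escape cutoff below are our own ad hoc choices, not from the text). Since \(\dot{r}=\mu r+ry^{2}\), a trajectory started well inside the unstable cycle should decay to the origin, while one started well outside should be repelled:

```python
# Integrate x' = mu*x - y + x*y^2, y' = x + mu*y + y^3 by RK4
# at mu = -0.2, the parameter value used for the phase portrait.
import math

MU = -0.2

def f(x, y):
    return MU * x - y + x * y**2, x + MU * y + y**3

def final_radius(x, y, dt=0.001, t_max=50.0, r_stop=5.0):
    """Follow one trajectory; stop early if it escapes past radius r_stop."""
    for _ in range(int(t_max / dt)):
        k1x, k1y = f(x, y)
        k2x, k2y = f(x + 0.5*dt*k1x, y + 0.5*dt*k1y)
        k3x, k3y = f(x + 0.5*dt*k2x, y + 0.5*dt*k2y)
        k4x, k4y = f(x + dt*k3x, y + dt*k3y)
        x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
        y += dt * (k1y + 2*k2y + 2*k3y + k4y) / 6
        if math.hypot(x, y) > r_stop:
            break
    return math.hypot(x, y)

print(final_radius(0.1, 0.0))   # started near the origin: decays toward 0
print(final_radius(1.5, 0.0))   # started far out: repelled past radius 5
```

The coexistence of an attracting origin with trajectories repelled from larger radii is consistent with an unstable cycle separating the two behaviors, as in the figure.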
Note that an _unstable_ limit cycle surrounds the stable fixed point, just as we expect in a subcritical bifurcation. Furthermore, the cycle is nearly elliptical and surrounds a gently winding spiral--these are typical features of _either_ kind of Hopf bifurcation.

### Oscillating Chemical Reactions

For an application of Hopf bifurcations, we now consider a class of experimental systems known as _chemical oscillators_. These systems are remarkable, both for their spectacular behavior and for the story behind their discovery. After presenting this background information, we analyze a simple model for oscillations in the chlorine dioxide-iodine-malonic acid reaction. The definitive reference on chemical oscillations is the book edited by Field and Burger (1985). See also Epstein et al. (1983), Winfree (1987b) and Murray (2002).

### Belousov's "Supposedly Discovered Discovery"

In the early 1950s the Russian biochemist Boris Belousov was trying to create a test tube caricature of the Krebs cycle, a metabolic process that occurs in living cells. When he mixed citric acid and bromate ions in a solution of sulfuric acid, and in the presence of a cerium catalyst, he observed to his astonishment that the mixture became yellow, then faded to colorless after about a minute, then returned to yellow a minute later, then became colorless again, and continued to oscillate dozens of times before finally reaching equilibrium after about an hour. Today it comes as no surprise that chemical reactions can oscillate spontaneously--such reactions have become a standard demonstration in chemistry classes, and you may have seen one yourself. (For recipes, see Winfree (1980).) But in Belousov's day, his discovery was so radical that he couldn't get his work published. It was thought that all solutions of chemical reagents must go _monotonically_ to equilibrium, because of the laws of thermodynamics. Belousov's paper was rejected by one journal after another.
According to Winfree (1987b, p. 161), one editor even added a snide remark about Belousov's "supposedly discovered discovery" to the rejection letter. Belousov finally managed to publish a brief abstract in the obscure proceedings of a Russian medical meeting (Belousov 1959), although his colleagues weren't aware of it until years later. Nevertheless, word of his amazing reaction circulated among Moscow chemists in the late 1950s, and in 1961 a graduate student named Zhabotinsky was assigned by his adviser to look into it. Zhabotinsky confirmed that Belousov was right all along, and brought this work to light at an international conference in Prague in 1968, one of the few times that Western and Soviet scientists were allowed to meet. At that time there was a great deal of interest in biological and biochemical oscillations (Chance et al. 1973) and the BZ reaction, as it came to be called, was seen as a manageable model of those more complex systems. The analogy to biology turned out to be surprisingly close: Zaikin and Zhabotinsky (1970) and Winfree (1972) observed beautiful propagating _waves_ of oxidation in thin unstirred layers of BZ reagent, and found that these waves annihilate upon collision, just like waves of excitation in neural or cardiac tissue. The waves always take the shape of expanding concentric rings or spirals (Color plate 1). Spiral waves are now recognized to be a ubiquitous feature of chemical, biological, and physical excitable media; in particular, spiral waves and their three-dimensional analogs, "scroll waves", appear to be implicated in certain cardiac arrhythmias, a problem of great medical importance (Winfree 1987b). Boris Belousov would be pleased to see what he started. In 1980, he and Zhabotinsky were awarded the Lenin Prize, the Soviet Union's highest medal, for their pioneering work on oscillating reactions. Unfortunately, Belousov had passed away ten years earlier.
For more about the history of the BZ reaction, see Winfree (1984, 1987b). An English translation of Belousov's original paper from 1951 appears in Field and Burger (1985).

### Chlorine Dioxide-Iodine-Malonic Acid Reaction

The mechanisms of chemical oscillations can be very complex. The BZ reaction is thought to involve more than twenty elementary reaction steps, but luckily many of them equilibrate rapidly--this allows the kinetics to be reduced to as few as three differential equations. See Tyson (1985) for this reduced system and its analysis. In a similar spirit, Lengyel et al. (1990) have proposed and analyzed a particularly elegant model of another oscillating reaction, the chlorine dioxide-iodine-malonic acid (ClO2-I2-MA) reaction. Their experiments show that the following three reactions and empirical rate laws capture the behavior of the system: \[\text{MA}+\text{I}_{2}\to\text{IMA}+\text{I}^{-}+\text{H}^{+};\qquad\frac{d[\text{I}_{2}]}{dt}=-\frac{k_{1a}[\text{MA}][\text{I}_{2}]}{k_{1b}+[\text{I}_{2}]} \tag{1}\] \[\text{ClO}_{2}+\text{I}^{-}\to\text{ClO}_{2}^{-}+\tfrac{1}{2}\text{I}_{2};\qquad\frac{d[\text{ClO}_{2}]}{dt}=-k_{2}[\text{ClO}_{2}][\text{I}^{-}] \tag{2}\] \[\text{ClO}_{2}^{-}+4\text{I}^{-}+4\text{H}^{+}\to\text{Cl}^{-}+2\text{I}_{2}+2\text{H}_{2}\text{O};\] \[\frac{d[\text{ClO}_{2}^{-}]}{dt}=-k_{3a}[\text{ClO}_{2}^{-}][\text{I}^{-}][\text{H}^{+}]-k_{3b}[\text{ClO}_{2}^{-}][\text{I}_{2}]\frac{[\text{I}^{-}]}{u+[\text{I}^{-}]^{2}} \tag{3}\] Typical values of the concentrations and kinetic parameters are given in Lengyel et al. (1990) and Lengyel and Epstein (1991).
Numerical integrations of (1)-(3) show that the model exhibits oscillations that closely resemble those observed experimentally. However this model is still too complicated to handle analytically. To simplify it, Lengyel et al. (1990) use a result found in their simulations: Three of the reactants (MA, I2, and ClO2) vary much more slowly than the intermediates I- and ClO2-, which change by several orders of magnitude during an oscillation period. By approximating the concentrations of the slow reactants as _constants_ and making other reasonable simplifications, they reduce the system to a two-variable model. (Of course, since this approximation neglects the slow consumption of the reactants, the model will be unable to account for the eventual approach to equilibrium.) After suitable nondimensionalization, the model becomes \[\dot{x}=a-x-\frac{4xy}{1+x^{2}} \tag{4}\] \[\dot{y}=bx\bigg{[}1-\frac{y}{1+x^{2}}\bigg{]} \tag{5}\] where \(x\) and \(y\) are the dimensionless concentrations of \(\text{I}^{-}\) and \(\text{ClO}_{2}^{-}\). The parameters \(a\), \(b>0\) depend on the empirical rate constants and on the concentrations assumed for the slow reactants. We begin the analysis of (4), (5) by constructing a trapping region and applying the Poincare-Bendixson theorem. Then we'll show that the chemical oscillations arise from a supercritical Hopf bifurcation. **Example 8.3.1:** Prove that the system (4), (5) has a closed orbit in the positive quadrant \(x,y>0\) if \(a\) and \(b\) satisfy certain constraints, to be determined. _Solution:_ As in Example 7.3.2, the nullclines help us to construct a trapping region. Equation (4) shows that \(\dot{x}=0\) on the curve \[y=\frac{(a-x)(1+x^{2})}{4x} \tag{6}\] and (5) shows that \(\dot{y}=0\) on the \(y\)-axis and on the parabola \(y=1+x^{2}\). These nullclines are sketched in Figure 8.3.1, along with some representative vectors.
(We've taken some pedagogical license with Figure 8.3.1; the curvature of the nullcline (6) has been exaggerated to highlight its shape, and to give us more room to draw the vectors.) Now consider the dashed box shown in Figure 8.3.2. It's a trapping region because all the vectors on the boundary point into the box. We can't apply the Poincare-Bendixson theorem yet, because there's a fixed point \[x^{\bullet}=a/5,\qquad y^{\bullet}=1+(x^{\bullet})^{2}=1+(a/5)^{2}\] inside the box at the intersection of the nullclines. But now we argue as in Example 7.3.3: if the fixed point turns out to be a _repeller_, we _can_ apply the Poincare-Bendixson theorem to the "punctured" box obtained by removing the fixed point. All that remains is to see under what conditions (if any) the fixed point is a repeller. The Jacobian at \((x^{\bullet},y^{\bullet})\) is \[\frac{1}{1+(x^{\bullet})^{2}}\begin{pmatrix}3(x^{\bullet})^{2}-5&-4x^{\bullet}\\ 2b(x^{\bullet})^{2}&-bx^{\bullet}\end{pmatrix}.\] (We've used the relation \(y^{\bullet}=1+(x^{\bullet})^{2}\) to simplify some of the entries in the Jacobian.) The determinant and trace are given by \[\Delta=\frac{5bx^{\bullet}}{1+(x^{\bullet})^{2}}>0,\quad\quad\tau=\frac{3(x^{\bullet})^{2}-5-bx^{\bullet}}{1+(x^{\bullet})^{2}}.\] We're in luck--since \(\Delta>0\), the fixed point is never a saddle. Hence (\(x^{\bullet}\),\(y^{\bullet}\)) is a repeller if \(\tau>0\), i.e., if \[b<b_{c}\equiv 3a/5-25/a. \tag{7}\] When (7) holds, the Poincare-Bendixson theorem implies the existence of a closed orbit somewhere in the punctured box. **Example 8.3.2**: Using numerical integration, show that a Hopf bifurcation occurs at \(b=b_{c}\) and decide whether the bifurcation is sub- or supercritical. _Solution:_ The analytical results above show that as \(b\) decreases through \(b_{c}\), the fixed point changes from a stable spiral to an unstable spiral; this is the signature of a Hopf bifurcation.
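A minimal simulation of (4), (5) along these lines (a sketch: the step size, run time, initial condition, and thresholds below are our own ad hoc choices; we take \(a=10\), so that (7) gives \(b_{c}=3.5\) and the fixed point is \((2,5)\)):

```python
# Integrate the two-variable ClO2-I2-MA model (4), (5) by RK4
# on either side of the Hopf bifurcation b_c = 3a/5 - 25/a.
import math

A = 10.0                      # so b_c = 3.5 and (x*, y*) = (2, 5)

def f(x, y, b):
    dx = A - x - 4 * x * y / (1 + x**2)
    dy = b * x * (1 - y / (1 + x**2))
    return dx, dy

def run(b, x=2.05, y=5.05, dt=0.002, t_max=150.0):
    """Start near the fixed point; return (max, final) distance from
    (x*, y*) over the second half of the run."""
    xs, ys = A / 5, 1 + (A / 5)**2
    n = int(t_max / dt)
    max_dist = 0.0
    for i in range(n):
        k1 = f(x, y, b)
        k2 = f(x + 0.5*dt*k1[0], y + 0.5*dt*k1[1], b)
        k3 = f(x + 0.5*dt*k2[0], y + 0.5*dt*k2[1], b)
        k4 = f(x + dt*k3[0], y + dt*k3[1], b)
        x += dt * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]) / 6
        y += dt * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]) / 6
        if i > n // 2:
            max_dist = max(max_dist, math.hypot(x - xs, y - ys))
    return max_dist, math.hypot(x - xs, y - ys)

print(run(4.0))   # b > b_c: the trajectory spirals into the fixed point
print(run(2.0))   # b < b_c: sustained oscillation around the fixed point
```

For \(b>b_{c}\) the distance from the fixed point shrinks to zero; for \(b<b_{c}\) it settles into a sustained oscillation, as in the phase portraits described next.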
Figure 8.3.3 plots two typical phase portraits. (Here we have chosen \(a=10\); then (7) implies \(b_{c}=3.5\).) When \(b>b_{c}\), all trajectories spiral into the stable fixed point (Figure 8.3.3a), while for \(b<b_{c}\) they are attracted to a stable limit cycle (Figure 8.3.3b). Hence the bifurcation is _supercritical_--after the fixed point loses stability, it is surrounded by a stable limit cycle. Moreover, by plotting phase portraits as \(b\to b_{c}\) from below, we could confirm that the limit cycle shrinks continuously to a point, as required. Our results are summarized in the stability diagram in Figure 8.3.4. The boundary between the two regions is given by the Hopf bifurcation locus \(b=3a/5-25/a\). **Example 8.3.3**: _Approximate the period of the limit cycle for \(b\) slightly less than \(b_{c}\)._ _Solution:_ The frequency is approximated by the imaginary part of the eigenvalues at the bifurcation. As usual, the eigenvalues satisfy \(\lambda^{2}-\tau\lambda+\Delta=0\). Since \(\tau=0\) and \(\Delta>0\) at \(b=b_{c}\), we find \[\lambda=\pm i\sqrt{\Delta}.\] But at \(b_{c}\), \[\Delta=\frac{5b_{c}x^{*}}{1+(x^{*})^{2}}=\frac{5\left[\frac{3a}{5}-\frac{25}{a}\right]\left(\frac{a}{5}\right)}{1+(a/5)^{2}}=\frac{15a^{2}-625}{a^{2}+25}.\] Hence \(\omega\approx\Delta^{1/2}=\left[(15a^{2}-625)/(a^{2}+25)\right]^{1/2}\) and therefore \[T =2\pi/\omega\] \[=2\pi\left[(a^{2}+25)/(15a^{2}-625)\right]^{1/2}.\] A graph of \(T(a)\) is shown in Figure 8.3.5. As \(a\rightarrow\infty\), \(T\to 2\pi/\sqrt{15}\approx 1.62\).

### 8.4 Global Bifurcations of Cycles

In two-dimensional systems, there are four common ways in which limit cycles are created or destroyed. The Hopf bifurcation is the most famous, but the other three deserve their day in the sun. They are harder to detect because they involve large regions of the phase plane rather than just the neighborhood of a single fixed point.
Hence they are called _global bifurcations_. In this section we offer some prototypical examples of global bifurcations, and then compare them to one another and to the Hopf bifurcation. A few of their scientific applications are discussed in Sections 8.5 and 8.6 and in the exercises.

### Saddle-node Bifurcation of Cycles

A bifurcation in which two limit cycles coalesce and annihilate is called a _fold_ or _saddle-node bifurcation of cycles_, by analogy with the related bifurcation of fixed points. An example occurs in the system \[\dot{r} = \mu r+r^{3}-r^{5}\] \[\dot{\theta} = \omega+br^{2}\] studied in Section 8.2. There we were interested in the subcritical Hopf bifurcation at \(\mu=0\); now we concentrate on the dynamics for \(\mu<0\). It is helpful to regard the radial equation \(\dot{r}=\mu r+r^{3}-r^{5}\) as a one-dimensional system. As you should check, this system undergoes a saddle-node bifurcation of fixed points at \(\mu_{c}=-1/4\). Now returning to the two-dimensional system, these fixed points correspond to circular _limit cycles_. Figure 8.4.1 plots the "radial phase portraits" and the corresponding behavior in the phase plane. At \(\mu_{c}\) a half-stable cycle is born out of the clear blue sky. As \(\mu\) increases it splits into a pair of limit cycles, one stable, one unstable. Viewed in the other direction, a stable and unstable cycle collide and disappear as \(\mu\) decreases through \(\mu_{c}\). Notice that the origin remains stable throughout; it does not participate in this bifurcation. For future reference, note that at birth the cycle has \(O(1)\) amplitude, in contrast to the Hopf bifurcation, where the limit cycle has small amplitude proportional to \((\mu-\mu_{c})^{1/2}\).

### Infinite-period Bifurcation

Consider the system \[\begin{split}&\dot{r}=r(1-r^{2})\\ &\dot{\theta}=\mu-\sin\theta\end{split}\] where \(\mu\geq 0\).
This system combines two one-dimensional systems that we have studied previously in Chapters 3 and 4. In the radial direction, all trajectories (except \(r^{\bullet}=0\)) approach the unit circle monotonically as \(t\rightarrow\infty\). In the angular direction, the motion is everywhere counterclockwise if \(\mu>1\), whereas there are two invariant rays defined by \(\sin\theta=\mu\) if \(\mu<1\). Hence as \(\mu\) decreases through \(\mu_{c}=1\), the phase portraits change as in Figure 8.4.2. As \(\mu\) decreases, the limit cycle \(r=1\) develops a bottleneck at \(\theta=\pi/2\) that becomes increasingly severe as \(\mu\to 1^{+}\). The oscillation period lengthens and finally becomes infinite at \(\mu_{c}=1\), when a fixed point appears on the circle; hence the term _infinite-period bifurcation_. For \(\mu<1\), the fixed point splits into a saddle and a node. As the bifurcation is approached, the amplitude of the oscillation stays \(O(1)\) but the period increases like \((\mu-\mu_{c})^{-1/2}\), for the reasons discussed in Section 4.3.

### Homoclinic Bifurcation

In this scenario, part of a limit cycle moves closer and closer to a saddle point. At the bifurcation the cycle touches the saddle point and becomes a homoclinic orbit. This is another kind of infinite-period bifurcation; to avoid confusion, we'll call it a _saddle-loop_ or _homoclinic bifurcation_. It is hard to find an analytically transparent example, so we resort to the computer. Consider the system \[\begin{split}\dot{x}&=y\\ \dot{y}&=\mu y+x-x^{2}+xy.\end{split}\] Figure 8.4.3 plots a series of phase portraits before, during, and after the bifurcation; only the important features are shown. Numerically, the bifurcation is found to occur at \(\mu_{c}\approx-0.8645\). For \(\mu<\mu_{c}\), say \(\mu=-0.92\), a stable limit cycle passes close to a saddle point at the origin (Figure 8.4.3a).
As \(\mu\) increases to \(\mu_{c}\), the limit cycle swells (Figure 8.4.3b) and bangs into the saddle, creating a homoclinic orbit (Figure 8.4.3c). Once \(\mu>\mu_{c}\), the saddle connection breaks and the loop is destroyed (Figure 8.4.3d). The key to this bifurcation is the behavior of the unstable manifold of the saddle. Look at the branch of the unstable manifold that leaves the origin to the northeast: after it loops around, it either hits the origin (Figure 8.4.3c) or veers off to one side or the other (Figures 8.4.3a, d).

**Scaling Laws**

For each of the bifurcations given here, there are characteristic _scaling laws_ that govern the amplitude and period of the limit cycle as the bifurcation is approached. Let \(\mu\) denote some dimensionless measure of the distance from the bifurcation, and assume that \(\mu\ll 1\). The generic scaling laws for bifurcations of cycles in two-dimensional systems are given in Table 8.4.1. \begin{tabular}{|l|c|c|} \cline{2-3} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{**Amplitude of**} & \multicolumn{1}{c|}{} \\ & **stable limit cycle** & **Period of cycle** \\ \hline Supercritical Hopf & \(O(\mu^{1/2})\) & \(O(1)\) \\ Saddle-node bifurcation of cycles & \(O(1)\) & \(O(1)\) \\ Infinite-period & \(O(1)\) & \(O(\mu^{-1/2})\) \\ Homoclinic & \(O(1)\) & \(O(\ln\mu)\) \\ \hline \end{tabular} **Table 8.4.1** All of these laws have been explained previously, except those for the homoclinic bifurcation. The scaling of the period in that case is obtained by estimating the time required for a trajectory to pass by a saddle point (see Exercise 8.4.12 and Gaspard 1990). Exceptions to these rules can occur, but only if there is some symmetry or other special feature that renders the problem nongeneric, as in the following example. **Example 8.4.1:** The van der Pol oscillator \(\ddot{x}+\varepsilon\dot{x}(x^{2}-1)+x=0\) does not seem to fit anywhere in Table 8.4.1.
At \(\varepsilon=0\), the eigenvalues at the origin are pure imaginary (\(\lambda=\pm i\)), suggesting that a Hopf bifurcation occurs at \(\varepsilon=0\). But we know from Section 7.6 that for \(0<\varepsilon\ll 1\), the system has a limit cycle of amplitude \(r\approx 2\). Thus the cycle is born "full grown," not with size \(O(\varepsilon^{1/2})\) as predicted by the scaling law. What's the explanation? _Solution:_ The bifurcation at \(\varepsilon=0\) is degenerate. The nonlinear term \(\varepsilon\dot{x}x^{2}\) vanishes at precisely the same parameter value as the eigenvalues cross the imaginary axis. That's a nongeneric coincidence if there ever was one! We can rescale \(x\) to remove this degeneracy. Write the equation as \(\ddot{x}+x+\varepsilon x^{2}\dot{x}-\varepsilon\dot{x}=0\). Let \(u^{2}=\varepsilon x^{2}\) to remove the \(\varepsilon\)-dependence of the nonlinear term. Then \(u=\varepsilon^{1/2}x\) and the equation becomes \[\ddot{u}+u+u^{2}\dot{u}-\varepsilon\dot{u}=0.\] Now the nonlinear term is not destroyed when the eigenvalues become pure imaginary. From Section 7.6 the limit cycle solution is \(x\left(t,\varepsilon\right)\approx 2\cos t\) for \(0<\varepsilon\ll 1\). In terms of \(u\) this becomes \[u(t,\varepsilon)\approx\left(2\sqrt{\varepsilon}\right)\cos t.\] Hence the amplitude grows like \(\varepsilon^{1/2}\), just as expected for a Hopf bifurcation. The scaling laws given here were derived by thinking about prototypical examples in _two-dimensional_ systems. In higher-dimensional phase spaces, the corresponding bifurcations obey the same scaling laws, but with two caveats: (1) Many _additional_ bifurcations of limit cycles become possible; thus our table is no longer exhaustive. (2) The homoclinic bifurcation becomes much more subtle to analyze. It often creates chaotic dynamics in its aftermath (Guckenheimer and Holmes 1983, Wiggins 1990).
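Incidentally, the "full grown at birth" claim in Example 8.4.1 is easy to confirm numerically (a sketch; the choice \(\varepsilon=0.1\), the step size, and the sampling window are our own, not from the text):

```python
# Amplitude of the van der Pol limit cycle for small epsilon:
# x'' + eps*(x^2 - 1)*x' + x = 0, written as a first-order system.
import math

def vdp_amplitude(eps, dt=0.005, t_transient=300.0, t_sample=50.0):
    """RK4-integrate past the transient, then return max |x| on the cycle."""
    x, v = 0.5, 0.0                    # arbitrary initial condition
    def f(x, v):
        return v, -x - eps * (x**2 - 1) * v
    amp = 0.0
    n_total = int((t_transient + t_sample) / dt)
    n_skip = int(t_transient / dt)
    for i in range(n_total):
        k1 = f(x, v)
        k2 = f(x + 0.5*dt*k1[0], v + 0.5*dt*k1[1])
        k3 = f(x + 0.5*dt*k2[0], v + 0.5*dt*k2[1])
        k4 = f(x + dt*k3[0], v + dt*k3[1])
        x += dt * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]) / 6
        v += dt * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]) / 6
        if i >= n_skip:
            amp = max(amp, abs(x))
    return amp

eps = 0.1
amp_x = vdp_amplitude(eps)
print(amp_x)                           # close to 2, not O(sqrt(eps))
print(math.sqrt(eps) * amp_x)          # amplitude of u, close to 2*sqrt(eps)
```

The measured amplitude of \(x\) stays near 2 as \(\varepsilon\to 0^{+}\), so the amplitude of \(u=\varepsilon^{1/2}x\) shrinks like \(2\sqrt{\varepsilon}\)--the generic Hopf scaling reappears in the rescaled variable.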
All of this begs the question: Why should you care about these scaling laws? Suppose you're an experimental scientist and the system you're studying exhibits a stable limit cycle oscillation. Now suppose you change a control parameter and the oscillation stops. By examining the scaling of the period and amplitude near this bifurcation, you can learn something about the system's dynamics (which are usually not known precisely, if at all). In this way, possible models can be eliminated or supported. For an example in physical chemistry, see Gaspard (1990). ### Hysteresis in the Driven Pendulum and Josephson Junction This section deals with a physical problem in which both homoclinic and infinite-period bifurcations arise. The problem was introduced back in Sections 4.4 and 4.6. At that time we were studying the dynamics of a damped pendulum driven by a constant torque, or equivalently, its high-tech analog, a superconducting Josephson junction driven by a constant current. Because we weren't ready for two-dimensional systems, we reduced both problems to vector fields on the circle by looking at the heavily _overdamped limit_ of negligible mass (for the pendulum) or negligible capacitance (for the Josephson junction). Now we're ready to tackle the full two-dimensional problem. As we claimed at the end of Section 4.6, for sufficiently weak damping the pendulum and the Josephson junction can exhibit intriguing hysteresis effects, thanks to the coexistence of a stable limit cycle and a stable fixed point. In physical terms, the pendulum can settle into either a rotating solution where it whirls over the top, or a stable rest state where gravity balances the applied torque. The final state depends on the initial conditions. Our goal now is to understand how this bistability comes about. We will phrase our discussion in terms of the Josephson junction, but will mention the pendulum analog whenever it seems helpful. 
### Governing Equations

As explained in Section 4.6, the governing equation for the Josephson junction is \[\frac{\hbar C}{2e}\ddot{\phi}+\frac{\hbar}{2eR}\dot{\phi}+I_{e}\sin\phi=I_{B} \tag{1}\] where \(\hbar\) is Planck's constant divided by \(2\pi\), \(e\) is the charge on the electron, \(I_{B}\) is the constant bias current, \(C\), \(R\), and \(I_{e}\) are the junction's capacitance, resistance, and critical current, and \(\phi(t)\) is the phase difference across the junction. To highlight the role of damping, we nondimensionalize (1) differently from Section 4.6. Let \[\tilde{t}=\left(\frac{2eI_{e}}{\hbar C}\right)^{1/2}t,\qquad I=\frac{I_{B}}{I_{e}},\qquad\alpha=\left(\frac{\hbar}{2eI_{e}R^{2}C}\right)^{1/2}. \tag{2}\] Then (1) becomes \[\phi^{\prime\prime}+\alpha\phi^{\prime}+\sin\phi=I \tag{3}\] where \(\alpha\) and \(I\) are the dimensionless damping and applied current, and the prime denotes differentiation with respect to \(\tilde{t}\). Here \(\alpha>0\) on physical grounds, and we may choose \(I\geq 0\) without loss of generality (otherwise, redefine \(\phi\rightarrow-\phi\)). Let \(y=\phi^{\prime}\). Then the system becomes \[\begin{array}{l}\phi^{\prime}=y\\ y^{\prime}=I-\sin\phi-\alpha y.\end{array} \tag{4}\] As in Section 6.7 the phase space is a _cylinder_, since \(\phi\) is an angular variable and \(y\) is a real number (best thought of as an angular velocity).

### Fixed Points

The fixed points of (4) satisfy \(y^{*}=0\) and \(\sin\phi^{*}=I\). Hence there are two fixed points on the cylinder if \(I<1\), and none if \(I>1\). When the fixed points exist, one is a saddle and the other is a sink, since the Jacobian \[A=\begin{pmatrix}0&1\\ -\cos\phi^{*}&-\alpha\end{pmatrix}\] has \(\tau=-\alpha<0\) and \(\Delta=\cos\phi^{*}=\pm\sqrt{1-I^{2}}\).
When \(\Delta>0\), we have a stable node if \(\tau^{2}-4\Delta=\alpha^{2}-4\sqrt{1-I^{2}}>0\), i.e., if the damping is strong enough or if \(I\) is close to 1; otherwise the sink is a stable spiral. At \(I=1\) the stable node and the saddle coalesce in a _saddle-node bifurcation of fixed points._

### Existence of a Closed Orbit

What happens when \(I>1\)? There are no more fixed points available; something new has to happen. We claim that _all trajectories are attracted to a unique, stable limit cycle._ The first step is to show that a periodic solution exists. The argument uses a clever idea introduced by Poincare long ago. Watch carefully--this idea will come up frequently in our later work. Consider the nullcline \(y=\alpha^{-1}\left(I-\sin\phi\right)\) where \(y^{\prime}=0\). The flow is downward above the nullcline and upward below it (Figure 8.5.1). In particular, all trajectories eventually enter the strip \(y_{1}\leq y\leq y_{2}\) (Figure 8.5.1), and stay in there forever. (Here \(y_{1}\) and \(y_{2}\) are any fixed numbers such that \(0<y_{1}<(I-1)/\alpha\) and \(y_{2}>(I+1)/\alpha\).) Inside the strip, the flow is always to the right, because \(y>0\) implies \(\phi^{\prime}>0\). Also, since \(\phi=0\) and \(\phi=2\pi\) are equivalent on the cylinder, we may as well confine our attention to the rectangular box \(0\leq\phi\leq 2\pi\), \(y_{1}\leq y\leq y_{2}\). This box contains all the information about the long-term behavior of the flow (Figure 8.5.2). Now consider a trajectory that starts at a height \(y\) on the left side of the box, and follow it until it intersects the right side of the box at some new height \(P(y)\), as shown in Figure 8.5.2. The mapping from \(y\) to \(P(y)\) is called the _Poincare map_. It tells us how the height of a trajectory changes after one lap around the cylinder (Figure 8.5.3).
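Although \(P(y)\) has no explicit formula, it is easy to compute numerically. The sketch below (the values \(I=1.5\) and \(\alpha=0.75\), the step counts, and the bisection scheme are our own arbitrary choices) exploits the fact that \(\phi^{\prime}>0\) inside the strip, so the trajectory can be parametrized by \(\phi\), with \(dy/d\phi=(I-\sin\phi-\alpha y)/y\); one lap of RK4 in \(\phi\) gives \(P(y)\), and bisection on \(P(y)-y\) locates the fixed point \(y^{*}\):

```python
# Poincare map P(y) for the Josephson junction system (4): inside the
# strip y > 0 we may use phi as the independent variable, with
# dy/dphi = (I - sin(phi) - alpha*y) / y.
import math

I, ALPHA = 1.5, 0.75          # dimensionless current and damping (I > 1)

def P(y, n=4000):
    """Height after one lap around the cylinder (RK4 in phi)."""
    h = 2 * math.pi / n
    phi = 0.0
    def f(phi, y):
        return (I - math.sin(phi) - ALPHA * y) / y
    for _ in range(n):
        k1 = f(phi, y)
        k2 = f(phi + h/2, y + h/2 * k1)
        k3 = f(phi + h/2, y + h/2 * k2)
        k4 = f(phi + h, y + h * k3)
        y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        phi += h
    return y

y1, y2 = 0.5, 3.5             # 0 < y1 < (I-1)/alpha,  y2 > (I+1)/alpha
print(P(y1) > y1, P(y2) < y2) # the graph of P crosses the diagonal

lo, hi = y1, y2               # bisect P(y) - y = 0 to locate y*
for _ in range(60):
    mid = (lo + hi) / 2
    if P(mid) > mid:
        lo = mid
    else:
        hi = mid
y_star = (lo + hi) / 2
print(y_star, abs(P(y_star) - y_star))  # the closed orbit's height at phi = 0
```

The two printed inequalities are exactly the claims \(P(y_{1})>y_{1}\) and \(P(y_{2})<y_{2}\) argued below, and the bisection implements the intermediate value theorem step numerically.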
The Poincare map is also called the _first-return map_, because if a trajectory starts at a height \(y\) on the line \(\phi=0\) (mod \(2\pi\)), then \(P(y)\) is its height when it returns to that line for the first time. Now comes the key point: we can't compute \(P(y)\) explicitly, but _if we can show that there's a point \(y^{*}\) such that \(P(y^{*})=y^{*}\), then the corresponding trajectory will be a closed orbit_ (because it returns to the same location on the cylinder after one lap). To show that such a \(y^{*}\) must exist, we need to know what the graph of \(P(y)\) looks like, at least roughly. Consider a trajectory that starts at \(y=y_{1}\), \(\phi=0\). We claim that \[P(y_{1})>y_{1}.\] This follows because the flow is strictly upward at first, and the trajectory can never return to the line \(y=y_{1}\), since the flow is everywhere upward on that line (recall Figures 8.5.1 and 8.5.2). By the same kind of argument, \[P(y_{2})<y_{2}.\] Furthermore, \(P(y)\) is a _continuous_ function. This follows from the theorem that solutions of differential equations depend continuously on initial conditions, if the vector field is smooth enough. And finally, \(P(y)\) is a _monotonic_ function. (By drawing pictures, you can convince yourself that if \(P(y)\) were not monotonic, two trajectories would cross--and that's forbidden.) Taken together, these results imply that \(P(y)\) has the shape shown in Figure 8.5.4. By the intermediate value theorem (or common sense), the graph of \(P(y)\) must cross the \(45^{\circ}\) diagonal _somewhere_; that intersection is our desired \(y^{*}\).

### Uniqueness of the Limit Cycle

The argument above proves the _existence_ of a closed orbit, and almost proves its uniqueness. But we haven't excluded the possibility that \(P(y)\equiv y\) on some interval, in which case there would be a band of infinitely many closed orbits.
To nail down the uniqueness part of our claim, we recall from Section 6.7 that there are two topologically different kinds of periodic orbits on a cylinder: _librations_ and _rotations_ (Figure 8.5.5). For \(I>1\), librations are impossible because any libration must encircle a fixed point, by index theory--but there are no fixed points when \(I>1\). Hence we only need to consider rotations. Suppose there were two different rotations. The phase portrait on the cylinder would have to look like Figure 8.5.6. One of the rotations would have to lie _strictly above_ the other because trajectories can't cross. Let \(y_{U}(\phi)\) and \(y_{L}(\phi)\) denote the "upper" and "lower" rotations, where \(y_{U}(\phi)>y_{L}(\phi)\) for all \(\phi\). The existence of two such rotations leads to a contradiction, as shown by the following energy argument. Let \[E=\tfrac{1}{2}y^{2}-\cos\phi. \tag{5}\] After one circuit around any rotation \(y(\phi)\), the change in energy \(\Delta E\) must vanish. Hence \[0=\Delta E=\int_{0}^{2\pi}\frac{dE}{d\phi}d\phi. \tag{6}\] But (5) implies \[\frac{dE}{d\phi}=y\frac{dy}{d\phi}+\sin\phi \tag{7}\] and \[\frac{dy}{d\phi}=\frac{y^{\prime}}{\phi^{\prime}}=\frac{I-\sin\phi-\alpha y}{y}, \tag{8}\] from (4). Substituting (8) into (7) gives \(dE/d\phi=I-\alpha y\). Thus (6) implies \[0=\int_{0}^{2\pi}(I-\alpha y)\,d\phi\] on any rotation \(y(\phi)\). Equivalently, any rotation must satisfy \[\int_{0}^{2\pi}y(\phi)\,d\phi=\frac{2\pi I}{\alpha}. \tag{9}\] But since \(y_{U}(\phi)>y_{L}(\phi)\), \[\int_{0}^{2\pi}y_{U}(\phi)\,d\phi>\int_{0}^{2\pi}y_{L}(\phi)\,d\phi,\] and so (9) can't hold for _both_ rotations. This contradiction proves that the rotation for \(I>1\) is unique, as claimed.

### Homoclinic Bifurcation

Suppose we slowly decrease \(I\), starting from some value \(I>1\). What happens to the rotating solution?
Think about the pendulum: as the driving torque is reduced, the pendulum struggles more and more to make it over the top. At some critical value \(I_{c}<1\), the torque is insufficient to overcome gravity and damping, and the pendulum can no longer whirl. Then the rotation disappears and all solutions damp out to the rest state. Our goal now is to visualize the corresponding bifurcation in phase space. In Exercise 8.5.2, you're asked to show (by numerical computation of the phase portrait) that if \(\alpha\) is sufficiently small, the stable limit cycle is destroyed in a _homoclinic bifurcation_ (Section 8.4). The following schematic drawings summarize the results you should get. First suppose \(I_{c}<I<1\). The system is bistable: a sink coexists with a stable limit cycle (Figure 8.5.7). Keep your eye on the trajectory labeled \(U\) in Figure 8.5.7. It is a branch of the unstable manifold of the saddle. As \(t\rightarrow\infty\), \(U\) asymptotically approaches the stable limit cycle. As \(I\) decreases, the stable limit cycle moves down and squeezes \(U\) closer to the stable manifold of the saddle. When \(I=I_{c}\), _the limit cycle merges with U_ in a homoclinic bifurcation. Now \(U\) is a homoclinic orbit--it joins the saddle to itself (Figure 8.5.8). Finally, when \(I<I_{c}\) the saddle connection breaks and \(U\) spirals into the sink (Figure 8.5.9).

Figure 8.5.7

**Plate 1:** Spiral waves of chemical activity in a shallow dish of the Belousov-Zhabotinsky reaction (Section 8.3). These snapshots read from left to right and top to bottom. The complicated initial condition shown in the upper left was created by touching the liquid with a hot wire, thereby inducing an expanding circular wave of oxidation, and then disrupting this wave by gently rocking the dish. As time evolves, the blue waves propagate by diffusion through the motionless reddish-orange liquid. Whenever two waves collide, they annihilate each other, like grassfires rushing head-on.
Ultimately the system organizes itself into a pair of counterrotating spirals. Reproduced from Winfree (1974). Photographs by Fritz Goro.

**Plate 2:** Divergence of nearby trajectories on the Lorenz attractor [Section 9.3]. The Lorenz attractor is shown in blue. The red points show the evolution of a small blob of 10,000 nearby initial conditions, at times \(t=3\), 6, 9, and 15. As each point moves according to the Lorenz equations, the blob is stretched into a long thin filament, which then wraps around the attractor. Ultimately the points spread over much of the attractor, showing that the final state could be almost anywhere, even though the initial conditions were almost identical. This sensitive dependence on initial conditions is the signature of a chaotic system. Plate inspired by a similar illustration in Crutchfield et al. (1986). Numerical integration and computer graphics by Thanos Siapas, using Equation (9.2.1) with parameters \(\sigma=10\), \(b=8/3\), \(r=28\).

**Plate 3:** Fractal basin boundaries for the periodically forced double-well oscillator \[x^{\prime}=y,\qquad y^{\prime}=x-x^{3}-\delta y+F\cos\omega t,\] with \(\delta=0.25\), \(F=0.25\), \(\omega=1\) [Section 12.5]. For these parameter values, the system has two periodic attractors, corresponding to forced oscillations confined to the left or right well. (a) Color map: The square region \(-2.5\leq x,y\leq 2.5\) is subdivided into \(900\times 900\) cells, and each cell is color-coded according to the \(x\)-position of its center point. (b) Basins of attraction: Each cell is color-coded according to its fate after many drive cycles. Roughly speaking, if the trajectory ends up oscillating in the right well, the original cell is colored red; if it ends up in the left well, it is colored blue.
More precisely, given an initial point \((x_{0},y_{0})\) at the center of a cell, the state \((x(t),y(t))\) is computed at \(t=73\times 2\pi/\omega\) (that is, after 73 drive cycles), and the original cell is color-coded by the value of \(x(t)\). The basins have a complicated shape, and the boundary between them is fractal [Moon and Li 1985]. Near the boundary, slight variations in initial conditions can lead to totally different outcomes. Computations by Thanos Siapas on a Thinking Machines CM-5 parallel computer using a fifth-order Runge-Kutta-Fehlberg method.

**Plate 4:** Maps of the short-term behavior of the periodically forced double-well oscillator. Equations, parameters, and color code as in Plate 3. However, instead of showing the system's asymptotic behavior, these plates show the color-coded value of \(x(t)\) after only 1, 2, 3, and 4 drive cycles, respectively. The red and blue regions correspond to initial conditions that converge rapidly to one of the two attractors. A rainbow of colors is found near the basin boundary, because those initial conditions lead to trajectories that linger far from either attractor during the time shown.

The scenario described here is valid only if the dimensionless damping \(\alpha\) is sufficiently small. We know that something different has to happen for large \(\alpha\). After all, when \(\alpha\) is infinite we are in the overdamped limit studied in Section 4.6. Our analysis there showed that the periodic solution is destroyed by an _infinite-period bifurcation_ (a saddle and a node are born on the former limit cycle). So it's plausible that an infinite-period bifurcation should also occur if \(\alpha\) is large but finite. These intuitive ideas are confirmed by numerical integration (Exercise 8.5.2). Putting it all together, we arrive at the stability diagram shown in Figure 8.5.10.
Three types of bifurcations occur: homoclinic and infinite-period bifurcations of periodic orbits, and a saddle-node bifurcation of fixed points. Our argument leading to Figure 8.5.10 has been heuristic. For rigorous proofs, see Levi et al. (1978). Also, Guckenheimer and Holmes (1983, p. 202) derive an analytical approximation for the homoclinic bifurcation curve for \(\alpha\ll 1\), using an advanced technique known as Melnikov's method. They show that the bifurcation curve is tangent to the line \(I=4\alpha/\pi\) as \(\alpha\to 0\). Even if \(\alpha\) is not so small, this approximation works nicely, thanks to the straightness of the homoclinic bifurcation curve in Figure 8.5.10.

### Hysteresis in the driven pendulum and Josephson junction

Suppose \(I\) is slowly increased, starting from \(I=0\). The phase point approaches the stable fixed point, corresponding to the zero-voltage state. As \(I\) is increased, nothing changes until \(I\) exceeds 1. Then the stable fixed point disappears in a saddle-node bifurcation, and the junction jumps into a nonzero voltage state (the limit cycle). If \(I\) is brought back down, the limit cycle persists below \(I=1\) but its frequency tends to zero continuously as \(I_{c}\) is approached. Specifically, the frequency tends to zero like \([\ln(I-I_{c})]^{-1}\), just as expected from the scaling law discussed in Section 8.4. Now recall from Section 4.6 that the junction's dc voltage is proportional to its oscillation frequency. Hence, the voltage also returns to zero continuously as \(I\to I_{c}^{+}\) (Figure 8.5.11). In practice, the voltage appears to jump discontinuously back to zero, but that is to be expected because \([\ln(I-I_{c})]^{-1}\) has _infinite derivatives of all orders at \(I_{c}\)_! (See Exercise 8.5.1.) The steepness of the curve makes it impossible to resolve the continuous return to zero. For instance, in experiments on pendula, Sullivan and Zimmerman (1971) measured the mechanical analog of the \(I\)-\(V\) curve--namely, the curve relating the rotation rate to the applied torque. Their data show a jump back to zero rotation rate at the bifurcation.
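The unresolvable steepness is easy to quantify. Writing the scaling law as \(f(I)=-1/\ln(I-I_{c})\) (positive for \(0<I-I_{c}<1\)), the sketch below evaluates the frequency just above a hypothetical bifurcation value \(I_{c}=0.8\) (the true \(I_{c}\) depends on \(\alpha\)): the frequency is still tiny while its slope is already enormous, so in any experiment the return to zero looks like a jump.

```python
import math

I_c = 0.8  # hypothetical bifurcation value; the actual I_c depends on alpha

def freq(I):
    """Frequency scaling near the homoclinic bifurcation: ~ [ln(I - I_c)]^{-1},
    written with a sign that makes it positive for 0 < I - I_c < 1."""
    return -1.0 / math.log(I - I_c)

f1, f2 = freq(I_c + 1e-6), freq(I_c + 2e-6)
slope = (f2 - f1) / 1e-6  # finite-difference slope just above I_c
# The frequency is still small, but the slope is already huge: the curve
# is far too steep for an experiment to resolve the continuous return to zero.
```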
Figure 8.5.11

### Coupled Oscillators and Quasiperiodicity

Besides the plane and the cylinder, another important two-dimensional phase space is the _torus_. It is the natural phase space for systems of the form \[\begin{array}{l}\dot{\theta}_{1}=f_{1}(\theta_{1},\theta_{2})\\ \dot{\theta}_{2}=f_{2}(\theta_{1},\theta_{2})\end{array}\] where \(f_{1}\) and \(f_{2}\) are periodic in both arguments. For instance, a simple model of _coupled oscillators_ is given by \[\begin{split}&\dot{\theta}_{1}=\omega_{1}+K_{1}\sin(\theta_{2}-\theta_{1})\\ &\dot{\theta}_{2}=\omega_{2}+K_{2}\sin(\theta_{1}-\theta_{2}),\end{split} \tag{1}\] where \(\theta_{1},\theta_{2}\) are the _phases_ of the oscillators, \(\omega_{1},\omega_{2}>0\) are their _natural frequencies_, and \(K_{1}\), \(K_{2}\geq 0\) are _coupling constants_. Equation (1) has been used to model the interaction between human circadian rhythms and the sleep-wake cycle (Strogatz 1986, 1987). An intuitive way to think about (1) is to imagine two friends jogging on a circular track. Here \(\theta_{1}(t)\), \(\theta_{2}(t)\) represent their positions on the track, and \(\omega_{1}\), \(\omega_{2}\) are proportional to their preferred running speeds. If they were uncoupled, then each would run at his or her preferred speed and the faster one would periodically overtake the slower one (as in Example 4.2.1). But these are _friends_--they want to run around _together_! So they need to compromise, with each adjusting his or her speed as necessary. If their preferred speeds are too different, phase-locking will be impossible and they may want to find new running partners. Here we consider (1) more abstractly, to illustrate some general features of flows on the torus and also to provide an example of a saddle-node bifurcation of cycles (Section 8.4). To visualize the flow, imagine two points running around a circle at instantaneous rates \(\dot{\theta}_{1},\dot{\theta}_{2}\) (Figure 8.6.1).
Alternatively, we could imagine a _single_ point tracing out a trajectory on a torus with coordinates \(\theta_{1}\), \(\theta_{2}\) (Figure 8.6.2). The coordinates are analogous to latitude and longitude. But since the curved surface of a torus makes it hard to draw phase portraits, we prefer to use an equivalent representation: a _square with periodic boundary conditions_. Then if a trajectory runs off an edge, it magically reappears on the opposite edge, as in some video games (Figure 8.6.3).

Figure 8.6.2

#### Uncoupled System

Even the seemingly trivial case of uncoupled oscillators (\(K_{1}=K_{2}=0\)) holds some surprises. Then (1) reduces to \(\dot{\theta}_{1}=\omega_{1}\), \(\dot{\theta}_{2}=\omega_{2}\). The corresponding trajectories on the square are straight lines with constant slope \(d\theta_{2}/d\theta_{1}=\omega_{2}/\omega_{1}\). There are two qualitatively different cases, depending on whether the slope is a rational or an irrational number. If the slope is _rational_, then \(\omega_{1}/\omega_{2}=p/q\) for some integers \(p\), \(q\) with no common factors. In this case _all trajectories are closed orbits_ on the torus, because \(\theta_{1}\) completes \(p\) revolutions in the same time that \(\theta_{2}\) completes \(q\) revolutions. For example, Figure 8.6.4 shows a trajectory on the square with \(p=3\), \(q=2\). When plotted on the torus, the same trajectory gives \(\ldots\) a _trefoil knot_! Figure 8.6.5 shows a trefoil, alongside a top view of a torus with a trefoil wound around it. Do you see why this knot corresponds to \(p=3\), \(q=2\)?
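Before tracing the knot by eye, the rational/irrational dichotomy can be probed numerically. Each lap of \(\theta_{1}\) advances \(\theta_{2}\) by \(2\pi\,\omega_{2}/\omega_{1}\), so the successive crossings of the line \(\theta_{1}=0\) reveal whether the orbit closes. A minimal Python sketch (the irrational slope \(\sqrt{2}\) is just an illustrative choice):

```python
import math

TWO_PI = 2 * math.pi

def circ_dist(x):
    """Distance from the angle x to 0, measured around the circle."""
    x = math.fmod(x, TWO_PI)
    if x < 0:
        x += TWO_PI
    return min(x, TWO_PI - x)

def offsets(slope, laps):
    """theta_2 displacement from its start after k = 1, ..., laps laps of theta_1."""
    return [circ_dist(TWO_PI * slope * k) for k in range(1, laps + 1)]

rational = offsets(2.0 / 3.0, 3)          # slope 2/3, i.e. p = 3, q = 2
irrational = offsets(math.sqrt(2), 1000)  # an illustrative irrational slope
```

With slope \(2/3\) the third crossing lands back at the start (a closed orbit after three laps of \(\theta_{1}\)), whereas with slope \(\sqrt{2}\) no crossing ever returns exactly, though some come very close, as one expects of a dense winding.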
Follow the knotted trajectory in Figure 8.6.5, and count the number of revolutions made by \(\theta_{2}\) during the time that \(\theta_{1}\) makes one revolution, where \(\theta_{1}\) is latitude and \(\theta_{2}\) is longitude. Starting on the outer equator, the trajectory moves onto the top surface, dives into the hole, travels along the bottom surface, and then reappears on the outer equator, _two-thirds_ of the way around the torus. Thus \(\theta_{2}\) makes _two-thirds_ of a revolution while \(\theta_{1}\) makes one revolution; hence \(p=3\), \(q=2\). In fact the trajectories are always knotted if \(p\), \(q\geq 2\) have no common factors. The resulting curves are called _p:q torus knots_.

The second possibility is that the slope is _irrational_ (Figure 8.6.6). Then the flow is said to be _quasiperiodic_. Every trajectory winds around endlessly on the torus, never intersecting itself and yet never quite closing. How can we be sure the trajectories never close? Any closed trajectory necessarily makes an integer number of revolutions in both \(\theta_{1}\) and \(\theta_{2}\); hence the slope would have to be rational, contrary to assumption. Furthermore, when the slope is irrational, each trajectory is _dense_ on the torus: in other words, each trajectory comes arbitrarily close to any given point on the torus. This is _not_ to say that the trajectory passes _through_ each point; it just comes arbitrarily close (Exercise 8.6.3).

Quasiperiodicity is significant because it is a new type of long-term behavior. Unlike the earlier entries (fixed point, closed orbit, homoclinic and heteroclinic orbits and cycles), quasiperiodicity occurs only on the torus.

#### Coupled System

Now consider (1) in the coupled case where \(K_{1}\), \(K_{2}>0\). The dynamics can be deciphered by looking at the _phase difference_ \(\phi=\theta_{1}-\theta_{2}\).
Then (1) yields \[\begin{array}{l}\dot{\phi}=\dot{\theta}_{1}-\dot{\theta}_{2}\\ \phantom{\dot{\phi}}=\omega_{1}-\omega_{2}-(K_{1}+K_{2})\sin\phi,\end{array} \tag{2}\] which is just the nonuniform oscillator studied in Section 4.3. By drawing the standard picture (Figure 8.6.7), we see that there are two fixed points for (2) if \(|\omega_{1}-\omega_{2}|<K_{1}+K_{2}\) and none if \(|\omega_{1}-\omega_{2}|>K_{1}+K_{2}\). A saddle-node bifurcation occurs when \(|\omega_{1}-\omega_{2}|=K_{1}+K_{2}\).

Figure 8.6.6

Suppose for now that there are two fixed points, defined implicitly by \[\sin\phi^{*}=\frac{\omega_{1}-\omega_{2}}{K_{1}+K_{2}}.\] As Figure 8.6.7 shows, all trajectories of (2) asymptotically approach the stable fixed point. Therefore, back on the torus, the trajectories of (1) approach a stable _phase-locked_ solution in which the oscillators are separated by a constant phase difference \(\phi^{*}\). The phase-locked solution is _periodic_; in fact, both oscillators run at a constant frequency given by \(\omega^{*}=\dot{\theta}_{1}=\dot{\theta}_{2}=\omega_{2}+K_{2}\sin\phi^{*}\). Substituting for \(\sin\phi^{*}\) yields \[\omega^{*}=\frac{K_{1}\omega_{2}+K_{2}\omega_{1}}{K_{1}+K_{2}}.\] This is called the _compromise frequency_ because it lies between the natural frequencies of the two oscillators (Figure 8.6.8).

Figure 8.6.7 Figure 8.6.8

The compromise is not generally halfway; instead the frequencies are shifted by an amount proportional to the coupling strengths, as shown by the identity \[\left|\frac{\Delta\omega_{1}}{\Delta\omega_{2}}\right|=\left|\frac{\omega_{1}-\omega^{*}}{\omega_{2}-\omega^{*}}\right|=\left|\frac{K_{1}}{K_{2}}\right|.\] Now we're ready to plot the phase portrait on the torus (Figure 8.6.9). The stable and unstable locked solutions appear as diagonal lines of slope 1, since \(\dot{\theta}_{1}=\dot{\theta}_{2}=\omega^{*}\).
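A short simulation confirms both formulas. The Python sketch below integrates system (1) with Euler steps for hypothetical parameters inside the locking region, then checks that the phase difference settles to \(\phi^{*}\) and that the common frequency matches \(\omega^{*}\):

```python
import math

# Hypothetical parameters inside the locking region: |w1 - w2| < K1 + K2
w1, w2, K1, K2 = 1.2, 1.0, 0.5, 0.5

dt, T = 0.001, 50.0
th1, th2 = 0.3, 0.0            # arbitrary initial phases
t, t_mark, th1_mark = 0.0, None, None
while t < T:
    d1 = w1 + K1 * math.sin(th2 - th1)
    d2 = w2 + K2 * math.sin(th1 - th2)
    th1 += d1 * dt
    th2 += d2 * dt
    t += dt
    if t_mark is None and t >= 25.0:  # start measuring after transients die out
        t_mark, th1_mark = t, th1

phi = th1 - th2                               # locked phase difference
omega_meas = (th1 - th1_mark) / (t - t_mark)  # measured common frequency
omega_star = (K1 * w2 + K2 * w1) / (K1 + K2)  # predicted compromise frequency
```

For these values \(\sin\phi^{*}=0.2\) and \(\omega^{*}=1.1\), partway between the natural frequencies and weighted by the couplings, as the identity above predicts.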
If we pull the natural frequencies apart, say by detuning one of the oscillators, then the locked solutions approach each other and coalesce when \(\left|\omega_{1}-\omega_{2}\right|=K_{1}+K_{2}\). Thus the locked solution is destroyed in a _saddle-node bifurcation of cycles_ (Section 8.4). After the bifurcation, the flow is like that in the uncoupled case studied earlier: we have either quasiperiodic or rational flow, depending on the parameters. The only difference is that now the trajectories on the square are curvy, not straight.

Figure 8.6.9

### Poincare Maps

In Section 8.5 we used a Poincare map to prove the existence of a periodic orbit for the driven pendulum and Josephson junction. Now we discuss Poincare maps more generally. Poincare maps are useful for studying swirling flows, such as the flow near a periodic orbit (or, as we'll see later, the flow in some chaotic systems). Consider an \(n\)-dimensional system \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\). Let \(S\) be an \((n-1)\)-dimensional _surface of section_ (Figure 8.7.1). \(S\) is required to be transverse to the flow, i.e., all trajectories starting on \(S\) flow through it, not parallel to it. The _Poincare map_ \(P\) is a mapping from \(S\) to itself, obtained by following trajectories from one intersection with \(S\) to the next. If \(\mathbf{x}_{k}\in S\) denotes the \(k\)th intersection, then the Poincare map is defined by \[\mathbf{x}_{k+1}=P(\mathbf{x}_{k}).\] Suppose that \(\mathbf{x}^{*}\) is a _fixed point_ of \(P\), i.e., \(P(\mathbf{x}^{*})=\mathbf{x}^{*}\). Then a trajectory starting at \(\mathbf{x}^{*}\) returns to \(\mathbf{x}^{*}\) after some time \(T\), and is therefore a _closed orbit_ for the original system \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\). Moreover, by looking at the behavior of \(P\) near this fixed point, we can determine the stability of the closed orbit.
Thus the Poincare map converts problems about closed orbits (which are difficult) into problems about fixed points of a mapping (which are easier in principle, though not always in practice). The snag is that it's typically impossible to find a formula for \(P\). For the sake of illustration, we begin with two examples for which \(P\) can be computed explicitly.

**EXAMPLE 8.7.1:** Consider the vector field given in polar coordinates by \(\dot{r}=r(1-r^{2})\), \(\dot{\theta}=1\). Let \(S\) be the positive \(x\)-axis, and compute the Poincare map. Show that the system has a unique periodic orbit and classify its stability.

_Solution:_ Let \(r_{0}\) be an initial condition on \(S\). Since \(\dot{\theta}=1\), the first return to \(S\) occurs after a _time of flight_ \(t=2\pi\). Then \(r_{1}=P(r_{0})\), where \(r_{1}\) satisfies \[\int_{r_{0}}^{r_{1}}\frac{dr}{r(1-r^{2})}=\int_{0}^{2\pi}dt=2\pi.\] Evaluation of the integral (Exercise 8.7.1) yields \(r_{1}=\left[1+e^{-4\pi}(r_{0}^{-2}-1)\right]^{-1/2}\). Hence \(P(r)=\left[1+e^{-4\pi}(r^{-2}-1)\right]^{-1/2}\). The graph of \(P\) is plotted in Figure 8.7.2. A fixed point occurs at \(r^{*}=1\) where the graph intersects the \(45^{\circ}\) line. The _cobweb_ construction in Figure 8.7.2 enables us to iterate the map graphically. Given an input \(r_{k}\), draw a vertical line until it intersects the graph of \(P\); that height is the output \(r_{k+1}\). To iterate, we make \(r_{k+1}\) the new input by drawing a horizontal line until it intersects the \(45^{\circ}\) diagonal line. Then repeat the process. Convince yourself that this construction works; we'll be using it often.

Figure 8.7.2

The cobweb shows that the fixed point \(r^{*}=1\) is stable and unique. No surprise, since we knew from Example 7.1.1 that this system has a stable limit cycle at \(r=1\).

**EXAMPLE 8.7.2:** Since \(P\) has slope less than 1, it intersects the diagonal at a unique point.
Furthermore, the cobweb shows that the deviation of \(x_{k}\) from the fixed point is reduced by a constant factor with each iteration. Hence the fixed point is unique and globally stable. In physical terms, the circuit always settles into the same forced oscillation, regardless of the initial conditions. This is a familiar result from elementary physics, looked at in a new way.

### Linear Stability of Periodic Orbits

Now consider the general case: Given a system \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\) with a closed orbit, how can we tell whether the orbit is stable or not? Equivalently, we ask whether the corresponding fixed point \(\mathbf{x}^{*}\) of the Poincare map is stable. Let \(\mathbf{v}_{0}\) be an infinitesimal perturbation such that \(\mathbf{x}^{*}+\mathbf{v}_{0}\) is in \(S\). Then after the first return to \(S\), \[\mathbf{x}^{*}+\mathbf{v}_{1}=P(\mathbf{x}^{*}+\mathbf{v}_{0})=P(\mathbf{x}^{*})+\left[DP(\mathbf{x}^{*})\right]\mathbf{v}_{0}+O\left(\left\|\mathbf{v}_{0}\right\|^{2}\right),\] where \(DP(\mathbf{x}^{*})\) is an \((n-1)\times(n-1)\) matrix called the _linearized Poincare map_ at \(\mathbf{x}^{*}\). Since \(\mathbf{x}^{*}=P(\mathbf{x}^{*})\), we get \[\mathbf{v}_{1}=[DP(\mathbf{x}^{*})]\mathbf{v}_{0},\] assuming that we can neglect the small \(O\left(\left\|\mathbf{v}_{0}\right\|^{2}\right)\) terms. The desired stability criterion is expressed in terms of the eigenvalues \(\lambda_{j}\) of \(DP(\mathbf{x}^{*})\): _The closed orbit is linearly stable if and only if \(|\lambda_{j}|<1\) for all \(j=1,\ldots,n-1\)._

Figure 8.7.3

To understand this criterion, consider the generic case where there are no repeated eigenvalues. Then there is a basis of eigenvectors \(\left\{\mathbf{e}_{j}\right\}\) and so we can write \(\mathbf{v}_{0}=\sum\limits_{j=1}^{n-1}v_{j}\mathbf{e}_{j}\) for some scalars \(v_{j}\).
Hence \[\mathbf{v}_{1}=\left(DP(\mathbf{x}^{*})\right)\sum\limits_{j=1}^{n-1}v_{j}\mathbf{e}_{j}=\sum\limits_{j=1}^{n-1}v_{j}\lambda_{j}\mathbf{e}_{j}.\] Iterating the linearized map \(k\) times gives \[\mathbf{v}_{k}=\sum\limits_{j=1}^{n-1}v_{j}(\lambda_{j})^{k}\mathbf{e}_{j}.\] Hence, if all \(|\lambda_{j}|<1\), then \(\left\|\mathbf{v}_{k}\right\|\to 0\) geometrically fast. This proves that \(\mathbf{x}^{*}\) is linearly stable. Conversely, if \(|\lambda_{j}|>1\) for some \(j\), then perturbations along \(\mathbf{e}_{j}\) grow, so \(\mathbf{x}^{*}\) is unstable. A borderline case occurs when the largest eigenvalue has magnitude \(|\lambda_{m}|=1\); this occurs at bifurcations of periodic orbits, and then a nonlinear stability analysis is required. The \(\lambda_{j}\) are called the _characteristic_ or _Floquet multipliers_ of the periodic orbit. (Strictly speaking, these are the _nontrivial_ multipliers; there is always an additional trivial multiplier \(\lambda\equiv 1\) corresponding to perturbations _along_ the periodic orbit. We have ignored such perturbations since they just amount to time-translation.) In general, the characteristic multipliers can only be found by numerical integration (see Exercise 8.7.10). The following examples are two of the rare exceptions.

**Example 8.7.3:** Find the characteristic multiplier for the limit cycle of Example 8.7.1.

_Solution:_ We linearize about the fixed point \(r^{*}=1\) of the Poincare map. Let \(r=1+\eta\), where \(\eta\) is infinitesimal. Then \(\dot{r}=\dot{\eta}=(1+\eta)(1-(1+\eta)^{2})\). After neglecting \(O(\eta^{2})\) terms, we get \(\dot{\eta}=-2\eta\). Thus \(\eta(t)=\eta_{0}e^{-2t}\). After a time of flight \(t=2\pi\), the new perturbation is \(\eta_{1}=e^{-4\pi}\eta_{0}\). Hence \(e^{-4\pi}\) is the characteristic multiplier. Since \(|e^{-4\pi}|<1\), the limit cycle is linearly stable.
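Since Example 8.7.1 gives \(P\) explicitly, this multiplier can be cross-checked numerically: the slope of \(P\) at \(r^{*}=1\) should equal \(e^{-4\pi}\), and cobweb iterates should converge to the fixed point at that geometric rate. A minimal Python sketch:

```python
import math

def P(r):
    """Explicit Poincare map from Example 8.7.1."""
    return (1 + math.exp(-4 * math.pi) * (r ** -2 - 1)) ** -0.5

# Finite-difference slope of P at the fixed point r* = 1
h = 1e-4
slope = (P(1 + h) - P(1 - h)) / (2 * h)
multiplier = math.exp(-4 * math.pi)  # predicted characteristic multiplier

# Cobweb iteration: each lap shrinks the deviation by roughly e^{-4*pi},
# so convergence to r = 1 is essentially immediate
r = 0.2
for _ in range(3):
    r = P(r)
```

Because \(e^{-4\pi}\approx 3.5\times 10^{-6}\), the contraction is so strong that a couple of iterations already land within floating-point distance of the limit cycle.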
For this simple two-dimensional system, the linearized Poincare map degenerates to a \(1\times 1\) matrix, i.e., a number. Exercise 8.7.1 asks you to show explicitly that \(P^{\prime}(r^{*})=e^{-4\pi}\), as expected from the general theory above. Our final example comes from an analysis of coupled Josephson junctions.

**Example 8.7.4:** The \(N\)-dimensional system \[\dot{\phi}_{i}=\Omega+a\sin\phi_{i}+\frac{1}{N}\sum\limits_{j=1}^{N}\sin\phi_{j}, \tag{1}\] for \(i=1,\ldots,N\), describes the dynamics of a series array of overdamped Josephson junctions in parallel with a resistive load (Tsang et al. 1991). For technological reasons, there is great interest in the solution where all the junctions oscillate in phase. This _in-phase_ solution is given by \(\phi_{1}(t)=\phi_{2}(t)=\ldots=\phi_{N}(t)=\phi^{*}(t)\), where \(\phi^{*}(t)\) denotes the common waveform. Find conditions under which the in-phase solution is periodic, and calculate the characteristic multipliers of this solution.

_Solution:_ For the in-phase solution, all \(N\) equations reduce to \[\frac{d\phi^{*}}{dt}=\Omega+(a+1)\sin\phi^{*}. \tag{2}\] This has a periodic solution (on the circle) if and only if \(|\Omega|>|a+1|\). To determine the stability of the in-phase solution, let \(\phi_{i}(t)=\phi^{*}(t)+\eta_{i}(t)\), where the \(\eta_{i}(t)\) are infinitesimal perturbations. Then substituting \(\phi_{i}\) into (1) and dropping quadratic terms in \(\eta\) yields \[\dot{\eta}_{i}=[a\cos\phi^{*}(t)]\eta_{i}+[\cos\phi^{*}(t)]\frac{1}{N}\sum\limits_{j=1}^{N}\eta_{j}. \tag{3}\] We don't have \(\phi^{*}(t)\) explicitly, but that doesn't matter, thanks to two tricks. First, the linear system decouples if we change variables to \[\mu=\frac{1}{N}\sum\limits_{j=1}^{N}\eta_{j},\qquad\xi_{i}=\eta_{i+1}-\eta_{i},\quad i=1,\ldots,N-1.\] Then \(\dot{\xi}_{i}=[a\cos\phi^{*}(t)]\xi_{i}\).
Separation of variables yields \[\frac{d\xi_{i}}{\xi_{i}}=[a\cos\phi^{*}(t)]dt=\frac{[a\cos\phi^{*}]\,d\phi^{*}}{\Omega+(a+1)\sin\phi^{*}},\] where we've used (2) to eliminate \(dt\). (That was the second trick.) Now we compute the change in the perturbations after one circuit around the closed orbit \(\phi^{*}\): \[\oint\frac{d\xi_{i}}{\xi_{i}}=\int_{0}^{2\pi}\frac{[a\cos\phi^{*}]\,d\phi^{*}}{\Omega+(a+1)\sin\phi^{*}}\] \[\Rightarrow\ \ln\frac{\xi_{i}(T)}{\xi_{i}(0)}=\frac{a}{a+1}\,\ln\left[\Omega+(a+1)\sin\phi^{*}\right]_{0}^{2\pi}=0.\] Hence \(\xi_{i}(T)=\xi_{i}(0)\). Similarly, we can show that \(\mu(T)=\mu(0)\). Thus \(\eta_{i}(T)=\eta_{i}(0)\) for all \(i\); all perturbations are unchanged after one cycle! Therefore all the characteristic multipliers \(\lambda_{j}=1\).

This calculation shows that the in-phase state is (linearly) neutrally stable. That's discouraging technologically--one would like the array to lock into coherent oscillation, thereby greatly increasing the output power over that available from a single junction. Since the calculation above is based on linearization, you might wonder whether the neglected nonlinear terms could stabilize the in-phase state. In fact they don't: a reversibility argument shows that the in-phase state is not attracting, even if the nonlinear terms are kept (Exercise 8.7.11).

## 8.1 Saddle-Node, Transcritical, and Pitchfork Bifurcations

For the following prototypical examples, plot the phase portraits as \(\mu\) varies:

a) \(\dot{x}=\mu x-x^{2}\), \(\dot{y}=-y\) (transcritical bifurcation)

b) \(\dot{x}=\mu x+x^{3}\), \(\dot{y}=-y\) (subcritical pitchfork bifurcation)

For each of the following systems, find the eigenvalues at the stable fixed point as a function of \(\mu\), and show that one of the eigenvalues tends to zero as \(\mu\to 0\).
\(\dot{x}=\mu-x^{2}\), \(\dot{y}=-y\)

\(\dot{x}=\mu x-x^{2}\), \(\dot{y}=-y\)

\(\dot{x}=\mu x+x^{3}\), \(\dot{y}=-y\)

True or false: at any zero-eigenvalue bifurcation in two dimensions, the nullclines always intersect tangentially. (Hint: Consider the geometrical meaning of the rows in the Jacobian matrix.)

Consider the system \(\dot{x}=y-2x\), \(\dot{y}=\mu+x^{2}-y\).

a) Sketch the nullclines.

b) Find and classify the bifurcations that occur as \(\mu\) varies.

c) Sketch the phase portrait as a function of \(\mu\).

**8.1.7**: Find and classify all bifurcations for the system \(\dot{x}=y-ax\), \(\dot{y}=-by+x/(1+x)\).

**8.1.8**: (Bead on rotating hoop, revisited) In Section 3.5, we derived the following dimensionless equation for the motion of a bead on a rotating hoop: \[\varepsilon\frac{d^{2}\phi}{d\tau^{2}}=-\frac{d\phi}{d\tau}-\sin\phi+\gamma\sin\phi\cos\phi.\] Here \(\varepsilon>0\) is proportional to the mass of the bead, and \(\gamma>0\) is related to the spin rate of the hoop. Previously we restricted our attention to the overdamped limit \(\varepsilon\to 0\).

* Now allow any \(\varepsilon>0\). Find and classify all bifurcations that occur as \(\varepsilon\) and \(\gamma\) vary.

* Plot the stability diagram in the positive quadrant of the \(\varepsilon,\gamma\) plane.

**8.1.9**: Plot the stability diagram for the system \(\ddot{x}+b\dot{x}-kx+x^{3}=0\), where \(b\) and \(k\) can be positive, negative, or zero. Label the bifurcation curves in the \((b,k)\) plane.

**8.1.10**: (Budworms vs. the forest) Ludwig et al. (1978) proposed a model for the effects of spruce budworm on the balsam fir forest. In Section 3.7, we considered the dynamics of the budworm population; now we turn to the dynamics of the forest. The condition of the forest is assumed to be characterized by \(S(t)\), the average size of the trees, and \(E(t)\), the "energy reserve" (a generalized measure of the forest's health).
In the presence of a constant budworm population \(B\), the forest dynamics are given by \[\dot{S}=r_{S}S\left(1-\frac{S}{K_{S}}\frac{K_{E}}{E}\right),\quad\dot{E}=r_{E}E\left(1-\frac{E}{K_{E}}\right)-P\frac{B}{S},\] where \(r_{S}\), \(r_{E}\), \(K_{S}\), \(K_{E}\), \(P>0\) are parameters.

* Interpret the terms in the model biologically.

* Nondimensionalize the system.

* Sketch the nullclines. Show that there are two fixed points if \(B\) is small, and none if \(B\) is large. What type of bifurcation occurs at the critical value of \(B\)?

* Sketch the phase portrait for both large and small values of \(B\).

**8.1.11**: In a study of isothermal autocatalytic reactions, Gray and Scott (1985) considered a hypothetical reaction whose kinetics are given in dimensionless form by \[\dot{u}=a(1-u)-uv^{2},\;\;\;\dot{v}=uv^{2}-(a+k)v,\] where \(a\), \(k>0\) are parameters. Show that saddle-node bifurcations occur at \(k=-a\pm\frac{1}{2}\sqrt{a}\).

**8.1.12**: (Interacting bar magnets) Consider the system \[\begin{array}{rcl}\dot{\theta}_{1}&=&K\sin(\theta_{1}-\theta_{2})-\sin\theta_{1}\\ \dot{\theta}_{2}&=&K\sin(\theta_{2}-\theta_{1})-\sin\theta_{2}\end{array}\] where \(K\geq 0\). For a rough physical interpretation, suppose that two bar magnets are confined to a plane, but are free to rotate about a common pin joint, as shown in Figure 1. Let \(\theta_{1}\), \(\theta_{2}\) denote the angular orientations of the north poles of the magnets. Then the term \(K\sin(\theta_{2}-\theta_{1})\) represents a repulsive force that tries to keep the two north poles \(180^{\circ}\) apart. This repulsion is opposed by the \(\sin\theta\) terms, which model external magnets that pull the north poles of both bar magnets to the east. If the inertia of the magnets is negligible compared to viscous damping, then the equations above are a decent approximation to the true dynamics.

1. Find and classify all the fixed points of the system.

2. Show that a bifurcation occurs at \(K=\frac{1}{2}\). What type of bifurcation is it?
(Hint: Recall that \(\sin(a-b)=\cos b\,\sin a-\sin b\,\cos a\).)

3. Show that the system is a "gradient" system, in the sense that \(\dot{\theta}_{i}=-\partial V/\partial\theta_{i}\) for some potential function \(V(\theta_{1},\theta_{2})\), to be determined.

4. Use part (c) to prove that the system has no periodic orbits.

5. Sketch the phase portrait for \(0<K<\frac{1}{2}\), and then for \(K>\frac{1}{2}\).

Figure 1

**8.1.13**: (Laser model) In Exercise 3.3.1 we introduced the laser model \[\begin{array}{l}\dot{n}=GnN-kn\\ \dot{N}=-GnN-fN+p\end{array}\] where \(N(t)\) is the number of excited atoms and \(n(t)\) is the number of photons in the laser field. The parameter \(G\) is the gain coefficient for stimulated emission, \(k\) is the decay rate due to loss of photons by mirror transmission, scattering, etc., \(f\) is the decay rate for spontaneous emission, and \(p\) is the pump strength. All parameters are positive, except \(p\), which can have either sign. For more information, see Milonni and Eberly (1988).

* Nondimensionalize the system.

* Find and classify all the fixed points.

* Sketch all the qualitatively different phase portraits that occur as the dimensionless parameters are varied.

* Plot the stability diagram for the system. What types of bifurcation occur?

**8.1.14**: (Binocular rivalry) Normally when you look at something, your left and right eyes see images that are very similar. (Try closing one eye, then the other; the resulting views look almost the same, except for the disparity caused by the spacing between your eyes.) But what would happen if two completely different images were shown to your left and right eyes simultaneously? What would you see? A combination of both images? Experiments like this have been performed for hundreds of years (Wade 1996), and the results are amazing: your brain typically perceives one image for a few seconds, then the other, then the first again, and so on. This switching phenomenon is known as _binocular rivalry_.
Mathematical models of binocular rivalry often posit that there are two neural populations corresponding to the brain's representations of the two competing images. These populations battle with each other for dominance--each tends to suppress the other. The following exercise, kindly suggested by Bard Ermentrout, involves the analysis of a minimal model for such neuronal competition. Let \(x_{1}\) and \(x_{2}\) denote the averaged firing rates (essentially, the activity levels) of the two populations of neurons. Assume \[\dot{x}_{1}=-x_{1}+F(I-bx_{2})\,,\quad\dot{x}_{2}=-x_{2}+F(I-bx_{1})\,,\] where the gain function is given by \(F(x)=1/(1+e^{-x})\), \(I\) is the strength of the input stimulus (in this case, the stimuli are the images; note that each is assumed to be equally potent), and \(b\) is the strength of the mutual antagonism. * Sketch the phase plane for various values of \(I\) and \(b\) (both positive). * Show that the symmetric fixed point, \(x_{1}^{*}=x_{2}^{*}=x^{*}\), is always a solution (in other words, it exists for all positive values of \(I\) and \(b\)), and show that it is unique. * Show that at a sufficiently large value of \(b\), the symmetric solution loses stability at a pitchfork bifurcation. Which type of pitchfork bifurcation is it? For a refinement of this model that allows for rhythmic switching between the two perceived images, see Exercise 8.2.17. For more elaborate models and a comparative study of their bifurcation structure, see Shpiro et al. (2007).

#### 8.1.15 (The power of true believers)
Sometimes a small band of unwavering advocates can win an entire population over to their point of view, as in the case of the civil rights or women's suffrage movements in the United States. Consider the following stylized model of such situations, which was studied by Marvel et al. (2012) and inspired by the earlier work of Xie et al. (2011). The population is divided into four non-overlapping groups.
An initially small group of true believers holds opinion A (for example, that women deserve the right to vote) and they are committed to this belief. Nothing that anyone says or does can change their minds. Another group of people currently agrees with them, but they are uncommitted to A. If someone argues for the opposing position, B, an uncommitted A-believer instantly becomes an AB, meaning someone who sees the merit in both positions. Likewise, B-believers instantly turn into members of the AB subpopulation when confronted with an argument for A. The people in the AB group, being fence-sitters, don't try to persuade anyone of anything. And they can be pushed to either side with the slightest nudge; when confronted with an argument for A--or for B--they join that camp. At each time step, we select two people at random and have one of them act as an advocate and the other as a listener. Assuming the members of the four groups mix with each other at random, the governing equations for the dynamics are \[\begin{array}{l}\dot{n}_{A}=\left(p+n_{A}\right)n_{AB}-n_{A}n_{B}\\ \dot{n}_{B}=n_{B}n_{AB}-\left(p+n_{A}\right)n_{B}\end{array}\] where \(n_{AB}=1-\left(p+n_{A}\right)-n_{B}\). Here the parameter \(p\) denotes the unchanging fraction of true believers in the population. The time-dependent variables \(n_{A}\), \(n_{B}\), and \(n_{AB}\) are the current fractions in the A, B, and AB subpopulations. 1. Interpret and justify the form of the various terms in the governing equations. 2. Assume that initially everyone believes in B, except for the true believers in A. Thus, \(n_{B}(0)=1-p\), and \(n_{A}(0)=n_{AB}(0)=0\). Numerically integrate the system until it reaches equilibrium. Show that the final state changes discontinuously as a function of \(p\). Specifically, show there is a critical fraction of true believers (call it \(p_{c}\)) such that for \(p<p_{c}\), most people still accept B, whereas for \(p>p_{c}\), everyone comes around to A. 3.
Show analytically that \(p_{c}=1-\sqrt{3}/2\approx 0.134\). Thus, in this model, only about 13% of the population needs to be unwavering advocates to get everyone else to agree with them eventually. 4. What type of bifurcation occurs at \(p_{c}\)?

### 8.2 Hopf Bifurcations

#### 8.2.1
Consider the biased van der Pol oscillator \(\ddot{x}+\mu(x^{2}-1)\dot{x}+x=a\). Find the curves in \((\mu,a)\) space at which Hopf bifurcations occur.

The next three exercises deal with the system \(\dot{x}=-y+\mu x+xy^{2}\), \(\dot{y}=x+\mu y-x^{2}\).

#### 8.2.2
By calculating the linearization at the origin, show that the system \(\dot{x}=-y+\mu x+xy^{2}\), \(\dot{y}=x+\mu y-x^{2}\) has pure imaginary eigenvalues when \(\mu=0\).

#### 8.2.3 (Computer work)
By plotting phase portraits on the computer, show that the system \(\dot{x}=-y+\mu x+xy^{2}\), \(\dot{y}=x+\mu y-x^{2}\) undergoes a Hopf bifurcation at \(\mu=0\). Is it subcritical, supercritical, or degenerate?

#### 8.2.4 (A heuristic analysis)
The system \(\dot{x}=-y+\mu x+xy^{2}\), \(\dot{y}=x+\mu y-x^{2}\) can be analyzed in a rough, intuitive way as follows. 1. Rewrite the system in polar coordinates. 2. Show that if \(r<<1\), then \(\dot{\theta}\approx 1\) and \(\dot{r}\approx\mu r+\frac{1}{8}r^{3}+\ldots\), where the terms omitted are oscillatory and have essentially zero time-average around one cycle. 3. The formulas in part 2 suggest the presence of an unstable limit cycle of radius \(r\approx\sqrt{-8\mu}\) for \(\mu<0\). Confirm that prediction numerically. (Since we assumed that \(r<<1\), the prediction is expected to hold only if \(|\mu|<<1\).) The reasoning above is shaky. See Drazin (1992, pp. 188-190) for a proper analysis via the Poincare-Lindstedt method.

For each of the following systems, a Hopf bifurcation occurs at the origin when \(\mu=0\). Using a computer, plot the phase portrait and determine whether the bifurcation is subcritical or supercritical.
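As a concrete illustration of the numerical confirmation requested in Exercise 8.2.4, here is a minimal sketch (plain Python with a hand-rolled RK4 step; the value \(\mu=-0.01\) is an illustrative choice, not from the text). Since the predicted limit cycle is unstable, forward integration flees from it; integrating the vector field in reversed time makes the cycle attracting, so its radius can be read off and compared with \(\sqrt{-8\mu}\).

```python
import math

# Illustrative check for Exercise 8.2.4: the unstable limit cycle predicted
# at r ~ sqrt(-8*mu) repels trajectories in forward time, so we integrate
# the vector field BACKWARD in time, which makes the cycle attracting.
MU = -0.01                          # arbitrary small illustrative value

def backward_field(x, y):
    # Forward field: xdot = -y + mu*x + x*y^2,  ydot = x + mu*y - x^2.
    # Negate both components to run time in reverse.
    return (-(-y + MU * x + x * y * y), -(x + MU * y - x * x))

def rk4_step(x, y, h):
    k1 = backward_field(x, y)
    k2 = backward_field(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = backward_field(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = backward_field(x + h * k3[0], y + h * k3[1])
    return (x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

x, y = 0.2, 0.0                     # start inside the predicted cycle
h, n = 0.01, 40000                  # integrate backward out to t = 400
radius_sum, samples = 0.0, 0
for i in range(n):
    x, y = rk4_step(x, y, h)
    if i >= 3 * n // 4:             # average the radius over the tail
        radius_sum += math.hypot(x, y)
        samples += 1
r_mean = radius_sum / samples
r_pred = math.sqrt(-8 * MU)         # heuristic prediction, ~0.283
print(r_mean, r_pred)
```

The same backward-time trick is often the easiest way to locate an unstable cycle numerically, and hence to decide whether a Hopf bifurcation is subcritical.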
#### 8.2.5 \(\dot{x}=y+\mu x\), \(\dot{y}=-x+\mu y-x^{2}y\) #### 8.2.6 \(\dot{x}=\mu x+y-x^{3}\), \(\dot{y}=-x+\mu y-2y^{3}\) #### 8.2.7 \(\dot{x}=\mu x+y-x^{2}\), \(\dot{y}=-x+\mu y-2x^{2}\) #### 8.2.8 (Predator-prey model) Odell (1980) considered the system \[\dot{x}=x\big{[}x(1-x)-y\big{]},\ \ \dot{y}=y(x-a),\] where \(x\geq 0\) is the dimensionless population of the prey, \(y\geq 0\) is the dimensionless population of the predator, and \(a\geq 0\) is a control parameter. 1. Sketch the nullclines in the first quadrant \(x\), \(y\geq 0\). 2. Show that the fixed points are (0, 0), (1, 0), and (\(a\), \(a-a^{2}\)), and classify them. 3. Sketch the phase portrait for \(a>1\), and show that the predators go extinct. 4. Show that a Hopf bifurcation occurs at \(a_{c}=\frac{1}{2}\). Is it subcritical or supercritical? 5. Estimate the frequency of limit cycle oscillations for \(a\) near the bifurcation. 6. Sketch all the topologically different phase portraits for \(0<a<1\). The article by Odell (1980) is worth looking up. It is an outstanding pedagogical introduction to the Hopf bifurcation and phase plane analysis in general.

#### 8.2.9
Consider the predator-prey model \[\dot{x}=x\bigg{(}b-x-\frac{y}{1+x}\bigg{)},\ \ \ \dot{y}=y\bigg{(}\frac{x}{1+x}-ay\bigg{)},\] where \(x\), \(y\geq 0\) are the populations and \(a\), \(b>0\) are parameters. a) Sketch the nullclines and discuss the bifurcations that occur as \(b\) varies. b) Show that a positive fixed point \(x*>0\), \(y*>0\) exists for all \(a\), \(b>0\). (Don't try to find the fixed point explicitly; use a graphical argument instead.) c) Show that a Hopf bifurcation occurs at the positive fixed point if \[a=a_{c}=\frac{4(b-2)}{b^{2}(b+2)}\] and \(b>2\). (Hint: A necessary condition for a Hopf bifurcation to occur is \(\tau=0\), where \(\tau\) is the trace of the Jacobian matrix at the fixed point. Show that \(\tau=0\) if and only if \(2x*=b-2\). Then use the fixed point conditions to express \(a_{c}\) in terms of \(x*\).
Finally, substitute \(x*=(b-2)/2\) into the expression for \(a_{c}\) and you're done.) d) Using a computer, check the validity of the expression in (c) and determine whether the bifurcation is subcritical or supercritical. Plot typical phase portraits above and below the Hopf bifurcation.

#### 8.2.10 (Bacterial respiration)
Fairen and Velarde (1979) considered a model for respiration in a bacterial culture. The equations are \[\dot{x}=B-x-\frac{xy}{1+qx^{2}},\ \ \ \ \ \dot{y}=A-\frac{xy}{1+qx^{2}}\] where \(x\) and \(y\) are the levels of nutrient and oxygen, respectively, and \(A\), \(B\), \(q>0\) are parameters. Investigate the dynamics of this model. As a start, find all the fixed points and classify them. Then consider the nullclines and try to construct a trapping region. Can you find conditions on \(A\), \(B\), \(q\) under which the system has a stable limit cycle? Use numerical integration, the Poincare-Bendixson theorem, results about Hopf bifurcations, or whatever else seems useful. (This question is deliberately open-ended and could serve as a class project; see how far you can go.)

#### 8.2.11 (Degenerate bifurcation, not Hopf)
Consider the damped Duffing oscillator \(\ddot{x}+\mu\dot{x}+x-x^{3}=0\). a) Show that the origin changes from a stable to an unstable spiral as \(\mu\) decreases through zero. b) Plot the phase portraits for \(\mu>0\), \(\mu=0\), and \(\mu<0\), and show that the bifurcation at \(\mu=0\) is a degenerate version of the Hopf bifurcation.

#### 8.2.12 (Analytical criterion to decide if a Hopf bifurcation is subcritical or supercritical)
Any system at a Hopf bifurcation can be put into the following form by suitable changes of variables: \[\dot{x}=-\omega y+f(x,y),\ \ \ \ \ \dot{y}=\omega x+g(x,y),\] where \(f\) and \(g\) contain only higher-order nonlinear terms that vanish at the origin. As shown by Guckenheimer and Holmes (1983, pp.
152-156), one can decide whether the bifurcation is subcritical or supercritical by calculating the sign of the following quantity: \[16a=f_{xxx}+f_{xyy}+g_{xxy}+g_{yyy}+\frac{1}{\omega}\left[f_{xy}\left(f_{xx}+f_{yy}\right)-g_{xy}\left(g_{xx}+g_{yy}\right)-f_{xx}g_{xx}+f_{yy}g_{yy}\right]\] where the subscripts denote partial derivatives evaluated at (0,0). The criterion is: If \(a<0\), the bifurcation is supercritical; if \(a>0\), the bifurcation is subcritical. Use this criterion to characterize the Hopf bifurcation of the system \(\dot{x}=-y+\mu x+xy^{2}\), \(\dot{y}=x+\mu y-x^{2}\) at \(\mu=0\). (Compare the results of Exercises 8.2.2-8.2.4.) (You might be wondering what \(a\) measures. Roughly speaking, \(a\) is the coefficient of the cubic term in the equation \(\dot{r} = ar^{3}\) governing the radial dynamics at the bifurcation. Here \(r\) is a slightly transformed version of the usual polar coordinate. For details, see Guckenheimer and Holmes (1983) or Grimshaw (1990).)

For each of the following systems, a Hopf bifurcation occurs at the origin when \(\mu=0\). Use the analytical criterion of Exercise 8.2.12 to decide if the bifurcation is sub- or supercritical. Confirm your conclusions on the computer. \[\begin{array}{ll} {\bf 8.2.13} & \dot{x} = y + \mu x,\;\;\;\dot{y} = - x + \mu y - x^{2}y \\ {\bf 8.2.14} & \dot{x} = \mu x + y - x^{3},\;\;\;\dot{y} = - x + \mu y + 2 y^{3} \\ {\bf 8.2.15} & \dot{x} = \mu x + y - x^{2},\;\;\;\dot{y} = - x + \mu y + 2 x^{2} \\ \end{array}\]

#### 8.2.16
In Example 8.2.1, we argued that the system \(\dot{x} = \mu x - y + xy^{2}\), \(\dot{y} = x + \mu y + y^{3}\) undergoes a subcritical Hopf bifurcation at \(\mu=0\). Use the analytical criterion to confirm that the bifurcation is subcritical.

#### 8.2.17 (Binocular rivalry revisited)
Exercise 8.1.14 introduced a minimal model of binocular rivalry, a remarkable visual phenomenon in which, when two different images are presented to the left and right eyes, the brain perceives only one image at a time.
First one image wins, then the other, then the first again, and so on, with the winning image alternating every few seconds. The model studied in Exercise 8.1.14 could account for the complete suppression of one image by the other, but not for the rhythmic alternation between them. Now we extend that model to allow for the observed oscillations. (Many thanks to Bard Ermentrout for creating and sharing this exercise and the earlier Exercise 8.1.14.) Let \(x_{1}\) and \(x_{2}\) denote the activity levels of the two populations of neurons coding for the two images, as in Exercise 8.1.14, but now assume the neurons get tired after winning a while, as adaptation builds up. The governing equations then become \[\dot{x}_{1} = -x_{1} + F(I - bx_{2} - gy_{1})\] \[\dot{y}_{1} = ( -y_{1} +x_{1})/T\] \[\dot{x}_{2} = -x_{2} + F(I - bx_{1} - gy_{2})\] \[\dot{y}_{2} = ( -y_{2} +x_{2})/T\] where the \(y\)-variables represent adaptation building up on a time scale \(T\) and provoking tiredness with strength \(g\) in their associated neuronal population. As in Exercise 8.1.14, the gain function is given by \(F(x) = 1/(1 + e^{-x})\), \(I\) is the strength of the input stimuli (the images), and \(b\) is the strength of the mutual antagonism between the neuronal populations. This is now a four-dimensional system, but its key stability properties can be inferred from two-dimensional calculations as follows. a) Show that \(x_{1}^{*} = y_{1}^{*} = x_{2}^{*} = y_{2}^{*} = u\) is a fixed point for all choices of parameters and that \(u\) is uniquely defined. b) Show that the stability matrix (the Jacobian) for the linearization about this fixed point has the form \[\begin{pmatrix}-c_{1}&-c_{2}&-c_{3}&0\\ d_{1}&-d_{1}&0&0\\ -c_{3}&0&-c_{1}&-c_{2}\\ 0&0&d_{1}&-d_{1}\end{pmatrix}.\] We can write this in block-matrix form as \[\begin{pmatrix}A&B\\ B&A\end{pmatrix}\] where \(A\) and \(B\) are \(2\times 2\) matrices.
Show that the four eigenvalues of the \(4\times 4\) block matrix are given by the eigenvalues of \(A-B\) and \(A+B\). c) Show that the eigenvalues of \(A+B\) are all negative, by considering the trace and determinant of this matrix. d) Show that depending on the sizes of \(g\) and \(T\), the matrix \(A-B\) can have either a negative determinant (leading to a pitchfork bifurcation of \(u\)) or a positive trace (resulting in a Hopf bifurcation). e) Using a computer, show that the Hopf bifurcation can be supercritical; the resulting stable limit cycle mimics the oscillations we're trying to explain. (Hint: When looking for trajectories that approach a stable limit cycle, be sure to use initial conditions that are different for populations 1 and 2. In other words, break the symmetry by starting with \(x_{1}\neq x_{2}\) and \(y_{1}\neq y_{2}\).)

### 8.3 Oscillating Chemical Reactions

#### 8.3.1 (Brusselator)
The Brusselator is a simple model of a hypothetical chemical oscillator, named after the home of the scientists who proposed it. (This is a common joke played by the chemical oscillator community; there is also the "Oregonator," "Palo Altonator," etc.) In dimensionless form, its kinetics are \[\begin{array}{l}\dot{x}=1-(b+1)x+ax^{2}y\\ \dot{y}=bx-ax^{2}y\end{array}\] where \(a,b>0\) are parameters and \(x\), \(y\geq 0\) are dimensionless concentrations. * Find all the fixed points, and use the Jacobian to classify them. * Sketch the nullclines, and thereby construct a trapping region for the flow. * Show that a Hopf bifurcation occurs at some parameter value \(b=b_{c}\), where \(b_{c}\) is to be determined. * Does the limit cycle exist for \(b>b_{c}\) or \(b<b_{c}\)? Explain, using the Poincare-Bendixson theorem. * Find the approximate period of the limit cycle for \(b\approx b_{c}\).
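A quick numerical experiment shows the Brusselator's behavior on either side of its bifurcation. The sketch below takes \(a=1\) and assumes the threshold \(b_{c}=1+a\) that part (c) of Exercise 8.3.1 asks you to derive; treat that value, and all the numbers here, as illustrative working assumptions.

```python
# Illustrative check of the Brusselator on either side of the Hopf
# bifurcation, ASSUMING the threshold b_c = 1 + a (part (c) of 8.3.1).
def bruss(x, y, a, b):
    return (1 - (b + 1) * x + a * x * x * y, b * x - a * x * x * y)

def max_deviation(a, b, T=200.0, h=0.01):
    """Integrate (RK4) from a tiny perturbation of the fixed point
    (1, b/a) and return max |x - 1| over the last quarter of the run."""
    x, y = 1.01, b / a
    n = int(T / h)
    dev = 0.0
    for i in range(n):
        k1 = bruss(x, y, a, b)
        k2 = bruss(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1], a, b)
        k3 = bruss(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1], a, b)
        k4 = bruss(x + h * k3[0], y + h * k3[1], a, b)
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        if i >= 3 * n // 4:
            dev = max(dev, abs(x - 1.0))
    return dev

dev_below = max_deviation(a=1.0, b=1.5)  # b < b_c = 2: decays to fixed point
dev_above = max_deviation(a=1.0, b=2.5)  # b > b_c = 2: settles onto a cycle
print(dev_below, dev_above)
```

Below threshold the perturbation dies out; above it the trajectory settles onto an oscillation of order-one amplitude, consistent with a limit cycle born at \(b_{c}\).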
#### 8.3.2
Schnakenberg (1979) considered the following hypothetical model of a chemical oscillator: \[X\;\overset{k_{1}}{\underset{k_{-1}}{\rightleftharpoons}}\;A,\qquad B\;\overset{k_{2}}{\longrightarrow}\;Y,\qquad 2X+Y\;\overset{k_{3}}{\longrightarrow}\;3X.\] After using the Law of Mass Action and nondimensionalizing, Schnakenberg reduced the system to \[\begin{array}{l}\dot{x}=a-x+x^{2}y\\ \dot{y}=b-x^{2}y\end{array}\] where \(a,b>0\) are parameters and \(x\), \(y>0\) are dimensionless concentrations. * Show that all trajectories eventually enter a certain trapping region, to be determined. Make the trapping region as small as possible. (Hint: Examine the ratio \(\dot{y}/\dot{x}\) for large \(x\).) * Show that the system has a unique fixed point, and classify it. * Show that the system undergoes a Hopf bifurcation when \(b-a=(a+b)^{3}\). * Is the Hopf bifurcation subcritical or supercritical? Use a computer to decide. * Plot the stability diagram in \(a\), \(b\) space. (Hint: It is a bit confusing to plot the curve \(b-a=(a+b)^{3}\), since this requires analyzing a cubic. As in Section 3.7, the _parametric form_ of the bifurcation curve comes to the rescue. Show that the bifurcation curve can be expressed as \[a=\tfrac{1}{2}x^{*}\left(1-(x^{*})^{2}\right),\quad\ b=\tfrac{1}{2}x^{*}\left(1+(x^{*})^{2}\right)\] where \(x^{*}>0\) is the \(x\)-coordinate of the fixed point. Then plot the bifurcation curve from these parametric equations. This trick is discussed in Murray (2002).)

#### 8.3.3 (Relaxation limit of a chemical oscillator)
Analyze the model for the chlorine dioxide-iodine-malonic acid oscillator, (8.3.4), (8.3.5), in the limit \(b<<1\). Sketch the limit cycle in the phase plane and estimate its period.

### 8.4 Global Bifurcations of Cycles

#### 8.4.1
Consider the system \(\dot{r}=r(1-r^{2})\), \(\dot{\theta}=\mu-\sin\theta\) for \(\mu\) slightly greater than 1. Let \(x=r\cos\theta\) and \(y=r\sin\theta\). Sketch the waveforms of \(x(t)\) and \(y(t)\).
(These are typical of what one might see experimentally for a system on the verge of an infinite-period bifurcation.)

#### 8.4.2
Discuss the bifurcations of the system \(\dot{r}=r(\mu-\sin r)\), \(\dot{\theta}=1\) as \(\mu\) varies.

#### 8.4.3 (Homoclinic bifurcation)
Using numerical integration, find the value of \(\mu\) at which the system \(\dot{x}=\mu x+y-x^{2}\), \(\dot{y}=-x+\mu y+2x^{2}\) undergoes a homoclinic bifurcation. Sketch the phase portrait just above and below the bifurcation.

#### 8.4.4 (Second-order phase-locked loop)
Using a computer, explore the phase portrait of \(\ddot{\theta}+(1-\mu\cos\theta)\dot{\theta}+\sin\theta=0\) for \(\mu\geq 0\). For some values of \(\mu\), you should find that the system has a stable limit cycle. Classify the bifurcations that create and destroy the cycle as \(\mu\) increases from 0.

Exercises 8.4.5-8.4.11 deal with the _forced Duffing oscillator_ in the limit where the forcing, detuning, damping, and nonlinearity are all weak: \[\ddot{x}+x+\varepsilon(bx^{3}+k\dot{x}-ax-F\cos t)=0,\] where \(0<\varepsilon<<1\), \(b>0\) is the nonlinearity, \(k>0\) is the damping, \(a\) is the detuning, and \(F>0\) is the forcing strength. This system is a small perturbation of a harmonic oscillator, and can therefore be handled with the methods of Section 7.6. We have postponed the problem until now because saddle-node bifurcations of cycles arise in its analysis.

#### 8.4.5 (Averaged equations)
Show that the averaged equations (7.6.53) for the system are \[r^{\prime}=-\tfrac{1}{2}(kr+F\sin\phi),\quad\ \phi^{\prime}=-\tfrac{1}{8}(4a-3br^{2}+\tfrac{4F}{r}\cos\phi),\] where \(x=r\cos(t+\phi)\), \(\dot{x}=-r\sin(t+\phi)\), and prime denotes differentiation with respect to slow time \(T=\varepsilon\,t\), as usual. (If you skipped Section 7.6, accept these equations on faith.)

#### 8.4.6 (Correspondence between averaged and original systems)
Show that fixed points for the averaged system correspond to phase-locked periodic solutions for the original forced oscillator.
Show further that saddle-node bifurcations of fixed points for the averaged system correspond to saddle-node bifurcations of cycles for the oscillator.

#### 8.4.7 (No periodic solutions for averaged system)
Regard \((r,\phi)\) as polar coordinates in the phase plane. Show that the averaged system has no closed orbits. (Hint: Use Dulac's criterion with \(g(r,\phi)=1\). Let \(\mathbf{x}^{\prime} = (r^{\prime},\;r\phi^{\prime})\). Compute \(\nabla \cdot \mathbf{x}^{\prime} = \frac{1}{r}\frac{\partial}{\partial r}(rr^{\prime}) + \frac{1}{r}\frac{\partial}{\partial\phi}(r\phi^{\prime})\) and show that it has one sign.)

#### 8.4.8 (No sources for averaged system)
The result of the previous exercise shows that we only need to study the fixed points of the averaged system to determine its long-term behavior. Explain why the divergence calculation above also implies that the fixed points cannot be sources; only sinks and saddles are possible.

#### 8.4.9 (Resonance curves and cusp catastrophe)
In this exercise you are asked to determine how the equilibrium amplitude of the driven oscillations depends on the other parameters. * Show that the fixed points of the averaged system satisfy \(r^{2}\left[k^{2}+\left(a-\tfrac{3}{4}br^{2}\right)^{2}\right]=F^{2}\). * From now on, assume that \(k\) and \(F\) are fixed. Graph \(r\) vs. \(a\) for the linear oscillator (\(b=0\)). This is the familiar resonance curve. * Graph \(r\) vs. \(a\) for the nonlinear oscillator (\(b\neq 0\)). Show that the curve is single-valued for small nonlinearity, say \(b<b_{c}\), but triple-valued for large nonlinearity (\(b>b_{c}\)), and find an explicit formula for \(b_{c}\). (Thus we obtain the intriguing conclusion that the driven oscillator can have three limit cycles for some values of \(a\) and \(b\)!) * Show that if \(r\) is plotted as a surface above the \((a,b)\) plane, the result is a cusp catastrophe surface (recall Section 3.6).

#### 8.4.10
Now for the hard part: analyze the bifurcations of the averaged system.
* Plot the nullclines \(r^{\prime} = 0\) and \(\phi^{\prime} = 0\) in the phase plane, and study how their intersections change as the detuning \(a\) is increased from negative values to large positive values. * Assuming that \(b>b_{c}\), show that as \(a\) increases, the number of _stable_ fixed points changes from one to two and then back to one again.

#### 8.4.11 (Numerical exploration)
Fix the parameters \(k = 1,\;b = \frac{4}{3},\;F = 2\). * Using numerical integration, plot the phase portrait for the averaged system with \(a\) increasing from negative to positive values. * Show that for \(a=2.8\), there are two stable fixed points. * Go back to the original forced Duffing equation. Numerically integrate it and plot \(x(t)\) as \(a\) increases slowly from \(a=-1\) to \(a=5\), and then decreases slowly back to \(a=-1\). You should see a dramatic hysteresis effect with the limit cycle oscillation suddenly jumping up in amplitude at one value of \(a\), and then back down at another.

#### 8.4.12 (Scaling near a homoclinic bifurcation)
To find how the period of a closed orbit scales as a homoclinic bifurcation is approached, we estimate the time it takes for a trajectory to pass by a saddle point (this time is much longer than all others in the problem). Suppose the system is given locally by \(\dot{x}\approx\lambda_{x}x,\;\;\dot{y}\approx-\lambda_{x}y\). Let a trajectory pass through the point \((\mu,1)\), where \(\mu<<1\) is the distance from the stable manifold. How long does it take until the trajectory has escaped from the saddle, say out to \(x(t)\approx 1\)? (See Gaspard (1990) for a detailed discussion.)

### 8.5 Hysteresis in the Driven Pendulum and Josephson Junction

#### 8.5.1
Show that \([\ln(I-I_{c})]^{-1}\) has infinite derivatives of all orders at \(I_{c}\). (Hint: Consider \(f(I)=(\ln I)^{-1}\) and try to derive a formula for \(f^{(n+1)}(I)\) in terms of \(f^{(n)}(I)\), where \(f^{(n)}(I)\) denotes the \(n\)th derivative of \(f(I)\).)
#### 8.5.2
Consider the driven pendulum \(\phi^{\prime\prime}+\alpha\phi^{\prime}+\sin\phi=I\). By numerical computation of the phase portrait, verify that if \(\alpha\) is fixed and sufficiently small, the system's stable limit cycle is destroyed in a homoclinic bifurcation as \(I\) decreases. Show that if \(\alpha\) is too large, the bifurcation is an infinite-period bifurcation instead.

#### 8.5.3 (Logistic equation with periodically varying carrying capacity)
Consider the logistic equation \(\dot{N}=rN(1-N/K(t))\), where the carrying capacity is positive, smooth, and \(T\)-periodic in \(t\). 1. Using a Poincare map argument like that in the text, show that the system has at least one stable limit cycle of period \(T\), contained in the strip \(K_{\min}\leq N\leq K_{\max}\). 2. Is the cycle necessarily unique?

#### 8.5.4 (Logistic equation with sinusoidal harvesting)
In Exercise 3.7.3 you were asked to consider a simple model of a fishery with constant harvesting. Now consider a generalization in which the harvesting varies periodically in time, perhaps due to daily or seasonal variations. To keep things simple, assume the periodic harvesting is purely sinusoidal (Benardete et al. 2008). Then, if the fish population grows logistically in the absence of harvesting, the model is given in dimensionless form by \(\dot{x}=rx(1-x)-h(1+\alpha\sin t)\). Assume that \(r\), \(h>0\) and \(0<\alpha<1\). 1. Show that if \(h>r/4\) the system has no periodic solutions, even though the fish are being harvested periodically with period \(T=2\pi\). What happens to the fish population in this case? 2. By using a Poincare map argument like that in the text, show that if \(h<\frac{r}{4(1+\alpha)}\), there exists a \(2\pi\)-periodic solution--in fact, a stable limit cycle--in the strip \(1/2<x<1\). Similarly, show there exists an unstable limit cycle in the strip \(0<x<1/2\). Interpret your results biologically. 3. What happens in between cases (a) and (b), i.e., for \(\frac{r}{4(1+\alpha)}<h<\frac{r}{4}\)?
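The Poincare map argument in the harvesting exercise lends itself to a direct numerical experiment. The sketch below uses the illustrative parameter choices \(r=1\), \(h=0.1\), \(\alpha=0.5\) (which satisfy \(h<r/(4(1+\alpha))\), the regime of part 2; they are not from the text): it computes the period-\(2\pi\) Poincare map by integrating across one forcing period, then locates its fixed point in the strip \(1/2<x<1\) by bisection.

```python
import math

# Illustrative Poincare-map computation for the sinusoidally harvested
# logistic equation x' = r x(1-x) - h(1 + alpha sin t).
R, HARV, ALPHA = 1.0, 0.1, 0.5      # illustrative; h < r/(4(1+alpha))
PERIOD = 2 * math.pi

def vf(t, x):
    return R * x * (1 - x) - HARV * (1 + ALPHA * math.sin(t))

def poincare(x, steps=2000):
    """Advance x across one forcing period with RK4: the Poincare map P."""
    dt = PERIOD / steps
    t = 0.0
    for _ in range(steps):
        k1 = vf(t, x)
        k2 = vf(t + 0.5 * dt, x + 0.5 * dt * k1)
        k3 = vf(t + 0.5 * dt, x + 0.5 * dt * k2)
        k4 = vf(t + dt, x + dt * k3)
        x += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return x

# P(x) - x changes sign across the strip 1/2 < x < 1, so bisection
# locates the stable periodic solution predicted there.
lo, hi = 0.5, 1.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if poincare(mid) > mid:
        lo = mid
    else:
        hi = mid
x_star = 0.5 * (lo + hi)
print(x_star)
```

A fixed point of \(P\) is exactly a \(2\pi\)-periodic solution of the flow, so `x_star` marks the stable limit cycle; the same bisection run on the strip \(0<x<1/2\) would locate the unstable one.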
#### 8.5.5 (Driven pendulum with quadratic damping) Consider a pendulum driven by a constant torque and damped by air resistance. In dimensionless form, the governing equation is \[\ddot{\theta}+\alpha\,\dot{\theta}\big{|}\dot{\theta}\big{|}+\sin\theta=F\] where \(\alpha>0\) and \(F>0\) are the dimensionless damping strength and torque, respectively. The new feature here is that we assume the damping is quadratic, rather than linear, in the velocity \(v=\dot{\theta}\). This is more realistic if the damping is primarily due to drag, but the trouble is that the damping becomes nonlinear, which normally would make the analysis harder. But as you'll see, the pleasant surprise here is that quadratic damping actually makes the system easier--in fact, it becomes explicitly solvable! * Find and classify the fixed points in the \((\theta,v)\) phase plane. If you find a center according to the linearization, decide whether this borderline case is truly a nonlinear center, a stable spiral, or an unstable spiral. (Hint: Find a local Liapunov function in the neighborhood of the fixed point.) * For \(F>1\), prove that the system has a stable limit cycle (where we now regard the phase space as a cylinder rather than a plane). Then prove that this limit cycle is unique. Remarkably, exact formulas for the limit cycle and the homoclinic bifurcation curve can be found (Pedersen and Saermark 1973); this is one of the advantages of the quadratically damped case. However, the solution involves some tricky changes of variables. Here's how it works: * In the region \(v>0\), \(\theta(t)\) increases monotonically. Therefore it can be inverted formally to yield \(t(\theta)\). Now regard \(\theta\) as a new independent (time-like) variable, and introduce the new dependent variable \(u=\frac{1}{2}v^{2}\). Use the chain rule (carefully, showing all your steps) to deduce that \(\frac{du}{d\theta}=\ddot{\theta}\). 
* Hence, in the region \(v>0\), the pendulum equation becomes \(\frac{du}{d\theta}+2\alpha u+\sin\theta=F\), which is a linear equation in \(u(\theta)\). Assuming that this equation is correct (even if you were unable to derive it), find an exact formula for the limit cycle when \(F>1\). * Now decrease \(F\) while keeping \(\alpha\) fixed. Show that the limit cycle undergoes a homoclinic bifurcation at some critical value of \(F\), call it \(F=F_{c}(\alpha)\), and give an exact formula for the bifurcation curve \(F_{c}(\alpha)\).

### 8.6 Coupled Oscillators and Quasiperiodicity

#### 8.6.1 ("Oscillator death" and bifurcations on a torus)
In a paper on systems of neural oscillators, Ermentrout and Kopell (1990) illustrated the notion of "oscillator death" with the following model: \[\dot{\theta}_{1}=\omega_{1}+\sin\theta_{1}\cos\theta_{2},\ \ \ \ \dot{\theta}_{2}=\omega_{2}+\sin\theta_{2}\cos\theta_{1},\] where \(\omega_{1}\), \(\omega_{2}\geq 0\). * Sketch all the qualitatively different phase portraits that arise as \(\omega_{1}\), \(\omega_{2}\) vary. * Find the curves in \((\omega_{1},\omega_{2})\) parameter space along which bifurcations occur, and classify the various bifurcations. * Plot the stability diagram in \((\omega_{1},\omega_{2})\) parameter space.

#### 8.6.2
Reconsider the system (8.6.1): \[\dot{\theta}_{1}=\omega_{1}+K_{1}\sin(\theta_{2}-\theta_{1}),\ \ \ \ \dot{\theta}_{2}=\omega_{2}+K_{2}\sin(\theta_{1}-\theta_{2}).\] * Show that the system has no fixed points, given that \(\omega_{1}\), \(\omega_{2}>0\) and \(K_{1}\), \(K_{2}>0\). * Find a conserved quantity for the system. (Hint: Solve for \(\sin(\theta_{2}-\theta_{1})\) in two ways. The existence of a conserved quantity shows that this system is a non-generic flow on the torus; normally there would not be any conserved quantities.) * Suppose that \(K_{1}=K_{2}\).
Show that the system can be nondimensionalized to \[d\theta_{1}/d\tau=1+a\sin(\theta_{2}-\theta_{1}),\ \ \ \ d\theta_{2}/d\tau=\omega+a\sin(\theta_{1}-\theta_{2}).\] * Find the _winding number_ \(\lim\limits_{\tau\to\infty}\theta_{1}(\tau)/\theta_{2}(\tau)\) analytically. (Hint: Evaluate the long-time averages \(\langle d\left(\theta_{1}+\theta_{2}\right)/d\tau\rangle\) and \(\langle d\left(\theta_{1}-\theta_{2}\right)/d\tau\rangle\), where the brackets are defined by \(\langle f\rangle\equiv\lim\limits_{T\to\infty}\frac{1}{T}\int_{0}^{T}f(\tau)\,d\tau\). For another approach, see Guckenheimer and Holmes (1983, p. 299).)

#### 8.6.3 (Irrational flow yields dense orbits)
Consider the flow on the torus given by \(\dot{\theta}_{1}=\omega_{1}\), \(\dot{\theta}_{2}=\omega_{2}\), where \(\omega_{1}/\omega_{2}\) is irrational. Show each trajectory is _dense_; i.e., given any point \(p\) on the torus, any initial condition \(q\), and any \(\varepsilon>0\), there is some \(t<\infty\) such that the trajectory starting at \(q\) passes within a distance \(\varepsilon\) of \(p\).

#### 8.6.4
Consider the system \[\dot{\theta}_{1}=E-\sin\theta_{1}+K\sin(\theta_{2}-\theta_{1}),\ \ \ \ \dot{\theta}_{2}=E+\sin\theta_{2}+K\sin(\theta_{1}-\theta_{2})\] where \(E\), \(K\geq 0\). * Find and classify all the fixed points. * Show that if \(E\) is large enough, the system has periodic solutions on the torus. What type of bifurcation creates the periodic solutions? * Find the bifurcation curve in \((E,K)\) space at which these periodic solutions are created. A generalization of this system to \(N>>1\) phases has been proposed as a model of switching in charge-density waves (Strogatz et al. 1988, 1989).
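The winding number asked for in Exercise 8.6.2 is also easy to estimate numerically and compare against the averaging argument. The sketch below uses the illustrative values \(\omega=2\), \(a=0.3\) (chosen so that \(|1-\omega|>2a\), outside the phase-locked regime) and simply integrates long enough for the ratio \(\theta_{1}(\tau)/\theta_{2}(\tau)\) to settle.

```python
import math

# Illustrative numerical estimate of the winding number for the
# nondimensionalized system of Exercise 8.6.2.  w = 2, a = 0.3 are
# arbitrary choices with |1 - w| > 2a, so the oscillators never lock.
W_FREQ, A = 2.0, 0.3

def vf(t1, t2):
    s = math.sin(t2 - t1)
    return (1 + A * s, W_FREQ - A * s)

t1, t2 = 0.0, 0.0
h = 0.05
for _ in range(100000):             # RK4 out to tau = 5000
    k1 = vf(t1, t2)
    k2 = vf(t1 + 0.5 * h * k1[0], t2 + 0.5 * h * k1[1])
    k3 = vf(t1 + 0.5 * h * k2[0], t2 + 0.5 * h * k2[1])
    k4 = vf(t1 + h * k3[0], t2 + h * k3[1])
    t1 += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    t2 += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
winding = t1 / t2                   # compare with the averaging answer
print(winding)
```

Because the phases grow without bound while their fluctuations stay bounded, the finite-time ratio converges at rate \(O(1/\tau)\), so a modest integration time already gives the winding number to three decimal places.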
#### 8.6.5 (Plotting Lissajous figures)
Using a computer, plot the curve whose parametric equations are \(x(t)=\sin t\), \(y(t)=\sin\omega t\), for the following rational and irrational values of the parameter \(\omega\): (a) \(\omega=3\) (b) \(\omega=\frac{2}{3}\) (c) \(\omega=\frac{5}{3}\) (d) \(\omega=\sqrt{2}\) (e) \(\omega=\pi\) (f) \(\omega=\frac{1}{2}(1+\sqrt{5})\). The resulting curves are called _Lissajous figures_. In the old days they were displayed on oscilloscopes by using two ac signals of different frequencies as inputs.

#### 8.6.6 (Explaining Lissajous figures)
Lissajous figures are one way to visualize the knots and quasiperiodicity discussed in the text. To see this, consider a pair of uncoupled harmonic oscillators described by the four-dimensional system \(\ddot{x}+x=0\), \(\ddot{y}+\omega^{2}y=0\). 1. Show that if \(x=A(t)\sin\theta(t)\), \(y=B(t)\sin\phi(t)\), then \(\dot{A}=\dot{B}=0\) (so \(A\), \(B\) are constants) and \(\dot{\theta}=1\), \(\dot{\phi}=\omega\). 2. Explain why part 1 implies that trajectories are typically confined to two-dimensional tori in a four-dimensional phase space. 3. How are the Lissajous figures related to the trajectories of this system?

#### 8.6.7 (Mechanical example of quasiperiodicity)
The equations \[m\ddot{r}=\frac{h^{2}}{mr^{3}}-k,\quad\dot{\theta}=\frac{h}{mr^{2}}\] govern the motion of a mass \(m\) subject to a central force of constant strength \(k>0\). Here \(r\), \(\theta\) are polar coordinates and \(h>0\) is a constant (the angular momentum of the particle). 1. Show that the system has a solution \(r=r_{0}\), \(\dot{\theta}=\omega_{\theta}\), corresponding to uniform circular motion at a radius \(r_{0}\) and frequency \(\omega_{\theta}\). Find formulas for \(r_{0}\) and \(\omega_{\theta}\). 2. Find the frequency \(\omega_{r}\) of small radial oscillations about the circular orbit. 3.
Show that these small radial oscillations correspond to quasiperiodic motion by calculating the winding number \(\omega_{r}/\omega_{\theta}\). 4. Show by a geometric argument that the motion is either periodic or quasiperiodic for _any_ amplitude of radial oscillation. (To say it in a more interesting way, the motion is never chaotic.) 5. Can you think of a mechanical realization of this system? 

#### 8.6.8 Solve the equations of Exercise 8.6.7 on a computer, and plot the particle's path in the plane with polar coordinates \(r\), \(\theta\). (Japanese tree frogs) Many thanks to Bard Ermentrout for suggesting the following exercise. An isolated male Japanese tree frog will call nearly periodically. When two frogs are placed close together (say, 50 cm apart), they can hear each other calling and tend to adjust their croak rhythms so that they call in alternation, half a cycle apart--a form of phase-locking known as antiphase synchronization. So what happens when three frogs interact? This situation frustrates them; there's no way all three can get half a cycle away from everyone else. Aihara et al. (2011) found experimentally that in this case, the three frogs settle into one of two distinctive patterns (and they occasionally seem to switch between them, probably due to noise in the environment). One stable pattern involves a pair of frogs calling in unison, with the third frog calling approximately half a cycle out of phase from both of them. The other stable pattern has the three frogs maximally out of sync, with each calling one-third of a cycle apart from the other two. Aihara et al. 
(2011) explored a coupled oscillator model of these phenomena, the essence of which is contained in the following systems for two frogs, \[\dot{\theta}_{1} = \omega+H(\theta_{2}-\theta_{1})\] \[\dot{\theta}_{2} = \omega+H(\theta_{1}-\theta_{2}),\] and three frogs, \[\dot{\theta}_{1} = \omega+H(\theta_{2}-\theta_{1})+H(\theta_{3}-\theta_{1})\] \[\dot{\theta}_{2} = \omega+H(\theta_{1}-\theta_{2})+H(\theta_{3}-\theta_{2})\] \[\dot{\theta}_{3} = \omega+H(\theta_{1}-\theta_{3})+H(\theta_{2}-\theta_{3}).\] Here \(\theta_{i}\) denotes the phase of the calling rhythm of frog \(i\), and the function \(H\) quantifies the interaction between any two of them. For simplicity we'll assume all the frogs are identically coupled (same \(H\) for all of them) and have identical natural frequencies \(\omega\). Furthermore, assume that \(H\) is odd, smooth, and \(2\pi\)-periodic. 1. Rewrite the systems for both two and three frogs in terms of the phase differences \(\phi=\theta_{1}-\theta_{2}\) and \(\psi=\theta_{2}-\theta_{3}\). 2. Show that the experimental results for two frogs are consistent with the simplest possible interaction function, \(H(x)=a\sin x\), if the sign of \(a\) is chosen appropriately. But then show that this simple \(H\) cannot account for the three-frog results. 3. Next, consider more complicated interaction functions of the form \(H(x)=a\sin x+b\sin 2x\). For the three-frog model, use a computer to plot the phase portraits in the \((\phi,\psi)\) plane for various values of \(a\) and \(b\). Show that for suitable choices of \(a\) and \(b\), you can explain all the experimental results for two and three frogs. 
That is, you can find a domain in the \((a,b)\) parameter space for which the system has: i) a stable antiphase solution for the two-frog model; ii) a stable phase-locked solution for the three-frog model, in which frogs 1 and 2 are in sync and approximately \(\pi\) out of phase from frog 3; iii) a co-existing stable phase-locked solution with the three frogs one-third of a cycle apart. 4. Show numerically that adding a small even periodic component to \(H\) does not alter these results qualitatively. Caveat: The three-frog model studied here is more symmetrical than that considered by Aihara et al. (2011). They assumed unequal coupling strengths because in their experiments one frog was positioned midway between the other two. The frogs at either end therefore interacted less strongly with each other than with the frog in the middle. 

### Poincare Maps 

8.7.1 Use partial fractions to evaluate the integral \(\int_{r_{0}}^{r_{1}}\frac{dr}{r(1-r^{2})}\) that arises in Example 8.7.1, and show that \(r_{1}=\left[1+e^{-4\pi}\left(r_{0}^{-2}-1\right)\right]^{-1/2}\). Then confirm that \(P^{\prime}(r^{\ast})=e^{-4\pi}\), as expected from Example 8.7.3. 8.7.2 Consider the vector field on the cylinder given by \(\dot{\theta}=1\), \(\dot{y}=ay\). Define an appropriate Poincare map and find a formula for it. Show that the system has a periodic orbit. Classify its stability for all real values of \(a\). 

#### 8.7.3 (Overdamped system forced by a square wave) Consider an overdamped linear oscillator (or an \(RC\)-circuit) forced by a square wave. The system can be nondimensionalized to \(\dot{x}+x=F(t)\), where \(F(t)\) is a square wave of period \(T\). To be more specific, suppose \[F(t)=\begin{cases}+A,&0<t<T/2\\ -A,&T/2<t<T\end{cases}\] for \(t\in(0,\,T)\), and then \(F(t)\) is periodically repeated for all other \(t\). The goal is to show that all trajectories of the system approach a unique periodic solution. We could try to solve for \(x(t)\) but that gets a little messy. 
Here's an approach based on the Poincare map--the idea is to "strobe" the system once per cycle. a) Let \(x(0)=x_{0}\). Show that \(x(T)=x_{0}e^{-T}-A(1-e^{-T/2})^{2}\). b) Show that the system has a unique periodic solution, and that it satisfies \(x_{0}=-A\tanh(T/4)\). c) Interpret the limits of \(x(T)\) as \(T\to 0\) and \(T\to\infty\). Explain why they're plausible. d) Let \(x_{1}=x(T)\), and define the Poincare map \(P\) by \(x_{1}=P(x_{0})\). More generally, \(x_{n+1}=P(x_{n})\). Plot the graph of \(P\). e) Using a cobweb picture, show that \(P\) has a globally stable fixed point. (Hence the original system eventually settles into a periodic response to the forcing.) A Poincare map for the system \(\dot{x}+x=A\sin\omega t\) was shown in Figure 8.7.3, for a particular choice of parameters. Given that \(\omega>0\), can you deduce the sign of \(A\)? If not, explain why not. (Another driven overdamped system) By considering an appropriate Poincare map, prove that the system \(\dot{\theta}+\sin\theta=\sin t\) has at least two periodic solutions. Can you say anything about their stability? (Hint: Regard the system as a vector field on a cylinder: \(\dot{t}=1,\ \dot{\theta}=\sin t-\sin\theta\). Sketch the nullclines and thereby infer the shape of certain key trajectories that can be used to bound the periodic solutions. For instance, sketch the trajectory that passes through \((t,\theta)=(\frac{\pi}{2},\frac{\pi}{2})\).) Give a mechanical interpretation of the system \(\dot{\theta}+\sin\theta=\sin t\) considered in the previous exercise. (Computer work) Plot a computer-generated phase portrait of the system \(\dot{t}=1,\ \dot{\theta}=\sin t-\sin\theta\). Check that your results agree with your answer to Exercise 8.7.5. Consider the system \(\dot{x}+x=F(t)\), where \(F(t)\) is a smooth, \(T\)-periodic function. Is it true that the system necessarily has a stable \(T\)-periodic solution \(x(t)\)? If so, prove it; if not, find an \(F\) that provides a counterexample. 
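For the square-wave exercise above, the Poincare map is available in closed form, so the cobweb of part (e) can be checked numerically: iterate \(P\) and compare the limit with the fixed point \(-A\tanh(T/4)\) from part (b). The parameter values \(A=1\), \(T=2\) and the starting point are arbitrary choices, not values from the exercise.

```python
import math

A, T = 1.0, 2.0  # arbitrary forcing amplitude and period

def P(x0):
    """Stroboscopic (Poincare) map for x' + x = square wave of amplitude A, period T:
    P(x0) = x0*exp(-T) - A*(1 - exp(-T/2))**2, the formula from part (a)."""
    return x0 * math.exp(-T) - A * (1.0 - math.exp(-T / 2.0)) ** 2

x = 5.0                      # arbitrary initial condition
for _ in range(50):          # iterate the map, as in the cobweb picture
    x = P(x)

x_star = -A * math.tanh(T / 4.0)  # fixed point predicted in part (b)
```

Since \(P\) is affine with slope \(e^{-T}<1\), the iterates contract geometrically onto \(x^{*}\), which is the numerical content of "globally stable fixed point."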
Consider the vector field given in polar coordinates by \(\dot{r}=r-r^{2},\ \dot{\theta}=1\). a) Compute the Poincare map from \(S\) to itself, where \(S\) is the positive \(x\)-axis. b) Show that the system has a unique periodic orbit and classify its stability. c) Find the characteristic multiplier for the periodic orbit. Explain how to find Floquet multipliers numerically, starting from perturbations along the coordinate directions. (Reversibility and the in-phase periodic state of a Josephson array) Use a reversibility argument to prove that the in-phase periodic state of (8.7.1) is not attracting, even if the nonlinear terms are kept. (Globally coupled oscillators) Consider the following system of \(N\) identical oscillators: \[\dot{\theta}_{i}=f(\theta_{i})+\frac{K}{N}\sum_{j=1}^{N}f(\theta_{j}),\ \ \mbox{for}\ i=1,\ldots,N,\]where \(K>0\) and \(f(\theta)\) is smooth and \(2\pi\)-periodic. Assume that \(f(\theta)>0\) for all \(\theta\) so that the in-phase solution is periodic. By calculating the linearized Poincare map as in Example 8.7.4, show that all the characteristic multipliers equal \(+1\). Thus the neutral stability found in Example 8.7.4 holds for a broader class of oscillator arrays. In particular, the reversibility of the system is not essential. This example is from Tsang et al. (1991).
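The characteristic multiplier asked for in the polar-coordinate exercise above can also be found numerically, in the spirit of the Floquet-multiplier exercise: integrate the radial equation over one period \(2\pi\) from two nearby starting radii and difference the results. For \(\dot{r}=r-r^{2}\), linearizing about the periodic orbit \(r=1\) gives \(\dot{\eta}=-\eta\), so the multiplier should be \(e^{-2\pi}\). The step size and perturbation below are arbitrary choices.

```python
import math

def flow_r(r0, t_final=2 * math.pi, dt=1e-3):
    """Integrate dr/dt = r - r**2 by classical RK4; returns r(t_final).
    Since theta' = 1, this is the Poincare return map P(r0) over one period."""
    f = lambda r: r - r * r
    r = r0
    n = int(round(t_final / dt))
    h = t_final / n
    for _ in range(n):
        k1 = f(r)
        k2 = f(r + 0.5 * h * k1)
        k3 = f(r + 0.5 * h * k2)
        k4 = f(r + h * k3)
        r += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return r

eps = 1e-4
# Central difference approximates the multiplier P'(1) for the orbit r = 1
mult = (flow_r(1.0 + eps) - flow_r(1.0 - eps)) / (2 * eps)
```

The same perturb-and-difference recipe, applied along each coordinate direction, is exactly the numerical method for Floquet multipliers requested in the exercise.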
## 9 Lorenz equations ### 9.0 Introduction We begin our study of chaos with the _Lorenz equations_ \[\dot{x} =\sigma(y-x)\] \[\dot{y} =rx-y-xz\] \[\dot{z} =xy-bz.\] Here \(\sigma\), \(r\), \(b>0\) are parameters. Ed Lorenz (1963) derived this three-dimensional system from a drastically simplified model of convection rolls in the atmosphere. The same equations also arise in models of lasers and dynamos, and as we'll see in Section 9.1, they _exactly_ describe the motion of a certain waterwheel (you might like to build one yourself). Lorenz discovered that this simple-looking deterministic system could have extremely erratic dynamics: over a wide range of parameters, the solutions oscillate irregularly, never exactly repeating but always remaining in a bounded region of phase space. When he plotted the trajectories in three dimensions, he discovered that they settled onto a complicated set, now called a strange attractor. Unlike stable fixed points and limit cycles, the strange attractor is not a point or a curve or even a surface--it's a fractal, with a fractional dimension between 2 and 3. In this chapter we'll follow the beautiful chain of reasoning that led Lorenz to his discoveries. Our goal is to get a feel for his strange attractor and the chaotic motion that occurs on it. Lorenz's paper (Lorenz 1963) is deep, prescient, and surprisingly readable--look it up! It is also reprinted in Cvitanovic (1989a) and Hao (1990). For a captivating history of Lorenz's work and that of other chaotic heroes, see Gleick (1987). ### 9.1 A Chaotic Waterwheel A neat mechanical model of the Lorenz equations was invented by Willem Malkus and Lou Howard at MIT in the 1970s. The simplest version is a toy waterwheel with leaky paper cups suspended from its rim (Figure 9.1.1). Water is poured in steadily from the top. If the flow rate is too slow, the top cups never fill up enough to overcome friction, so the wheel remains motionless. 
For faster inflow, the top cup gets heavy enough to start the wheel turning (Figure 9.1.1a). Eventually the wheel settles into a steady rotation in one direction or the other (Figure 9.1.1b). By symmetry, rotation in either direction is equally possible; the outcome depends on the initial conditions. By increasing the flow rate still further, we can destabilize the steady rotation. Then the motion becomes chaotic: the wheel rotates one way for a few turns, then some of the cups get too full and the wheel doesn't have enough inertia to carry them over the top, so the wheel slows down and may even reverse its direction (Figure 9.1.1c). Then it spins the other way for a while. The wheel keeps changing direction erratically. Spectators have been known to place bets (small ones, of course) on which way it will be turning after a minute. Figure 9.1.2 shows Malkus's more sophisticated set-up that is used nowadays at MIT. The wheel sits on a table top. It rotates in a plane that is tilted slightly from the horizontal (unlike an ordinary waterwheel, which rotates in a vertical plane). Water is pumped up into an overhanging manifold and then sprayed out through dozens of small nozzles. The nozzles direct the water into separate chambers around the rim of the wheel. The chambers are transparent, and the water has food coloring in it, so the distribution of water around the rim is easy to see. The water leaks out through a small hole at the bottom of each chamber, and then collects underneath the wheel, where it is pumped back up through the nozzles. This system provides a steady input of water. The parameters can be changed in two ways. A brake on the wheel can be adjusted to add more or less friction. The tilt of the wheel can be varied by turning a screw that props the wheel up; this alters the effective strength of gravity. 
A sensor measures the wheel's angular velocity \(\omega(t)\), and sends the data to a strip chart recorder which then plots \(\omega(t)\) in real time. Figure 9.1.3 shows a record of \(\omega(t)\) when the wheel is rotating chaotically. Notice once again the irregular sequence of reversals. We want to explain where this chaos comes from, and to understand the bifurcations that cause the wheel to go from static equilibrium to steady rotation to irregular reversals. 

### Notation 

Here are the coordinates, variables and parameters that describe the wheel's motion (Figure 9.1.4): 

\(\theta=\) angle in the lab frame (_not_ the frame attached to the wheel); \(\theta=0\leftrightarrow\) 12:00 in the lab frame 

\(\omega(t)=\) angular velocity of the wheel (increases counterclockwise, as does \(\theta\)) 

\(m(\theta,t)=\) mass distribution of water around the rim of the wheel, defined such that the mass between \(\theta_{1}\) and \(\theta_{2}\) is \(M(t)=\int_{\theta_{1}}^{\theta_{2}}m(\theta,t)\,d\theta\) 

\(Q(\theta)=\) inflow (rate at which water is pumped in by the nozzles above position \(\theta\)) 

\(r=\) radius of the wheel 

\(K=\) leakage rate 

\(v=\) rotational damping rate 

\(I=\) moment of inertia of the wheel 

The unknowns are \(m(\theta,t)\) and \(\omega(t)\). Our first task is to derive equations governing their evolution. 

### Conservation of Mass 

To find the equation for conservation of mass, we use a standard argument. You may have encountered it if you've studied fluids, electrostatics, or chemical engineering. Consider any sector \([\theta_{1},\theta_{2}]\) fixed in space (Figure 9.1.5). The mass in that sector is \(M(t)=\int_{\theta_{1}}^{\theta_{2}}m(\theta,t)\,d\theta\). After an infinitesimal time \(\Delta t\), what is the change in mass \(\Delta M\)? There are four contributions: 1. The mass pumped in by the nozzles is \(\left[\int_{\theta_{1}}^{\theta_{2}}Q\,d\theta\right]\Delta t\). 2. 
The mass that leaks out is \(\left[-\int_{\theta_{1}}^{\theta_{2}}Km\ d\theta\right]\Delta t\). Notice the factor of \(m\) in the integral; it implies that leakage occurs at a rate proportional to the mass of water in the chamber--more water implies a larger pressure head and therefore faster leakage. Although this is plausible physically, the fluid mechanics of leakage is complicated, and other rules are conceivable as well. The real justification for the rule above is that it agrees with direct measurements on the waterwheel itself, to a good approximation. (For experts on fluids: to achieve this linear relation between outflow and pressure head, Malkus attached thin tubes to the holes at the bottom of each chamber. Then the outflow is essentially Poiseuille flow in a pipe.) 3. As the wheel rotates, it carries a new block of water into our observation sector. That block has mass \(m(\theta_{1})\,\omega\Delta t\), because it has angular width \(\omega\Delta t\) (Figure 9.1.5), and \(m(\theta_{1})\) is its mass per unit angle. 4. Similarly, the mass carried out of the sector is \(-m(\theta_{2})\,\omega\Delta t\). Hence, \[\Delta M=\Delta t\left[\int_{\theta_{1}}^{\theta_{2}}Q\,d\theta-\int_{\theta_{1}}^{\theta_{2}}Km\,d\theta\right]+m(\theta_{1})\,\omega\Delta t-m(\theta_{2})\,\omega\Delta t.\] (1) To convert (1) to a differential equation, we put the transport terms inside the integral, using \(m(\theta_{1})-m(\theta_{2})=-\int_{\theta_{1}}^{\theta_{2}}\frac{\partial m}{\partial\theta}\,d\theta\). Then we divide by \(\Delta t\) and let \(\Delta t\to 0\). 
The result is \[\frac{dM}{dt}=\int_{\theta_{1}}^{\theta_{2}}\left(Q-Km-\omega\frac{\partial m}{\partial\theta}\right)d\theta.\] But by definition of \(M\), \[\frac{dM}{dt}=\int_{\theta_{1}}^{\theta_{2}}\frac{\partial m}{\partial t}\,d\theta.\] Hence \[\int_{\theta_{1}}^{\theta_{2}}\frac{\partial m}{\partial t}\,d\theta=\int_{\theta_{1}}^{\theta_{2}}\left(Q-Km-\omega\frac{\partial m}{\partial\theta}\right)d\theta.\] Since this holds for _all_ \(\theta_{1}\) and \(\theta_{2}\), we must have \[\frac{\partial m}{\partial t}=Q-Km-\omega\frac{\partial m}{\partial\theta}.\] (2) Equation (2) is often called the _continuity equation_. Notice that it is a _partial_ differential equation, unlike all the others considered so far in this book. We'll worry about how to analyze it later; we still need an equation that tells us how \(\omega(t)\) evolves. 

### Torque Balance 

The rotation of the wheel is governed by Newton's law \(F=ma\), expressed as a balance between the applied torques and the rate of change of angular momentum. Let \(I\) denote the moment of inertia of the wheel. Note that in general \(I\) depends on \(t\), because the distribution of water does. But this complication disappears if we wait long enough: as \(t\rightarrow\infty\), one can show that \(I(t)\rightarrow\text{constant}\) (Exercise 9.1.1). Hence, after the transients decay, the equation of motion is \[I\dot{\omega}=\text{damping torque}+\text{gravitational torque}.\] There are two sources of damping: viscous damping due to the heavy oil in the brake, and a more subtle "inertial" damping caused by a spin-up effect--the water enters the wheel at zero angular velocity but is spun up to angular velocity \(\omega\) before it leaks out. Both of these effects produce torques proportional to \(\omega\), so we have \[\text{damping torque}=-v\omega,\] where \(v>0\). The negative sign means that the damping opposes the motion. 
The gravitational torque is like that of an inverted pendulum, since water is pumped in at the top of the wheel (Figure 9.1.6). In an infinitesimal sector \(d\theta\), the mass \(dM=m\,d\theta\). This mass element produces a torque \[d\tau=(dM)gr\sin\theta=mgr\sin\theta\;d\theta.\] To check that the sign is correct, observe that when \(\sin\theta>0\) the torque tends to _increase_ \(\omega\), just as in an inverted pendulum. Here \(g\) is the effective gravitational constant, given by \(g=g_{0}\sin\alpha\) where \(g_{0}\) is the usual gravitational constant and \(\alpha\) is the tilt of the wheel from horizontal (Figure 9.1.7). Integration over all mass elements yields \[\text{gravitational torque}\ =gr\int_{0}^{2\pi}m(\theta,t)\sin\theta\,d\theta.\] Putting it all together, we obtain the torque balance equation \[I\dot{\omega}=-v\omega+gr\int_{0}^{2\pi}m(\theta,t)\sin\theta\,d\theta. \tag{3}\] This is called an _integro-differential equation_ because it involves both derivatives and integrals. 

### Amplitude Equations 

Equations (2) and (3) completely specify the evolution of the system. Given the current values of \(m(\theta,t)\) and \(\omega(t)\), (2) tells us how to update \(m\) and (3) tells us how to update \(\omega\). So no further equations are needed. If (2) and (3) truly describe the waterwheel's behavior, there must be some pretty complicated motions hidden in there. How can we extract them? The equations appear much more intimidating than anything we've studied so far. A miracle occurs if we use Fourier analysis to rewrite the system. Watch! Since \(m(\theta,t)\) is periodic in \(\theta\), we can write it as a Fourier series \[m(\theta,t)=\sum_{n=0}^{\infty}\,\left[a_{n}(t)\sin n\theta+b_{n}(t)\cos n\theta\right]. 
\tag{4}\] By substituting this expression into (2) and (3), we'll obtain a set of _amplitude equations_, ordinary differential equations for the amplitudes \(a_{n}\), \(b_{n}\) of the different _harmonics_ or _modes_. But first we must also write the inflow as a Fourier series: \[Q(\theta)=\sum_{n=0}^{\infty}q_{n}\cos n\theta. \tag{5}\] There are no \(\sin n\theta\) terms in the series because water is added _symmetrically_ at the top of the wheel; the same inflow occurs at \(\theta\) and \(-\theta\). (In this respect, the waterwheel is unlike an ordinary, real-world waterwheel where asymmetry is used to drive the wheel in the same direction at all times.) Substituting the series for \(m\) and \(Q\) into (2), we get \[\frac{\partial}{\partial t}\!\left[\sum_{n=0}^{\infty}a_{n}(t)\sin n\theta+b_{n}(t)\cos n\theta\right]=-\omega\frac{\partial}{\partial\theta}\!\left[\sum_{n=0}^{\infty}a_{n}(t)\sin n\theta+b_{n}(t)\cos n\theta\right]+\sum_{n=0}^{\infty}q_{n}\cos n\theta-K\!\left[\sum_{n=0}^{\infty}a_{n}(t)\sin n\theta+b_{n}(t)\cos n\theta\right]\!.\] Now carry out the differentiations on both sides, and collect terms. By orthogonality of the functions \(\sin n\theta\), \(\cos n\theta\), we can equate the coefficients of each harmonic separately. For instance, the coefficient of \(\sin n\theta\) on the left-hand side is \(\dot{a}_{n}\), and on the right it is \(n\omega b_{n}-Ka_{n}\). Hence \[\dot{a}_{n}=n\omega b_{n}-Ka_{n}. \tag{6}\] Similarly, matching coefficients of \(\cos n\theta\) yields \[\dot{b}_{n}=-n\omega a_{n}-Kb_{n}+q_{n}. \tag{7}\] Both (6) and (7) hold for all \(n=0,\,1,\,\ldots\). Next we rewrite (3) in terms of Fourier series. 
_Get ready for the miracle._ When we substitute (4) into (3), only one term survives in the integral, by orthogonality: \[I\dot{\omega} =-v\omega+gr\int_{0}^{2\pi}\biggl{[}\sum_{n=0}^{\infty}a_{n}(t)\sin n\theta+b_{n}(t)\cos n\theta\biggr{]}\sin\theta\,d\theta\] \[=-v\omega+gr\int_{0}^{2\pi}a_{1}\sin^{2}\theta\,d\theta \tag{8}\] \[=-v\omega+\pi gra_{1}.\] Hence, only \(a_{1}\) enters the differential equation for \(\dot{\omega}\). But then (6) and (7) imply that \(a_{1}\), \(b_{1}\), and \(\omega\) form a _closed system_--these three variables are decoupled from all the other \(a_{n}\), \(b_{n}\), \(n\neq 1\)! The resulting equations are \[\dot{a}_{1} =\omega b_{1}-Ka_{1}\] \[\dot{b}_{1} =-\omega a_{1}-Kb_{1}+q_{1} \tag{9}\] \[\dot{\omega} =(-v\omega+\pi gra_{1})/I.\] (If you're curious about the higher modes \(a_{n}\), \(b_{n}\), \(n\neq 1\), see Exercise 9.1.2.) We've simplified our problem tremendously: the original pair of integro-partial differential equations (2), (3) has boiled down to the three-dimensional system (9). It turns out that (9) is equivalent to the Lorenz equations! (See Exercise 9.1.3.) Before we turn to that more famous system, let's try to understand a little about (9). No one has ever _fully_ understood it--its behavior is fantastically complex--but we can say something. 

### Fixed Points 

We begin by finding the fixed points of (9). For notational convenience, the usual asterisks will be omitted in the intermediate steps. Setting all the derivatives equal to zero yields \[a_{1}=\omega b_{1}/K \tag{10}\] \[\omega a_{1}=q_{1}-Kb_{1} \tag{11}\] \[a_{1}=v\omega/\pi gr. \tag{12}\] Now solve for \(b_{1}\) by eliminating \(a_{1}\) from (10) and (11): \[b_{1}=\frac{Kq_{1}}{\omega^{2}+K^{2}}. \tag{13}\] Equating (10) and (12) yields \(\omega b_{1}/K=v\omega/\pi gr\). Hence \(\omega=0\) or \[b_{1}=Kv/\pi gr. \tag{14}\] Thus, there are two kinds of fixed point to consider: 1. If \(\omega=0\), then \(a_{1}=0\) and \(b_{1}=q_{1}/K\). 
This fixed point \[(a_{1}^{*},b_{1}^{*},\omega^{*})=(0,\ q_{1}/K,\ 0)\] (15) corresponds to a state of _no rotation_; the wheel is at rest, with inflow balanced by leakage. We're not saying that this state is stable, just that it exists; stability calculations will come later. 2. If \(\omega\neq 0\), then (13) and (14) imply \(b_{1}=Kq_{1}/(\omega^{2}+K^{2})=Kv/\pi gr\). Since \(K\neq 0\), we get \(q_{1}/(\omega^{2}+K^{2})=v/\pi gr\). Hence \[(\omega^{*})^{2}=\frac{\pi grq_{1}}{v}-K^{2}.\] (16) If the right-hand side of (16) is positive, there are two solutions, \(\pm\omega^{*}\), corresponding to _steady rotation_ in either direction. These solutions exist if and only if \[\frac{\pi grq_{1}}{K^{2}v}>1.\] (17) The dimensionless group in (17) is called the _Rayleigh number_. It measures how hard we're driving the system, relative to the dissipation. More precisely, the ratio in (17) expresses a competition between \(g\) and \(q_{1}\) (gravity and inflow, which tend to spin the wheel), and \(K\) and \(v\) (leakage and damping, which tend to stop the wheel). So it makes sense that steady rotation is possible only if the Rayleigh number is large enough. The Rayleigh number appears in other parts of fluid mechanics, notably convection, in which a layer of fluid is heated from below. There it is proportional to the difference in temperature from bottom to top. For small temperature gradients, heat is conducted vertically but the fluid remains motionless. When the Rayleigh number increases past a critical value, an instability occurs--the hot fluid is less dense and begins to rise, while the cold fluid on top begins to sink. This sets up a pattern of convection rolls, completely analogous to the steady rotation of our waterwheel. With further increases of the Rayleigh number, the rolls become wavy and eventually chaotic. 
The analogy to the waterwheel breaks down at still higher Rayleigh numbers, when turbulence develops and the convective motion becomes complex in space as well as time (Drazin and Reid 1981, Berge et al. 1984, Manneville 1990). In contrast, the waterwheel settles into a pendulum-like pattern of reversals, turning once to the left, then back to the right, and so on indefinitely (see Example 9.5.2). 

### 9.2 Simple Properties of the Lorenz Equations 

In this section we'll follow in Lorenz's footsteps. He took the analysis as far as possible using standard techniques, but at a certain stage he found himself confronted with what seemed like a paradox. One by one he had eliminated all the known possibilities for the long-term behavior of his system: he showed that in a certain range of parameters, there could be no stable fixed points and no stable limit cycles, yet he also proved that all trajectories remain confined to a bounded region and are eventually attracted to a set of zero volume. What could that set be? And how do the trajectories move on it? As we'll see in the next section, that set is the strange attractor, and the motion on it is chaotic. But first we want to see how Lorenz ruled out the more traditional possibilities. As Sherlock Holmes said in _The Sign of Four_, "When you have eliminated the impossible, whatever remains, however improbable, must be the truth." The Lorenz equations are \[\begin{array}{l}\dot{x}=\sigma(y-x)\\ \dot{y}=rx-y-xz\\ \dot{z}=xy-bz.\end{array} \tag{1}\] Here \(\sigma\), \(r\), \(b>0\) are parameters: \(\sigma\) is the _Prandtl number_, \(r\) is the _Rayleigh number_, and \(b\) has no name. (In the convection problem it is related to the aspect ratio of the rolls.) 

### Nonlinearity 

The system (1) has only two nonlinearities, the quadratic terms \(xy\) and \(xz\). This should remind you of the waterwheel equations (9.1.9), which had two nonlinearities, \(\omega a_{1}\) and \(\omega b_{1}\). 
See Exercise 9.1.3 for the change of variables that transforms the waterwheel equations into the Lorenz equations. 

### Symmetry 

There is an important _symmetry_ in the Lorenz equations. If we replace \((x,y)\rightarrow(-x,-y)\) in (1), the equations stay the same. Hence, if \((x(t),y(t),z(t))\) is a solution, so is \((-x(t),-y(t),z(t))\). In other words, all solutions are either symmetric themselves, or have a symmetric partner. 

### Volume Contraction 

The Lorenz system is _dissipative_: volumes in phase space contract under the flow. To see this, we must first ask: how do volumes evolve? Let's answer the question in general, for any three-dimensional system \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\). Pick an arbitrary closed surface \(S(t)\) of volume \(V(t)\) in phase space. Think of the points on \(S\) as initial conditions for trajectories, and let them evolve for an infinitesimal time \(dt\). Then \(S\) evolves into a new surface \(S(t+dt)\); what is its volume \(V(t+dt)\)? Figure 9.2.1 shows a side view of the volume. Let \(\mathbf{n}\) denote the outward normal on \(S\). Since \(\mathbf{f}\) is the instantaneous velocity of the points, \(\mathbf{f}\cdot\mathbf{n}\) is the outward normal component of velocity. Therefore in time \(dt\) a patch of area \(dA\) sweeps out a volume \((\mathbf{f}\cdot\mathbf{n}\,dt)\,dA\), as shown in Figure 9.2.2. Hence \[V(t+dt)=V(t)+\text{(volume swept out by tiny patches of surface, integrated over all patches),}\] so we obtain \[V(t+dt)=V(t)+\int_{S}(\mathbf{f}\cdot\mathbf{n}\,dt)\,dA.\] Hence \[\dot{V}=\frac{V(t+dt)-V(t)}{dt}=\int_{S}\mathbf{f}\cdot\mathbf{n}\,dA.\] Finally, we rewrite the integral above by the divergence theorem, and get \[\dot{V}=\int_{V}\nabla\cdot\mathbf{f}\,dV. 
\tag{2}\] For the Lorenz system, \[\nabla\cdot\mathbf{f} =\frac{\partial}{\partial x}[\sigma(y-x)]+\frac{\partial}{\partial y}[rx-y-xz]+\frac{\partial}{\partial z}[xy-bz]=-\sigma-1-b<0.\] Since the divergence is constant, (2) reduces to \(\dot{V}=-(\sigma+1+b)V\), which has solution \(V(t)=V(0)e^{-(\sigma+1+b)t}\). Thus _volumes in phase space shrink exponentially fast_. Hence, if we start with an enormous solid blob of initial conditions, it eventually shrinks to a limiting set of zero volume, like a balloon with the air being sucked out of it. All trajectories starting in the blob end up somewhere in this limiting set; later we'll see it consists of fixed points, limit cycles, or for some parameter values, a strange attractor. Volume contraction imposes strong constraints on the possible solutions of the Lorenz equations, as illustrated by the next two examples. **Example 9.2.1:** Show that there are no quasiperiodic solutions of the Lorenz equations. _Solution:_ We give a proof by contradiction. If there were a quasiperiodic solution, it would have to lie on the surface of a torus, as discussed in Section 8.6, and this torus would be _invariant_ under the flow. Hence the volume inside the torus would be constant in time. But this contradicts the fact that all volumes shrink exponentially fast. **Example 9.2.2:** Show that it is impossible for the Lorenz system to have either repelling fixed points or repelling closed orbits. (By _repelling_, we mean that _all_ trajectories starting near the fixed point or closed orbit are driven away from it.) _Solution:_ Repellers are incompatible with volume contraction because they are _sources_ of volume, in the following sense. Suppose we encase a repeller with a closed surface of initial conditions nearby in phase space. (Specifically, pick a small sphere around a fixed point, or a thin tube around a closed orbit.) A short time later, the surface will have expanded as the corresponding trajectories are driven away. 
Thus the volume inside the surface would increase. This contradicts the fact that all volumes contract. By process of elimination, we conclude that all fixed points must be sinks or saddles, and closed orbits (if they exist) must be stable or saddle-like. For the case of fixed points, we now verify these general conclusions explicitly. 

### Fixed Points 

Like the waterwheel, the Lorenz system (1) has two types of fixed points. The origin \((x^{*},y^{*},z^{*})=(0,0,0)\) is a fixed point for _all_ values of the parameters. It is like the motionless state of the waterwheel. For \(r>1\), there is also a symmetric pair of fixed points \(x^{*}=y^{*}=\pm\sqrt{b(r-1)}\), \(z^{*}=r-1\). Lorenz called them \(C^{+}\) and \(C^{-}\). They represent left- or right-turning convection rolls (analogous to the steady rotations of the waterwheel). As \(r\to 1^{+}\), \(C^{+}\) and \(C^{-}\) coalesce with the origin in a _pitchfork_ bifurcation. 

### Linear Stability of the Origin 

The linearization at the origin is \(\dot{x}=\sigma\big{(}y-x\big{)}\), \(\dot{y}=rx-y\), \(\dot{z}=-bz\), obtained by omitting the \(xy\) and \(xz\) nonlinearities in (1). The equation for \(z\) is decoupled and shows that \(z(t)\to 0\) exponentially fast. The other two directions are governed by the system \[\begin{pmatrix}\dot{x}\\ \dot{y}\end{pmatrix}=\begin{pmatrix}-\sigma&\sigma\\ r&-1\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix},\] with trace \(\tau=-\sigma-1<0\) and determinant \(\Delta=\sigma(1-r)\). If \(r>1\), the origin is a saddle point because \(\Delta<0\). Note that this is _a new type of saddle_ for us, since the full system is three-dimensional. Including the decaying \(z\)-direction, the saddle has one outgoing and two incoming directions. If \(r<1\), all directions are incoming and the origin is a sink. Specifically, since \(\tau^{2}-4\Delta=(\sigma+1)^{2}-4\sigma(1-r)=(\sigma-1)^{2}+4\sigma r>0\), the origin is a stable node for \(r<1\). 
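The trace-determinant reasoning above is easy to confirm directly: since \(\tau^{2}-4\Delta>0\), the \(2\times 2\) block has real eigenvalues \(\lambda_{\pm}=\big[\tau\pm\sqrt{\tau^{2}-4\Delta}\,\big]/2\). The sketch below evaluates them; \(\sigma=10\) and the two sample values of \(r\) are illustrative choices (any \(\sigma>0\) gives the same qualitative picture).

```python
import math

def origin_eigenvalues(sigma, r):
    """Real eigenvalues of [[-sigma, sigma], [r, -1]],
    the x-y block of the Lorenz linearization at the origin."""
    tau = -sigma - 1.0                     # trace
    delta = sigma * (1.0 - r)              # determinant
    disc = math.sqrt(tau * tau - 4.0 * delta)  # = sqrt((sigma-1)^2 + 4*sigma*r) > 0
    return (tau + disc) / 2.0, (tau - disc) / 2.0

lam_stable = origin_eigenvalues(10.0, 0.5)   # r < 1: expect both negative (stable node)
lam_saddle = origin_eigenvalues(10.0, 28.0)  # r > 1: expect one positive (saddle)
```

For \(r<1\) both eigenvalues are negative, and for \(r>1\) exactly one is positive, matching the sink/saddle dichotomy in the text.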
### Global Stability of the Origin

Actually, for \(r<1\), we can show that _every_ trajectory approaches the origin as \(t\to\infty\); the origin is _globally stable_. Hence there can be no limit cycles or chaos for \(r<1\). The proof involves the construction of a _Liapunov function_, a smooth, positive definite function that decreases along trajectories. As discussed in Section 7.2, a Liapunov function is a generalization of an energy function for a classical mechanical system--in the presence of friction or other dissipation, the energy decreases monotonically. There is no systematic way to concoct Liapunov functions, but often it is wise to try expressions involving sums of squares. Here, consider \(V(x,y,z)=\frac{1}{\sigma}x^{2}+y^{2}+z^{2}\). The surfaces of constant \(V\) are concentric ellipsoids about the origin (Figure 9.2.3). The idea is to show that if \(r<1\) and \((x,y,z)\neq(0,0,0)\), then \(\dot{V}<0\) along trajectories. This would imply that the trajectory keeps moving to lower \(V\), and hence penetrates smaller and smaller ellipsoids as \(t\to\infty\). But \(V\) is bounded below by \(0\), so \(V(\mathbf{x}(t))\to 0\) and hence \(\mathbf{x}(t)\to\mathbf{0}\), as desired. Now calculate: \[\begin{array}{l}\frac{1}{2}\dot{V}=\frac{1}{\sigma}x\dot{x}+y\dot{y}+z\dot{z}\\ =(xy-x^{2})+(rxy-y^{2}-xyz)+(xyz-bz^{2})\\ =(r+1)xy-x^{2}-y^{2}-bz^{2}.\end{array}\] Completing the square in the first two terms gives \[\frac{1}{2}\dot{V}=-\big{[}x-\frac{r+1}{2}y\big{]}^{2}-\big{[}1-\big{(}\frac{r+1}{2}\big{)}^{2}\big{]}y^{2}-bz^{2}.\] We claim that the right-hand side is strictly negative if \(r<1\) and \((x,y,z)\neq(0,0,0)\). It is certainly not positive, since it is a negative sum of squares. But could \(\dot{V}=0\)? That would require each of the terms on the right to vanish separately. Hence \(y=0\), \(z=0\), from the second two terms on the right-hand side.
(Because of the assumption \(r<1\), the coefficient of \(y^{2}\) is nonzero.) Thus the first term reduces to \(-x^{2}\), which vanishes only if \(x=0\). The upshot is that \(\dot{V}=0\) implies \((x,y,z)=(0,0,0)\). Otherwise \(\dot{V}<0\). Hence the claim is established, and therefore the origin is globally stable for \(r<1\).

### Stability of \(C^{+}\) and \(C^{-}\)

Now suppose \(r>1\), so that \(C^{+}\) and \(C^{-}\) exist. The calculation of their stability is left as Exercise 9.2.1. It turns out that they are linearly stable for \[1<r<r_{H}=\frac{\sigma(\sigma+b+3)}{\sigma-b-1}\] (assuming also that \(\sigma-b-1>0\)). We use a subscript \(H\) because \(C^{+}\) and \(C^{-}\) lose stability in a Hopf bifurcation at \(r=r_{H}\). What happens immediately after the bifurcation, for \(r\) slightly greater than \(r_{H}\)? You might suppose that \(C^{+}\) and \(C^{-}\) would each be surrounded by a small stable limit cycle. That would occur if the Hopf bifurcation were supercritical. But actually it's _subcritical_--the limit cycles are _unstable_ and exist only for \(r<r_{H}\). This requires a difficult calculation; see Marsden and McCracken (1976) or Drazin (1992, Q8.2 on p. 277). Here's the intuitive picture. For \(r<r_{H}\) the phase portrait near \(C^{+}\) is shown schematically in Figure 9.2.4. The fixed point is stable. It is encircled by a _saddle cycle_, a new type of unstable limit cycle that is possible only in phase spaces of three or more dimensions. The cycle has a two-dimensional unstable manifold (the sheet in Figure 9.2.4), and a two-dimensional stable manifold (not shown). As \(r\to r_{H}\) from below, the cycle shrinks down around the fixed point. At the Hopf bifurcation, the fixed point absorbs the saddle cycle and changes into a saddle point. For \(r>r_{H}\) there are no attractors in the neighborhood. So for \(r>r_{H}\) trajectories must fly away to a distant attractor. But what can it be?
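As a quick sanity check on this formula, a few lines of Python (using Lorenz's classic values \(\sigma=10\), \(b=\frac{8}{3}\), which appear later in the chapter) reproduce the threshold \(r_{H}\approx 24.74\):

```python
# Hopf threshold for C+ and C-: r_H = sigma*(sigma + b + 3)/(sigma - b - 1),
# valid only when sigma - b - 1 > 0.  Lorenz's classic parameter values:
sigma, b = 10.0, 8.0 / 3.0
assert sigma - b - 1 > 0                  # the formula applies
r_H = sigma * (sigma + b + 3) / (sigma - b - 1)
assert abs(r_H - 24.7368) < 1e-3          # r_H = 470/19, about 24.74
```

With these values the fixed points \(C^{\pm}\) are linearly stable on the whole range \(1<r<470/19\).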
A partial bifurcation diagram for the system, based on the results so far, shows no hint of any stable objects for \(r>r_{H}\) (Figure 9.2.5). Could it be that all trajectories are repelled out to infinity? No; we can prove that all trajectories eventually enter and remain in a certain large ellipsoid (Exercise 9.2.2). Could there be some stable limit cycles that we're unaware of? Possibly, but Lorenz gave a persuasive argument that for \(r\) slightly greater than \(r_{H}\), any limit cycles would have to be _unstable_ (see Section 9.4). So the trajectories must have a bizarre kind of long-term behavior. Like balls in a pinball machine, they are repelled from one unstable object after another. At the same time, they are confined to a bounded set of zero volume, yet they manage to move on this set forever without intersecting themselves or each other. In the next section we'll see how the trajectories get out of this conundrum.

### 9.3 Chaos on a Strange Attractor

Lorenz used numerical integration to see what the trajectories would do in the long run. He studied the particular case \(\sigma=10,\ b=\frac{8}{3},\ r=28\). This value of \(r\) is just past the Hopf bifurcation value \(r_{H}=\sigma(\sigma+b+3)/(\sigma-b-1)\approx 24.74\), so he knew that something strange had to occur. Of course, strange things could occur for another reason--the electromechanical computers of those days were unreliable and difficult to use, so Lorenz had to interpret his numerical results with caution. He began integrating from the initial condition (0, 1, 0), close to the saddle point at the origin. Figure 9.3.1 plots \(y(t)\) for the resulting solution. After an initial transient, the solution settles into an irregular oscillation that persists as \(t\rightarrow\infty\), but never repeats exactly. The motion is _aperiodic_. Lorenz discovered that a wonderful structure emerges if the solution is visualized as a trajectory in phase space.
For instance, when \(x(t)\) is plotted against \(z(t)\), a butterfly pattern appears (Figure 9.3.2). The trajectory appears to cross itself repeatedly, but that's just an artifact of projecting the three-dimensional trajectory onto a two-dimensional plane. In three dimensions no self-intersections occur. Let's try to understand Figure 9.3.2 in detail. The trajectory starts near the origin, then swings to the right, and then dives into the center of a spiral on the left. After a very slow spiral outward, the trajectory shoots back over to the right side, spirals around a few times, shoots over to the left, spirals around, and so on indefinitely. The number of circuits made on either side varies unpredictably from one cycle to the next. In fact, the sequence of the number of circuits has many of the characteristics of a _random_ sequence. Physically, the switches between left and right correspond to the irregular reversals of the waterwheel that we observed in Section 9.1. When the trajectory is viewed in all three dimensions, rather than in a two-dimensional projection, it appears to settle onto an exquisitely thin set that looks like a pair of butterfly wings. Figure 9.3.3 shows a schematic of this _strange attractor_ (a term coined by Ruelle and Takens (1971)). This limiting set is the attracting set of zero volume whose existence was deduced in Section 9.2. What is the geometrical structure of the strange attractor? Figure 9.3.3 suggests that it is a pair of surfaces that merge into one in the lower portion of Figure 9.3.3. But how can this be, when the uniqueness theorem (Section 6.2) tells us that trajectories can't cross or merge? Lorenz (1963) gives a lovely explanation--the two surfaces only _appear_ to merge. The illusion is caused by the strong volume contraction of the flow, and insufficient numerical resolution.
But watch where that idea leads him: It would seem, then, that the two surfaces merely appear to merge, and remain distinct surfaces. Following these surfaces along a path parallel to a trajectory, and circling \(C^{+}\) and \(C^{-}\), we see that each surface is really a pair of surfaces, so that, where they appear to merge, there are really four surfaces. Continuing this process for another circuit, we see that there are really eight surfaces, etc., and we finally conclude that there is an infinite complex of surfaces, each extremely close to one or the other of two merging surfaces.

Figure 9.3.3: Abraham and Shaw [1983], p. 88

Today this "infinite complex of surfaces" would be called a fractal. It is a set of points with zero volume but infinite surface area. In fact, numerical experiments suggest that it has a dimension of about 2.05! (See Example 11.5.1.) The amazing geometric properties of fractals and strange attractors will be discussed in detail in Chapters 11 and 12. But first we want to examine chaos a bit more closely.

### Exponential Divergence of Nearby Trajectories

The motion on the attractor exhibits _sensitive dependence on initial conditions._ This means that two trajectories starting very close together will rapidly diverge from each other, and thereafter have totally different futures. Color Plate 2 vividly illustrates this divergence by plotting the evolution of a small red blob of 10,000 nearby initial conditions. The blob eventually spreads over the whole attractor. Hence nearby trajectories can end up anywhere on the attractor! The practical implication is that long-term prediction becomes impossible in a system like this, where small uncertainties are amplified enormously fast. Let's make these ideas more precise. Suppose that we let transients decay, so that a trajectory is "on" the attractor.
Suppose \(\mathbf{x}(t)\) is a point on the attractor at time \(t\), and consider a nearby point, say \(\mathbf{x}(t)+\delta(t)\), where \(\delta\) is a tiny separation vector of initial length \(\|\delta_{0}\|=10^{-15}\), say (Figure 9.3.4). Now watch how \(\delta(t)\) grows. In numerical studies of the Lorenz attractor, one finds that \[\|\delta(t)\|\sim\|\delta_{0}\|e^{\lambda t}\] where \(\lambda\approx 0.9\). Hence _neighboring trajectories separate exponentially fast._ Equivalently, if we plot \(\ln\|\delta(t)\|\) versus \(t\), we find a curve that is close to a straight line with a positive slope of \(\lambda\) (Figure 9.3.5). We need to add some qualifications:

1. The curve is never exactly straight. It has wiggles because the strength of the exponential divergence varies somewhat along the attractor.
2. The exponential divergence must stop when the separation is comparable to the "diameter" of the attractor--the trajectories obviously can't get any farther apart than that. This explains the leveling off or _saturation_ of the curve in Figure 9.3.5.
3. The number \(\lambda\) is often called the _Liapunov exponent_, although this is a sloppy use of the term, for two reasons: First, there are actually \(n\) different _Liapunov exponents_ for an \(n\)-dimensional system, defined as follows. Consider the evolution of an infinitesimal sphere of perturbed initial conditions. During its evolution, the sphere will become distorted into an infinitesimal ellipsoid. Let \(\delta_{k}(t)\), \(k=1,\ldots,n\), denote the length of the \(k\)th principal axis of the ellipsoid. Then \(\delta_{k}(t)\sim\delta_{k}(0)e^{\lambda_{k}t}\), where the \(\lambda_{k}\) are the Liapunov exponents. For large \(t\), the diameter of the ellipsoid is controlled by the most positive \(\lambda_{k}\). Thus our \(\lambda\) is actually the _largest_ Liapunov exponent. Second, \(\lambda\) depends (slightly) on which trajectory we study.
We should average over many different points on the same trajectory to get the true value of \(\lambda\). When a system has a positive Liapunov exponent, there is a _time horizon_ beyond which prediction breaks down, as shown schematically in Figure 9.3.6. (See Lighthill 1986 for a nice discussion.) Suppose we measure the initial conditions of an experimental system very accurately. Of course, no measurement is perfect--there is always some error \(\|\delta_{0}\|\) between our estimate and the true initial state. After a time \(t\), the discrepancy grows to \(\|\delta(t)\|\sim\|\delta_{0}\|e^{\lambda t}\). Let \(a\) be a measure of our tolerance, i.e., if a prediction is within \(a\) of the true state, we consider it acceptable. Then our prediction becomes intolerable when \(\|\delta(t)\|\geq a\); this occurs after a time \[t_{\text{horizon}}\sim O\left(\frac{1}{\lambda}\ln\frac{a}{\|\delta_{0}\|}\right).\] The logarithmic dependence on \(\|\delta_{0}\|\) is what hurts us. No matter how hard we work to reduce the initial measurement error, we can't predict longer than a few multiples of \(1/\lambda\). The next example is intended to give you a quantitative feel for this effect. **Example 9.3.1:** Suppose we're trying to predict the future state of a chaotic system to within a tolerance of \(a=10^{-3}\). Given that our estimate of the initial state is uncertain to within \(\|\delta_{0}\|=10^{-7}\), for about how long can we predict the state of the system, while remaining within the tolerance? Now suppose we buy the finest instrumentation, recruit the best graduate students, etc., and somehow manage to measure the initial state a _million_ times better, i.e., we improve our initial error to \(\|\delta_{0}\|=10^{-13}\). How much longer can we predict?
_Solution:_ The original prediction has \[t_{\text{horizon}}\approx\frac{1}{\lambda}\ln\frac{10^{-3}}{10^{-7}}=\frac{1}{\lambda}\ln(10^{4})=\frac{4\ln 10}{\lambda}.\] The improved prediction has \[t_{\text{horizon}}\approx\frac{1}{\lambda}\ln\frac{10^{-3}}{10^{-13}}=\frac{1}{\lambda}\ln(10^{10})=\frac{10\ln 10}{\lambda}.\] Thus, after a millionfold improvement in our initial uncertainty, we can predict only \(10/4=2.5\) times longer! Such calculations demonstrate the futility of trying to predict the detailed long-term behavior of a chaotic system. Lorenz suggested that this is what makes long-term weather prediction so difficult.

### Defining Chaos

No definition of the term _chaos_ is universally accepted yet, but almost everyone would agree on the three ingredients used in the following working definition: _Chaos is aperiodic long-term behavior in a deterministic system that exhibits sensitive dependence on initial conditions._

1. "Aperiodic long-term behavior" means that there are trajectories which do not settle down to fixed points, periodic orbits, or quasiperiodic orbits as \(t\to\infty\). For practical reasons, we should require that such trajectories are not too rare. For instance, we could insist that there be an open set of initial conditions leading to aperiodic trajectories, or perhaps that such trajectories should occur with nonzero probability, given a random initial condition.
2. "Deterministic" means that the system has no random or noisy inputs or parameters. The irregular behavior arises from the system's nonlinearity, rather than from noisy driving forces.
3. "Sensitive dependence on initial conditions" means that nearby trajectories separate exponentially fast, i.e., the system has a positive Liapunov exponent.

**Example 9.3.2:** Some people think that chaos is just a fancy word for instability. For instance, the system \(\dot{x}=x\) is deterministic and shows exponential separation of nearby trajectories.
Should we call this system chaotic? _Solution:_ No. Trajectories are repelled to infinity, and never return. So infinity acts like an attracting _fixed point_. Chaotic behavior should be aperiodic, and that excludes fixed points as well as periodic behavior.

### Defining Attractor and Strange Attractor

The term _attractor_ is also difficult to define in a rigorous way. We want a definition that is broad enough to include all the natural candidates, but restrictive enough to exclude the imposters. There is still disagreement about what the exact definition should be. See Guckenheimer and Holmes (1983, p. 256), Eckmann and Ruelle (1985), and Milnor (1985) for discussions of the subtleties involved. Loosely speaking, an attractor is a set to which all neighboring trajectories converge. Stable fixed points and stable limit cycles are examples. More precisely, we define an _attractor_ to be a closed set \(A\) with the following properties:

1. \(A\) is an _invariant set:_ any trajectory \(\mathbf{x}(t)\) that starts in \(A\) stays in \(A\) for all time.
2. _A attracts an open set of initial conditions:_ there is an open set \(U\) containing \(A\) such that if \(\mathbf{x}(0)\in U\), then the distance from \(\mathbf{x}(t)\) to \(A\) tends to zero as \(t\to\infty\). This means that \(A\) attracts all trajectories that start sufficiently close to it. The largest such \(U\) is called the _basin of attraction_ of \(A\).
3. \(A\) is _minimal:_ there is no proper subset of \(A\) that satisfies conditions 1 and 2.

**Example 9.3.3:** Consider the system \(\dot{x}=x-x^{3}\), \(\dot{y}=-y\). Let \(I\) denote the interval \(-1\leq x\leq 1\), \(y=0\). Is \(I\) an invariant set? Does it attract an open set of initial conditions? Is it an attractor? _Solution:_ The phase portrait is shown in Figure 9.3.7. There are stable fixed points at the endpoints \((\pm 1,0)\) of \(I\) and a saddle point at the origin.
Figure 9.3.7 shows that \(I\) is an invariant set; any trajectory that starts in \(I\) stays in \(I\) forever. (In fact the whole _x_-axis is an invariant set, since if \(y(0)=0\), then \(y(t)=0\) for all \(t\).) So condition 1 is satisfied. Moreover, \(I\) certainly attracts an open set of initial conditions--it attracts _all_ trajectories in the \(xy\) plane. So condition 2 is also satisfied. But \(I\) is _not_ an attractor because it is not minimal. The stable fixed points \((\pm 1,0)\) are proper subsets of \(I\) that also satisfy properties 1 and 2. These points are the only attractors for the system. There is an important moral to Example 9.3.3. Even if a certain set attracts all trajectories, it may fail to be an attractor because it may not be minimal--it may contain one or more smaller attractors. The same could be true for the Lorenz equations. Although all trajectories are attracted to a bounded set of zero volume, that set is not necessarily an attractor, since it might not be minimal. Doubts about this delicate issue lingered for many years, but were eventually laid to rest in 1999, as we'll discuss in Section 9.4. Finally, we define a _strange attractor_ to be an attractor that exhibits sensitive dependence on initial conditions. Strange attractors were originally called strange because they are often fractal sets. Nowadays this geometric property is regarded as less important than the dynamical property of sensitive dependence on initial conditions. The terms _chaotic attractor_ and _fractal attractor_ are used when one wishes to emphasize one or the other of those aspects.

### 9.4 Lorenz Map

Lorenz (1963) found a beautiful way to analyze the dynamics on his strange attractor. He directs our attention to a particular view of the attractor (Figure 9.4.1), and then he writes: the trajectory apparently leaves one spiral only after exceeding some critical distance from the center.
Moreover, the extent to which this distance is exceeded appears to determine the point at which the next spiral is entered; this in turn seems to determine the number of circuits to be executed before changing spirals again. It therefore seems that some single feature of a given circuit should predict the same feature of the following circuit. The "single feature" that he focuses on is \(z_{n}\), the \(n\)th local maximum of \(z(t)\) (Figure 9.4.2). Lorenz's idea is that \(z_{n}\) should predict \(z_{n+1}\). To check this, he numerically integrated the equations for a long time, then measured the local maxima of \(z(t)\), and finally plotted \(z_{n+1}\) vs. \(z_{n}\). As shown in Figure 9.4.3, _the data from the chaotic time series appear to fall neatly on a curve_--there is almost no "thickness" to the graph! By this ingenious trick, Lorenz was able to extract order from chaos. The function \(z_{n+1}=f(z_{n})\) shown in Figure 9.4.3 is now called the _Lorenz map_. It tells us a lot about the dynamics on the attractor: given \(z_{0}\), we can predict \(z_{1}\) by \(z_{1}=f(z_{0})\), and then use that information to predict \(z_{2}=f(z_{1})\), and so on, bootstrapping our way forward in time by iteration. The analysis of this iterated map is going to lead us to a striking conclusion, but first we should make a few clarifications. First, the graph in Figure 9.4.3 is not actually a curve. It _does_ have some thickness. So strictly speaking, \(f(z)\) is not a well-defined function, because there can be more than one output \(z_{n+1}\) for a given input \(z_{n}\). On the other hand, the thickness is so small, and there is so much to be gained by treating the graph as a curve, that we will simply make this approximation, keeping in mind that the subsequent analysis is plausible but not rigorous. Second, the Lorenz map may remind you of a Poincare map (Section 8.7).
In both cases we're trying to simplify the analysis of a differential equation by reducing it to an iterated map of some kind. But there's an important distinction: To construct a Poincare map for a three-dimensional flow, we compute a trajectory's successive intersections with a two-dimensional surface. The Poincare map takes a point on that surface, specified by _two_ coordinates, and then tells us how those two coordinates change after the first return to the surface. The Lorenz map is different because it characterizes the trajectory by only _one_ number, not two. This simpler approach works only if the attractor is very "flat," i.e., close to two-dimensional, as the Lorenz attractor is.

### Ruling Out Stable Limit Cycles

How do we know that the Lorenz attractor is not just a stable limit cycle in disguise? Playing devil's advocate, a skeptic might say, "Sure, the trajectories don't ever seem to repeat, but maybe you haven't integrated long enough. Eventually the trajectories _will_ settle down into a periodic behavior--it just happens that the period is incredibly long, much longer than you've tried in your computer. Prove me wrong." Although he couldn't come up with a rigorous refutation, Lorenz was able to give a plausible counterargument that stable limit cycles do not, in fact, occur for the parameter values he studied. His argument goes like this: The key observation is that the graph in Figure 9.4.3 satisfies \[|f^{\prime}(z)|>1 \tag{1}\] everywhere. This property ultimately implies that if any limit cycles exist, they are necessarily _unstable_. To see why, we start by analyzing the fixed points of the map \(f\). These are points \(z^{*}\) such that \(f(z^{*})=z^{*}\), in which case \(z_{n}=z_{n+1}=z_{n+2}=\ldots\). Figure 9.4.3 shows that there is one fixed point, where the \(45^{\circ}\) diagonal intersects the graph. It represents a closed orbit that looks like that shown in Figure 9.4.4.
To show that this closed orbit is unstable, consider a slightly perturbed trajectory that has \(z_{n}=z^{*}+\eta_{n}\), where \(\eta_{n}\) is small. After linearization as usual, we find \(\eta_{n+1}\approx f^{\prime}(z^{*})\eta_{n}\). Since \(|f^{\prime}(z^{*})|>1\), by the key property (1), we get \[|\eta_{n+1}|>|\eta_{n}|.\] Hence the deviation \(\eta_{n}\) _grows_ with each iteration, and so the original closed orbit is unstable. Now we generalize the argument slightly to show that _all_ closed orbits are unstable. **Example 9.4.1:** Given the Lorenz map approximation \(z_{n+1}=f(z_{n})\), with \(|f^{\prime}(z)|>1\) for all \(z\), show that _all_ closed orbits are unstable. _Solution:_ Think about the sequence \(\{z_{n}\}\) corresponding to an arbitrary closed orbit. It might be a complicated sequence, but since we know that the orbit eventually closes, the sequence must eventually repeat. Hence \(z_{n+p}=z_{n}\), for some integer \(p\geq 1\). (Here \(p\) is the _period_ of the sequence, and \(z_{n}\) is a _period-\(p\) point._) Now to prove that the corresponding closed orbit is unstable, consider the fate of a small deviation \(\eta_{n}\), and look at it after \(p\) iterations, when the cycle is complete. We'll show that \(|\eta_{n+p}|>|\eta_{n}|\), which implies that the deviation has grown and the closed orbit is unstable. To estimate \(\eta_{n+p}\), go one step at a time. After one iteration, \(\eta_{n+1}\approx f^{\prime}(z_{n})\eta_{n}\), by linearization about \(z_{n}\). Similarly, after two iterations, \[\eta_{n+2} \approx f^{\prime}(z_{n+1})\eta_{n+1}\] \[\approx f^{\prime}(z_{n+1})\big{[}f^{\prime}(z_{n})\eta_{n}\big{]}\] \[= \big{[}f^{\prime}(z_{n+1})f^{\prime}(z_{n})\big{]}\eta_{n}.\] Hence after \(p\) iterations, \[\eta_{n+p}\approx\left[\prod_{k=0}^{p-1}f^{\prime}(z_{n+k})\right]\eta_{n}.
\tag{2}\] In (2), each of the factors in the product has absolute value greater than 1, because \(|f^{\prime}(z)|>1\) for all \(z\). Hence \(|\eta_{n+p}|>|\eta_{n}|\), which proves that the closed orbit is unstable. Still, since the Lorenz map is not a well-defined function (because, as we've seen, its graph has some thickness to it), this sort of argument wouldn't convince our hypothetical skeptic. The matter was finally laid to rest in 1999, when a graduate student named Warwick Tucker proved that the Lorenz equations do, in fact, have a strange attractor (Tucker 1999, 2002). See Stewart (2000) and Viana (2000) for readable accounts of this milestone. Why does Tucker's proof matter? Because it dispels any lingering concerns that our simulations are deceiving us. Those concerns are serious and justified. After all, how sure can we be of the trajectories we see in the computer, when any little error in numerical integration is bound to grow exponentially fast? Tucker's theorem reassures us that, despite these inevitable numerical errors, the strange attractor and the chaotic motion that we see on it are genuine properties of the Lorenz equations themselves. ### Exploring Parameter Space So far we have concentrated on the particular parameter values \(\sigma=10,\ b=\frac{8}{3}\), \(r=28\), as in Lorenz (1963). What happens if we change the parameters? It's like a walk through the jungle--one can find exotic limit cycles tied in knots, pairs of limit cycles linked through each other, intermittent chaos, noisy periodicity, as well as strange attractors (Sparrow 1982, Jackson 1990). You should do some exploring on your own, perhaps starting with some of the exercises. There is a vast three-dimensional parameter space to be explored, and much remains to be discovered. To simplify matters, many investigators have kept \(\sigma=10\) and \(\ b=\frac{8}{3}\) while varying \(r\). In this section we give a glimpse of some of the phenomena observed in numerical experiments. 
See Sparrow (1982) for the definitive treatment. The behavior for small values of \(r\) is summarized in Figure 9.5.1. Much of this picture is familiar. The origin is globally stable for \(r<1\). At \(r=1\) the origin loses stability by a supercritical pitchfork bifurcation, and a symmetric pair of attracting fixed points is born (in our schematic, only one of the pair is shown). At \(r_{H}=24.74\) the fixed points lose stability by absorbing an unstable limit cycle in a subcritical Hopf bifurcation. Now for the new results. As we decrease \(r\) from \(r_{H}\), the unstable limit cycles expand and pass precariously close to the saddle point at the origin. At \(r\approx 13.926\) the cycles touch the saddle point and become homoclinic orbits; hence we have a _homoclinic bifurcation_. (See Section 8.4 for the much simpler homoclinic bifurcations that occur in two-dimensional systems.) Below \(r=13.926\) there are no limit cycles. Viewed in the other direction, we could say that a pair of unstable limit cycles are created as \(r\) increases through \(r=13.926\). This homoclinic bifurcation has many ramifications for the dynamics, but its analysis is too advanced for us--see Sparrow's (1982) discussion of "homoclinic explosions." The main conclusion is that an amazingly complicated invariant set is born at \(r=13.926\), along with the unstable limit cycles. This set is a thicket of infinitely many saddle-cycles and aperiodic orbits. It is not an attractor and is not observable directly, but it generates sensitive dependence on initial conditions in its neighborhood. Trajectories can get hung up near this set, somewhat like wandering in a maze. Then they rattle around chaotically for a while, but eventually escape and settle down to \(C^{+}\) or \(C^{-}\). The time spent wandering near the set gets longer and longer as \(r\) increases. Finally, at \(r=24.06\) the time spent wandering becomes infinite and the set becomes a strange attractor (Yorke and Yorke 1979). 
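Sensitive dependence of this kind is easy to observe in simulation. The sketch below is a rough illustration, not from the text: it uses fixed-step RK4 with arbitrarily chosen step size and run times, at the standard parameters \(\sigma=10\), \(b=\frac{8}{3}\), \(r=28\). It starts two trajectories \(10^{-9}\) apart on the attractor and confirms that their separation grows by several orders of magnitude, consistent with \(\lambda\approx 0.9\) from Section 9.3.

```python
# Two nearby trajectories on the Lorenz attractor diverge exponentially.
def lorenz(s, sigma=10.0, b=8.0/3.0, r=28.0):
    x, y, z = s
    return (sigma * (y - x), r * x - y - x * z, x * y - b * z)

def integrate(s, dt, t_end):
    # Classical fixed-step fourth-order Runge-Kutta.
    for _ in range(int(round(t_end / dt))):
        k1 = lorenz(s)
        k2 = lorenz(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
        k3 = lorenz(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
        k4 = lorenz(tuple(si + dt * ki for si, ki in zip(s, k3)))
        s = tuple(si + dt * (p + 2*q + 2*u + v) / 6.0
                  for si, p, q, u, v in zip(s, k1, k2, k3, k4))
    return s

on_attractor = integrate((0.0, 1.0, 0.0), 0.001, 30.0)   # let transients decay
nearby = (on_attractor[0], on_attractor[1], on_attractor[2] + 1e-9)
a = integrate(on_attractor, 0.001, 15.0)
b = integrate(nearby, 0.001, 15.0)
sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
# With growth roughly e^(0.9 * 15) ~ 7e5, the 1e-9 gap should exceed 1e-6,
# while staying below the attractor's diameter.
assert 1e-6 < sep < 100.0
```

The upper bound in the final assertion reflects the saturation effect noted earlier: separations cannot exceed the size of the attractor itself.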
**Example 9.5.1:** Show numerically that the Lorenz equations can exhibit _transient chaos_ when \(r=21\) (with \(\sigma=10\) and \(b=\frac{8}{3}\) as usual). _Solution:_ After experimenting with a few different initial conditions, it is easy to find solutions like that shown in Figure 9.5.2. At first the trajectory seems to be tracing out a strange attractor, but eventually it stays on the right and spirals down toward the stable fixed point \(C^{+}\). (Recall that both \(C^{+}\) and \(C^{-}\) are still stable at \(r=21\).) The time series of \(y\) vs. \(t\) shows the same result: an initially erratic solution ultimately damps down to equilibrium (Figure 9.5.3). Other names used for transient chaos are _metastable chaos_ (Kaplan and Yorke 1979) or _pre-turbulence_ (Yorke and Yorke 1979, Sparrow 1982). By our definition, the dynamics in Example 9.5.1 are not "chaotic," because the long-term behavior is not aperiodic. On the other hand, the dynamics do exhibit sensitive dependence on initial conditions--if we had chosen a slightly different initial condition, the trajectory could easily have ended up at \(C^{-}\) instead of \(C^{+}\). Thus the system's behavior is unpredictable, at least for certain initial conditions. Transient chaos shows that a deterministic system can be unpredictable, even if its final states are very simple. In particular, you don't need strange attractors to generate effectively random behavior. Of course, this is familiar from everyday experience--many games of "chance" used in gambling are essentially demonstrations of transient chaos. For instance, think about rolling dice. A crazily-rolling die always stops in one of six stable equilibrium positions. The problem with predicting the outcome is that the final position depends sensitively on the initial orientation and velocity (assuming the initial velocity is large enough).
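The settling behavior described in Example 9.5.1 can be reproduced in a few lines. The sketch below uses assumptions not taken from the text (an arbitrary initial condition, fixed-step RK4, and a generous integration time): it integrates the Lorenz equations at \(r=21\) and checks that the trajectory ends up at one of the stable fixed points \(C^{\pm}\), where \(x^{*}=y^{*}=\pm\sqrt{b(r-1)}\) and \(z^{*}=r-1=20\).

```python
# Transient chaos at r = 21: erratic wandering, then capture by C+ or C-.
def lorenz(s, sigma=10.0, b=8.0/3.0, r=21.0):
    x, y, z = s
    return (sigma * (y - x), r * x - y - x * z, x * y - b * z)

def rk4_step(s, dt):
    k1 = lorenz(s)
    k2 = lorenz(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = lorenz(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = lorenz(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt * (p + 2*q + 2*u + v) / 6.0
                 for si, p, q, u, v in zip(s, k1, k2, k3, k4))

s = (1.0, 1.0, 1.0)
for _ in range(200_000):          # integrate out to t = 2000 with dt = 0.01
    s = rk4_step(s, 0.01)

# The trajectory should have spiraled into C+ or C-:
x_star = (8.0 / 3.0 * 20.0) ** 0.5    # sqrt(b(r-1))
assert abs(s[2] - 20.0) < 0.1
assert abs(abs(s[0]) - x_star) < 0.1
```

Which of \(C^{+}\) or \(C^{-}\) captures the trajectory depends sensitively on the initial condition (and even on the integrator's step size), which is exactly the unpredictability discussed above; checking \(|x|\) rather than \(x\) makes the test indifferent to that choice.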
Before we leave the regime of small \(r\), we note one other interesting implication of Figure 9.5.1: for \(24.06<r<24.74\), there are _two_ types of attractors: fixed points and a strange attractor. This coexistence means that we can have hysteresis between chaos and equilibrium by varying \(r\) slowly back and forth past these two endpoints (Exercise 9.5.4). It also means that a large enough perturbation can knock a steadily rotating waterwheel into permanent chaos; this is reminiscent (in spirit, though not detail) of fluid flows that mysteriously become turbulent even though the basic laminar flow is still linearly stable (Drazin and Reid 1981). The next example shows that the dynamics become simple again when \(r\) is sufficiently large. **Example 9.5.2:** Describe the long-term dynamics for large values of \(r\), for \(\sigma=10,\ b=\frac{8}{3}\). Interpret the results in terms of the motion of the waterwheel of Section 9.1. _Solution:_ Numerical simulations indicate that the system has a globally attracting limit cycle for all \(r>313\) (Sparrow 1982). In Figures 9.5.4 and 9.5.5 we plot a typical solution for \(r=350\); note the approach to the limit cycle. This solution predicts that the waterwheel should ultimately rock back and forth like a pendulum, turning once to the right, then back to the left, and so on. This is observed experimentally. In the limit \(r\to\infty\) one can obtain many analytical results about the Lorenz equations. For instance, Robbins (1979) used perturbation methods to characterize the limit cycle at large \(r\). For the first steps in her calculation, see Exercise 9.5.5. For more details, see Chapter 7 in Sparrow (1982). The story is much more complicated for \(r\) between 28 and 313. For most values of \(r\) one finds chaos, but there are also small windows of periodic behavior interspersed. The three largest windows are \(99.524\ldots<r<100.795\ldots\); \(145<r<166\); and \(r>214.4\).
The alternating pattern of chaotic and periodic regimes resembles that seen in the logistic map (Chapter 10), and so we will defer further discussion until then.

### 9.6 Using Chaos to Send Secret Messages

One of the most exciting recent developments in nonlinear dynamics is the realization that chaos can be _useful_. Normally one thinks of chaos as a fascinating curiosity at best, and a nuisance at worst, something to be avoided or engineered away. But since about 1990, people have found ways to exploit chaos to do some marvelous and practical things. For an introduction to this subject, see Vohra et al. (1992).

One application involves "private communications." Suppose you want to send a secret message to a friend or business partner. Naturally you should use a code, so that even if an enemy is eavesdropping, he will have trouble making sense of the message. This is an old problem--people have been making (and breaking) codes for as long as there have been secrets worth keeping.

Kevin Cuomo and Alan Oppenheim (1992, 1993) implemented a new approach to this problem, building on Pecora and Carroll's (1990) discovery of _synchronized chaos_. Here's the strategy: When you transmit the message to your friend, you also "mask" it with much louder chaos. An outside listener only hears the chaos, which sounds like meaningless noise. But now suppose that your friend has a magic receiver that perfectly reproduces the chaos--then he can subtract off the chaotic mask and listen to the message!

### Cuomo's Demonstration

Kevin Cuomo was a student in my course on nonlinear dynamics, and at the end of the semester he treated our class to a live demonstration of his approach. First he showed us how to make the chaotic mask, using an electronic implementation of the Lorenz equations (Figure 9.6.1). The circuit involves resistors, capacitors, operational amplifiers, and analog multiplier chips.
The voltages \(u\), \(v\), \(w\) at three different points in the circuit are proportional to Lorenz's \(x\), \(y\), \(z\). Thus the circuit acts like an analog computer for the Lorenz equations. Oscilloscope traces of \(u(t)\) vs. \(w(t)\), for example, confirmed that the circuit was following the familiar Lorenz attractor. Then, by hooking up the circuit to a loudspeaker, Cuomo enabled us to _hear_ the chaos--it sounds like static on the radio.

The hard part is to make a receiver that can synchronize perfectly to the chaotic transmitter. In Cuomo's set-up, the receiver is an identical Lorenz circuit, driven in a certain clever way by the transmitter. We'll get into the details later, but for now let's content ourselves with the experimental fact that synchronized chaos does occur. Figure 9.6.2 plots the receiver variables \(u_{r}(t)\) and \(v_{r}(t)\) against their transmitter counterparts \(u(t)\) and \(v(t)\).

Figure 9.6.1: Cuomo and Oppenheim (1993), p. 66

Figure 9.6.2: Courtesy of Kevin Cuomo

The 45° trace on the oscilloscope indicates that the synchronization is nearly perfect, despite the fact that both circuits are running chaotically. The synchronization is also quite stable: the data in Figure 9.6.2 reflect a time span of several minutes, whereas without the drive the circuits would decorrelate in about 1 millisecond.

Cuomo brought the house down when he showed us how to use the circuits to mask a message, which he chose to be a recording of the hit song "Emotions" by Mariah Carey. (One student, apparently with different taste in music, asked "Is that the signal or the noise?") After playing the original version of the song, Cuomo played the masked version. Listening to the hiss, one had absolutely no sense that there was a song buried underneath.
Yet when this masked message was sent to the receiver, its output synchronized almost perfectly to the original chaos, and after instant electronic subtraction, we heard Mariah Carey again! The song sounded fuzzy, but easily understandable.

Figures 9.6.3 and 9.6.4 illustrate the system's performance more quantitatively on a test sentence from a different source. Figure 9.6.3a is a segment of speech from the sentence "He has the bluest eyes," obtained by sampling the speech waveform at a 48 kHz rate and with 16-bit resolution. This signal was then masked by much louder chaos. The power spectra in Figure 9.6.4 show that the chaos is about 20 decibels louder than the message, with coverage over its whole frequency range. Finally, the unmasked message at the receiver is shown in Figure 9.6.3b. The original speech is recovered with only a tiny amount of distortion (most visible as the increased noise on the flat parts of the record).

Figure 9.6.4: Cuomo and Oppenheim (1993), p. 68

### Proof of Synchronization

The signal-masking method discussed above was made possible by the conceptual breakthrough of Pecora and Carroll (1990). Before their work, many people would have doubted that two chaotic systems could be made to synchronize. After all, chaotic systems are sensitive to slight changes in initial condition, so one might expect any errors between the transmitter and receiver to grow exponentially. But Pecora and Carroll (1990) found a way around these concerns. Cuomo and Oppenheim (1992, 1993) simplified and clarified the argument; we discuss their approach now.

The receiver circuit is shown in Figure 9.6.5.

Figure 9.6.5: Courtesy of Kevin Cuomo

It is identical to the transmitter, except that the drive signal \(u(t)\) replaces the receiver signal \(u_{r}(t)\) at a crucial place in the circuit (compare Figure 9.6.1).
To see what effect this has on the dynamics, we write down the governing equations for both the transmitter and the receiver. Using Kirchhoff's laws and appropriate nondimensionalizations (Cuomo and Oppenheim 1992), we get

\[\begin{array}{l}\dot{u}=\sigma(v-u)\\ \dot{v}=ru-v-20uw\\ \dot{w}=5uv-bw\end{array} \tag{1}\]

as the dynamics of the transmitter. These are just the Lorenz equations, written in terms of scaled variables

\[u=\tfrac{1}{10}x,\qquad v=\tfrac{1}{10}y,\qquad w=\tfrac{1}{20}z.\]

(This scaling is irrelevant mathematically, but it keeps the variables in a more favorable range for electronic implementation, if one unit is supposed to correspond to one volt. Otherwise the wide dynamic range of the solutions exceeds typical power supply limits.) The receiver variables evolve according to

\[\begin{array}{l}\dot{u}_{r}=\sigma(v_{r}-u_{r})\\ \dot{v}_{r}=ru(t)-v_{r}-20u(t)w_{r}\\ \dot{w}_{r}=5u(t)v_{r}-bw_{r}\end{array} \tag{2}\]

where we have written \(u(t)\) to emphasize that the receiver is driven by the chaotic signal \(u(t)\) coming from the transmitter.

The astonishing result is that _the receiver asymptotically approaches perfect synchrony with the transmitter, starting from any initial conditions_! To be precise, let

\[\begin{array}{l}\mathbf{d}=(u,v,w)=\text{state of the transmitter or ``driver''}\\ \mathbf{r}=(u_{r},v_{r},w_{r})=\text{state of the receiver}\\ \mathbf{e}=\mathbf{d}-\mathbf{r}=\text{error signal}\end{array}\]

The claim is that \(\mathbf{e}(t)\to\mathbf{0}\) as \(t\to\infty\), for all initial conditions. Why is this astonishing? Because at each instant the receiver has only _partial_ information about the state of the transmitter--it is driven solely by \(u(t)\), yet somehow it manages to reconstruct the other two transmitter variables \(v(t)\) and \(w(t)\) as well. The proof is given in the following example.
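Before the proof, the claim is easy to check numerically. Here is a minimal sketch (numpy assumed; step size, duration, and initial conditions are arbitrary choices): integrate (1) and (2) together as one six-dimensional system, with the receiver driven by the transmitter's \(u(t)\), and watch the error shrink.

```python
import numpy as np

SIGMA, R, B = 10.0, 28.0, 8.0/3.0   # chaotic regime (r = 28)

def full(s):
    # transmitter (u, v, w) and receiver (ur, vr, wr)
    u, v, w, ur, vr, wr = s
    return np.array([
        SIGMA*(v - u),
        R*u - v - 20*u*w,
        5*u*v - B*w,
        SIGMA*(vr - ur),
        R*u - vr - 20*u*wr,   # driven by u(t), not ur
        5*u*vr - B*wr,        # driven by u(t), not ur
    ])

s = np.array([0.2, 0.1, 0.3, -1.0, 2.0, 0.5])   # mismatched initial conditions
dt = 0.002
for _ in range(25000):                           # t = 0..50, RK4 steps
    k1 = full(s); k2 = full(s + 0.5*dt*k1)
    k3 = full(s + 0.5*dt*k2); k4 = full(s + dt*k3)
    s = s + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

err = np.linalg.norm(s[:3] - s[3:])
print(err)   # essentially zero: the circuits have synchronized
```

Despite starting far apart, the two states collapse onto each other; recording the error along the way shows the exponential decay promised below.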
**Example 9.6.1:** By defining an appropriate Liapunov function, show that \(\mathbf{e}(t)\rightarrow\mathbf{0}\) as \(t\rightarrow\infty\).

_Solution:_ First we write the equations governing the error dynamics. Subtracting (2) from (1) yields

\[\begin{array}{l}\dot{e}_{1}=\sigma(e_{2}-e_{1})\\ \dot{e}_{2}=-e_{2}-20u(t)e_{3}\\ \dot{e}_{3}=5u(t)e_{2}-be_{3}\end{array}\]

This is a linear system for \(\mathbf{e}(t)\), but it has a chaotic time-dependent coefficient \(u(t)\) in two terms. The idea is to construct a Liapunov function in such a way that _the chaos cancels out_. Here's how: Multiply the second equation by \(e_{2}\) and the third by \(4e_{3}\) and add. Then

\[\begin{array}{l}e_{2}\dot{e}_{2}+4e_{3}\dot{e}_{3}=-e_{2}{}^{2}-20u(t)e_{2}e_{3}+20u(t)e_{2}e_{3}-4be_{3}{}^{2}\\ =-e_{2}{}^{2}-4be_{3}{}^{2}\end{array} \tag{3}\]

and so the chaotic term disappears! The left-hand side of (3) is \(\frac{1}{2}\frac{d}{dt}\left(e_{2}{}^{2}+4e_{3}{}^{2}\right)\). This suggests the form of a Liapunov function. As in Cuomo and Oppenheim (1992), we define the function

\[E(\mathbf{e},t)=\frac{1}{2}\left(\frac{1}{\sigma}e_{1}{}^{2}+e_{2}{}^{2}+4e_{3}{}^{2}\right).\]

\(E\) is certainly positive definite, since it is a sum of squares (as always, we assume \(\sigma>0\)). To show \(E\) is a Liapunov function, we must show it decreases along trajectories. We've already computed the time-derivative of the second two terms, so concentrate on the first term, shown in brackets below:

\[\begin{array}{l}\dot{E}=\left[\frac{1}{\sigma}e_{1}\dot{e}_{1}\right]+e_{2}\dot{e}_{2}+4e_{3}\dot{e}_{3}\\ =-\left[e_{1}{}^{2}-e_{1}e_{2}\right]-e_{2}{}^{2}-4be_{3}{}^{2}.\end{array}\]

Now complete the square for the term in brackets:

\[\begin{array}{l}\dot{E}=-\left[e_{1}-\frac{1}{2}e_{2}\right]^{2}+\left(\frac{1}{2}e_{2}\right)^{2}-e_{2}{}^{2}-4be_{3}{}^{2}\\ =-\left[e_{1}-\frac{1}{2}e_{2}\right]^{2}-\frac{3}{4}e_{2}{}^{2}-4be_{3}{}^{2}.
\end{array}\]

Hence \(\dot{E}\leq 0\), with equality only if \(\mathbf{e}=\mathbf{0}\). Therefore \(E\) is a Liapunov function, and so \(\mathbf{e}=\mathbf{0}\) is globally asymptotically stable.

A stronger result is possible: one can show that \(\mathbf{e}(t)\) decays _exponentially fast_ (Cuomo, Oppenheim, and Strogatz 1993; see Exercise 9.6.1). This is important, because rapid synchronization is necessary for the desired application.

We should be clear about what we have and haven't proven. Example 9.6.1 shows only that the receiver will synchronize to the transmitter if the drive signal is \(u(t)\). This does _not_ prove that the signal-masking approach will work. For that application, the drive is a mixture \(u(t)+m(t)\), where \(m(t)\) is the message and \(u(t)\gg m(t)\) is the mask. We have no proof that the receiver will regenerate \(u(t)\) precisely. In fact, it doesn't--that's why Mariah Carey sounded a little fuzzy. So it's still something of a mathematical mystery as to why the approach works as well as it does. But the proof is in the listening!

In the years since the work of Pecora and Carroll (1990) and Cuomo and Oppenheim (1992), many other researchers have looked at the pros and cons of using synchronized chaos for communications. Some of the most intriguing developments include communication schemes based on synchronized chaotic lasers, which allow much faster transmission rates than electronic circuits (Van Wiggeren and Roy 1998, Argyris et al. 2005), and countermeasures for decrypting messages cloaked in chaos (Short 1994, Short 1996, Geddes et al. 1999).
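As a sanity check on the algebra in Example 9.6.1, the cancellation of the chaotic coefficient and the completed-square form of \(\dot{E}\) can be verified symbolically (a sketch assuming sympy is available):

```python
import sympy as sp

e1, e2, e3, u, sigma, b = sp.symbols('e1 e2 e3 u sigma b', real=True)

# error dynamics from Example 9.6.1
de1 = sigma*(e2 - e1)
de2 = -e2 - 20*u*e3
de3 = 5*u*e2 - b*e3

# the weighted combination in which the chaos cancels out
combo = sp.expand(e2*de2 + 4*e3*de3)
print(combo)   # no u left: -e2**2 - 4*b*e3**2

# full derivative of E = (1/2)((1/sigma)e1^2 + e2^2 + 4 e3^2)
Edot = sp.expand((1/sigma)*e1*de1 + e2*de2 + 4*e3*de3)
target = -(e1 - e2/2)**2 - sp.Rational(3, 4)*e2**2 - 4*b*e3**2
print(sp.simplify(Edot - target))   # 0: matches the completed square
```

The same pattern (choose weights so the skew terms cancel) is the standard trick for Liapunov functions of this kind.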
## EXERCISES FOR CHAPTER 9

### 9.1 A Chaotic Waterwheel

#### 9.1.1 (Waterwheel's moment of inertia approaches a constant) For the waterwheel of Section 9.1, show that \(I(t)\to\) constant as \(t\to\infty\), as follows:

a) The total moment of inertia is a sum \(I=I_{\text{wheel}}+I_{\text{water}}\), where \(I_{\text{wheel}}\) depends only on the apparatus itself, and not on the distribution of water around the rim. Express \(I_{\text{water}}\) in terms of \(M=\int_{0}^{2\pi}m(\theta,t)\,d\theta\).

b) Show that \(M\) satisfies \(\dot{M}=Q_{\text{total}}-KM\), where \(Q_{\text{total}}=\int_{0}^{2\pi}Q(\theta)\,d\theta\).

c) Show that \(I(t)\to\) constant as \(t\to\infty\), and find the value of the constant.

#### 9.1.2 (Behavior of higher modes) In the text, we showed that three of the waterwheel equations decoupled from all the rest. How do the remaining modes behave?

a) If \(Q(\theta)=q_{1}\cos\theta\), the answer is simple: show that for \(n\neq 1\), all modes \(a_{n}\), \(b_{n}\to 0\) as \(t\to\infty\).

b) What do you think happens for a more general \(Q(\theta)=\sum_{n=0}^{\infty}q_{n}\cos n\theta\)?

Part (b) is challenging; see how far you can get. For the state of current knowledge, see Kolar and Gumbs (1992).

#### 9.1.3 (Deriving the Lorenz equations from the waterwheel) Find a change of variables that converts the waterwheel equations

\[\begin{array}{l}\dot{a}_{1}=\omega b_{1}-Ka_{1}\\ \dot{b}_{1}=-\omega a_{1}+q_{1}-Kb_{1}\\ \dot{\omega}=-\frac{v}{I}\omega+\frac{\pi gr}{I}a_{1}\end{array}\]

into the Lorenz equations

\[\begin{array}{l}\dot{x}=\sigma(y-x)\\ \dot{y}=rx-xz-y\\ \dot{z}=xy-bz\end{array}\]

where \(\sigma\), \(b\), \(r>0\) are parameters. (This can turn into a messy calculation--it helps to be thoughtful and systematic. You should find that \(x\) is like \(\omega\), \(y\) is like \(a_{1}\), and \(z\) is like \(b_{1}\).) Also, show that when the waterwheel equations are translated into the Lorenz equations, the Lorenz parameter \(b\) turns out to be \(b=1\).
(So the waterwheel equations are not quite as general as the Lorenz equations.) Express the Prandtl and Rayleigh numbers \(\sigma\) and \(r\) in terms of the waterwheel parameters.

#### 9.1.4 (Laser model) As mentioned in Exercise 3.3.2, the Maxwell-Bloch equations for a laser are

\[\begin{array}{l}\dot{E}=\kappa(P-E)\\ \dot{P}=\gamma_{1}(ED-P)\\ \dot{D}=\gamma_{2}(\lambda+1-D-\lambda EP).\end{array}\]

a) Show that the non-lasing state (the fixed point with \(E^{*}=0\)) loses stability above a threshold value of \(\lambda\), to be determined. Classify the bifurcation at this laser threshold.

b) Find a change of variables that transforms the system into the Lorenz system.

The Lorenz equations also arise in models of geomagnetic dynamos (Robbins 1977) and thermoconvection in a circular tube (Malkus 1972). See Jackson (1990, vol. 2, Sections 7.5 and 7.6) for an introduction to these systems.

#### 9.1.5 (Research project on asymmetric waterwheel) Our derivation of the waterwheel equations assumed that the water is pumped in symmetrically at the top. Investigate the _asymmetric_ case. Modify \(Q(\theta)\) in (9.1.5) appropriately. Show that a closed set of three equations is still obtained, but that (9.1.9) includes a new term. Redo as much of the analysis in this chapter as possible. You should be able to solve for the fixed points and show that the pitchfork bifurcation is replaced by an imperfect bifurcation (Section 3.6). After that, you're on your own! This problem has not yet been addressed in the literature.
### 9.2 Simple Properties of the Lorenz Equations

#### 9.2.1 (Parameter where Hopf bifurcation occurs)

a) For the Lorenz equations, show that the characteristic equation for the eigenvalues of the Jacobian matrix at \(C^{+}\), \(C^{-}\) is

\[\lambda^{3}+(\sigma+b+1)\lambda^{2}+(r+\sigma)b\lambda+2b\sigma(r-1)=0.\]

b) By seeking solutions of the form \(\lambda=i\omega\), where \(\omega\) is real, show that there is a pair of pure imaginary eigenvalues when \(r=r_{H}=\sigma\left(\frac{\sigma+b+3}{\sigma-b-1}\right)\). Explain why we need to assume \(\sigma>b+1\).

c) Find the third eigenvalue.

#### 9.2.2 (An ellipsoidal trapping region for the Lorenz equations) Show that there is a certain ellipsoidal region \(E\) of the form \(rx^{2}+\sigma y^{2}+\sigma(z-2r)^{2}\leq C\) such that all trajectories of the Lorenz equations eventually enter \(E\) and stay in there forever. For a much stiffer challenge, try to obtain the smallest possible value of \(C\) with this property.

#### 9.2.3 (A spherical trapping region) Show that all trajectories eventually enter and remain inside a large sphere \(S\) of the form \(x^{2}+y^{2}+(z-r-\sigma)^{2}=C\), for \(C\) sufficiently large. (Hint: Show that \(x^{2}+y^{2}+(z-r-\sigma)^{2}\) decreases along trajectories for all \((x,y,z)\) outside a certain fixed ellipsoid. Then pick \(C\) large enough so that the sphere \(S\) encloses this ellipsoid.)

#### 9.2.4 (\(z\)-axis is invariant) Show that the \(z\)-axis is an invariant line for the Lorenz equations. In other words, a trajectory that starts on the \(z\)-axis stays on it forever.

#### 9.2.5 (Stability diagram) Using the analytical results obtained about bifurcations in the Lorenz equations, give a partial sketch of the stability diagram. Specifically, assume \(b=1\) as in the waterwheel, and then plot the pitchfork and Hopf bifurcation curves in the \((\sigma,r)\) parameter plane. As always, assume \(\sigma,r\geq 0\).
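For readers who want to check their algebra on Exercise 9.2.1(a), the characteristic polynomial at \(C^{+}\) can be verified symbolically (a sketch assuming sympy; not a substitute for the hand calculation the exercise asks for):

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lambda')
sigma, b, r = sp.symbols('sigma b r', positive=True)

# Lorenz vector field and its Jacobian
F = sp.Matrix([sigma*(y - x), r*x - y - x*z, x*y - b*z])
J = F.jacobian([x, y, z])

# evaluate at C+ = (sqrt(b(r-1)), sqrt(b(r-1)), r-1)
q = sp.sqrt(b*(r - 1))
Jc = J.subs({x: q, y: q, z: r - 1})

char = Jc.charpoly(lam).as_expr()
target = lam**3 + (sigma + b + 1)*lam**2 + (r + sigma)*b*lam + 2*b*sigma*(r - 1)
print(sp.simplify(char - target))   # 0: the claimed characteristic equation
```

By the symmetry \((x,y,z)\mapsto(-x,-y,z)\) of the Lorenz equations, the same polynomial results at \(C^{-}\).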
(For a numerical computation of the stability diagram, including chaotic regions, see Kolar and Gumbs (1992).)

#### 9.2.6 (Rikitake model of geomagnetic reversals) Consider the system

\[\begin{array}{l}\dot{x}=-vx+zy\\ \dot{y}=-vy+(z-a)x\\ \dot{z}=1-xy\end{array}\]

where \(a\), \(v>0\) are parameters.

a) Show that the system is dissipative.

b) Show that the fixed points may be written in parametric form as \(x^{*}=\pm k\), \(y^{*}=\pm k^{-1}\), \(z^{*}=vk^{2}\), where \(v(k^{2}-k^{-2})=a\).

c) Classify the fixed points.

These equations were proposed by Rikitake (1958) as a model for the self-generation of the Earth's magnetic field by large current-carrying eddies in the core. Computer experiments show that the model exhibits chaotic solutions for some parameter values. These solutions are loosely analogous to the irregular reversals of the Earth's magnetic field inferred from geological data. See Cox (1982) for the geophysical background.

### 9.3 Chaos on a Strange Attractor

#### 9.3.1 (Quasiperiodicity \(\neq\) chaos) The trajectories of the quasiperiodic system \(\dot{\theta}_{1}=\omega_{1},\ \dot{\theta}_{2}=\omega_{2}\) (\(\omega_{1}/\omega_{2}\) irrational) are not periodic.

a) Why isn't this system considered chaotic?

b) Without using a computer, find the largest Liapunov exponent for the system.

#### 9.3.2-9.3.7 (Numerical experiments) For each of the values of \(r\) given below, use a computer to explore the dynamics of the Lorenz system, assuming \(\sigma=10\) and \(b=8/3\) as usual. In each case, plot \(x(t)\), \(y(t)\), and \(x\) vs. \(z\). You should investigate the consequences of choosing different initial conditions and lengths of integration. Also, in some cases you may want to ignore the transient behavior, and plot only the sustained long-term behavior.
9.3.2 \(r=10\)

9.3.3 \(r=22\) (transient chaos)

9.3.4 \(r=24.5\) (chaos and stable point co-exist)

9.3.5 \(r=100\) (surprise)

9.3.6 \(r=126.52\)

9.3.7 \(r=400\)

#### 9.3.8 (Practice with the definition of an attractor) Consider the following familiar system in polar coordinates: \(\dot{r}=r(1-r^{2}),\ \dot{\theta}=1\). Let \(D\) be the disk \(x^{2}+y^{2}\leq 1\).

a) Is \(D\) an invariant set?

b) Does \(D\) attract an open set of initial conditions?

c) Is \(D\) an attractor? If not, why not? If so, find its basin of attraction.

d) Repeat part (c) for the circle \(x^{2}+y^{2}=1\).

#### 9.3.9 (Exponential divergence) Using numerical integration of two nearby trajectories, estimate the largest Liapunov exponent for the Lorenz system, assuming that the parameters have their standard values \(r=28\), \(\sigma=10\), \(b=8/3\).

#### 9.3.10 (Time horizon) To illustrate the "time horizon" after which prediction becomes impossible, numerically integrate the Lorenz equations for \(r=28\), \(\sigma=10\), \(b=8/3\). Start two trajectories from nearby initial conditions, and plot \(x(t)\) for both of them on the same graph.

### 9.4 Lorenz Map

#### 9.4.1 (Computer work) Using numerical integration, compute the Lorenz map for \(r=28\), \(\sigma=10\), \(b=8/3\).

#### 9.4.2 (Tent map, as model of Lorenz map) Consider the map

\[x_{n+1}=\begin{cases}2x_{n},&0\leq x_{n}\leq\frac{1}{2}\\ 2-2x_{n},&\frac{1}{2}\leq x_{n}\leq 1\end{cases}\]

as a simple analytical model of the Lorenz map.

a) Why is it called the "tent map"?

b) Find all the fixed points, and classify their stability.

c) Show that the map has a period-2 orbit. Is it stable or unstable?

d) Can you find any period-3 points? How about period-4? If so, are the corresponding periodic orbits stable or unstable?

### 9.5 Exploring Parameter Space

#### 9.5.1-9.5.2 (Numerical experiments) For each of the values of \(r\) given below, use a computer to explore the dynamics of the Lorenz system, assuming \(\sigma=10\) and \(b=8/3\) as usual. In each case, plot \(x(t)\), \(y(t)\), and \(x\) vs.
\(z\).

9.5.1 \(r=166.3\) (intermittent chaos)

9.5.2 \(r=212\) (noisy periodicity)

#### 9.5.4 (Hysteresis between a fixed point and a strange attractor) Consider the Lorenz equations with \(\sigma=10\) and \(b=8/3\). Suppose that we slowly "turn the \(r\) knob" up and down. Specifically, let \(r=24.4+\sin\omega t\), where \(\omega\) is small compared to typical orbital frequencies on the attractor. Numerically integrate the equations, and plot the solutions in whatever way seems most revealing. You should see a striking hysteresis effect between an equilibrium and a chaotic state.

#### 9.5.5 (Lorenz equations for large \(r\)) Consider the Lorenz equations in the limit \(r\to\infty\). By taking the limit in a certain way, all the dissipative terms in the equations can be removed (Robbins 1979, Sparrow 1982).

a) Let \(\varepsilon=r^{-1/2}\), so that \(r\to\infty\) corresponds to \(\varepsilon\to 0\). Find a change of variables involving \(\varepsilon\) such that as \(\varepsilon\to 0\), the equations become

\[\begin{array}{l}X^{\prime}=Y\\ Y^{\prime}=-XZ\\ Z^{\prime}=XY.\end{array}\]

b) Find two conserved quantities (i.e., constants of the motion) for the new system.

c) Show that the new system is volume-preserving (i.e., the volume of an arbitrary blob of "phase fluid" is conserved by the time-evolution of the system, even though the shape of the blob may change dramatically).

d) Explain physically why the Lorenz equations might be expected to show some conservative features in the limit \(r\to\infty\).

e) Solve the system in part (a) numerically. What is the long-term behavior? Does it agree with the behavior seen in the Lorenz equations for large \(r\)?

#### 9.5.6 (Transient chaos) Example 9.5.1 shows that the Lorenz system can exhibit transient chaos for \(r=21\), \(\sigma=10\), \(b=\frac{8}{3}\). However, not all trajectories behave this way.
Using numerical integration, find three different initial conditions for which there _is_ transient chaos, and three others for which there _isn't_. Give a rule of thumb which predicts whether an initial condition will lead to transient chaos or not.

### 9.6 Using Chaos to Send Secret Messages

#### 9.6.1 (Exponentially fast synchronization) The Liapunov function of Example 9.6.1 shows that the synchronization error \(\mathbf{e}(t)\) tends to zero as \(t\rightarrow\infty\), but it does not provide information about the rate of convergence. Sharpen the argument to show that the synchronization error \(\mathbf{e}(t)\) decays exponentially fast.

a) Prove that \(V=\frac{1}{2}e_{2}{}^{2}+2e_{3}{}^{2}\) decays exponentially fast, by showing \(\dot{V}\leq-kV\), for some constant \(k>0\) to be determined.

b) Show that part (a) implies that \(e_{2}(t)\), \(e_{3}(t)\to 0\) exponentially fast.

c) Finally show that \(e_{1}(t)\to 0\) exponentially fast.

#### 9.6.2 (Pecora and Carroll's approach) In the pioneering work of Pecora and Carroll (1990), one of the receiver variables is simply set _equal to_ the corresponding transmitter variable. For instance, if \(x(t)\) is used as the transmitter drive signal, then the receiver equations are

\[\begin{array}{l}x_{r}(t)=x(t)\\ \dot{y}_{r}=rx(t)-y_{r}-x(t)z_{r}\\ \dot{z}_{r}=x(t)y_{r}-bz_{r}\end{array}\]

where the first equation is _not_ a differential equation. Their numerical simulations and a heuristic argument suggested that \(y_{r}(t)\to y(t)\) and \(z_{r}(t)\to z(t)\) as \(t\rightarrow\infty\), even if there were differences in the initial conditions. Here is a simple proof of that result, due to He and Vaidya (1992).

a) Show that the error dynamics are

\[\begin{array}{l}e_{1}\equiv 0\\ \dot{e}_{2}=-e_{2}-x(t)e_{3}\\ \dot{e}_{3}=x(t)e_{2}-be_{3}\end{array}\]

where \(e_{1}=x-x_{r}\), \(e_{2}=y-y_{r}\), and \(e_{3}=z-z_{r}\).

b) Show that \(V=e_{2}^{2}+e_{3}^{2}\) is a Liapunov function.

c) What do you conclude?
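The key cancellation behind the He-Vaidya proof in Exercise 9.6.2(b) can be spot-checked symbolically (a sketch assuming sympy; the symbol `x_t` stands for the drive signal \(x(t)\)):

```python
import sympy as sp

e2, e3, xt, b = sp.symbols('e2 e3 x_t b', real=True)

# error dynamics from part (a), driven by x(t)
de2 = -e2 - xt*e3
de3 = xt*e2 - b*e3

V = e2**2 + e3**2
Vdot = sp.expand(2*e2*de2 + 2*e3*de3)
print(Vdot)   # the x(t) cross terms cancel, leaving -2*e2**2 - 2*b*e3**2
```

Since the drive drops out and \(\dot{V}\leq 0\) with equality only at the origin (for \(b>0\)), \(V\) is a Liapunov function and \(e_{2},e_{3}\to 0\), whatever the chaotic signal \(x(t)\) happens to do.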
#### 9.6.3 (Computer experiments on synchronized chaos) Let \(x,y,z\) be governed by the Lorenz equations with \(r=60\), \(\sigma=10\), \(b=8/3\). Let \(x_{r},y_{r},z_{r}\) be governed by the system in Exercise 9.6.2. Choose different initial conditions for \(y\) and \(y_{r}\), and similarly for \(z\) and \(z_{r}\), and then start integrating numerically.

a) Plot \(y(t)\) and \(y_{r}(t)\) on the same graph. With any luck, the two time series should eventually merge, even though both are chaotic.

b) Plot the \((y,z)\) projection of both trajectories.

#### 9.6.4 (Some drives don't work) Suppose \(z(t)\) were the drive signal in Exercise 9.6.2, instead of \(x(t)\). In other words, we replace \(z_{r}\) by \(z(t)\) everywhere in the receiver equations, and watch how \(x_{r}\) and \(y_{r}\) evolve.

a) Show numerically that the receiver does _not_ synchronize in this case.

b) What if \(y(t)\) were the drive?

#### 9.6.5 (Masking) In their signal-masking approach, Cuomo and Oppenheim (1992, 1993) use the following receiver dynamics:

\[\begin{array}{l}\dot{x}_{r}=\sigma(y_{r}-x_{r})\\ \dot{y}_{r}=rs(t)-y_{r}-s(t)z_{r}\\ \dot{z}_{r}=s(t)y_{r}-bz_{r}\end{array}\]

where \(s(t)=x(t)+m(t)\), and \(m(t)\) is the low-power message added to the much stronger chaotic mask \(x(t)\). If the receiver has synchronized with the drive, then \(x_{r}(t)\approx x(t)\) and so \(m(t)\) may be recovered as \(\hat{m}(t)=s(t)-x_{r}(t)\). Test this approach numerically, using a sine wave for \(m(t)\). How close is the estimate \(\hat{m}(t)\) to the actual message \(m(t)\)? How does the error depend on the frequency of the sine wave?

#### 9.6.6 (Lorenz circuit) Derive the circuit equations for the transmitter circuit shown in Figure 9.6.1.

## 10 ONE-DIMENSIONAL MAPS

### 10.0 Introduction

This chapter deals with a new class of dynamical systems in which time is _discrete_, rather than continuous. These systems are known variously as difference equations, recursion relations, iterated maps, or simply _maps_.
For instance, suppose you repeatedly press the cosine button on your calculator, starting from some number \(x_{0}\). Then the successive readouts are \(x_{1}=\cos x_{0}\), \(x_{2}=\cos x_{1}\), and so on. Set your calculator to radian mode and try it. Can you explain the surprising result that emerges after many iterations?

The rule \(x_{n+1}=\cos x_{n}\) is an example of a _one-dimensional map_, so-called because the points \(x_{n}\) belong to the one-dimensional space of real numbers. The sequence \(x_{0}\), \(x_{1}\), \(x_{2}\), \(\ldots\) is called the _orbit_ starting from \(x_{0}\).

Maps arise in various ways:

1. _As tools for analyzing differential equations._ We have already encountered maps in this role. For instance, Poincare maps allowed us to prove the existence of a periodic solution for the driven pendulum and Josephson junction (Section 8.5), and to analyze the stability of periodic solutions in general (Section 8.7). The Lorenz map (Section 9.4) provided strong evidence that the Lorenz attractor is truly strange, and is not just a long-period limit cycle.

2. _As models of natural phenomena._ In some scientific contexts it is natural to regard time as discrete. This is the case in digital electronics, in parts of economics and finance theory, in impulsively driven mechanical systems, and in the study of certain animal populations where successive generations do not overlap.

3. _As simple examples of chaos._ Maps are interesting to study in their own right, as mathematical laboratories for chaos. Indeed, maps are capable of much wider behavior than differential equations because the points \(x_{n}\) _hop_ along their orbits rather than flow continuously (Figure 10.0.1).

The study of maps is still in its infancy, but exciting progress has been made in the last few decades, thanks to the growing availability of calculators, then computers, and now computer graphics.
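The calculator experiment above takes only a few lines on a computer (a minimal sketch, previewing what Example 10.1.3 will explain):

```python
import math

x = 2.0                      # any starting value works
for _ in range(200):         # "press the cosine button" 200 times
    x = math.cos(x)
print(x)
```

No matter what \(x_{0}\) you choose, the printed value is the same mysterious number near 0.739; the reason is taken up in Section 10.1.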
Maps are easy and fast to simulate on digital computers, where time is _inherently_ discrete. Such computer experiments have revealed a number of unexpected and beautiful patterns, which in turn have stimulated new theoretical developments. Most surprisingly, maps have generated a number of successful predictions about the routes to chaos in semiconductors, convecting fluids, heart cells, lasers, and chemical oscillators.

We discuss some of the properties of maps and the techniques for analyzing them in Sections 10.1-10.5. The emphasis is on period-doubling and chaos in the logistic map. Section 10.6 introduces the amazing idea of universality, and summarizes experimental tests of the theory. Section 10.7 is an attempt to convey the basic ideas of Feigenbaum's renormalization technique. As usual, our approach will be intuitive. For rigorous treatments of one-dimensional maps, see Devaney (1989) and Collet and Eckmann (1980).

### 10.1 Fixed Points and Cobwebs

In this section we develop some tools for analyzing one-dimensional maps of the form \(x_{n+1}=f(x_{n})\), where \(f\) is a smooth function from the real line to itself.

**A Pedantic Point**

When we say "map," do we mean the function \(f\) or the difference equation \(x_{n+1}=f(x_{n})\)? Following common usage, we'll call _both_ of them maps. If you're disturbed by this, you must be a pure mathematician... or should consider becoming one!

**Fixed Points and Linear Stability**

Suppose \(x^{*}\) satisfies \(f(x^{*})=x^{*}\). Then \(x^{*}\) is a _fixed point_, for if \(x_{n}=x^{*}\) then \(x_{n+1}=f(x_{n})=f(x^{*})=x^{*}\); hence the orbit remains at \(x^{*}\) for all future iterations.

To determine the stability of \(x^{*}\), we consider a nearby orbit \(x_{n}=x^{*}+\eta_{n}\) and ask whether the orbit is attracted to or repelled from \(x^{*}\). That is, does the deviation \(\eta_{n}\) grow or decay as \(n\) increases?
Substitution yields

\[x^{*}+\eta_{n+1}=x_{n+1}=f(x^{*}+\eta_{n})=f(x^{*})+f^{\prime}(x^{*})\eta_{n}+O(\eta_{n}{}^{2}).\]

But since \(f(x^{*})=x^{*}\), this equation reduces to

\[\eta_{n+1}=f^{\prime}(x^{*})\eta_{n}+O(\eta_{n}{}^{2}).\]

Suppose we can safely neglect the \(O(\eta_{n}{}^{2})\) terms. Then we obtain the _linearized map_ \(\eta_{n+1}=f^{\prime}(x^{*})\eta_{n}\) with _eigenvalue_ or _multiplier_ \(\lambda=f^{\prime}(x^{*})\). The solution of this linear map can be found explicitly by writing a few terms: \(\eta_{1}=\lambda\eta_{0}\), \(\eta_{2}=\lambda\eta_{1}=\lambda^{2}\eta_{0}\), and so in general \(\eta_{n}=\lambda^{n}\eta_{0}\). If \(|\lambda|=|f^{\prime}(x^{*})|<1\), then \(\eta_{n}\to 0\) as \(n\rightarrow\infty\) and the fixed point \(x^{*}\) is _linearly stable_. Conversely, if \(|f^{\prime}(x^{*})|>1\) the fixed point is _unstable_. Although these conclusions about local stability are based on linearization, they can be proven to hold for the original nonlinear map. But the linearization tells us nothing about the _marginal_ case \(|f^{\prime}(x^{*})|=1\); then the neglected \(O(\eta_{n}{}^{2})\) terms determine the local stability. (All of these results have parallels for differential equations--recall Section 2.4.)

**Example 10.1.1:** Find the fixed points for the map \(x_{n+1}=x_{n}{}^{2}\) and determine their stability.

_Solution:_ The fixed points satisfy \(x^{*}=(x^{*})^{2}\). Hence \(x^{*}=0\) or \(x^{*}=1\). The multiplier is \(\lambda=f^{\prime}(x^{*})=2x^{*}\). The fixed point \(x^{*}=0\) is stable since \(|\lambda|=0<1\), and \(x^{*}=1\) is unstable since \(|\lambda|=2>1\).

Try Example 10.1.1 on a hand calculator by pressing the \(x^{2}\) button over and over. You'll see that for sufficiently small \(x_{0}\), the convergence to \(x^{*}=0\) is _extremely_ rapid.
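The multiplier test is mechanical enough to automate. The sketch below (our own helper names, not from the book) estimates \(\lambda=f^{\prime}(x^{*})\) by a central difference and classifies the fixed point, reproducing the conclusions of Example 10.1.1:

```python
def multiplier(f, x_star, h=1e-6):
    """Central-difference estimate of lambda = f'(x*)."""
    return (f(x_star + h) - f(x_star - h)) / (2*h)

def classify(f, x_star):
    lam = multiplier(f, x_star)
    if abs(lam) < 1:
        return "stable"
    if abs(lam) > 1:
        return "unstable"
    return "marginal"   # |lambda| = 1: linearization is inconclusive

f = lambda x: x**2
print(classify(f, 0.0))   # stable (lambda = 0: superstable, in fact)
print(classify(f, 1.0))   # unstable (lambda = 2)
```

For the marginal case, as the text warns, no such routine can help; that is where cobwebs earn their keep.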
Fixed points with multiplier \(\lambda=0\) are called _superstable_, because perturbations decay like \(\eta_{n}\sim\eta_{0}{}^{2^{n}}\), which is much faster than the usual \(\eta_{n}\sim\lambda^{n}\eta_{0}\) at an ordinary stable point.

**Cobwebs**

In Section 8.7 we introduced the _cobweb_ construction for iterating a map (Figure 10.1.1). Given \(x_{n+1}=f(x_{n})\) and an initial condition \(x_{0}\), draw a vertical line until it intersects the graph of \(f\); that height is the output \(x_{1}\). At this stage we could return to the horizontal axis and repeat the procedure to get \(x_{2}\) from \(x_{1}\), but it is more convenient simply to trace a horizontal line till it intersects the diagonal line \(x_{n+1}=x_{n}\), and then move vertically to the curve again. Repeat the process \(n\) times to generate the first \(n\) points in the orbit.

Cobwebs are useful because they allow us to see global behavior at a glance, thereby supplementing the local information available from the linearization. Cobwebs become even more valuable when linear analysis fails, as in the next example.

**Example 10.1.2:** Consider the map \(x_{n+1}=\sin x_{n}\). Show that the stability of the fixed point \(x^{*}=0\) is not determined by the linearization. Then use a cobweb to show that \(x^{*}=0\) is stable--in fact, _globally_ stable.

_Solution:_ The multiplier at \(x^{*}=0\) is \(f^{\prime}(0)=\cos(0)=1\), which is a marginal case where linear analysis is inconclusive. However, the cobweb of Figure 10.1.2 shows that \(x^{*}=0\) is locally stable; the orbit slowly rattles down the narrow channel, and heads monotonically for the fixed point. (A similar picture is obtained for \(x_{0}<0\).)

To see that the stability is global, we have to show that _all_ orbits satisfy \(x_{n}\to 0\).
But for any \(x_{0}\), the first iterate is sent immediately to the interval \(-1\leq x_{1}\leq 1\) since \(|\sin x|\leq 1\). The cobweb in that interval looks qualitatively like Figure 10.1.2, so convergence is assured. **Example 10.1.3:** Given \(x_{n+1}=\cos x_{n}\), how does \(x_{n}\) behave as \(n\to\infty\)? _Solution:_ If you tried this on your calculator, you found that \(x_{n}\to 0.739\ldots\), no matter where you started. What is this bizarre number? It's the unique solution of the transcendental equation \(x=\cos x\), and it corresponds to a fixed point of the map. Figure 10.1.3 shows that a typical orbit spirals into the fixed point \(x^{*}=0.739\ldots\) as \(n\to\infty\). The spiraling motion implies that \(x_{n}\) converges to \(x^{*}\) through _damped oscillations_. That is characteristic of fixed points with \(\lambda<0\). In contrast, at stable fixed points with \(\lambda>0\) the convergence is monotonic.

### 10.2 Logistic Map: Numerics

In a fascinating and influential review article, Robert May (1976) emphasized that even simple nonlinear maps could have very complicated dynamics. The article ends memorably with "an evangelical plea for the introduction of these difference equations into elementary mathematics courses, so that students' intuition may be enriched by seeing the wild things that simple nonlinear equations can do." May illustrated his point with the _logistic map_ \[x_{n+1}=rx_{n}(1-x_{n})\quad(1)\] a discrete-time analog of the logistic equation for population growth (Section 2.3). Here \(x_{n}\geq 0\) is a dimensionless measure of the population in the \(n\)th generation and \(r\geq 0\) is the intrinsic growth rate. As shown in Figure 10.2.1, the graph of (1) is a parabola with a maximum value of \(r/4\) at \(x=\frac{1}{2}\). We restrict the control parameter \(r\) to the range \(0\leq r\leq 4\) so that (1) maps the interval \(0\leq x\leq 1\) into itself.
(The behavior is much less interesting for other values of \(x\) and \(r\)--see Exercise 10.2.1.)

### Period-Doubling

Suppose we fix \(r\), choose some initial population \(x_{0}\), and then use (1) to generate the subsequent \(x_{n}\). What happens? For small growth rate \(r<1\), the population always goes extinct: \(x_{n}\to 0\) as \(n\to\infty\). This gloomy result can be proven by cobwebbing (Exercise 10.2.2). For \(1<r<3\) the population grows and eventually reaches a nonzero steady state (Figure 10.2.2). The results are plotted here as a _time series_ of \(x_{n}\) vs. \(n\). To make the sequence clearer, we have connected the discrete points \((n,x_{n})\) by line segments, but remember that only the corners of the jagged curves are meaningful. For larger \(r\), say \(r=3.3\), the population builds up again but now _oscillates_ about the former steady state, alternating between a large population in one generation and a smaller population in the next (Figure 10.2.3). This type of oscillation, in which \(x_{n}\) repeats every _two_ iterations, is called a _period-2 cycle_. At still larger \(r\), say \(r=3.5\), the population approaches a cycle that now repeats every _four_ generations; the previous cycle has doubled its period to _period-4_ (Figure 10.2.4). Further _period-doublings_ to cycles of period 8, 16, 32, ..., occur as \(r\) increases. Specifically, let \(r_{n}\) denote the value of \(r\) where a \(2^{n}\)-cycle first appears. Then computer experiments reveal that \[\begin{array}{ll}r_{1}=3&\text{(period 2 is born)}\\ r_{2}=3.449\ldots\\ r_{3}=3.54409\ldots\\ r_{4}=3.5644\ldots\\ r_{5}=3.568759\ldots\\ \vdots\\ r_{\infty}=3.569946\ldots\end{array}\] Note that the successive bifurcations come faster and faster. Ultimately the \(r_{n}\) converge to a limiting value \(r_{\infty}\).
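You can check the shrinking spacing directly from the values quoted above; a short Python sketch:

```python
# Period-doubling thresholds r_n for the logistic map, as quoted above.
r = [3.0, 3.449, 3.54409, 3.5644, 3.568759]

# Compare each gap between successive thresholds to the next gap.
gaps = [r[i + 1] - r[i] for i in range(len(r) - 1)]
ratios = [gaps[i] / gaps[i + 1] for i in range(len(gaps) - 1)]
print(ratios)   # about [4.72, 4.68, 4.66]: each gap is ~4.67x smaller
```

Even from these five values, the ratio of successive gaps is visibly settling toward a constant.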
The convergence is essentially geometric: in the limit of large \(n\), the distance between successive transitions shrinks by a constant factor \[\delta=\lim_{n\to\infty}\frac{r_{n}-r_{n-1}}{r_{n+1}-r_{n}}=4.669\ldots.\] We'll have a lot more to say about this number in Section 10.6.

### Chaos and Periodic Windows

According to Gleick (1987, p. 69), May wrote the logistic map on a corridor blackboard as a problem for his graduate students and asked, "What the Christ happens for \(r>r_{\infty}\)?" The answer turns out to be complicated: For many values of \(r\), the sequence \(\{x_{n}\}\) never settles down to a fixed point or a periodic orbit--instead the long-term behavior is aperiodic, as in Figure 10.2.5. This is a discrete-time version of the chaos we encountered earlier in our study of the Lorenz equations (Chapter 9). You might guess that the system would become more and more chaotic as \(r\) increases, but in fact the dynamics are more subtle than that. To see the long-term behavior for _all_ values of \(r\) at once, we plot the _orbit diagram_, a magnificent picture that has become an icon of nonlinear dynamics (Figure 10.2.7). Figure 10.2.7 plots the system's attractor as a function of \(r\). To generate the orbit diagram for yourself, you'll need to write a computer program with two "loops." First, choose a value of \(r\). Then generate an orbit starting from some random initial condition \(x_{0}\). Iterate for 300 cycles or so, to allow the system to settle down to its eventual behavior. Once the transients have decayed, plot many points, say \(x_{301},\ldots,x_{600}\) above that \(r\). Then move to an adjacent value of \(r\) and repeat, eventually sweeping across the whole picture. Figure 10.2.7 shows the most interesting part of the diagram, in the region \(3.4\leq r\leq 4\). At \(r=3.4\), the attractor is a period-2 cycle, as indicated by the two branches.
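In Python, the two-loop program just described might be sketched as follows (the step size and iteration counts are illustrative, the plotting of the \((r,x)\) pairs is left to whatever graphics library you prefer, and a fixed initial condition \(x_{0}=0.5\) is used here, though any generic choice works):

```python
# Sketch of the two-loop orbit-diagram program described above.

def orbit_diagram_points(r_values, n_transient=300, n_keep=300, x0=0.5):
    points = []                        # (r, x) pairs to plot
    for r in r_values:
        x = x0
        for _ in range(n_transient):   # let the transient decay
            x = r * x * (1 - x)
        for _ in range(n_keep):        # record x_301, ..., x_600
            x = r * x * (1 - x)
            points.append((r, x))
    return points

# Sweep the most interesting region, 3.4 <= r <= 4.
rs = [3.4 + 0.001 * i for i in range(601)]
pts = orbit_diagram_points(rs)

# Sanity check: at r = 3.4 the attractor should be a period-2 cycle.
xs = sorted({round(x, 6) for (r, x) in pts if r == 3.4})
print(xs)   # two branches, near 0.452 and 0.842
```

Plotting `pts` as a scatter of tiny dots reproduces the structure of Figure 10.2.7.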
As \(r\) increases, both branches split simultaneously, yielding a period-4 cycle. This splitting is the period-doubling bifurcation mentioned earlier. A cascade of further period-doublings occurs as \(r\) increases, yielding period-8, period-16, and so on, until at \(r=r_{\infty}\approx 3.57\), the map becomes chaotic and the attractor changes from a finite to an infinite set of points. For \(r>r_{\infty}\) the orbit diagram reveals an unexpected mixture of order and chaos, with _periodic windows_ interspersed between chaotic clouds of dots. The large window beginning near \(r\approx 3.83\) contains a stable period-3 cycle. A blow-up of part of the period-3 window is shown in the lower panel of Figure 10.2.7. Fantastically, a copy of the orbit diagram reappears in miniature! (Figure 10.2.7 is from Campbell (1979), p. 35, courtesy of Roger Eckhardt.)

### 10.3 Logistic Map: Analysis

The numerical results of the last section raise many tantalizing questions. Let's try to answer a few of the more straightforward ones. **Example 10.3.1:** Consider the logistic map \(x_{n+1}=rx_{n}(1-x_{n})\) for \(0\leq x_{n}\leq 1\) and \(0\leq r\leq 4\). Find all the fixed points and determine their stability. _Solution:_ The fixed points satisfy \(x^{*}=f(x^{*})=rx^{*}(1-x^{*})\). Hence \(x^{*}=0\) or \(1=r(1-x^{*})\), i.e., \(x^{*}=1-\frac{1}{r}\). The origin is a fixed point for all \(r\), whereas \(x^{*}=1-\frac{1}{r}\) is in the range of allowable \(x\) only if \(r\geq 1\). Stability depends on the multiplier \(f^{\prime}(x^{*})=r-2rx^{*}\). Since \(f^{\prime}(0)=r\), the origin is stable for \(r<1\) and unstable for \(r>1\). At the other fixed point, \(f^{\prime}(x^{*})=r-2r(1-\frac{1}{r})=2-r\). Hence \(x^{*}=1-\frac{1}{r}\) is stable for \(-1<(2-r)<1\), i.e., for \(1<r<3\). It is unstable for \(r>3\). The results of Example 10.3.1 are clarified by a graphical analysis (Figure 10.3.1).
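These stability ranges are also easy to confirm by direct iteration; here is a minimal sketch in Python (the parameter values are just illustrative):

```python
# Numerical confirmation of Example 10.3.1: iterate the logistic map
# and compare the long-run behavior with the predicted fixed points.

def settle(r, x0=0.2, n=1000):
    """Iterate x_{n+1} = r x_n (1 - x_n) and return the final iterate."""
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
    return x

print(settle(0.8))   # essentially 0: the origin is stable for r < 1
print(settle(2.5))   # 0.6 = 1 - 1/r: the second fixed point, stable for 1 < r < 3
```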
For \(r<1\) the parabola lies below the diagonal, and the origin is the only fixed point. As \(r\) increases, the parabola gets taller, becoming tangent to the diagonal at \(r=1\). For \(r>1\) the parabola intersects the diagonal in a second fixed point \(x*=1-\frac{1}{r}\), while the origin loses stability. Thus we see that \(x*\) bifurcates from the origin in a _transcritical bifurcation_ at \(r=1\) (borrowing a term used earlier for differential equations). Figure 10.3.1 also suggests how \(x*\) itself loses stability. As \(r\) increases beyond 1, the slope at \(x*\) gets increasingly steep. Example 10.3.1 shows that the critical slope \(f^{\prime}(x*)=-1\) is attained when \(r=3\). The resulting bifurcation is called a _flip bifurcation_. Flip bifurcations are often associated with period-doubling. In the logistic map, the flip bifurcation at \(r=3\) does indeed spawn a 2-cycle, as shown in the next example. **Example 10.3.2:** Show that the logistic map has a 2-cycle for all \(r>3\). _Solution:_ A 2-cycle exists if and only if there are two points \(p\) and \(q\) such that \(f(p)=q\) and \(f(q)=p\). Equivalently, such a \(p\) must satisfy \(f(f(p))=p\), where \(f(x)=rx(1-x)\). Hence \(p\) is a fixed point of the _second-iterate map_\(f^{2}(x)\equiv f(f(x))\). Since \(f(x)\) is a quadratic polynomial, \(f^{2}(x)\) is a _quartic_ polynomial. Its graph for \(r>3\) is shown in Figure 10.3.2. To find \(p\) and \(q\), we need to solve for the points where the graph intersects the diagonal, i.e., we need to solve the fourth-degree equation \(f^{2}(x)=x\). That sounds hard until you realize that the fixed points \(x*=0\) and \(x*=1-\frac{1}{r}\) are trivial solutions of this equation. (They satisfy \(f(x*)=x*\), so \(f^{2}(x*)=x*\) automatically.) After factoring out the fixed points, the problem reduces to solving a quadratic equation. We outline the algebra involved in the rest of the solution. 
Expansion of the equation \(f^{2}(x)-x=0\) gives \(r^{2}x(1-x)[1-rx(1-x)]-x=0\). After factoring out \(x\) and \(x-(1-\frac{1}{r})\) by long division, and solving the resulting quadratic equation, we obtain a pair of roots \[p,q=\frac{r+1\pm\sqrt{(r-3)(r+1)}}{2r}\,,\] which are real for \(r>3\). Thus a 2-cycle exists for all \(r>3\), as claimed. At \(r=3\), the roots coincide and equal \(x^{*}=1-\frac{1}{r}=\frac{2}{3}\), which shows that the 2-cycle bifurcates _continuously_ from \(x^{*}\). For \(r<3\) the roots are complex, which means that a 2-cycle doesn't exist. A cobweb diagram reveals how flip bifurcations can give rise to period-doubling. Consider any map \(f\), and look at the local picture near a fixed point where \(f^{\prime}(x^{*})\approx-1\) (Figure 10.3.3). If the graph of \(f\) is concave down near \(x^{*}\), the cobweb tends to produce a small, stable 2-cycle close to the fixed point. But like pitchfork bifurcations, flip bifurcations can also be subcritical, in which case the 2-cycle exists _below_ the bifurcation and is _unstable_--see Exercise 10.3.11. The next example shows how to determine the stability of a 2-cycle. **Example 10.3.3:** Show that the 2-cycle of Example 10.3.2 is stable for \(3<r<1+\sqrt{6}=3.449\ldots\) (This explains the values of \(r_{1}\) and \(r_{2}\) found numerically in Section 10.2.) _Solution:_ Our analysis follows a strategy that is worth remembering: To analyze the stability of a cycle, reduce the problem to a question about the stability of a _fixed point_, as follows. Both \(p\) and \(q\) are solutions of \(f^{2}(x)=x\), as pointed out in Example 10.3.2; hence \(p\) and \(q\) are _fixed points of the second-iterate map_ \(f^{2}(x)\). The original 2-cycle is stable precisely if \(p\) and \(q\) are stable fixed points for \(f^{2}\). Now we're on familiar ground.
To determine whether \(p\) is a stable fixed point of \(f^{2}\), we compute the multiplier \[\lambda=\left.\frac{d}{dx}f(f(x))\right|_{x=p}=f^{\prime}(f(p))f^{\prime}(p)=f^{\prime}(q)f^{\prime}(p).\] (Note that the same \(\lambda\) is obtained at \(x=q\), by the symmetry of the final term above. Hence, when the \(p\) and \(q\) branches bifurcate, they must do so _simultaneously_. We noticed such a simultaneous splitting in our numerical observations of Section 10.2.) After carrying out the differentiations and substituting for \(p\) and \(q\), we obtain \[\begin{aligned}\lambda&=r(1-2q)\,r(1-2p)\\ &=r^{2}\left[1-2(p+q)+4pq\right]\\ &=r^{2}\left[1-2(r+1)/r+4(r+1)/r^{2}\right]\\ &=4+2r-r^{2}.\end{aligned}\] Therefore the 2-cycle is linearly stable for \(\left|4+2r-r^{2}\right|<1\), i.e., for \(3<r<1+\sqrt{6}\). Figure 10.3.4 shows a partial _bifurcation diagram_ for the logistic map, based on our results so far. Bifurcation diagrams are different from orbit diagrams in that _unstable_ objects are shown as well; orbit diagrams show only the attractors. Our analytical methods are becoming unwieldy. A few more exact results can be obtained (see the exercises), but such results are hard to come by. To elucidate the behavior in the interesting region where \(r>r_{\infty}\), we are going to rely mainly on graphical and numerical arguments.

### 10.4 Periodic Windows

One of the most intriguing features of the orbit diagram (Figure 10.2.7) is the occurrence of periodic windows for \(r>r_{\infty}\). The period-3 window that occurs near \(3.8284\ldots\leq r\leq 3.8415\ldots\) is the most conspicuous. Suddenly, against a backdrop of chaos, a stable 3-cycle appears out of the blue. Our first goal in this section is to understand how this 3-cycle is created. (The same mechanism accounts for the creation of all the other windows, so it suffices to consider this simplest case.) First, some notation.
Let \(f(x)=rx(1-x)\) so that the logistic map is \(x_{n+1}=f(x_{n})\). Then \(x_{n+2}=f(f(x_{n}))\) or more simply, \(x_{n+2}=f^{2}(x_{n})\). Similarly, \(x_{n+3}=f^{3}(x_{n})\). The third-iterate map \(f^{3}(x)\) is the key to understanding the birth of the period-3 cycle. Any point \(p\) in a period-3 cycle repeats every three iterates, by definition, so such points satisfy \(p=f^{3}(p)\) and are therefore fixed points of the third-iterate map. Unfortunately, since \(f^{3}(x)\) is an eighth-degree polynomial, we cannot solve for the fixed points explicitly. But a graph provides sufficient insight. Figure 10.4.1 plots \(f^{3}(x)\) for \(r=3.835\). Intersections between the graph and the diagonal line correspond to solutions of \(f^{3}(x)=x\). There are eight solutions, six of interest to us and marked with dots, and two imposters that are not genuine period-3; they are actually fixed points, or period-1 points for which \(f(x^{*})=x^{*}\). The black dots in Figure 10.4.1 correspond to a stable period-3 cycle; note that the slope of \(f^{3}(x)\) is shallow at these points, consistent with the stability of the cycle. In contrast, the slope exceeds 1 at the cycle marked by the open dots; this 3-cycle is therefore unstable. Now suppose we decrease \(r\) toward the chaotic regime. Then the graph in Figure 10.4.1 changes shape--the hills move down and the valleys rise up. The curve therefore pulls away from the diagonal. Figure 10.4.2 shows that when \(r=3.8\), the six marked intersections have vanished. Hence, for some intermediate value between \(r=3.8\) and \(r=3.835\), the graph of \(f^{3}(x)\) must have become _tangent_ to the diagonal. At this critical value of \(r\), the stable and unstable period-3 cycles coalesce and annihilate in a _tangent bifurcation_. This transition defines the beginning of the periodic window.
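A quick numerical check, sketched in Python, confirms that inside the window a typical orbit settles onto a genuine period-3 cycle (the initial condition and iteration counts are illustrative):

```python
# Inside the period-3 window: at r = 3.835 a typical orbit should settle
# onto a cycle that repeats every three iterations (but not every one).
r = 3.835
x = 0.3
for _ in range(100000):            # discard a (possibly long) transient
    x = r * x * (1 - x)

orbit = [x]
for _ in range(3):
    orbit.append(r * orbit[-1] * (1 - orbit[-1]))

period3 = abs(orbit[3] - orbit[0]) < 1e-9    # f^3 returns the point...
not_fixed = abs(orbit[1] - orbit[0]) > 1e-3  # ...but f itself does not
print(period3, not_fixed)
```

The three cycle points land near the black dots of Figure 10.4.1.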
One can show analytically that the value of \(r\) at the tangent bifurcation is \(1+\sqrt{8}=3.8284\ldots\) (Myrberg 1958). This beautiful result is often mentioned in textbooks and articles--but always without proof. Given the resemblance of this result to the \(1+\sqrt{6}\) encountered in Example 10.3.3, I'd always assumed it should be comparably easy to derive, and once assigned it as a routine homework problem. Oops! It turns out to be a bear. See Exercise 10.4.10 for hints, and Saha and Strogatz (1994) for Partha Saha's solution, the most elementary one my class could find. Maybe you can do better; if so, let me know! For \(r\) just below the period-3 window, the system exhibits an interesting kind of chaos. Figure 10.4.3 shows a typical orbit for \(r=3.8282\). Part of the orbit looks like a stable 3-cycle, as indicated by the black dots. But this is spooky since the 3-cycle no longer exists! We're seeing the _ghost_ of the 3-cycle. We should not be surprised to see ghosts--they _always_ occur near saddle-node bifurcations (Sections 4.3 and 8.1) and indeed, a tangent bifurcation is just a saddle-node bifurcation by another name. But the new wrinkle is that the orbit returns to the ghostly 3-cycle repeatedly, with intermittent bouts of chaos between visits. Accordingly, this phenomenon is known as _intermittency_ (Pomeau and Manneville 1980). Figure 10.4.4 shows the geometry underlying intermittency. In Figure 10.4.4a, notice the three narrow channels between the diagonal and the graph of \(f^{3}(x)\). These channels were formed in the aftermath of the tangent bifurcation, as the hills and valleys of \(f^{3}(x)\) pulled away from the diagonal. Now focus on the channel in the small box of Figure 10.4.4a, enlarged in Figure 10.4.4b. The orbit takes many iterations to squeeze through the channel.
Hence \(f^{3}(x_{n})\approx x_{n}\) during the passage, and so the orbit looks like a 3-cycle; this explains why we see a ghost. Eventually, the orbit escapes from the channel. Then it bounces around chaotically until fate sends it back into a channel at some unpredictable later time and place. Intermittency is not just a curiosity of the logistic map. It arises commonly in systems where the transition from periodic to chaotic behavior takes place by a saddle-node bifurcation of cycles. For instance, Exercise 10.4.8 shows that intermittency can occur in the Lorenz equations. (In fact, it was discovered there; see Pomeau and Manneville 1980.) In experimental systems, intermittency appears as nearly periodic motion interrupted by occasional irregular bursts. The time between bursts is statistically distributed, much like a random variable, even though the system is completely deterministic. As the control parameter is moved farther away from the periodic window, the bursts become more frequent until the system is fully chaotic. This progression is known as the _intermittency route to chaos_. Figure 10.4.5 shows an experimental example of the intermittency route to chaos in a laser. The intensity of the emitted laser light is plotted as a function of time. In the lowest panel of Figure 10.4.5, the laser is pulsing periodically. A bifurcation to intermittency occurs as the system's control parameter (the tilt of the mirror in the laser cavity) is varied. Moving from bottom to top of Figure 10.4.5, we see that the chaotic bursts occur increasingly often. For a nice review of intermittency in fluids and chemical reactions, see Berge et al. (1984). Those authors also review two other types of intermittency (the kind considered here is _Type I intermittency_) and give a much more detailed treatment of intermittency in general.
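The channel picture suggests a crude numerical diagnostic: monitor \(|x_{n+3}-x_{n}|\), which is tiny during a ghostly laminar phase and large during a chaotic burst. A sketch in Python, with illustrative thresholds:

```python
# Intermittency just below the window: measure how close each step of the
# orbit is to being exactly period-3.
r = 3.8282
x = 0.3
xs = []
for _ in range(20000):
    x = r * x * (1 - x)
    xs.append(x)

d = [abs(xs[i + 3] - xs[i]) for i in range(len(xs) - 3)]
laminar = sum(v < 1e-2 for v in d)   # ghost phases: nearly period-3
bursts = sum(v > 0.1 for v in d)     # chaotic bursts
print(laminar, bursts)               # both kinds of behavior occur
```

A histogram of the laminar-phase lengths would reproduce the statistics of the bursts mentioned above.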
### Period-Doubling in the Window

We commented at the end of Section 10.2 that a copy of the orbit diagram appears in miniature in the period-3 window. The explanation has to do with hills and valleys again. Just after the stable 3-cycle is created in the tangent bifurcation, the slope at the black dots in Figure 10.4.1 is close to \(+1\). As we increase \(r\), the hills rise and the valleys sink. The slope of \(f^{3}(x)\) at the black dots decreases steadily from \(+1\) and eventually reaches \(-1\). When this occurs, a flip bifurcation causes each of the black dots to split in two; the 3-cycle doubles its period and becomes a 6-_cycle_. The same mechanism operates here as in the original period-doubling cascade, but now produces orbits of period \(3\cdot 2^{n}\). A similar period-doubling cascade can be found in _all_ of the periodic windows.

## 10.5 Liapunov Exponent

We have seen that the logistic map can exhibit aperiodic orbits for certain parameter values, but how do we know that this is really chaos? To be called "chaotic," a system should also show _sensitive dependence on initial conditions_, in the sense that neighboring orbits separate exponentially fast, on average. In Section 9.3 we quantified sensitive dependence by defining the Liapunov exponent for a chaotic differential equation. Now we extend the definition to one-dimensional maps. Here's the intuition. Given some initial condition \(x_{0}\), consider a nearby point \(x_{0}+\delta_{0}\), where the initial separation \(\delta_{0}\) is extremely small. Let \(\delta_{n}\) be the separation after \(n\) iterates. If \(|\delta_{n}|\approx|\delta_{0}|e^{n\lambda}\), then \(\lambda\) is called the Liapunov exponent. A positive Liapunov exponent is a signature of chaos. A more precise and computationally useful formula for \(\lambda\) can be derived.
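Even before deriving that formula, the definition can be tested numerically by tracking two orbits a tiny distance \(\delta_{0}\) apart and resetting their separation to \(\delta_{0}\) after each step, a standard trick that keeps the separation effectively infinitesimal. Here is a sketch in Python for the logistic map at \(r=4\), where the exponent is known to equal \(\ln 2\):

```python
import math

# Estimate lambda straight from the definition |d_n| ~ |d_0| e^{n*lambda}:
# log the one-step growth of a tiny separation d0, then reset it.

def liapunov_estimate(f, x0, d0=1e-9, n=100000):
    x = x0
    total = 0.0
    for _ in range(n):
        fx, fy = f(x), f(x + d0)
        d = abs(fy - fx)
        total += math.log(max(d, 1e-300) / d0)  # guard against d = 0
        x = fx                                  # separation resets to d0
    return total / n

f = lambda x: 4 * x * (1 - x)
print(liapunov_estimate(f, 0.2))   # lands close to ln 2 = 0.693...
```

The estimate fluctuates slightly with the initial condition and with \(n\), as expected for a finite-time average.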
By taking logarithms and noting that \(\delta_{n}=f^{n}(x_{0}+\delta_{0})-f^{n}(x_{0})\), we obtain \[\lambda \approx \frac{1}{n}\ln\left|\frac{\delta_{n}}{\delta_{0}}\right| = \frac{1}{n}\ln\left|\frac{f^{n}(x_{0}+\delta_{0})-f^{n}(x_{0})}{\delta_{0}}\right| = \frac{1}{n}\ln\left|(f^{n})^{\prime}(x_{0})\right|\] where we've taken the limit \(\delta_{0}\to 0\) in the last step. The term inside the logarithm can be expanded by the chain rule: \[(f^{n})^{\prime}(x_{0})=\prod_{i=0}^{n-1}f^{\prime}(x_{i}).\] (We've already seen this formula in Example 9.4.1, where it was derived by heuristic reasoning about multipliers, and in Example 10.3.3, for the special case \(n=2\).) Hence \[\lambda \approx \frac{1}{n}\ln\left|\prod_{i=0}^{n-1}f^{\prime}(x_{i})\right| = \frac{1}{n}\sum_{i=0}^{n-1}\ln\left|f^{\prime}(x_{i})\right|.\] If this expression has a limit as \(n\to\infty\), we define that limit to be the _Liapunov exponent_ for the orbit starting at \(x_{0}\): \[\lambda=\lim_{n\to\infty}\left\{\frac{1}{n}\sum_{i=0}^{n-1}\ln\left|f^{\prime}(x_{i})\right|\right\}.\] Note that \(\lambda\) depends on \(x_{0}\). However, it is the same for all \(x_{0}\) in the basin of attraction of a given attractor. For stable fixed points and cycles, \(\lambda\) is negative; for chaotic attractors, \(\lambda\) is positive. The next two examples deal with special cases where \(\lambda\) can be found analytically. **Example 10.5.1:** Suppose that \(f\) has a stable \(p\)-cycle containing the point \(x_{0}\). Show that the Liapunov exponent \(\lambda<0\). If the cycle is superstable, show that \(\lambda=-\infty\). _Solution:_ As usual, we convert questions about \(p\)-cycles of \(f\) into questions about fixed points of \(f^{p}\). Since \(x_{0}\) is an element of a \(p\)-cycle, \(x_{0}\) is a fixed point of \(f^{p}\). By assumption, the cycle is stable; hence the multiplier \(\left|(f^{p})^{\prime}(x_{0})\right|<1\).
Therefore \(\ln\left|(f^{p})^{\prime}(x_{0})\right|<\ln(1)=0\), a result that we'll use in a moment. Next observe that for a \(p\)-cycle, \[\lambda=\lim_{n\to\infty}\left\{\frac{1}{n}\sum_{i=0}^{n-1}\ln\left|f^{\prime}(x_{i})\right|\right\}=\frac{1}{p}\sum_{i=0}^{p-1}\ln\left|f^{\prime}(x_{i})\right|\] since the same \(p\) terms keep appearing in the infinite sum. Finally, using the chain rule in reverse, we obtain \[\frac{1}{p}\sum_{i=0}^{p-1}\ln\left|f^{\prime}(x_{i})\right|=\frac{1}{p}\ln\left|(f^{p})^{\prime}(x_{0})\right|<0,\] as desired. If the cycle is superstable, then \(\left|(f^{p})^{\prime}(x_{0})\right|=0\) by definition, and thus \(\lambda=\frac{1}{p}\ln(0)=-\infty\). The second example concerns the _tent map_, defined by \[f(x)=\begin{cases}rx,&0\leq x\leq\frac{1}{2}\\ r-rx,&\frac{1}{2}\leq x\leq 1\end{cases}\] for \(0\leq r\leq 2\) and \(0\leq x\leq 1\) (Figure 10.5.1). Because it is piecewise linear, the tent map is far easier to analyze than the logistic map. **Example 10.5.2:** Show that \(\lambda=\ln r\) for the tent map, independent of the initial condition \(x_{0}\). _Solution:_ Since \(f^{\prime}(x)=\pm r\) for all \(x\), we find \(\lambda=\lim_{n\to\infty}\left\{\frac{1}{n}\sum_{i=0}^{n-1}\ln\left|f^{\prime}(x_{i})\right|\right\}=\ln r\). Example 10.5.2 suggests that the tent map has chaotic solutions for all \(r>1\), since \(\lambda=\ln r>0\). In fact, the dynamics of the tent map can be understood in detail, even in the chaotic regime; see Devaney (1989). In general, one needs to use a computer to calculate Liapunov exponents. The next example outlines such a calculation for the logistic map. **Example 10.5.3:** Describe a numerical scheme to compute \(\lambda\) for the logistic map \(f(x)=rx(1-x)\). Graph the results as a function of the control parameter \(r\), for \(3\leq r\leq 4\). _Solution:_ Fix some value of \(r\).
Then, starting from a random initial condition, iterate the map long enough to allow transients to decay, say 300 iterates or so. Next compute a large number of additional iterates, say 10,000. You only need to store the current value of \(x_{n}\), not all the previous iterates. Compute \(\ln\left|f^{\prime}(x_{n})\right|=\ln\left|r-2rx_{n}\right|\) and add it to the sum of the previous logarithms. The Liapunov exponent is then obtained by dividing the grand total by 10,000. Repeat this procedure for the next \(r\), and so on. The end result should look like Figure 10.5.2. Comparing this graph to the orbit diagram (Figure 10.2.7), we notice that \(\lambda\) remains negative for \(r<r_{\infty}\approx 3.57\), and approaches zero at the period-doubling bifurcations. The negative spikes correspond to the \(2^{n}\)-cycles. The onset of chaos is visible near \(r\approx 3.57\), where \(\lambda\) first becomes positive. For \(r>3.57\) the Liapunov exponent generally increases, except for the dips caused by the windows of periodic behavior. Note the large dip due to the period-3 window near \(r=3.83\). Actually, all the dips in Figure 10.5.2 should drop down to \(\lambda=-\infty\), because a superstable cycle is guaranteed to occur somewhere near the middle of each dip, and such cycles have \(\lambda=-\infty\), by Example 10.5.1. This part of the spike is too narrow to be resolved in Figure 10.5.2. (Figure 10.5.2 is from Olsen and Degn (1985), p. 175.)

### 10.6 Universality and Experiments

This section deals with some of the most astonishing results in all of nonlinear dynamics. The ideas are best introduced by way of an example. **Example 10.6.1:** Plot the graph of the _sine map_ \(x_{n+1}=r\sin\pi x_{n}\) for \(0\leq r\leq 1\) and \(0\leq x\leq 1\), and compare it to the logistic map. Then plot the orbit diagrams for both maps, and list some similarities and differences. _Solution:_ The graph of the sine map is shown in Figure 10.6.1.
It has the same shape as the graph of the logistic map. Both curves are smooth, concave down, and have a single maximum. Such maps are called _unimodal_. Figure 10.6.2 shows the orbit diagrams for the sine map (top panel) and the logistic map (bottom panel). The resemblance is incredible. Note that both diagrams have the same vertical scale, but that the horizontal axis of the sine map diagram is scaled by a factor of 4. This normalization is appropriate because the maximum of \(r\sin\pi x\) is \(r\), whereas that of \(rx(1-x)\) is \(\frac{1}{4}r\). Figure 10.6.2 shows that the _qualitative_ dynamics of the two maps are identical. They both undergo period-doubling routes to chaos, followed by periodic windows interwoven with chaotic bands. Even more remarkably, the periodic windows occur in the same order, and with the same relative sizes. For instance, the period-3 window is the largest in both cases, and the next largest windows preceding it are period-5 and period-6. But there are _quantitative_ differences. For instance, the period-doubling bifurcations occur later in the logistic map, and the periodic windows are thinner.

### Qualitative Universality: The U-sequence

Example 10.6.1 illustrates a powerful theorem due to Metropolis et al. (1973). They considered all unimodal maps of the form \(x_{n+1}=rf(x_{n})\), where \(f(x)\) also satisfies \(f(0)=f(1)=0\). (For the precise conditions, see their original paper.) Metropolis et al. proved that as \(r\) is varied, the order in which stable periodic solutions appear is _independent_ of the unimodal map being iterated. That is, _the periodic attractors always occur in the same sequence_, now called the universal or _U-sequence_. This amazing result implies that the algebraic form of \(f(x)\) is irrelevant; only its overall shape matters.
Up to period 6, the U-sequence is \[1,\,2,\,2\times 2,\,6,\,5,\,3,\,2\times 3,\,5,\,6,\,4,\,6,\,5,\,6.\] The beginning of this sequence is familiar: periods 1, 2, and 2 \(\times\) 2 are the first stages in the period-doubling scenario. (The later period-doublings give periods greater than 6, so they are omitted here.) Next, periods 6, 5, 3 correspond to the large windows mentioned in the discussion of Figure 10.6.2. Period 2 \(\times\) 3 is the first period-doubling of the period-3 cycle. The later cycles 5, 6, 4, 6, 5, 6 are less familiar; they occur in tiny windows and are easy to miss (see Exercise 10.6.5 for their locations in the logistic map). (Figure 10.6.2 is courtesy of Andy Christian.) The U-sequence has been found in experiments on the Belousov-Zhabotinsky chemical reaction. Simoyi et al. (1982) studied the reaction in a continuously stirred flow reactor and found a regime in which periodic and chaotic states alternate as the flow rate is increased. Within the experimental resolution, the periodic states occurred in the exact order predicted by the U-sequence. See Section 12.4 for more details of these experiments. The U-sequence is qualitative; it dictates the order, but not the precise parameter values, at which periodic attractors occur. We turn now to Mitchell Feigenbaum's celebrated discovery of _quantitative_ universality in one-dimensional maps.

### Quantitative Universality

You should read the dramatic story behind this work in Gleick (1987), and also see Feigenbaum (1980; reprinted in Cvitanovic 1989a) for his own reminiscences. The original technical papers are Feigenbaum (1978, 1979)--published only after being rejected by other journals. These papers are fairly heavy reading; see Feigenbaum (1980), Schuster (1989) and Cvitanovic (1989b) for more accessible expositions. Here's a capsule history. Around 1975, Feigenbaum began to study period-doubling in the logistic map.
First he developed a complicated (and now forgotten) "generating function theory" to predict \(r_{n}\), the value of \(r\) where a \(2^{n}\)-cycle first appears. To check his theory numerically, and not being fluent with large computers, he programmed his handheld calculator to compute the first several \(r_{n}\). As the calculator chugged along, Feigenbaum had time to guess where the next bifurcation would occur. He noticed a simple rule: the \(r_{n}\) converged geometrically, with the distance between successive transitions shrinking by a constant factor of about 4.669. Feigenbaum (1980) recounts what happened next: I spent part of a day trying to fit the convergence rate value, 4.669, to the mathematical constants I knew. The task was fruitless, save for the fact that it made the number memorable. At this point I was reminded by Paul Stein that period-doubling isn't a unique property of the quadratic map but also occurs, for example, in \(x_{n+1}=r\sin\pi x_{n}\). However my generating function theory rested heavily on the fact that the nonlinearity was simply quadratic and not transcendental. Accordingly, my interest in the problem waned. Perhaps a month later I decided to compute the \(r_{n}\)'s in the transcendental case numerically. This problem was even slower to compute than the quadratic one. Again, it became apparent that the \(r_{n}\)'s converged geometrically, and altogether amazingly, the convergence rate was the same 4.669 that I remembered by virtue of my efforts to fit it. In fact, the same convergence rate appears _no matter what unimodal map is iterated_! In this sense, the number \[\delta=\lim_{n\to\infty}\frac{r_{n}-r_{n-1}}{r_{n+1}-r_{n}}=4.669\ldots\] is _universal_. It is a new mathematical constant, as basic to period-doubling as \(\pi\) is to circles. Figure 10.6.3 schematically illustrates the meaning of \(\delta\). Let \(\Delta_{n}=r_{n}-r_{n-1}\) denote the distance between consecutive bifurcation values.
Then \(\Delta_{n}/\Delta_{n+1}\to\delta\) as \(n\to\infty\). There is also universal scaling in the \(x\)-direction. It is harder to state precisely because the pitchforks have varying widths, even at the same value of \(r\). (Look back at the orbit diagrams in Figure 10.6.2 to confirm this.) To take account of this nonuniformity, we define a standard \(x\)-scale as follows: Let \(x_{m}\) denote the maximum of \(f\), and let \(d_{n}\) denote the distance from \(x_{m}\) to the _nearest_ point in a \(2^{n}\)-cycle (Figure 10.6.3). Then the ratio \(d_{n}/d_{n+1}\) tends to a universal limit as \(n\to\infty\): \[\frac{d_{n}}{d_{n+1}}\to\alpha=-2.5029\ldots,\] independent of the precise form of \(f\). Here the negative sign indicates that the nearest point in the \(2^{n}\)-cycle is alternately above and below \(x_{m}\), as shown in Figure 10.6.3. Thus the \(d_{n}\) are alternately positive and negative.

Feigenbaum went on to develop a beautiful theory that explained why \(\alpha\) and \(\delta\) are universal (Feigenbaum 1979). He borrowed the idea of renormalization from statistical physics, and thereby found an analogy between \(\alpha\), \(\delta\) and the universal exponents observed in experiments on second-order phase transitions in magnets, fluids, and other physical systems (Ma 1976). In Section 10.7, we give a brief look at this renormalization theory.

**Experimental Tests**

Since Feigenbaum's work, sequences of period-doubling bifurcations have been measured in a variety of experimental systems. For instance, in the convection experiment of Libchaber et al. (1982), a box containing liquid mercury is heated from below. The control parameter is the Rayleigh number \(R\), a dimensionless measure of the externally imposed temperature gradient from bottom to top. For \(R\) less than a critical value \(R_{c}\), heat is conducted upward while the fluid remains motionless.
But for \(R>R_{c}\), the motionless state becomes unstable and _convection_ occurs--hot fluid rises on one side, loses its heat at the top, and descends on the other side, setting up a pattern of counterrotating cylindrical _rolls_ (Figure 10.6.4). For \(R\) just slightly above \(R_{c}\), the rolls are straight and the motion is steady. Furthermore, at any fixed location in space, the temperature is constant. With more heating, another instability sets in. A wave propagates back and forth along each roll, causing the temperature to oscillate at each point. In traditional experiments of this sort, one keeps turning up the heat, causing further instabilities to occur until eventually the roll structure is destroyed and the system becomes turbulent. Libchaber et al. (1982) wanted to be able to increase the heat _without_ destabilizing the spatial structure. That's why they chose mercury--then the roll structure could be stabilized by applying a dc magnetic field to the whole system. Mercury has a high electrical conductivity, so there is a strong tendency for the rolls to align with the field, thereby retaining their spatial organization. There are further niceties in the experimental design, but they need not concern us; see Libchaber et al. (1982) or Berge et al. (1984).

Now for the experimental results. Figure 10.6.5 shows that this system undergoes a sequence of period-doublings as the Rayleigh number is increased. Each time series shows the temperature variations at one point in the fluid. For \(R/R_{c}=3.47\), the temperature varies periodically. This may be regarded as the basic period-1 state. When \(R\) is increased to \(R/R_{c}=3.52\), the successive temperature maxima are no longer equal; the odd peaks are a little higher than before, and the even peaks are a little lower. This is the period-2 state. Further increases in \(R\) generate additional period-doublings, as shown in the lower two time series in Figure 10.6.5.
By carefully measuring the values of \(R\) at the period-doubling bifurcations, Libchaber et al. (1982) arrived at a value of \(\delta=4.4\pm 0.1\), in reasonable agreement with the theoretical result \(\delta\approx 4.669\). Table 10.6.1, adapted from Cvitanovic (1989b), summarizes the results from a few experiments on fluid convection and nonlinear electronic circuits. The experimental estimates of \(\delta\) are shown along with the errors quoted by the experimentalists; thus 4.3 (8) means 4.3 \(\pm\) 0.8.

Figure 10.6.5: Libchaber et al. (1982), p.213

It is important to understand that these measurements are difficult. Since \(\delta\approx 5\), each successive bifurcation requires about a fivefold improvement in the experimenter's ability to measure the external control parameter. Also, experimental noise tends to blur the structure of high-period orbits, so it is hard to tell precisely when a bifurcation has occurred. In practice, one cannot measure more than about five period-doublings. Given these difficulties, the agreement between theory and experiment is impressive. Period-doubling has also been measured in laser, chemical, and acoustic systems, in addition to those listed here. See Cvitanovic (1989b) for references.

### What Do 1-D Maps Have to Do with Science?

The predictive power of Feigenbaum's theory may strike you as mysterious. How can the theory work, given that it includes none of the _physics_ of real systems like convecting fluids or electronic circuits? And real systems often have tremendously many degrees of freedom--how can all that complexity be captured by a one-dimensional map? Finally, real systems evolve in continuous time, so how can a theory based on discrete-time maps work so well? To work toward the answer, let's begin with a system that is simpler than a convecting fluid, yet (seemingly) more complicated than a one-dimensional map.
The system is a set of three differential equations concocted by Rossler (1976) to exhibit the simplest possible strange attractor. The _Rossler system_ is \[\begin{array}{l}\dot{x}=-y-z\\ \dot{y}=x+ay\\ \dot{z}=b+z(x-c)\end{array}\] where \(a\), \(b\), and \(c\) are parameters. This system contains only one nonlinear term, \(zx\), and is even simpler than the Lorenz system (Chapter 9), which has two nonlinearities. Figure 10.6.6 shows two-dimensional projections of the system's attractor for different values of \(c\) (with \(a=b=0.2\) held fixed). At \(c=2.5\) the attractor is a simple limit cycle. As \(c\) is increased to 3.5, the limit cycle goes around twice before closing, and its period is approximately twice that of the original cycle. This is what period-doubling looks like in a continuous-time system! In fact, somewhere between \(c=2.5\) and 3.5, _a period-doubling bifurcation of cycles_ must have occurred. (As Figure 10.6.6 suggests, such a bifurcation can occur only in three or higher dimensions, since the limit cycle needs room to avoid crossing itself.) Another period-doubling bifurcation creates the four-loop cycle shown at \(c=4\). After an infinite cascade of further period-doublings, one obtains the strange attractor shown at \(c=5\).

To compare these results to those obtained for one-dimensional maps, we use Lorenz's trick for obtaining a map from a flow (Section 9.4). For a given value of \(c\), we record the successive local maxima of \(x(t)\) for a trajectory on the strange attractor. Then we plot \(x_{n+1}\) vs. \(x_{n}\), where \(x_{n}\) denotes the \(n\)th local maximum. This Lorenz map for \(c=5\) is shown in Figure 10.6.7. The data points fall very nearly on a one-dimensional curve. Note the uncanny resemblance to the logistic map!

**Figure 10.6.7**: Olsen and Degn (1985), p.186

We can even compute an orbit diagram for the Rossler system. Now we allow all values of \(c\), not just those where the system is chaotic.
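The Lorenz-map construction just described is easy to try yourself. The following sketch (an illustrative script, not part of the text; the integrator, step size, transient length, and initial condition are all arbitrary choices) integrates the Rossler system at \(a=b=0.2\), \(c=5\) with a fourth-order Runge-Kutta step and records successive local maxima of \(x(t)\); plotting \(x_{n+1}\) against \(x_{n}\) then reproduces the nearly one-dimensional curve of Figure 10.6.7.

```python
# Lorenz map for the Rossler system at a = b = 0.2, c = 5: integrate with a
# fourth-order Runge-Kutta step, discard a transient so the orbit settles onto
# the attractor, then record successive local maxima x_n of x(t).
def rossler(s, a=0.2, b=0.2, c=5.0):
    x, y, z = s
    return (-y - z, x + a * y, b + z * (x - c))

def rk4_step(s, dt):
    def shift(u, v, h):
        return tuple(ui + h * vi for ui, vi in zip(u, v))
    k1 = rossler(s)
    k2 = rossler(shift(s, k1, dt / 2))
    k3 = rossler(shift(s, k2, dt / 2))
    k4 = rossler(shift(s, k3, dt))
    return tuple(s[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(3))

def local_maxima(n_peaks=100, dt=0.01, transient=200.0):
    s = (1.0, 1.0, 0.0)
    for _ in range(int(transient / dt)):
        s = rk4_step(s, dt)
    peaks, prev, prev2 = [], None, None
    while len(peaks) < n_peaks:
        s = rk4_step(s, dt)
        x = s[0]
        if prev2 is not None and prev > prev2 and prev > x:
            peaks.append(prev)          # discrete local maximum of x(t)
        prev2, prev = prev, x
    return peaks

xs = local_maxima()
pairs = list(zip(xs, xs[1:]))           # points (x_n, x_{n+1}) of the Lorenz map
```

Scattering the `pairs` (for instance with matplotlib) shows the unimodal curve; sweeping \(c\) and plotting all the maxima above each \(c\) gives the orbit diagram described next.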
Above each \(c\), we plot _all_ the local maxima \(x_{n}\) on the attractor for that value of \(c\). The number of different maxima tells us the "period" of the attractor. For instance, at \(c=3.5\) the attractor is period-2 (Figure 10.6.6), and hence there are two local maxima of \(x(t)\). Both of these points are graphed above \(c=3.5\) in Figure 10.6.8. We proceed in this way for all values of \(c\), thereby sweeping out the orbit diagram. This orbit diagram allows us to keep track of the bifurcations in the Rossler system. We see the period-doubling route to chaos and the large period-3 window--all our old friends are here.

Now we can see why certain physical systems are governed by Feigenbaum's universality theory--if the system's Lorenz map is nearly one-dimensional and unimodal, then the theory applies. This is certainly the case for the Rossler system, and probably for Libchaber's convecting mercury. But not all systems have one-dimensional Lorenz maps. For the Lorenz map to be almost one-dimensional, the strange attractor has to be very flat, i.e., only slightly more than two-dimensional. This requires that the system be highly dissipative; only two or three degrees of freedom are truly active, and the rest follow along slavishly. (Incidentally, that's another reason why Libchaber et al. (1982) applied a magnetic field; it increases the damping in the system, and thereby favors a low-dimensional brand of chaos.) So while the theory works for some mildly chaotic systems, it does not apply to fully turbulent fluids or fibrillating hearts, where there are many active degrees of freedom corresponding to complicated behavior in space as well as time. We are still a long way from understanding such systems.

### 10.7 Renormalization

In this section we give an intuitive introduction to Feigenbaum's (1979) renormalization theory for period-doubling.
For nice expositions at a higher mathematical level than that presented here, see Feigenbaum (1980), Collet and Eckmann (1980), Schuster (1989), Drazin (1992), and Cvitanovic (1989b).

Figure 10.6.8: Olsen and Degn (1985), p.186

First we introduce some notation. Let \(f(x,r)\) denote a unimodal map that undergoes a period-doubling route to chaos as \(r\) increases, and suppose that \(x_{m}\) is the maximum of \(f\). Let \(r_{n}\) denote the value of \(r\) at which a \(2^{n}\)-cycle is born, and let \(R_{n}\) denote the value of \(r\) at which the \(2^{n}\)-cycle is superstable. Feigenbaum phrased his analysis in terms of the superstable cycles, so let's get some practice with them.

**Example 10.7.1:** Find \(R_{0}\) and \(R_{1}\) for the map \(f(x,r)=r-x^{2}\).

_Solution:_ At \(R_{0}\) the map has a superstable fixed point, by definition. The fixed point condition is \(x^{*}=R_{0}-(x^{*})^{2}\) and the superstability condition is \(\lambda=\left(\partial f/\partial x\right)_{x=x^{*}}=0\). Since \(\partial f/\partial x=-2x\), we must have \(x^{*}=0\), i.e., the fixed point is the maximum of \(f\). Substituting \(x^{*}=0\) into the fixed point condition yields \(R_{0}=0\).

At \(R_{1}\) the map has a superstable 2-cycle. Let \(p\) and \(q\) denote the points of the cycle. Superstability requires that the multiplier \(\lambda=\left(-2p\right)\left(-2q\right)=0\), so the point \(x=0\) must be one of the points in the 2-cycle. Then the period-2 condition \(f^{2}(0,\,R_{1})=0\) implies \(R_{1}-(R_{1})^{2}=0\). Hence \(R_{1}=1\) (since the other root gives a fixed point, not a 2-cycle).

Example 10.7.1 illustrates a general rule: A superstable cycle of a unimodal map always contains \(x_{m}\) as one of its points. Consequently, there is a simple graphical way to locate \(R_{n}\) (Figure 10.7.1). We draw a horizontal line at height \(x_{m}\); then \(R_{n}\) occurs where this line intersects the _figtree_ portion of the orbit diagram (_Feigenbaum_ = fig tree in German).
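Example 10.7.1 is easy to verify numerically, and the same idea extends to higher \(n\). The sketch below (an illustrative script, not from the text; the brackets are assumptions read off the orbit diagram) exploits the general rule just stated: a superstable \(2^{n}\)-cycle of \(f(x,r)=r-x^{2}\) contains the maximum \(x=0\), so \(R_{n}\) is a root of \(f^{2^{n}}(0,r)=0\).

```python
def F(r, n):
    """Return f^(2^n)(0, r) for the map f(x, r) = r - x**2."""
    x = 0.0
    for _ in range(2 ** n):
        x = r - x * x
    return x

def bisect(n, lo, hi, tol=1e-12):
    # assumes F(., n) changes sign on [lo, hi]
    s = F(lo, n) > 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (F(mid, n) > 0) == s:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

R0 = bisect(0, -0.5, 0.5)   # superstable fixed point: R_0 = 0
R1 = bisect(1, 0.5, 1.2)    # superstable 2-cycle: R_1 = 1
R2 = bisect(2, 1.2, 1.35)   # superstable 4-cycle, about 1.3107
```

Via the conjugacy between the quadratic and logistic maps (one of the exercises for this chapter), these values correspond to \(r=2\), \(r=1+\sqrt{5}\approx 3.236\), and \(r\approx 3.4986\) for the logistic map.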
Note that \(R_{n}\) lies between \(r_{n}\) and \(r_{n+1}\). Numerical experiments show that the spacing between successive \(R_{n}\) also shrinks by the universal factor \(\delta\approx 4.669\).

The renormalization theory is based on the _self-similarity_ of the figtree--the twigs look like the earlier branches, except they are scaled down in both the \(x\) and \(r\) directions. This structure reflects the endless repetition of the same dynamical processes; a \(2^{n}\)-cycle is born, then becomes superstable, and then loses stability in a period-doubling bifurcation. To express the self-similarity mathematically, we compare \(f\) with its second iterate \(f^{2}\) at corresponding values of \(r\), and then "renormalize" one map into the other. Specifically, look at the graphs of \(f(x,R_{0})\) and \(f^{2}(x,R_{1})\) (Figure 10.7.2, a and b). This is a fair comparison because the maps have the same stability properties: \(x_{m}\) _is a superstable fixed point for both of them_. Please notice that to obtain Figure 10.7.2b, we took the second iterate of \(f\) _and_ increased \(r\) from \(R_{0}\) to \(R_{1}\). This \(r\)-shifting is a basic part of the renormalization procedure.

The small box of Figure 10.7.2b is reproduced in Figure 10.7.2c. The key point is that Figure 10.7.2c looks practically identical to Figure 10.7.2a, except for a change of scale and a reversal of both axes. From the point of view of dynamics, the two maps are very similar--cobweb diagrams starting from corresponding points would look almost the same. Now we need to convert these qualitative observations into formulas. A helpful first step is to translate the origin of \(x\) to \(x_{m}\), by redefining \(x\) as \(x-x_{m}\). This redefinition of \(x\) dictates that we also subtract \(x_{m}\) from \(f\), since the values of \(f\) are themselves \(x\)-values: \(x_{n+1}=f(x_{n},r)\). The translated graphs are shown in Figure 10.7.3a and 10.7.3b.
Next, to make Figure 10.7.3b look like Figure 10.7.3a, we blow it up by a factor \(|\alpha|>1\) in both directions, and also invert it by replacing \((x,y)\) by \((-x,-y)\). Both operations can be accomplished in one step if we define the _scale factor_ \(\alpha\) to be _negative_. As you are asked to show in Exercise 10.7.2, rescaling by \(\alpha\) is equivalent to replacing \(f^{2}(x,R_{1})\) by \(\alpha\,f^{2}(x/\alpha,R_{1})\). Finally, the resemblance between Figure 10.7.3a and Figure 10.7.3c shows that \[f(x,R_{0})\approx\alpha f^{2}\left(\frac{x}{\alpha},R_{1}\right).\] In summary, \(f\) has been _renormalized_ by taking its second iterate, rescaling \(x\to x/\alpha\), and shifting \(r\) to the next superstable value.

There is no reason to stop at \(f^{2}\). For instance, we can renormalize \(f^{2}\) to generate \(f^{4}\); it too has a superstable fixed point if we shift \(r\) to \(R_{2}\). The same reasoning as above yields \[f^{2}\left(\frac{x}{\alpha},R_{1}\right)\approx\alpha f^{4}\left(\frac{x}{\alpha^{2}},R_{2}\right).\] When expressed in terms of the original map \(f(x,R_{0})\), this equation becomes \[f(x,R_{0})\approx\alpha^{2}f^{4}\left(\frac{x}{\alpha^{2}},R_{2}\right).\] After renormalizing \(n\) times we get \[f(x,R_{0})\approx\alpha^{n}f^{(2^{n})}\left(\frac{x}{\alpha^{n}},R_{n}\right).\] Feigenbaum found numerically that \[\lim_{n\to\infty}\alpha^{n}f^{(2^{n})}\left(\frac{x}{\alpha^{n}},R_{n}\right)=g_{0}(x), \tag{1}\] where \(g_{0}(x)\) is a _universal function_ with a superstable fixed point. The limiting function exists only if \(\alpha\) is chosen correctly, specifically, \(\alpha=-2.5029\ldots\). Here "universal" means that the limiting function \(g_{0}(x)\) is independent of the original \(f\) (almost).
This seems incredible at first, but the form of (1) suggests the explanation: \(g_{0}(x)\) depends on \(f\) only through its behavior near \(x=0\), since that's all that survives in the argument \(x/\alpha^{n}\) as \(n\to\infty\). With each renormalization, we're blowing up a smaller and smaller neighborhood of the maximum of \(f\), so practically all information about the global shape of \(f\) is lost. One caveat: The _order_ of the maximum is never forgotten. Hence a more precise statement is that \(g_{0}(x)\) is universal for all \(f\) _with a quadratic maximum_ (the generic case). A different \(g_{0}(x)\) is found for \(f\)'s with a fourth-degree maximum, etc.

To obtain other universal functions \(g_{i}(x)\), start with \(f(x,R_{i})\) instead of \(f(x,R_{0})\): \[g_{i}(x)=\lim_{n\to\infty}\alpha^{n}f^{(2^{n})}\left(\frac{x}{\alpha^{n}},R_{n+i}\right).\] Here \(g_{i}(x)\) is a universal function with a superstable \(2^{i}\)-cycle. The case where we start with \(R_{i}=R_{\infty}\) (at the onset of chaos) is the most interesting and important, since then \[f(x,R_{\infty})\approx\alpha f^{2}\left(\frac{x}{\alpha},R_{\infty}\right).\] For once, we don't have to shift \(r\) when we renormalize! The limiting function \(g_{\infty}(x)\), usually called \(g(x)\), satisfies \[g(x)=\alpha g^{2}\left(\frac{x}{\alpha}\right). \tag{2}\] This is a _functional equation_ for \(g(x)\) and the universal scale factor \(\alpha\). It is self-referential: \(g(x)\) is defined in terms of itself.

The functional equation is not complete until we specify boundary conditions on \(g(x)\). After the shift of origin, all our unimodal \(f\)'s have a maximum at \(x=0\), so we require \(g^{\prime}(0)=0\). Also, we can set \(g(0)=1\) without loss of generality. (This just defines the scale for \(x\); if \(g(x)\) is a solution of (2), so is \(\mu g(x/\mu)\), with the same \(\alpha\). See Exercise 10.7.3.) Now we solve for \(g(x)\) and \(\alpha\).
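Before carrying out the expansion by hand, it is worth seeing that the scheme works numerically. The sketch below (an illustrative script, not from the text) truncates \(g(x)=1+c_{2}x^{2}+c_{4}x^{4}\), substitutes into (2), and matches powers of \(x\): the constant and \(x^{2}\) terms force \(\alpha(1+c_{2}+c_{4})=1\) and \(\alpha=2c_{2}+4c_{4}\), while the \(x^{4}\) terms give a third condition. A small Newton iteration solves the resulting system.

```python
# Approximate solution of the functional equation g(x) = alpha*g(g(x/alpha))
# with the truncation g(x) = 1 + c2*x**2 + c4*x**4.
def residuals(c2, c4):
    a = 2 * c2 + 4 * c4                            # alpha from the x^2 match
    r1 = a * (1 + c2 + c4) - 1                     # constant-term match
    r2 = (2 * c2 * c4 + c2 ** 3 + 4 * c4 ** 2
          + 6 * c2 ** 2 * c4) - c4 * a ** 3        # x^4-term match
    return r1, r2

def newton(c2=-1.5, c4=0.1, h=1e-7, steps=40):
    # two-variable Newton iteration with a finite-difference Jacobian
    for _ in range(steps):
        r1, r2 = residuals(c2, c4)
        d11 = (residuals(c2 + h, c4)[0] - r1) / h
        d12 = (residuals(c2, c4 + h)[0] - r1) / h
        d21 = (residuals(c2 + h, c4)[1] - r2) / h
        d22 = (residuals(c2, c4 + h)[1] - r2) / h
        det = d11 * d22 - d12 * d21
        c2 -= (r1 * d22 - r2 * d12) / det
        c4 -= (r2 * d11 - r1 * d21) / det
    return c2, c4, 2 * c2 + 4 * c4

c2, c4, alpha = newton()   # alpha comes out near -2.53
```

Even this lowest nontrivial truncation lands within about 1% of \(\alpha=-2.5029\ldots\); its coefficients \(c_{2}\approx-1.52\) and \(c_{4}\approx 0.13\) differ slightly from the seven-term values, as one would expect for so short an expansion.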
At \(x=0\) the functional equation gives \(g(0)=\alpha\,g(g(0))\). But \(g(0)=1\), so \(1=\alpha g(1)\). Hence \[\alpha=1/g(1),\] which shows that \(\alpha\) is determined by \(g(x)\). No one has ever found a closed form solution for \(g(x)\), so we resort to a power series solution \[g(x)=1+c_{2}x^{2}+c_{4}x^{4}+\ldots\] (which assumes that the maximum is quadratic). The coefficients are determined by substituting the power series into (2) and matching like powers of \(x\). Feigenbaum (1979) used a seven-term expansion, and found \(c_{2}\approx-1.5276\), \(c_{4}\approx 0.1048\), along with \(\alpha\approx-2.5029\). Thus the renormalization theory has succeeded in explaining the value of \(\alpha\) observed numerically. The theory also explains the value of \(\delta\). Unfortunately, that part of the story requires more sophisticated apparatus than we are prepared to discuss (operators in function space, Frechet derivatives, etc). Instead we turn now to a concrete example of renormalization. The calculations are only approximate, but they can be done explicitly, using algebra instead of functional equations.

### Renormalization for Pedestrians

The following pedagogical calculation is intended to clarify the renormalization process. As a bonus, it gives closed form approximations for \(\alpha\) and \(\delta\). Our treatment is modified from May and Oster (1980) and Helleman (1980). Let \(f(x,\mu)\) be any unimodal map that undergoes a period-doubling route to chaos. Suppose that the variables are defined such that the period-2 cycle is born at \(x=0\) when \(\mu=0\). Then for both \(x\) and \(\mu\) close to 0, the map is approximated by \[x_{n+1}=-(1+\mu)x_{n}+ax_{n}^{2}+\ldots,\] since the eigenvalue is \(-1\) at the bifurcation. (We are going to neglect all higher order terms in \(x\) and \(\mu\); that's why our results will be only approximate.) Without loss of generality we can set \(a=1\) by rescaling \(x\to x/a\).
So locally our map has the normal form \[x_{n+1}=-(1+\mu)x_{n}+x_{n}^{2}+\ldots. \tag{3}\] Here's the idea: for \(\mu>0\), there exist period-2 points, say \(p\) and \(q\). As \(\mu\) increases, \(p\) and \(q\) themselves will eventually period-double. When this happens, the dynamics of \(f^{2}\) near \(p\) will necessarily be approximated by a map _with the same algebraic form as_ (3), since all maps have this form near a period-doubling bifurcation. Our strategy is to calculate the map governing the dynamics of \(f^{2}\) near \(p\), and renormalize it to look like (3). This defines a renormalization iteration, which in turn leads to a prediction of \(\alpha\) and \(\delta\).

First, we find \(p\) and \(q\). By definition of period-2, \(p\) is mapped to \(q\) and \(q\) to \(p\). Hence (3) yields \[p=-(1+\mu)q+q^{2},\qquad q=-(1+\mu)p+p^{2}.\] By subtracting one of these equations from the other, and factoring out \(p-q\), we find that \(p+q=\mu\). Then multiplying the equations together and simplifying yields \(pq=-\mu\). Hence \[p=\frac{\mu+\sqrt{\mu^{2}+4\mu}}{2},\qquad q=\frac{\mu-\sqrt{\mu^{2}+4\mu}}{2}.\]

Now shift the origin to \(p\) and look at the local dynamics. Let \[f(x)=-(1+\mu)x+x^{2}.\] Then \(p\) is a fixed point of \(f^{2}\). Expand \(p+\eta_{n+1}=f^{2}(p+\eta_{n})\) in powers of the small deviation \(\eta_{n}\). After some algebra (Exercise 10.7.10) and neglecting higher order terms as usual, we get \[\eta_{n+1}=(1-4\mu-\mu^{2})\eta_{n}+C\eta_{n}^{2}+\ldots \tag{4}\] where \[C=4\mu+\mu^{2}-3\sqrt{\mu^{2}+4\mu}. \tag{5}\] As promised, the \(\eta\)-map (4) has the same algebraic form as the original map (3)! We can renormalize (4) into (3) by rescaling \(\eta\) and by defining a new \(\mu\). (Note: The need for _both_ of these steps was anticipated in the abstract version of renormalization discussed earlier. We have to rescale the state variable \(\eta\) _and_ shift the bifurcation parameter \(\mu\).)
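The algebra behind (4) and (5) is easy to spot-check numerically. In the sketch below (an illustrative script, not from the text; \(\mu=0.3\) is an arbitrary sample value), the linear and quadratic coefficients of \(f^{2}\) about \(p\) are estimated by central differences and compared with \(1-4\mu-\mu^{2}\) and \(C\).

```python
# Spot-check of the expansion (4)-(5) at the sample value mu = 0.3:
# the linear coefficient of f^2 about the period-2 point p should be
# 1 - 4*mu - mu^2, and half the second derivative should equal C of (5).
mu = 0.3
s = (mu * mu + 4 * mu) ** 0.5
p = (mu + s) / 2                    # period-2 point from the text

def f(x):
    return -(1 + mu) * x + x * x

def f2(x):
    return f(f(x))

h = 1e-4
lin = (f2(p + h) - f2(p - h)) / (2 * h)              # coefficient of eta
quad = (f2(p + h) - 2 * f2(p) + f2(p - h)) / (h * h)  # equals 2*C

print(lin, 1 - 4 * mu - mu * mu)          # the linear coefficients agree
print(quad / 2, 4 * mu + mu * mu - 3 * s)  # the quadratic coefficient is C
```

Repeating the check at other small \(\mu\) (up to finite-difference error) confirms that (4) and (5) hold to the stated order.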
To rescale \(\eta\), let \(\tilde{x}_{n}=C\eta_{n}\). Then (4) becomes \[\tilde{x}_{n+1}=(1-4\mu-\mu^{2})\tilde{x}_{n}+\tilde{x}_{n}^{2}+\ldots. \tag{6}\] This matches (3) almost perfectly. All that remains is to define a new parameter \(\tilde{\mu}\) by \(-(1+\tilde{\mu})=(1-4\mu-\mu^{2})\). Then (6) achieves the desired form \[\tilde{x}_{n+1}=-(1+\tilde{\mu})\tilde{x}_{n}+\tilde{x}_{n}^{2}+\ldots \tag{7}\] where the renormalized parameter \(\tilde{\mu}\) is given by \[\tilde{\mu}=\mu^{2}+4\mu-2. \tag{8}\]

When \(\tilde{\mu}=0\) the renormalized map (7) undergoes a flip bifurcation. Equivalently, the 2-cycle for the original map loses stability and creates a 4-cycle. This brings us to the end of the first period-doubling.

**Example 10.7.2:** Using (8), calculate the value of \(\mu\) at which the original map (3) gives birth to a period-4 cycle. Compare your result to the value \(r_{2}=1+\sqrt{6}\) found for the logistic map in Example 10.3.3.

_Solution:_ The period-4 solution is born when \(\tilde{\mu}=\mu^{2}+4\mu-2=0\). Solving this quadratic equation yields \(\mu=-2+\sqrt{6}\). (The other solution is negative and is not relevant.) Now recall that the origin of \(\mu\) was defined such that \(\mu=0\) at the birth of period-2, which occurs at \(r=3\) for the logistic map. Hence \(r_{2}=3+(-2+\sqrt{6})=1+\sqrt{6}\), which recovers the result obtained in Example 10.3.3.

Because (7) has the same form as the original map, we can do the same analysis all over again, now regarding (7) as the fundamental map. In other words, we can renormalize _ad infinitum_! This allows us to bootstrap our way to the onset of chaos, using only the _renormalization transformation_ (8). Let \(\mu_{k}\) denote the parameter value at which the original map (3) gives birth to a \(2^{k}\)-cycle. By definition of \(\mu\), we have \(\mu_{1}=0\); by Example 10.7.2, \(\mu_{2}=-2+\sqrt{6}\approx 0.449\).
In general, the \(\mu_{k}\) satisfy \[\mu_{k-1}=\mu_{k}^{2}+4\mu_{k}-2\;. \tag{9}\] At first it looks like we have the subscripts backwards, but think about it, using Example 10.7.2 as a guide. To obtain \(\mu_{2}\), we set \(\tilde{\mu}=0\) (\(=\mu_{1}\)) in (8) and then solved for \(\mu\). Similarly, to obtain \(\mu_{k}\), we set \(\tilde{\mu}=\mu_{k-1}\) in (8) and then solve for \(\mu\). To convert (9) into a forward iteration, solve for \(\mu_{k}\) in terms of \(\mu_{k-1}\): \[\mu_{k}=-2+\sqrt{6+\mu_{k-1}}\;. \tag{10}\] Exercise 10.7.11 asks you to give a cobweb analysis of (10), starting from the initial condition \(\mu_{1}=0\). You'll find that \(\mu_{k}\rightarrow\mu^{\ast}\), where \(\mu^{\ast}>0\) is a stable fixed point corresponding to the onset of chaos.

**Example 10.7.3:** Find \(\mu^{\ast}\).

_Solution:_ It is slightly easier to work with (9). The fixed point satisfies \(\mu^{\ast}=(\mu^{\ast})^{2}+4\mu^{\ast}-2\), and is given by \[\mu^{*}=\frac{1}{2}\left(-3+\sqrt{17}\right)\approx 0.56. \tag{11}\] Incidentally, this gives a remarkably accurate prediction of \(r_{\infty}\) for the logistic map. Recall that \(\mu=0\) corresponds to the birth of period-2, which occurs at \(r=3\) for the logistic map. Thus \(\mu^{*}\) corresponds to \(r_{\infty}\approx 3.56\) whereas the actual numerical result is \(r_{\infty}\approx 3.57\)!

Finally we get to see how \(\delta\) and \(\alpha\) make their entry. For \(k\gg 1\), the \(\mu_{k}\) should converge geometrically to \(\mu^{*}\) at a rate given by the universal constant \(\delta\). Hence \(\delta\approx(\mu_{k-1}-\mu^{*})/(\mu_{k}-\mu^{*})\). As \(k\to\infty\), this ratio tends to \(0/0\) and therefore may be evaluated by L'Hopital's rule. The result is \[\delta\approx\left.\frac{d\mu_{k-1}}{d\mu_{k}}\right|_{\mu=\mu^{*}}=2\mu^{*}+4\] where we have used (9) in calculating the derivative.
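The recursion (10) can be iterated directly (an illustrative script, not from the text): starting from \(\mu_{1}=0\), the iterates converge quickly to the fixed point (11), and \(2\mu^{*}+4\) reproduces the pedestrian estimate of \(\delta\).

```python
# Iterate the renormalization recursion (10), mu_k = -2 + sqrt(6 + mu_{k-1}),
# from mu_1 = 0.  The iterates converge to mu* of (11), and 2*mu* + 4 gives
# the pedestrian estimate of delta.
from math import sqrt

mu = 0.0                        # mu_1 = 0: birth of the period-2 cycle
history = [mu]
for _ in range(60):
    mu = -2 + sqrt(6 + mu)      # recursion (10)
    history.append(mu)

mu_star = (-3 + sqrt(17)) / 2   # exact fixed point, about 0.5616
delta_approx = 2 * mu + 4       # about 5.12, i.e. 1 + sqrt(17)
```

The first few iterates 0, 0.449, 0.540, 0.557, ... show the fast geometric convergence; the contraction rate at \(\mu^{*}\) is \(1/(2\sqrt{6+\mu^{*}})\approx 0.2\), so a dozen iterations already pin down \(\mu^{*}\) to six digits.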
Finally, we substitute for \(\mu^{*}\) using (11) and obtain \[\delta\approx 1+\sqrt{17}\approx 5.12.\] This estimate is about 10 percent larger than the true \(\delta\approx 4.67\), which is not bad considering our approximations. To find the approximate \(\alpha\), note that we used \(C\) as a rescaling parameter when we defined \(\tilde{x}_{n}=C\eta_{n}\). Hence \(C\) plays the role of \(\alpha\). Substitution of \(\mu^{*}\) into (5) yields \[C=\frac{1+\sqrt{17}}{2}-3\left[\frac{1+\sqrt{17}}{2}\right]^{1/2}\approx-2.24,\] which is also within 10 percent of the actual value \(\alpha\approx-2.50\).

**EXERCISES FOR CHAPTER 10**

**Note**: Many of these exercises ask you to use a computer. Feel free to write your own programs, or to use commercially available software.

### Fixed Points and Cobwebs

(Calculator experiments) Use a pocket calculator to explore the following maps. Start with some number and then keep pressing the appropriate function key; what happens? Then try a different number--is the eventual pattern the same? If possible, explain your results mathematically, using a cobweb or some other argument.

**10.1.1**: \(x_{n+1}=\sqrt{x_{n}}\)

**10.1.2**: \(x_{n+1}=x_{n}^{3}\)

**10.1.3**: \(x_{n+1}=\exp x_{n}\)

**10.1.4**: \(x_{n+1}=\ln x_{n}\)

**10.1.5**: \(x_{n+1}=\cot x_{n}\)

**10.1.6**: \(x_{n+1}=\tan x_{n}\)

**10.1.7**: \(x_{n+1}=\sinh x_{n}\)

**10.1.8**: \(x_{n+1}=\tanh x_{n}\)

**10.1.9**: Analyze the map \(x_{n+1}=2x_{n}/(1+x_{n})\) for both positive and negative \(x_{n}\).

**10.1.10**: Show that the map \(x_{n+1}=1+\frac{1}{2}\sin x_{n}\) has a unique fixed point. Is it stable?

**10.1.11**: (Cubic map) Consider the map \(x_{n+1}=3x_{n}-x_{n}^{3}\). a) Find all the fixed points and classify their stability. b) Draw a cobweb starting at \(x_{0}=1.9\). c) Draw a cobweb starting at \(x_{0}=2.1\). d) Try to explain the dramatic difference between the orbits found in parts (b) and (c).
For instance, can you prove that the orbit in (b) will remain bounded for all \(n\)? Or that \(|x_{n}|\to\infty\) in (c)?

**10.1.12**: (Newton's method) Suppose you want to find the roots of an equation \(g(x)=0\). Then _Newton's method_ says you should consider the map \(x_{n+1}=f(x_{n})\), where \[f(x_{n})=x_{n}-\frac{g(x_{n})}{g^{\prime}(x_{n})}.\] a) To calibrate the method, write down the "Newton map" \(x_{n+1}=f(x_{n})\) for the equation \(g(x)=x^{2}-4=0\). b) Show that the Newton map has fixed points at \(x^{*}=\pm 2\). c) Show that these fixed points are _superstable_. d) Iterate the map numerically, starting from \(x_{0}=1\). Notice the extremely rapid convergence to the right answer!

**10.1.13**: (Newton's method and superstability) Generalize Exercise 10.1.12 as follows. Show that (under appropriate circumstances, to be stated) the roots of an equation \(g(x)=0\) _always_ correspond to superstable fixed points of the Newton map \(x_{n+1}=f(x_{n})\), where \(f(x_{n})=x_{n}-g(x_{n})/g^{\prime}(x_{n})\). (This explains why Newton's method converges so fast--if it converges at all.)

**10.1.14**: Prove that \(x^{*}=0\) is a globally stable fixed point for the map \(x_{n+1}=-\sin x_{n}\). (Hint: Draw the line \(x_{n+1}=-x_{n}\) on your cobweb diagram, in addition to the usual line \(x_{n+1}=x_{n}\).)

### Logistic Map: Numerics

**10.2.1**: Consider the logistic map for all real \(x\) and for any \(r>1\). a) Show that if \(x_{n}>1\) for some \(n\), then subsequent iterations diverge toward \(-\infty\). (For the application to population biology, this means the population goes extinct.) b) Given the result of part (a), explain why it is sensible to restrict \(r\) and \(x\) to the intervals \(r\in[0,4]\) and \(x\in[0,1]\).

**10.2.2**: Use a cobweb to show that \(x^{*}=0\) is globally stable for \(0\leq r\leq 1\) in the logistic map.

**10.2.3**: Compute the orbit diagram for the logistic map.

**10.2.4**: Plot the orbit diagram for each of the following maps.
Be sure to use a large enough range for both \(r\) and \(x\) to include the main features of interest. Also, try different initial conditions, just in case it matters. a) (Standard period-doubling route to chaos) b) (One period-doubling bifurcation and the show is over) c) (Period-doubling and chaos galore) d) (Nasty mess) e) (Attractors sometimes come in symmetric pairs)

### Logistic Map: Analysis

**10.3.1**: (Superstable fixed point) Find the value of \(r\) at which the logistic map has a superstable fixed point.

**10.3.2**: (Superstable 2-cycle) Let \(p\) and \(q\) be points in a 2-cycle for the logistic map. a) Show that if the cycle is _superstable_, then either \(p=\frac{1}{2}\) or \(q=\frac{1}{2}\). (In other words, the point where the map takes on its maximum must be one of the points in the 2-cycle.) b) Find the value of \(r\) at which the logistic map has a superstable 2-cycle.

**10.3.3**: Analyze the long-term behavior of the map \(x_{n+1}=rx_{n}/(1+x_{n}^{2})\), where \(r>0\). Find and classify all fixed points as a function of \(r\). Can there be periodic solutions? Chaos?

**10.3.4**: (Quadratic map) Consider the _quadratic map_ \(x_{n+1}=x_{n}^{2}+c\). a) Find and classify all the fixed points as a function of \(c\). b) Find the values of \(c\) at which the fixed points bifurcate, and classify those bifurcations. c) For which values of \(c\) is there a stable 2-cycle? When is it superstable? d) Plot a partial bifurcation diagram for the map. Indicate the fixed points, the 2-cycles, and their stability.

**10.3.5**: (Conjugacy) Show that the logistic map \(x_{n+1}=rx_{n}(1-x_{n})\) can be transformed into the quadratic map \(y_{n+1}=y_{n}^{2}+c\) by a linear change of variables, \(x_{n}=ay_{n}+b\), where \(a\), \(b\) are to be determined. (One says that the logistic and quadratic maps are "conjugate." More generally, a _conjugacy_ is a change of variables that transforms one map into another.
If two maps are conjugate, they are equivalent as far as their dynamics are concerned; you just have to translate from one set of variables to the other. Strictly speaking, the transformation should be a homeomorphism, so that all topological features are preserved.)

**10.3.6**: (Cubic map) Consider the cubic map \(x_{n+1}=f(x_{n})\), where \(f(x_{n})=rx_{n}-x_{n}^{3}\). a) Find the fixed points. For which values of \(r\) do they exist? For which values are they stable? b) To find the 2-cycles of the map, suppose that \(f(p)=q\) and \(f(q)=p\). Show that \(p\), \(q\) are roots of the equation \(x(x^{2}-r+1)\left(x^{2}-r-1\right)\left(x^{4}-rx^{2}+1\right)=0\) and use this to find all the 2-cycles. c) Determine the stability of the 2-cycles as a function of \(r\). d) Plot a partial bifurcation diagram, based on the information obtained.

**10.3.7**: (A chaotic map that can be analyzed completely) Consider the _decimal shift map_ on the unit interval given by \[x_{n+1}=10x_{n}\left(\text{mod 1}\right).\] As usual, "mod 1" means that we look only at the noninteger part of \(x\). For example, 2.63 (mod 1) = 0.63. a) Draw the graph of the map. b) Find all the fixed points. (Hint: Write \(x_{n}\) in decimal form.) c) Show that the map has periodic points of all periods, but that all of them are unstable. (For the first part, it suffices to give an explicit example of a period-\(p\) point, for each integer \(p>1\).) d) Show that the map has infinitely many aperiodic orbits. e) By considering the rate of separation between two nearby orbits, show that the map has sensitive dependence on initial conditions.

**10.3.8**: (Dense orbit for the decimal shift map) Consider a map of the unit interval into itself. An orbit \(\{x_{n}\}\) is said to be "dense" if it eventually gets arbitrarily close to every point in the interval. Such an orbit has to hop around rather crazily!
More precisely, given any \(\varepsilon>0\) and any point \(p\in[0,1]\), the orbit \(\{x_{n}\}\) is _dense_ if there is some finite \(n\) such that \(|x_{n}-p|<\varepsilon\). Explicitly construct a dense orbit for the decimal shift map \(x_{n+1}=10x_{n}\left(\text{mod 1}\right)\).

10.3.9 (Binary shift map) Show that the _binary shift map_ \(x_{n+1}=2x_{n}\) (mod 1) has sensitive dependence on initial conditions, infinitely many periodic and aperiodic orbits, and a dense orbit. (Hint: Redo Exercises 10.3.7 and 10.3.8, but write \(x_{n}\) as a binary number, not a decimal.)

10.3.10 (Exact solutions for the logistic map with \(r=4\)) The previous exercise shows that the orbits of the binary shift map can be wild. Now we are going to see that this same wildness occurs in the logistic map when \(r=4\).

a) Let \(\{\theta_{n}\}\) be an orbit of the binary shift map \(\theta_{n+1}=2\theta_{n}\) (mod 1), and define a new sequence \(\{x_{n}\}\) by \(x_{n}=\sin^{2}(\pi\theta_{n})\). Show that \(x_{n+1}=4x_{n}(1-x_{n})\), no matter what \(\theta_{0}\) we started with. Hence any such orbit is an exact solution of the logistic map with \(r=4\)!

b) Graph the time series \(x_{n}\) vs. \(n\), for various choices of \(\theta_{0}\).

10.3.11 (Subcritical flip) Let \(x_{n+1}=f(x_{n})\), where \(f(x)=-(1+r)x-x^{2}-2x^{3}\).

a) Classify the linear stability of the fixed point \(x^{*}=0\).

b) Show that a flip bifurcation occurs at \(x^{*}=0\) when \(r=0\).

c) By considering the first few terms in the Taylor series for \(f^{2}(x)\) or otherwise, show that there is an _unstable_ 2-cycle for \(r<0\), and that this cycle coalesces with \(x^{*}=0\) as \(r\to 0\) from below.

d) What is the long-term behavior of orbits that start near \(x^{*}=0\), both for \(r<0\) and \(r>0\)?

10.3.12 (Numerics of superstable cycles) Let \(R_{n}\) denote the value of \(r\) at which the logistic map has a superstable cycle of period \(2^{n}\).

a)
Write an implicit but exact formula for \(R_{n}\) in terms of the point \(x=\frac{1}{2}\) and the function \(f(x,r)=rx(1-x)\).

b) Using a computer and the result of part (a), find \(R_{2}\), \(R_{3},\ldots,R_{7}\) to five significant figures.

c) Evaluate \(\frac{R_{6}-R_{5}}{R_{7}-R_{6}}\).

10.3.13 (Tantalizing patterns) The orbit diagram of the logistic map (Figure 10.2.7) exhibits some striking features that are rarely discussed in books.

a) There are several smooth, dark tracks of points running through the chaotic part of the diagram. What are these curves? (Hint: Think about \(f(x_{m},r)\), where \(x_{m}=\frac{1}{2}\) is the point at which \(f\) is maximized.)

b) Can you find the exact value of \(r\) at the corner of the "big wedge"? (Hint: Several of the dark tracks in part (a) intersect at this corner.)

### 10.4 Periodic Windows

10.4.1 (Exponential map) Consider the map \(x_{n+1}=r\exp x_{n}\) for \(r>0\).

a) Analyze the map by drawing a cobweb.

b) Show that a tangent bifurcation occurs at \(r=1/e\).

c) Sketch the time series \(x_{n}\) vs. \(n\) for \(r\) just above and just below \(r=1/e\).

Analyze the map \(x_{n+1}=rx_{n}^{2}/(1+x_{n}^{2})\). Find and classify all the bifurcations and draw the bifurcation diagram. Can this system exhibit intermittency?

(A superstable 3-cycle) The map \(x_{n+1}=1-rx_{n}^{2}\) has a superstable 3-cycle at a certain value of \(r\). Find a cubic equation for this \(r\).

Approximate the value of \(r\) at which the logistic map has a superstable 3-cycle. Please give a numerical approximation that is accurate to at least four places after the decimal point.

(Band merging and crisis) Show numerically that the period-doubling bifurcations of the 3-cycle for the logistic map accumulate near \(r=3.8495\ldots\), to form three small chaotic bands. Show that these chaotic bands merge near \(r=3.857\ldots\) to form a much larger attractor that nearly fills an interval.
This discontinuous jump in the size of an attractor is an example of a _crisis_ (Grebogi, Ott, and Yorke 1983a).

(A superstable cycle) Consider the logistic map with \(r=3.7389149\). Plot the cobweb diagram, starting from \(x_{0}=\frac{1}{2}\) (the maximum of the map). You should find a superstable cycle. What is its period?

(Iteration patterns) Superstable cycles for the logistic map can be characterized by a string of \(R\)'s and \(L\)'s, as follows. By convention, we start the cycle at \(x_{0}=\frac{1}{2}\). Then if the \(n\)th iterate \(x_{n}\) lies to the right of \(x_{0}=\frac{1}{2}\), the \(n\)th letter in the string is an \(R\); otherwise it's an \(L\). (No letter is used if \(x_{n}=\frac{1}{2}\), since the superstable cycle is then complete.) The string is called the _symbol sequence_ or _iteration pattern_ for the superstable cycle (Metropolis et al. 1973).

a) Show that for the logistic map with \(r>1+\sqrt{5}\), the first two letters are always \(RL\).

b) What is the iteration pattern for the orbit you found in Exercise 10.4.6?

(Intermittency in the Lorenz equations) Solve the Lorenz equations numerically for \(\sigma=10\), \(b=\frac{8}{3}\), and \(r\) near 166.

a) Show that if \(r=166\), all trajectories are attracted to a stable limit cycle. Plot both the \(xz\) projection of the cycle, and the time series \(x(t)\).

b) Show that if \(r=166.2\), the trajectory looks like the old limit cycle for much of the time, but occasionally it is interrupted by chaotic bursts. This is the signature of intermittency.

c) Show that as \(r\) increases, the bursts become more frequent and last longer.

(Period-doubling in the Lorenz equations) Solve the Lorenz equations numerically for \(\sigma=10\), \(b=\frac{8}{3}\), and \(r=148.5\). You should find a stable limit cycle. Then repeat the experiment for \(r=147.5\) to see a period-doubled version of this cycle. (When plotting your results, discard the initial transient, and use the \(xy\) projections of the attractors.)
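The superstable-cycle and iteration-pattern exercises earlier in this section lend themselves to a quick numerical experiment. The sketch below (my own illustration, not the requested cobweb plot) iterates the logistic map from \(x_{0}=\frac{1}{2}\) and records an \(R\) or \(L\) for each iterate until the orbit returns to the maximum, which closes the superstable cycle; the tolerance `tol` is an arbitrary choice.

```python
def logistic(x, r):
    return r * x * (1 - x)

def superstable_cycle(r, max_iter=1000, tol=1e-3):
    """Iterate from the map's maximum x0 = 1/2, recording an R or L for
    each iterate, until the orbit returns to 1/2 (the cycle closes)."""
    x = 0.5
    pattern = ""
    for n in range(1, max_iter + 1):
        x = logistic(x, r)
        if abs(x - 0.5) < tol:   # back at the maximum: superstable cycle complete
            return n, pattern
        pattern += "R" if x > 0.5 else "L"
    raise RuntimeError("orbit did not return to 1/2")

period, pattern = superstable_cycle(3.7389149)
print(period, pattern)
```

Running this at \(r=3.7389149\) reports a period-5 cycle whose pattern begins with \(RL\), consistent with part (a) of the iteration-pattern exercise.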
(The birth of period 3) This is a hard exercise. The goal is to show that the period-3 cycle of the logistic map is born in a tangent bifurcation at \(r=1+\sqrt{8}=3.8284\ldots\). Here are a few vague hints. There are four unknowns: the three period-3 points \(a\), \(b\), \(c\) and the bifurcation value \(r\). There are also four equations: \(f(a)=b\), \(f(b)=c\), \(f(c)=a\), and the tangent bifurcation condition. Try to eliminate \(a\), \(b\), \(c\) (which we don't care about anyway) and get an equation for \(r\) alone. It may help to shift coordinates so that the map has its maximum at \(x=0\) rather than \(x=\frac{1}{2}\). Also, you may want to change variables again to symmetric polynomials involving sums of products of \(a\), \(b\), \(c\). See Saha and Strogatz (1995) for one solution, probably not the most elegant one!

(Repeated exponentiation) Let \(a>0\) be an arbitrary positive real number, and consider the sequence \[x_{1}=a,\quad x_{2}=a^{a},\quad x_{3}=a^{\left(a^{a}\right)},\] and so on, where the general term is \(x_{n+1}=a^{x_{n}}\). Analyze the long-term behavior of the sequence \(\{x_{n}\}\) as \(n\to\infty\), given that \(x_{1}=a\), and then discuss how that long-term behavior depends on \(a\). For instance, show that for certain values of \(a\), the terms \(x_{n}\) tend to some limiting value. How does that limit depend on \(a\)? For which values of \(a\) is the long-term behavior more complicated? What happens then? After you finish exploring these questions on your own, you may want to consult Knoebel (1981) and Rippon (1983) for a taste of the extensive history surrounding iterated exponentials, going all the way back to Euler (1777).

### 10.5 Liapunov Exponent

Calculate the Liapunov exponent for the linear map \(x_{n+1}=rx_{n}\).

Calculate the Liapunov exponent for the decimal shift map \(x_{n+1}=10x_{n}\) (mod 1).

Analyze the dynamics of the tent map for \(r\leq 1\).
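The exercises above can be done with pencil and paper, but for maps without an exact answer the Liapunov exponent is estimated numerically as the orbit average of \(\ln|f'(x_{n})|\). Here is a minimal sketch (my own example, not one of the exercises) for the logistic map, where at \(r=4\) the exact exponent is known to be \(\ln 2\); the initial condition and iteration counts are arbitrary choices.

```python
import math

def liapunov_logistic(r, x0=0.3, n_transient=1000, n_terms=100_000):
    """Estimate lambda = lim (1/N) * sum of ln|f'(x_n)| along an orbit,
    where f(x) = r*x*(1-x), so f'(x) = r*(1 - 2*x)."""
    x = x0
    for _ in range(n_transient):   # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_terms):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n_terms

lam = liapunov_logistic(4.0)
print(lam)   # should be close to ln 2 ≈ 0.693
```

The same average, plotted against \(r\), is exactly what the sine-map computer exercise in this section asks for.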
(No windows for the tent map) Prove that, in contrast to the logistic map, the tent map does _not_ have periodic windows interspersed with chaos.

Plot the orbit diagram for the tent map.

Using a computer, compute and plot the Liapunov exponent as a function of \(r\) for the sine map \(x_{n+1}=r\sin\pi x_{n}\), for \(0\leq x_{n}\leq 1\) and \(0\leq r\leq 1\).

The graph in Figure 10.5.2 suggests that \(\lambda=0\) at each period-doubling bifurcation value \(r_{n}\). Show analytically that this is correct.

### 10.6 Universality and Experiments

The first two exercises deal with the sine map \(x_{n+1}=r\sin\pi x_{n}\), where \(0<r\leq 1\) and \(x\in[0,1]\). The goal is to learn about some of the practical problems that come up when one tries to estimate \(\delta\) numerically.

(Naive approach)

a) At each of 200 equally spaced \(r\) values, plot \(x_{700}\) through \(x_{1000}\) vertically above \(r\), starting from some random initial condition \(x_{0}\). Check your orbit diagram against Figure 10.6.2 to be sure your program is working.

b) Now go to finer resolution near the period-doubling bifurcations, and estimate \(r_{n}\) for \(n=1,2,\ldots,6\). Try to achieve five significant figures of accuracy.

c) Use the numbers from (b) to estimate the Feigenbaum ratio \(\frac{r_{n}-r_{n-1}}{r_{n+1}-r_{n}}\).

(Note: To get accurate estimates in part (b), you need to be clever, or careful, or both. As you probably found, a straightforward approach is hampered by "critical slowing down"--the convergence to a cycle becomes unbearably slow when that cycle is on the verge of period-doubling. This makes it hard to decide precisely where the bifurcation occurs. To achieve the desired accuracy, you may have to use double precision arithmetic, and about \(10^{4}\) iterates. But maybe you can find a shortcut by reformulating the problem.)
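One such reformulation, in the spirit of Exercise 10.3.12: instead of hunting for the bifurcation values \(r_{n}\), locate the superstable parameters \(R_{n}\), which solve the clean root-finding problem \(f^{2^{n}}(\frac{1}{2},R_{n})=\frac{1}{2}\). A sketch for the logistic map follows; the bracketing intervals are assumptions read off the orbit diagram, not part of the exercise.

```python
def logistic(x, r):
    return r * x * (1 - x)

def iterate(x, r, k):
    """k-fold composition of the logistic map."""
    for _ in range(k):
        x = logistic(x, r)
    return x

def bisect(g, lo, hi, tol=1e-13):
    """Plain bisection; assumes g(lo) and g(hi) have opposite signs."""
    sign_lo = g(lo) > 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (g(mid) > 0) == sign_lo:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# R_n is a root of f^(2^n)(1/2, r) - 1/2 = 0.
R1 = bisect(lambda r: iterate(0.5, r, 2) - 0.5, 3.0, 3.4)    # superstable 2-cycle
R2 = bisect(lambda r: iterate(0.5, r, 4) - 0.5, 3.45, 3.55)  # superstable 4-cycle
print(R1, R2)
```

No slowing down occurs because the root is crossed transversally; for the 2-cycle the result can be checked against the exact value \(R_{1}=1+\sqrt{5}\) from the superstable 2-cycle exercise in Section 10.3.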
(Superstable cycles to the rescue) The "critical slowing down" encountered in the previous problem is avoided if we compute \(R_{n}\) instead of \(r_{n}\). Here \(R_{n}\) denotes the value of \(r\) at which the sine map has a superstable cycle of period \(2^{n}\).

a) Explain why it should be possible to compute \(R_{n}\) more easily and accurately than \(r_{n}\).

b) Compute the first six \(R_{n}\)'s and use them to estimate \(\delta\). If you're interested in knowing the _best_ way to compute \(\delta\), see Briggs (1991) for the state of the art.

(Qualitative universality of patterns) The U-sequence dictates the ordering of the windows, but it actually says more: it dictates the _iteration pattern_ within each window. (See Exercise 10.4.7 for the definition of iteration patterns.) For instance, consider the large period-6 window for the logistic and sine maps, visible in Figure 10.6.2.

a) For both maps, plot the cobweb for the corresponding superstable 6-cycle, given that it occurs at \(r=3.6275575\) for the logistic map and \(r=0.8811406\) for the sine map. (This cycle acts as a representative for the whole window.)

b) Find the iteration pattern for both cycles, and confirm that they match.

10.6.4 (Period 4) Consider the iteration patterns of all possible period-4 orbits for the logistic map, or any other unimodal map governed by the U-sequence.

a) Show that only two patterns are possible for period-4 orbits: \(RLL\) and \(RLR\).

b) Show that the period-4 orbit with pattern \(RLL\) always occurs after \(RLR\), i.e., at a larger value of \(r\).

10.6.5 (Unfamiliar later cycles) The final superstable cycles of periods 5, 6, 4, 6, 5, 6 in the logistic map occur at approximately the following values of \(r\): 3.9057065, 3.9375364, 3.9602701, 3.9777664, 3.9902670, 3.9975831 (Metropolis et al. 1973). Notice that they're all near the end of the orbit diagram. They have tiny windows around them and tend to be overlooked.
a) Plot the cobwebs for these cycles.

b) Did you find it hard to obtain the cycles of periods 5 and 6? If so, can you explain why this trouble occurred?

10.6.6 (A trick for locating superstable cycles) Hao and Zheng (1989) give an amusing algorithm for finding a superstable cycle with a specified iteration pattern. The idea works for any unimodal map, but for convenience, consider the map \(x_{n+1}=r-x_{n}^{2}\), for \(0\leq r\leq 2\). Define two functions \(R(y)=\sqrt{r-y}\), \(L(y)=-\sqrt{r-y}\). These are the right and left branches of the inverse map.

a) For instance, suppose we want to find the \(r\) corresponding to the superstable 5-cycle with pattern \(RLLR\). Then Hao and Zheng show that this amounts to solving the equation \(r=RLLR(0)\). Show that when this equation is written out explicitly, it becomes \[r=\sqrt{r+\sqrt{r+\sqrt{r-\sqrt{r}}}}.\]

b) Solve this equation numerically by iterating the map \[r_{n+1}=\sqrt{r_{n}+\sqrt{r_{n}+\sqrt{r_{n}-\sqrt{r_{n}}}}}\,,\] starting from any reasonable guess, e.g., \(r_{0}=2\). Show numerically that \(r_{n}\) converges rapidly to 1.860782522\(\ldots\).

c) Verify that the answer to (b) yields a cycle with the desired pattern.

### 10.7 Renormalization

10.7.1 (Hands on the functional equation) The functional equation \(g(x)=\alpha g^{2}(x/\alpha)\) arose in our renormalization analysis of period-doubling. Let's approximate its solution by brute force, assuming that \(g(x)\) is even and has a quadratic maximum at \(x=0\).

a) Suppose \(g(x)\approx 1+c_{2}x^{2}\) for small \(x\). Solve for \(c_{2}\) and \(\alpha\). (Neglect \(O(x^{4})\) terms.)

b) Now assume \(g(x)\approx 1+c_{2}x^{2}+c_{4}x^{4}\), and use Mathematica, Maple, Macsyma (or hand calculation) to solve for \(\alpha\), \(c_{2}\), \(c_{4}\). Compare your approximate results to the "exact" values \(\alpha\approx-2.5029\ldots\), \(c_{2}\approx-1.527\ldots\), \(c_{4}\approx 0.1048\ldots\).
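A quick numerical check of part (a), assuming the quadratic truncation \(g(x)\approx 1+c_{2}x^{2}\). Matching the constant and \(x^{2}\) coefficients of \(g(x)=\alpha g(g(x/\alpha))\) gives \(\alpha=2c_{2}\) and \(2c_{2}^{2}+2c_{2}-1=0\); this working is mine, not quoted from the text, so treat it as a sketch to verify against your own solution.

```python
import math

# Quadratic truncation g(x) = 1 + c2*x**2 of the universal function.
# Matching coefficients in g(x) = alpha*g(g(x/alpha)) gives
#   constant term:  alpha*(1 + c2) = 1
#   x^2 term:       alpha = 2*c2
# hence 2*c2**2 + 2*c2 - 1 = 0; take the negative root (g has a maximum at 0).
c2 = (-1 - math.sqrt(3)) / 2
alpha = 2 * c2                  # about -2.73, vs. the "exact" -2.5029...

g = lambda x: 1 + c2 * x * x

# The truncated g should satisfy the functional equation up to O(x^4):
x = 0.01
residual = g(x) - alpha * g(g(x / alpha))
print(alpha, residual)
```

The leftover residual is \(O(x^{4})\), exactly the error we agreed to neglect; part (b) pushes the same coefficient matching one order higher.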
Given a map \(y_{n+1}=f(y_{n})\), rewrite the map in terms of a rescaled variable \(x_{n}=\alpha y_{n}\). Use this to show that rescaling and inversion converts \(f^{2}(x,R_{1})\) into \(\alpha f^{2}(x/\alpha,R_{1})\), as claimed in the text.

Show that if \(g\) is a solution of the functional equation, so is \(\mu g(x/\mu)\), with the same \(\alpha\).

(Wildness of the universal function \(g(x)\)) Near the origin \(g(x)\) is roughly parabolic, but elsewhere it must be rather wild. In fact, the function \(g(x)\) has infinitely many wiggles as \(x\) ranges over the real line. Verify these statements by demonstrating that \(g(x)\) crosses the lines \(y=\pm x\) infinitely many times. (Hint: Show that if \(x^{*}\) is a fixed point of \(g(x)\), then so is \(\alpha x^{*}\).)

(Crudest possible estimate of \(\alpha\)) Let \(f(x,r)=r-x^{2}\).

a) Write down explicit expressions for \(f(x,R_{0})\) and \(\alpha f^{2}(x/\alpha,R_{1})\).

b) The two functions in (a) are supposed to resemble each other near the origin, if \(\alpha\) is chosen correctly. (That's the idea behind Figure 10.7.3.) Show that the \(O(x^{2})\) coefficients of the two functions agree if \(\alpha=-2\).

(Improved estimate of \(\alpha\)) Redo Exercise 10.7.5 to one higher order: Let \(f(x,r)=r-x^{2}\) again, but now compare \(\alpha f^{2}(x/\alpha,R_{1})\) to \(\alpha^{2}f^{4}(x/\alpha^{2},R_{2})\) and match the coefficients of the lowest powers of \(x\). What value of \(\alpha\) is obtained in this way?

(Quartic maxima) Develop the renormalization theory for functions with a _fourth-degree_ maximum, e.g., \(f(x,r)=r-x^{4}\). What approximate value of \(\alpha\) is predicted by the methods of Exercises 10.7.1 and 10.7.5? Estimate the first few terms in the power series for the universal function \(g(x)\). By numerical experimentation, estimate the new value of \(\delta\) for the quartic case.
See Briggs (1991) for precise values of \(\alpha\) and \(\delta\) for this fourth-degree case, as well as for all other integer degrees between 2 and 12.

(Renormalization approach to intermittency: algebraic version) Consider the map \(x_{n+1}=f(x_{n},r)\), where \(f(x_{n},r)=-r+x_{n}-x_{n}^{2}\). This is the normal form for any map close to a tangent bifurcation.

a) Show that the map undergoes a tangent bifurcation at the origin when \(r=0\).

b) Suppose \(r\) is small and positive. By drawing a cobweb, show that a typical orbit takes many iterations to pass through the bottleneck at the origin.

## Chapter 11 Fractals

### 11.0 Introduction

Back in Chapter 9, we found that the solutions of the Lorenz equations settle down to a complicated set in phase space. This set is the strange attractor. As Lorenz (1963) realized, the geometry of this set must be very peculiar, something like an "infinite complex of surfaces." In this chapter we develop the ideas needed to describe such strange sets more precisely. The tools come from fractal geometry.

Roughly speaking, _fractals_ are complex geometric shapes with fine structure at arbitrarily small scales. Usually they have some degree of self-similarity. In other words, if we magnify a tiny part of a fractal, we will see features reminiscent of the whole. Sometimes the similarity is exact; more often it is only approximate or statistical.

Fractals are of great interest because of their exquisite combination of beauty, complexity, and endless structure. They are reminiscent of natural objects like mountains, clouds, coastlines, blood vessel networks, and even broccoli, in a way that classical shapes like cones and squares can't match. They have also turned out to be useful in scientific applications ranging from computer graphics and image compression to the structural mechanics of cracks and the fluid mechanics of viscous fingering.

Our goals in this chapter are modest.
We want to become familiar with the simplest fractals and to understand the various notions of fractal dimension. These ideas will be used in Chapter 12 to clarify the geometric structure of strange attractors.

Unfortunately, we will not be able to delve into the scientific applications of fractals, nor the lovely mathematical theory behind them. For the clearest introduction to the theory and applications of fractals, see Falconer (1990). The books of Mandelbrot (1982), Peitgen and Richter (1986), Barnsley (1988), Feder (1988), and Schroeder (1991) are also recommended for their many fascinating pictures and examples.

### 11.1 Countable and Uncountable Sets

This section reviews the parts of set theory that we'll need in later discussions of fractals. You may be familiar with this material already; if not, read on.

Are some infinities larger than others? Surprisingly, the answer is yes. In the late 1800s, Georg Cantor invented a clever way to compare different infinite sets. Two sets \(X\) and \(Y\) are said to have the same _cardinality_ (or number of elements) if there is an invertible mapping that pairs each element \(x\in X\) with precisely one \(y\in Y\). Such a mapping is called a _one-to-one correspondence_; it's like a buddy system, where every \(x\) has a buddy \(y\), and no one in either set is left out or counted twice.

A familiar infinite set is the set of natural numbers \(\mathbf{N}=\{1,2,3,4,\ldots\}\). This set provides a basis for comparison--if another set \(X\) can be put into one-to-one correspondence with the natural numbers, then \(X\) is said to be _countable_. Otherwise \(X\) is _uncountable_. These definitions lead to some surprising conclusions, as the following examples show.

**Example 11.1.1:** Show that the set of even natural numbers \(E=\{2,4,6,\ldots\}\) is countable.
_Solution:_ We need to find a one-to-one correspondence between \(E\) and \(\mathbf{N}\). Such a correspondence is given by the invertible mapping that pairs each natural number \(n\) with the even number \(2n\); thus \(1\leftrightarrow 2\), \(2\leftrightarrow 4\), \(3\leftrightarrow 6\), and so on. Hence there are exactly as many even numbers as natural numbers. You might have thought that there would be only _half_ as many, since all the odd numbers are missing!

There is an equivalent characterization of countable sets which is frequently useful. A set \(X\) is countable if it can be written as a list \(\{x_{1},x_{2},x_{3},\ldots\}\), with every \(x\in X\) appearing somewhere in the list. In other words, given any \(x\), there is some finite \(n\) such that \(x_{n}=x\). A convenient way to exhibit such a list is to give an algorithm that systematically counts the elements of \(X\). This strategy is used in the next two examples.

**Example 11.1.2:** Show that the integers are countable.

_Solution:_ Here's an algorithm for listing all the integers: We start with 0 and then work in order of increasing absolute value. Thus the list is \(\{0,1,-1,2,-2,3,-3,\ldots\}\). Any particular integer appears eventually, so the integers are countable.

**Example 11.1.3:** Show that the positive rational numbers are countable.

_Solution:_ Here's a _wrong_ way: we start listing the numbers \(\frac{1}{1},\frac{1}{2},\frac{1}{3},\frac{1}{4},\ldots\) in order. Unfortunately we never finish the \(\frac{1}{n}\)'s and so numbers like \(\frac{2}{3}\) are never counted! The right way is to make a table where the \(pq\)-th entry is \(p/q\). Then the rationals can be counted by the weaving procedure shown in Figure 11.1.1. Any given \(p/q\) is reached after a finite number of steps, so the rationals are countable.

Now we consider our first example of an uncountable set.

**Example 11.1.4:** Let \(X\) denote the set of all real numbers between 0 and 1.
Show that \(X\) is uncountable.

_Solution:_ The proof is by contradiction. If \(X\) were countable, we could list all the real numbers between 0 and 1 as a set \(\{x_{1},x_{2},x_{3},\ldots\}\). Rewrite these numbers in decimal form: \[\begin{array}{l}x_{1}=0.x_{11}x_{12}x_{13}x_{14}\cdots\\ x_{2}=0.x_{21}x_{22}x_{23}x_{24}\cdots\\ x_{3}=0.x_{31}x_{32}x_{33}x_{34}\cdots\\ \vdots\end{array}\] where \(x_{ij}\) denotes the \(j\)th digit of the real number \(x_{i}\). To obtain a contradiction, we'll show that there's a number \(r\) between 0 and 1 that is _not_ on the list. Hence any list is necessarily incomplete, and so the reals are uncountable.

We construct \(r\) as follows: its first digit is _anything other than_ \(x_{11}\), the first digit of \(x_{1}\). Similarly, its second digit is anything other than the second digit of \(x_{2}\). In general, the \(n\)th digit of \(r\) is \(\overline{x}_{nn}\), defined as any digit other than \(x_{nn}\). Then we claim that the number \(r=0.\overline{x}_{11}\overline{x}_{22}\overline{x}_{33}\ldots\) is not on the list. Why not? It can't be equal to \(x_{1}\), because it differs from \(x_{1}\) in the first decimal place. Similarly, \(r\) differs from \(x_{2}\) in the second decimal place, from \(x_{3}\) in the third decimal place, and so on. Hence \(r\) is not on the list, and thus \(X\) is uncountable. This argument (devised by Cantor) is called the _diagonal argument_, because \(r\) is constructed by changing the diagonal entries \(x_{nn}\) in the matrix of digits \([x_{ij}]\).

### 11.2 Cantor Set

Now we turn to another of Cantor's creations, a fractal known as the Cantor set. It is simple and therefore pedagogically useful, but it is also much more than that--as we'll see in Chapter 12, the Cantor set is intimately related to the geometry of strange attractors. Figure 11.2.1 shows how to construct the Cantor set.
We start with the closed interval \(S_{0}=[0,1]\) and remove its open middle third, i.e., we delete the interval \((\frac{1}{3},\frac{2}{3})\) and leave the endpoints behind. This produces the pair of closed intervals shown as \(S_{1}\). Then we remove the open middle thirds of _those_ two intervals to produce \(S_{2}\), and so on. The limiting set \(C=S_{\infty}\) is the _Cantor set_. It is difficult to visualize, but Figure 11.2.1 suggests that it consists of an infinite number of infinitesimal pieces, separated by gaps of various sizes.

### Fractal Properties of the Cantor Set

The Cantor set \(C\) has several properties that are typical of fractals more generally:

1. _C has structure at arbitrarily small scales._ If we enlarge part of \(C\) repeatedly, we continue to see a complex pattern of points separated by gaps of various sizes. This structure is neverending, like worlds within worlds. In contrast, when we look at a smooth curve or surface under repeated magnification, the picture becomes more and more featureless.

2. _C is self-similar._ It contains smaller copies of itself at all scales. For instance, if we take the left part of \(C\) (the part contained in the interval \([0,\frac{1}{3}]\)) and enlarge it by a factor of three, we get \(C\) back again. Similarly, the parts of \(C\) in each of the four intervals of \(S_{2}\) are geometrically similar to \(C\), except scaled down by a factor of nine.

If you're having trouble seeing the self-similarity, it may help to think about the sets \(S_{n}\) rather than the mind-boggling set \(S_{\infty}\). Focus on the left half of \(S_{2}\)--it looks just like \(S_{1}\), except three times smaller. Similarly, the left half of \(S_{3}\) is \(S_{2}\), reduced by a factor of three. In general, the left half of \(S_{n+1}\) looks like _all_ of \(S_{n}\), scaled down by three. Now set \(n=\infty\).
The conclusion is that the left half of \(S_{\infty}\) looks like \(S_{\infty}\), scaled down by three, just as we claimed earlier.

Warning: The strict self-similarity of the Cantor set is found only in the simplest fractals. More general fractals are only approximately self-similar.

3. _The dimension of C is not an integer._ As we'll show in Section 11.3, its dimension is actually \(\ln 2/\ln 3\approx 0.63\)! The idea of a noninteger dimension is bewildering at first, but it turns out to be a natural generalization of our intuitive ideas about dimension, and provides a very useful tool for quantifying the structure of fractals.

Two other properties of the Cantor set are worth noting, although they are not fractal properties as such: _C has measure zero_ and _it consists of uncountably many points._ These properties are clarified in the examples below.

**Example 11.2.1:** Show that the _measure_ of the Cantor set is zero, in the sense that it can be covered by intervals whose total length is arbitrarily small.

_Solution:_ Figure 11.2.1 shows that each set \(S_{n}\) completely covers all the sets that come after it in the construction. Hence the Cantor set \(C=S_{\infty}\) is covered by _each_ of the sets \(S_{n}\). So the total length of the Cantor set must be less than the total length of \(S_{n}\), for any \(n\). Let \(L_{n}\) denote the length of \(S_{n}\). Then from Figure 11.2.1 we see that \(L_{0}=1\), \(L_{1}=\frac{2}{3}\), \(L_{2}=\left(\frac{2}{3}\right)\left(\frac{2}{3}\right)=\left(\frac{2}{3}\right)^{2}\), and in general, \(L_{n}=\left(\frac{2}{3}\right)^{n}\). Since \(L_{n}\to 0\) as \(n\to\infty\), the Cantor set has a total length of zero.

Example 11.2.1 suggests that the Cantor set is "small" in some sense. On the other hand, it contains tremendously many points--uncountably many, in fact. To see this, we first develop an elegant characterization of the Cantor set.
**Example 11.2.2:** Show that the Cantor set \(C\) consists of all points \(c\in[0,1]\) that have no 1's in their base-3 expansion.

_Solution:_ The idea of expanding numbers in different bases may be unfamiliar, unless you were one of those children who was taught "New Math" in elementary school. Now you finally get to see why base-3 is useful!

First let's remember how to write an arbitrary number \(x\in[0,1]\) in base-3. We expand in powers of 1/3: thus if \(x=\frac{a_{1}}{3}+\frac{a_{2}}{3^{2}}+\frac{a_{3}}{3^{3}}+\ldots\), then \(x=.a_{1}a_{2}a_{3}\ldots\) in base-3, where the digits \(a_{n}\) are 0, 1, or 2. This expansion has a nice geometric interpretation (Figure 11.2.2). If we imagine that [0,1] is divided into three equal pieces, then the first digit \(a_{1}\) tells us whether \(x\) is in the left, middle, or right piece. For instance, all numbers with \(a_{1}=0\) are in the left piece. (Ordinary base-10 works the same way, except that we divide [0,1] into ten pieces instead of three.) The second digit \(a_{2}\) provides more refined information: it tells us whether \(x\) is in the left, middle, or right third of a given piece. For instance, points of the form \(x=.01\ldots\) are in the middle part of the left third of [0,1], as shown in Figure 11.2.2.

Now think about the base-3 expansion of points in the Cantor set \(C\). We deleted the middle third of [0,1] at the first stage of constructing \(C\); this removed all points whose first digit is 1. So those points can't be in \(C\). The points left over (the only ones with a chance of ultimately being in \(C\)) must have 0 or 2 as their first digit. Similarly, points whose _second_ digit is 1 were deleted at the next stage in the construction. By repeating this argument, we see that \(C\) consists of all points whose base-3 expansion contains no 1's, as claimed.

There's still a fussy point to be addressed.
What about endpoints like \(\frac{1}{3}=.1000\ldots\)? It's in the Cantor set, yet it has a 1 in its base-3 expansion. Does this contradict what we said above? No, because this point can also be written solely in terms of 0's and 2's, as follows: \(\frac{1}{3}=.1000\ldots=.02222\ldots\). By this trick, each point in the Cantor set can be written such that no 1's appear in its base-3 expansion, as claimed.

Now for the payoff.

**Example 11.2.3:** Show that the Cantor set is uncountable.

_Solution:_ This is just a rewrite of the Cantor diagonal argument of Example 11.1.4, so we'll be brief. Suppose there were a list \(\{c_{1},c_{2},c_{3},\ldots\}\) of all points in \(C\). To show that \(C\) is uncountable, we produce a point \(\overline{c}\) that is in \(C\) but not on the list. Let \(c_{ij}\) denote the \(j\)th digit in the base-3 expansion of \(c_{i}\). Define \(\overline{c}=.\overline{c}_{11}\overline{c}_{22}\overline{c}_{33}\ldots\), where the overbar means we switch 0's and 2's: thus \(\overline{c}_{nn}=0\) if \(c_{nn}=2\) and \(\overline{c}_{nn}=2\) if \(c_{nn}=0\). Then \(\overline{c}\) is in \(C\), since it's written solely with 0's and 2's, but \(\overline{c}\) is not on the list, since it differs from \(c_{n}\) in the \(n\)th digit. This contradicts the original assumption that the list is complete. Hence \(C\) is uncountable.

### 11.3 Dimension of Self-Similar Fractals

What is the "dimension" of a set of points? For familiar geometric objects, the answer is clear--lines and smooth curves are one-dimensional, planes and smooth surfaces are two-dimensional, solids are three-dimensional, and so on. If forced to give a definition, we could say that _the dimension is the minimum number of coordinates needed to describe every point in the set_. For instance, a smooth curve is one-dimensional because every point on it is determined by one number, the arc length from some fixed reference point on the curve.
But when we try to apply this definition to fractals, we quickly run into paradoxes. Consider the _von Koch curve_, defined recursively in Figure 11.3.1. We start with a line segment \(S_{0}\). To generate \(S_{1}\), we delete the middle third of \(S_{0}\) and replace it with the other two sides of an equilateral triangle. Subsequent stages are generated recursively by the same rule: \(S_{n}\) is obtained by replacing the middle third of each line segment in \(S_{n-1}\) by the other two sides of an equilateral triangle. The limiting set \(K=S_{\infty}\) is the von Koch curve.

What is the dimension of the von Koch curve? Since it's a curve, you might be tempted to say it's one-dimensional. But the trouble is that \(K\) has _infinite arc length_! To see this, observe that if the length of \(S_{0}\) is \(L_{0}\), then the length of \(S_{1}\) is \(L_{1}=\frac{4}{3}L_{0}\), because \(S_{1}\) contains four segments, each of length \(\frac{1}{3}L_{0}\). The length increases by a factor of \(\frac{4}{3}\) at each stage of the construction, so \(L_{n}=(\frac{4}{3})^{n}L_{0}\to\infty\) as \(n\to\infty\).

Moreover, the arc length between _any_ two points on \(K\) is infinite, by similar reasoning. Hence points on \(K\) aren't determined by their arc length from a particular point, because every point is infinitely far from every other! This suggests that \(K\) is more than one-dimensional. But would we really want to say that \(K\) is two-dimensional? It certainly doesn't seem to have any "area." So the dimension should be _between_ 1 and 2, whatever that means.

With this paradox as motivation, we now consider some improved notions of dimension that can cope with fractals.

### Similarity Dimension

The simplest fractals are self-similar, i.e., they are made of scaled-down copies of themselves, all the way down to arbitrarily small scales.
The dimension of such fractals can be defined by extending an elementary observation about _classical_ self-similar sets like line segments, squares, or cubes. For instance, consider the square region shown in Figure 11.3.2. If we shrink the square by a factor of 2 in each direction, it takes four of the small squares to equal the whole. Or if we scale the original square down by a factor of 3, then nine small squares are required. In general, if we reduce the linear dimensions of the square region by a factor of \(r\), it takes \(r^{2}\) of the smaller squares to equal the original.

**Figure 11.3.2**

Now suppose we play the same game with a solid cube. The results are different: if we scale the cube down by a factor of 2, it takes eight of the smaller cubes to make up the original. In general, if the cube is scaled down by \(r\), we need \(r^{3}\) of the smaller cubes to make up the larger one. The exponents 2 and 3 are no accident; they reflect the two-dimensionality of the square and the three-dimensionality of the cube.

This connection between dimensions and exponents suggests the following definition. Suppose that a self-similar set is composed of \(m\) copies of itself scaled down by a factor of \(r\). Then the _similarity dimension_ \(d\) is the exponent defined by \(m=r^{d}\), or equivalently, \[d=\frac{\ln m}{\ln r}.\] This formula is easy to use, since \(m\) and \(r\) are usually clear from inspection.

**Example 11.3.1:** Find the similarity dimension of the Cantor set \(C\).

_Solution:_ As shown in Figure 11.3.3, \(C\) is composed of two copies of itself, each scaled down by a factor of 3. So \(m=2\) when \(r=3\). Therefore \(d=\ln 2/\ln 3\approx 0.63\).

In the next example we confirm our earlier intuition that the von Koch curve should have a dimension between 1 and 2.

**Example 11.3.2:** Show that the von Koch curve has a similarity dimension of \(\ln 4/\ln 3\approx 1.26\).
_Solution:_ The curve is made up of four equal pieces, each of which is similar to the original curve but is scaled down by a factor of 3 in both directions. One of these pieces is indicated by the arrows in Figure 11.3.4. Hence \(m=4\) when \(r=3\), and therefore \(d=\ln 4/\ln 3\).

**Figure 11.3.4**

### More General Cantor Sets

Other self-similar fractals can be generated by changing the recursive procedure. For instance, to obtain a new kind of Cantor set, divide an interval into five equal pieces, delete the second and fourth subintervals, and then repeat this process indefinitely (Figure 11.3.5). We call the limiting set the _even-fifths Cantor set_, since the even fifths are removed at each stage. (Similarly, the standard Cantor set of Section 11.2 is often called the _middle-thirds Cantor set_.)

**Figure 11.3.5**

**Example 11.3.3:** Find the similarity dimension of the even-fifths Cantor set.

_Solution:_ Let the original interval be denoted \(S_{0}\), and let \(S_{n}\) denote the \(n\)th stage of the construction. If we scale \(S_{n}\) down by a factor of five, we get one third of the set \(S_{n+1}\). Now letting \(n\to\infty\), we see that the even-fifths Cantor set \(S_{\infty}\) is made of three copies of itself, shrunken by a factor of 5. Hence \(m=3\) when \(r=5\), and so \(d=\ln 3/\ln 5\).

There are so many different Cantor-like sets that mathematicians have abstracted their essence in the following definition. A closed set \(S\) is called a _topological Cantor set_ if it satisfies the following properties:

1. \(S\) is "totally disconnected." This means that \(S\) contains no connected subsets (other than single points). In this sense, all points in \(S\) are separated from each other. For the middle-thirds Cantor set and other subsets of the real line, this condition simply says that \(S\) contains no intervals.

2. On the other hand, \(S\) contains no "isolated points."
This means that every point in \(S\) has a neighbor arbitrarily close by--given any point \(p\in S\) and any small distance \(\varepsilon>0\), there is some other point \(q\in S\) within a distance \(\varepsilon\) of \(p\).

The paradoxical aspects of Cantor sets arise because the first property says that points in \(S\) are spread apart, whereas the second property says they're packed together! In Exercise 11.3.6, you're asked to check that the middle-thirds Cantor set satisfies both properties.

Notice that the definition says nothing about self-similarity or dimension. These notions are geometric rather than topological; they depend on concepts of distance, volume, and so on, which are too rigid for some purposes. Topological features are more robust than geometric ones. For instance, if we continuously deform a self-similar Cantor set, we can easily destroy its self-similarity but properties 1 and 2 will persist. When we study strange attractors in Chapter 12, we'll see that the cross sections of strange attractors are often topological Cantor sets, although they are not necessarily self-similar.

### 11.4 Box Dimension

To deal with fractals that are not self-similar, we need to generalize our notion of dimension still further. Various definitions have been proposed; see Falconer (1990) for a lucid discussion. All the definitions share the idea of "measurement at a scale \(\varepsilon\)"--roughly speaking, we measure the set in a way that ignores irregularities of size less than \(\varepsilon\), and then study how the measurements vary as \(\varepsilon\to 0\).

### Definition of Box Dimension

One kind of measurement involves covering the set with boxes of size \(\varepsilon\) (Figure 11.4.1). Let \(S\) be a subset of \(D\)-dimensional Euclidean space, and let \(N(\varepsilon)\) be the minimum number of \(D\)-dimensional cubes of side \(\varepsilon\) needed to cover \(S\). How does \(N(\varepsilon)\) depend on \(\varepsilon\)?
To get some intuition, consider the classical sets shown in Figure 11.4.1. For a smooth curve of length \(L\), \(N(\varepsilon)\propto L/\varepsilon\); for a planar region of area \(A\) bounded by a smooth curve, \(N(\varepsilon)\propto A/\varepsilon^{2}\). The key observation is that the dimension of the set equals the exponent \(d\) in the _power law_ \(N(\varepsilon)\propto 1/\varepsilon^{d}\).

**Figure 11.4.1**

This power law also holds for most fractal sets \(S\), except that \(d\) is no longer an integer. By analogy with the classical case, we interpret \(d\) as a dimension, usually called the _capacity_ or _box dimension_ of \(S\). An equivalent definition is \[d=\lim_{\varepsilon\to 0}\frac{\ln N(\varepsilon)}{\ln(1/\varepsilon)},\text{ if the limit exists.}\]

**Example 11.4.1:** Find the box dimension of the Cantor set.

_Solution:_ Recall that the Cantor set is covered by each of the sets \(S_{n}\) used in its construction (Figure 11.2.1). Each \(S_{n}\) consists of \(2^{n}\) intervals of length \((1/3)^{n}\), so if we pick \(\varepsilon=(1/3)^{n}\), we need all \(2^{n}\) of these intervals to cover the Cantor set. Hence \(N=2^{n}\) when \(\varepsilon=(1/3)^{n}\). Since \(\varepsilon\to 0\) as \(n\to\infty\), we find \[d=\lim_{\varepsilon\to 0}\frac{\ln N(\varepsilon)}{\ln(1/\varepsilon)}=\frac{\ln(2^{n})}{\ln(3^{n})}=\frac{n\ln 2}{n\ln 3}=\frac{\ln 2}{\ln 3}\] in agreement with the similarity dimension found in Example 11.3.1.

This solution illustrates a helpful trick. We used a discrete sequence \(\varepsilon=(1/3)^{n}\) that tends to zero as \(n\to\infty\), even though the definition of box dimension says that we should let \(\varepsilon\to 0\) continuously. If \(\varepsilon\neq(1/3)^{n}\), the covering will be slightly wasteful--some boxes hang over the edge of the set--but the limiting value of \(d\) is the same.

**Example 11.4.2:** A fractal that is _not_ self-similar is constructed as follows.
A square region is divided into nine equal squares, and then one of the small squares is selected at random and discarded. Then the process is repeated on each of the eight remaining small squares, and so on. What is the box dimension of the limiting set?

_Solution:_ Figure 11.4.2 shows the first two stages in a typical realization of this random construction. Pick the unit of length to equal the side of the original square. Then \(S_{1}\) is covered (with no wastage) by \(N=8\) squares of side \(\varepsilon=\frac{1}{3}\). Similarly, \(S_{2}\) is covered by \(N=8^{2}\) squares of side \(\varepsilon=(\frac{1}{3})^{2}\). In general, \(N=8^{n}\) when \(\varepsilon=(\frac{1}{3})^{n}\). Hence \[d=\lim_{\varepsilon\to 0}\frac{\ln N(\varepsilon)}{\ln(1/\varepsilon)}=\frac{\ln(8^{n})}{\ln(3^{n})}=\frac{n\ln 8}{n\ln 3}=\frac{\ln 8}{\ln 3}.\]

**Figure 11.4.2**

### Critique of Box Dimension

When computing the box dimension, it is not always easy to find a minimal cover. There's an equivalent way to compute the box dimension that avoids this problem. We cover the set with a square mesh of boxes of side \(\varepsilon\), count the number of occupied boxes \(N(\varepsilon)\), and then compute \(d\) as before.

Even with this improvement, the box dimension is rarely used in practice. Its computation requires too much storage space and computer time, compared to other types of fractal dimension (see below). The box dimension also suffers from some mathematical drawbacks. For example, its value is not always what it should be: the set of rational numbers between 0 and 1 can be proven to have a box dimension of 1 (Falconer 1990, p. 44), even though the set has only countably many points.

Falconer (1990) discusses other fractal dimensions, the most important of which is the _Hausdorff dimension_. It is more subtle than the box dimension.
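Before moving on, the mesh-count recipe just described is easy to try numerically. Here is a minimal sketch (our own illustration, not from the text; the helper names are ours) that box-counts a finite-stage approximation of the middle-thirds Cantor set and recovers a slope near \(\ln 2/\ln 3\approx 0.63\):

```python
import numpy as np

def cantor_midpoints(depth):
    """Midpoints of the 2**depth intervals of stage S_depth of the
    middle-thirds Cantor construction. (Midpoints, rather than endpoints,
    keep sample points safely away from mesh-box boundaries.)"""
    pts = np.array([0.5])
    for _ in range(depth):
        pts = np.concatenate([pts / 3, pts / 3 + 2 / 3])
    return pts

def box_count(points, eps):
    """Number of occupied mesh boxes of side eps on the line."""
    return len(set(np.floor(points / eps).astype(int)))

pts = cantor_midpoints(12)              # 4096 sample points
eps = 3.0 ** -np.arange(2, 9)           # scales well inside the scaling region
counts = [box_count(pts, e) for e in eps]

# Fit ln N(eps) against ln(1/eps); the slope estimates the box dimension.
slope, _ = np.polyfit(np.log(1 / eps), np.log(counts), 1)
print(slope)                            # close to ln 2 / ln 3 ≈ 0.6309
```

As in Example 11.4.1, the discrete scales \(\varepsilon=(1/3)^{k}\) give exactly \(N=2^{k}\) occupied boxes, so the fitted slope matches the analytic answer.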
The main conceptual difference is that the Hausdorff dimension uses coverings by small sets of _varying_ sizes, not just boxes of fixed size \(\varepsilon\). It has nicer mathematical properties than the box dimension, but unfortunately it is even harder to compute numerically.

### 11.5 Pointwise and Correlation Dimensions

Now it's time to return to dynamics. Suppose that we're studying a chaotic system that settles down to a strange attractor in phase space. Given that strange attractors typically have fractal microstructure (as we'll see in Chapter 12), how could we estimate the fractal dimension?

First we generate a set of very many points \(\{\mathbf{x}_{i},\ i=1,\ldots,n\}\) on the attractor by letting the system evolve for a long time (after taking care to discard the initial transient, as usual). To get better statistics, we could repeat this procedure for several different trajectories. In practice, however, almost all trajectories on a strange attractor have the same long-term statistics, so it's sufficient to run one trajectory for an extremely long time. Now that we have many points on the attractor, we could try computing the box dimension, but that approach is impractical, as mentioned earlier.

Grassberger and Procaccia (1983) proposed a more efficient approach that has become standard. Fix a point \(\mathbf{x}\) on the attractor \(A\). Let \(N_{\mathbf{x}}(\varepsilon)\) denote the number of points on \(A\) inside a ball of radius \(\varepsilon\) about \(\mathbf{x}\) (Figure 11.5.1).

**Figure 11.5.1**

Most of the points in the ball are unrelated to the immediate portion of the trajectory through \(\mathbf{x}\); instead they come from later parts that just happen to pass close to \(\mathbf{x}\). Thus \(N_{\mathbf{x}}(\varepsilon)\) measures how frequently a typical trajectory visits an \(\varepsilon\) neighborhood of \(\mathbf{x}\). Now vary \(\varepsilon\).
As \(\varepsilon\) increases, the number of points in the ball typically grows as a power law: \[N_{\mathbf{x}}(\varepsilon)\propto\varepsilon^{d},\] where \(d\) is called the _pointwise dimension_ at \(\mathbf{x}\). The pointwise dimension can depend significantly on \(\mathbf{x}\); it will be smaller in rarefied regions of the attractor. To get an overall dimension of \(A\), one averages \(N_{\mathbf{x}}(\varepsilon)\) over many \(\mathbf{x}\). The resulting quantity \(C(\varepsilon)\) is found empirically to scale as \[C(\varepsilon)\propto\varepsilon^{d},\] where \(d\) is called the _correlation dimension_. The correlation dimension takes account of the density of points on the attractor, and thus differs from the box dimension, which weights all occupied boxes equally, no matter how many points they contain. (Mathematically speaking, the correlation dimension involves an invariant measure supported on a fractal, not just the fractal itself.) In general, \(d_{\text{correlation}}\leq d_{\text{box}}\), although they are usually very close (Grassberger and Procaccia 1983). To estimate \(d\), one plots \(\log\,C(\varepsilon)\) vs. \(\log\,\varepsilon\). If the relation \(C(\varepsilon)\propto\varepsilon^{d}\) were valid for all \(\varepsilon\), we'd find a straight line of slope \(d\). In practice, the power law holds only over an intermediate range of \(\varepsilon\) (Figure 11.5.2). The curve saturates at large \(\varepsilon\) because the \(\varepsilon\)-balls engulf the whole attractor and so \(N_{\mathbf{x}}(\varepsilon)\) can grow no further. On the other hand, at extremely small \(\varepsilon\), the only point in each \(\varepsilon\)-ball is \(\mathbf{x}\) itself. 
So the power law is expected to hold only in the _scaling region_ where \[\text{(minimum separation of points on }A)\ll\varepsilon\ll\text{(diameter of }A).\]

**Figure 11.5.2**

**Example 11.5.1:** Estimate the correlation dimension of the Lorenz attractor, for the standard parameter values \(r=28\), \(\sigma=10\), \(b=\frac{8}{3}\).

_Solution:_ Figure 11.5.3 shows the results of Grassberger and Procaccia (1983). (Note that in their notation, the radius of the balls is \(\ell\) and the correlation dimension is \(\nu\).) A line of slope \(d_{\text{corr}}=2.05\pm 0.01\) gives an excellent fit to the data, except for large \(\varepsilon\), where the expected saturation occurs. These results were obtained by numerically integrating the system with a Runge-Kutta method. The time step was 0.25, and 15,000 points were computed. Grassberger and Procaccia also report that the convergence was rapid; the correlation dimension could be estimated to within \(\pm 5\) percent using only a few thousand points.

**Figure 11.5.3:** Grassberger and Procaccia (1983), p. 196

**Example 11.5.2:** Consider the logistic map \(x_{n+1}=rx_{n}(1-x_{n})\) at the parameter value \(r=r_{\infty}=3.5699456\ldots\), corresponding to the onset of chaos. Show that the attractor is a Cantor-like set, although it is not strictly self-similar. Then compute its correlation dimension numerically.

_Solution:_ We visualize the attractor by building it up recursively. Roughly speaking, the attractor looks like a \(2^{n}\)-cycle, for \(n\gg 1\). Figure 11.5.4 schematically shows some typical \(2^{n}\)-cycles for small values of \(n\).

**Figure 11.5.4**

The dots in the left panel of Figure 11.5.4 represent the superstable \(2^{n}\)-cycles. The right panel shows the corresponding values of \(x\). As \(n\to\infty\), the resulting set approaches a topological Cantor set, with points separated by gaps of various sizes.
But the set is not strictly self-similar--the gaps scale by different factors depending on their location. In other words, some of the "wishbones" in the orbit diagram are wider than others at the same \(r\). (We commented on this nonuniformity in Section 10.6, after viewing the computer-generated orbit diagrams of Figure 10.6.2.)

The correlation dimension of the limiting set has been estimated by Grassberger and Procaccia (1983). They generated a single trajectory of 30,000 points, starting from \(x_{0}=\frac{1}{2}\). Their plot of \(\log C(\varepsilon)\) vs. \(\log\varepsilon\) is well fit by a straight line of slope \(d_{\text{corr}}=0.500\pm 0.005\) (Figure 11.5.5).

**Figure 11.5.5:** Grassberger and Procaccia (1983), p. 193

This is smaller than the box dimension \(d_{\text{box}}\approx 0.538\) (Grassberger 1981), as expected. For very small \(\varepsilon\), the data in Figure 11.5.5 deviate from a straight line. Grassberger and Procaccia (1983) attribute this deviation to residual correlations among the \(x_{n}\)'s on their single trajectory. These correlations would be negligible if the map were strongly chaotic, but for a system at the onset of chaos (like this one), the correlations are visible at small scales. To extend the scaling region, one could use a larger number of points or more than one trajectory.

### Multifractals

We conclude by mentioning a more refined concept, although we cannot go into details. In the logistic attractor of Example 11.5.2, the scaling varies from place to place, unlike in the middle-thirds Cantor set, where there is a uniform scaling by \(\frac{1}{3}\) everywhere. Thus we cannot completely characterize the logistic attractor by its dimension, or any other single number--we need some kind of distribution function that tells us how the dimension varies across the attractor. Sets of this type are called _multifractals_.
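Before quantifying multifractals, it is worth noting that the correlation sum \(C(\varepsilon)\) of this section takes only a few lines to implement. Here is a minimal sketch (our own illustration, not the Grassberger-Procaccia code) that recovers \(d\approx\ln 2/\ln 3\) for points spread uniformly over the middle-thirds Cantor set, where the correlation and similarity dimensions coincide:

```python
import numpy as np

def correlation_sum(points, eps):
    """Fraction of ordered pairs (i, j), i != j, with |x_i - x_j| < eps --
    i.e. N_x(eps) averaged over all points x, up to normalization."""
    d = np.abs(points[:, None] - points[None, :])   # all pairwise distances
    n = len(points)
    return (np.sum(d < eps) - n) / (n * (n - 1))    # subtract the diagonal

# Uniformly weighted points on the middle-thirds Cantor set (stage 10):
pts = np.array([0.5])
for _ in range(10):
    pts = np.concatenate([pts / 3, pts / 3 + 2 / 3])

eps = 3.0 ** -np.arange(2, 6)        # radii inside the scaling region
C = [correlation_sum(pts, e) for e in eps]
slope, _ = np.polyfit(np.log(eps), np.log(C), 1)
print(slope)                          # near ln 2 / ln 3 ≈ 0.63
```

The \(O(n^{2})\) pairwise-distance matrix is the naive approach; for the 30,000-point trajectories mentioned above one would batch or subsample the pairs.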
The notion of pointwise dimension allows us to quantify the local variations in scaling. Given a multifractal \(A\), let \(S_{\alpha}\) be the subset of \(A\) consisting of all points with pointwise dimension \(\alpha\). If \(\alpha\) is a typical scaling factor on \(A\), then it will be represented often, so \(S_{\alpha}\) will be a relatively large set; if \(\alpha\) is unusual, then \(S_{\alpha}\) will be a small set. To be more quantitative, we note that each \(S_{\alpha}\) is itself a fractal, so it makes sense to measure its "size" by its fractal dimension. Thus, let \(f(\alpha)\) denote the dimension of \(S_{\alpha}\). Then \(f(\alpha)\) is called the _multifractal spectrum_ of \(A\) or the _spectrum of scaling indices_ (Halsey et al. 1986).

Roughly speaking, you can think of the multifractal as an interwoven set of fractals of different dimensions \(\alpha\), where \(f(\alpha)\) measures their relative weights. Since very large and very small \(\alpha\) are unlikely, the shape of \(f(\alpha)\) typically looks like Figure 11.5.6. The maximum value of \(f(\alpha)\) turns out to be the box dimension (Halsey et al. 1986).

**Figure 11.5.6**

For systems at the onset of chaos, multifractals lead to a more powerful version of the universality theory mentioned in Section 10.6. The universal quantity is now a _function_ \(f(\alpha)\), rather than a single number; it therefore offers much more information, and the possibility of more stringent tests. The theory's predictions have been checked for a variety of experimental systems at the onset of chaos, with striking success. See Glazier and Libchaber (1988) for a review. On the other hand, we still lack a rigorous mathematical theory of multifractals; see Falconer (1990) for a discussion of the issues.

## Exercises for Chapter 11

### 11.1 Countable and Uncountable Sets

Why doesn't the diagonal argument used in Example 11.1.4 show that the rationals are also uncountable? (After all, rationals can be represented as decimals.)
Show that the set of odd integers is countable.

Are the irrational numbers countable or uncountable? Prove your answer.

Consider the set of all real numbers whose decimal expansion contains only 2's and 7's. Using Cantor's diagonal argument, show that this set is uncountable.

Consider the set of integer lattice points in three-dimensional space, i.e., points of the form \((p,q,r)\), where \(p\), \(q\), and \(r\) are integers. Show that this set is countable.

(\(10x\) mod 1) Consider the decimal shift map \(x_{n+1}=10x_{n}\) (mod 1). Show that the map has countably many periodic orbits, all of which are unstable. Show that the map has uncountably many aperiodic orbits. An "eventually-fixed point" of a map is a point that iterates to a fixed point after a finite number of steps. Thus \(x_{n+1}=x_{n}\) for all \(n>N\), where \(N\) is some positive integer. Is the number of eventually-fixed points for the decimal shift map countable or uncountable?

Show that the binary shift map \(x_{n+1}=2x_{n}\) (mod 1) has countably many periodic orbits and uncountably many aperiodic orbits.

### 11.2 Cantor Set

(Cantor set has measure zero) Here's another way to show that the Cantor set has zero total length. In the first stage of construction of the Cantor set, we removed an interval of length \(\frac{1}{3}\) from the unit interval [0,1]. At the next stage we removed two intervals, each of length \(\frac{1}{9}\). By summing an appropriate infinite series, show that the total length of all the intervals removed is 1, and hence the leftovers (the Cantor set) must have length zero.

Show that the rational numbers have zero measure. (Hint: Make a list of the rationals. Cover the first number with an interval of length \(\varepsilon\), cover the second with an interval of length \(\frac{1}{2}\varepsilon\). Now take it from there.)

Show that any countable subset of the real line has zero measure. (This generalizes the result of the previous question.)
Consider the set of irrational numbers between 0 and 1. What is the measure of the set? Is it countable or uncountable? Is it totally disconnected? Does it contain any isolated points?

(Base-3 and the Cantor set) Find the base-3 expansion of 1/2.

Find a one-to-one correspondence between the Cantor set \(C\) and the interval [0,1]. In other words, find an invertible mapping that pairs each point \(c\in C\) with precisely one \(x\in[0,1]\).

Some of my students have thought that the Cantor set is "all endpoints"--they claimed that any point in the set is the endpoint of some sub-interval involved in the construction of the set. Show that this is false by explicitly identifying a point in \(C\) that is not an endpoint.

(Devil's staircase) Suppose that we pick a point at random from the Cantor set. What's the probability that this point lies to the left of \(x\), where \(0\leq x\leq 1\) is some fixed number? The answer is given by a function \(P(x)\) called the _devil's staircase_. It is easiest to visualize \(P(x)\) by building it up in stages. First consider the set \(S_{0}\) in Figure 11.2.1. Let \(P_{0}(x)\) denote the probability that a randomly chosen point in \(S_{0}\) lies to the left of \(x\). Show that \(P_{0}(x)=x\). Now consider \(S_{1}\) and define \(P_{1}(x)\) analogously. Draw the graph of \(P_{1}(x)\). (Hint: It should have a plateau in the middle.) Draw the graphs of \(P_{n}(x)\), for \(n=2,3,4\). Be careful about the widths and heights of the plateaus. The limiting function \(P_{\infty}(x)\) is the devil's staircase. Is it continuous? What would a graph of its derivative look like? Like other fractal concepts, the devil's staircase was long regarded as a mathematical curiosity. But recently it has arisen in physics, in connection with mode-locking of nonlinear oscillators. See Bak (1986) for an entertaining introduction.
### 11.3 Dimension of Self-Similar Fractals

11.3.1 (Middle-halves Cantor set) Construct a new kind of Cantor set by removing the middle half of each sub-interval, rather than the middle third.
1. Find the similarity dimension of the set.
2. Find the measure of the set.

11.3.2 (Generalized Cantor set) Consider a generalized Cantor set in which we begin by removing an open interval of length \(0<a<1\) from the middle of [0,1]. At subsequent stages, we remove an open middle interval (whose length is the same fraction \(a\)) from each of the remaining intervals, and so on. Find the similarity dimension of the limiting set.

11.3.3 (Generalization of even-fifths Cantor set) The "even-sevenths Cantor set" is constructed as follows: divide [0,1] into seven equal pieces; delete pieces 2, 4, and 6; and repeat on sub-intervals.
1. Find the similarity dimension of the set.
2. Generalize the construction to any odd number of pieces, with the even ones deleted. Find the similarity dimension of this generalized Cantor set.

11.3.4 (No odd digits) Find the similarity dimension of the subset of [0,1] consisting of real numbers with only even digits in their decimal expansion.

11.3.5 (No 8's) Find the similarity dimension of the subset of [0,1] consisting of real numbers that can be written without the digit 8 appearing anywhere in their decimal expansion.

11.3.6 Show that the middle-thirds Cantor set contains no intervals. But also show that no point in the set is isolated.

11.3.7 (Snowflake) To construct the famous fractal known as the _von Koch snowflake curve_, use an equilateral triangle for \(S_{0}\). Then do the von Koch procedure of Figure 11.3.1 on each of the three sides.
1. Show that \(S_{1}\) looks like a star of David.
2. Draw \(S_{2}\) and \(S_{3}\).
3. The snowflake is the limiting curve \(S=S_{\infty}\). Show that it has infinite arc length.
4. Find the area of the region enclosed by \(S\).
5. Find the similarity dimension of \(S\).
The snowflake curve is continuous but nowhere differentiable--loosely speaking, it is "all corners"!

11.3.8 (Sierpinski carpet) Consider the process shown in Figure 1. The closed unit box is divided into nine equal boxes, and the open central box is deleted. Then this process is repeated for each of the eight remaining sub-boxes, and so on. Figure 1 shows the first two stages.
1. Sketch the next stage \(S_{3}\).
2. Find the similarity dimension of the limiting fractal, known as the _Sierpinski carpet_.
3. Show that the Sierpinski carpet has zero area.

11.3.9 (Sponges) Generalize the previous exercise to three dimensions--start with a solid cube, and divide it into 27 equal sub-cubes. Delete the central sub-cube on each face, along with the sub-cube at the very center of the block. (If you prefer, you could imagine drilling three mutually orthogonal square holes through the centers of the faces.) Infinite iteration of this process yields a fractal called the _Menger sponge_. Find its similarity dimension. Repeat for the Menger hypersponge in \(N\) dimensions, if you dare.

11.3.10 (Fat fractal) A _fat fractal_ is a fractal with a nonzero measure. Here's a simple example: start with the unit interval [0,1] and delete the open middle 1/2, 1/4, 1/8, etc., of each remaining sub-interval. (Thus a smaller and smaller fraction is removed at each stage, in contrast to the middle-thirds Cantor set, where we always remove 1/3 of what's left.)
1. Show that the limiting set is a topological Cantor set.
2. Show that the measure of the limiting set is greater than zero. Find its exact value if you can, or else just find a lower bound for it.

Fat fractals answer a fascinating question about the logistic map. Farmer (1985) has shown numerically that the set of parameter values for which chaos occurs is a fat fractal. In particular, if \(r\) is chosen at random between \(r_{\infty}\) and \(r=4\), there is about an 89% chance that the map will be chaotic.
Farmer's analysis also suggests that the odds of making a mistake (calling an orbit chaotic when it's actually periodic) are about one in a million, if we use double precision arithmetic!

### 11.4 Box Dimension

Find the box dimension of the following sets.

11.4.1 von Koch snowflake (see Exercise 11.3.7)

11.4.2 Sierpinski carpet (see Exercise 11.3.8)

11.4.3 Menger sponge (see Exercise 11.3.9)

11.4.4 The Cartesian product of the middle-thirds Cantor set with itself.

11.4.5 Menger hypersponge (see Exercise 11.3.9)

11.4.6 (A strange repeller for the tent map) The tent map on the interval [0,1] is defined by \(x_{n+1}=f(x_{n})\), where \[f(x)=\begin{cases}rx,&0\leq x\leq\frac{1}{2}\\ r(1-x),&\frac{1}{2}\leq x\leq 1\end{cases}\] and \(r>0\). In this exercise we assume \(r>2\). Then some points get mapped outside the interval [0,1]. If \(f(x_{0})>1\) then we say that \(x_{0}\) has "escaped" after one iteration. Similarly, if \(f^{n}(x_{0})>1\) for some finite \(n\), but \(f^{k}(x_{0})\in[0,1]\) for all \(k<n\), then we say that \(x_{0}\) has escaped after \(n\) iterations.
a) Find the set of initial conditions \(x_{0}\) that escape after one or two iterations.
b) Describe the set of \(x_{0}\) that _never_ escape.
c) Find the box dimension of the set of \(x_{0}\) that never escape. (This set is called the invariant set.)
d) Show that the Liapunov exponent is positive at each point in the invariant set.
The invariant set is called a _strange repeller_, for several reasons: it has a fractal structure; it repels all nearby points that are not in the set; and points in the set hop around chaotically under iteration of the tent map.

11.4.7 (A lopsided fractal) Divide the closed unit interval [0,1] into four quarters. Delete the open second quarter from the left. This produces a set \(S_{1}\).
Repeat this construction indefinitely; i.e., generate \(S_{n+1}\) from \(S_{n}\) by deleting the second quarter of each of the intervals in \(S_{n}\).
a) Sketch the sets \(S_{1},\ldots,S_{4}\).
b) Compute the box dimension of the limiting set \(S_{\infty}\).
c) Is \(S_{\infty}\) self-similar?

11.4.8 (A thought question about random fractals) Redo the previous question, except add an element of randomness to the process: to generate \(S_{n+1}\) from \(S_{n}\), flip a coin; if the result is heads, delete the second quarter of every interval in \(S_{n}\); if tails, delete the third quarter. The limiting set is an example of a _random fractal_.
a) Can you find the box dimension of this set? Does this question even make sense? In other words, might the answer depend on the particular sequence of heads and tails that happen to come up?
b) Now suppose if tails comes up, we delete the _first_ quarter. Could this make a difference? For instance, what if we had a long string of tails? See Falconer (1990, Chapter 15) for a discussion of random fractals.

11.4.9 (Fractal cheese) A fractal slice of swiss cheese is constructed as follows: The unit square is divided into \(p^{2}\) squares, and \(m^{2}\) squares are chosen at random and discarded. (Here \(p>m+1\), and \(p\), \(m\) are positive integers.) The process is repeated for each remaining square (side \(=1/p\)). Assuming that this process is repeated indefinitely, find the box dimension of the resulting fractal. (Notice that the resulting fractal may or may not be self-similar, depending on which squares are removed at each stage. Nevertheless, we are still able to calculate the box dimension.)

11.4.10 (Fat fractal) Show that the fat fractal constructed in Exercise 11.3.10 has box dimension equal to 1.

### 11.5 Pointwise and Correlation Dimensions

11.5.1 (Project) Write a program to compute the correlation dimension of the Lorenz attractor. Reproduce the results in Figure 11.5.3.
Then try other values of \(r\). How does the dimension depend on \(r\)?

## Chapter 12
Strange Attractors

### 12.0 Introduction

Our work in the previous three chapters has revealed quite a bit about chaotic systems, but something important is missing: intuition. We know _what_ happens but not _why_ it happens. For instance, we don't know what causes sensitive dependence on initial conditions, nor how a differential equation can generate a fractal attractor. Our first goal is to understand such things in a simple, geometric way.

These same issues confronted scientists in the mid-1970s. At the time, the only known examples of strange attractors were the Lorenz attractor (1963) and some mathematical constructions of Smale (1967). Thus there was a need for other concrete examples, preferably as transparent as possible. These were supplied by Hénon (1976) and Rössler (1976), using the intuitive concepts of stretching and folding. These topics are discussed in Sections 12.1-12.3. The chapter concludes with experimental examples of strange attractors from chemistry and mechanics. In addition to their inherent interest, these examples illustrate the techniques of attractor reconstruction and Poincaré sections, two standard methods for analyzing experimental data from chaotic systems.

### 12.1 The Simplest Examples

Strange attractors have two properties that seem hard to reconcile. Trajectories on the attractor remain confined to a bounded region of phase space, yet they separate from their neighbors exponentially fast (at least initially). How can trajectories diverge endlessly and yet stay bounded? The basic mechanism involves repeated _stretching and folding_. Consider a small blob of initial conditions in phase space (Figure 12.1.1). A strange attractor typically arises when the flow contracts the blob in some directions (reflecting the dissipation in the system) and stretches it in others (leading to sensitive dependence on initial conditions).
The stretching cannot go on forever--the distorted blob must be folded back on itself to remain in the bounded region. To illustrate the effects of stretching and folding, we consider a domestic example. Figure 12.1.2 shows a process used to make filo pastry or croissants. The dough is rolled out and flattened, then folded over, then rolled out again, and so on. After many repetitions, the end product is a flaky, layered structure--the culinary analog of a fractal attractor. Furthermore, the process shown in Figure 12.1.2 automatically generates sensitive dependence on initial conditions. Suppose that a small drop of food coloring is put in the dough, representing nearby initial conditions. After many iterations of stretching, folding, and re-injection, the coloring will be spread throughout the dough. Figure 12.1.3 presents a more detailed view of this _pastry map_, here modeled as a continuous mapping of a rectangle into itself. Figure 12.1.3: The rectangle _abcd_ is flattened, stretched, and folded into the _horseshoe_ \(a^{\prime}b^{\prime}c^{\prime}d^{\prime}\), also shown as \(S_{1}\). In the same way, \(S_{1}\) is itself flattened, stretched, and folded into \(S_{2}\), and so on. As we go from one stage to the next, the layers become thinner and there are twice as many of them. Now try to picture the limiting set \(S_{\infty}\). It consists of infinitely many smooth layers, separated by gaps of various sizes. In fact, a vertical cross section through the middle of \(S_{\infty}\) would resemble a _Cantor set_! Thus \(S_{\infty}\) is (locally) the product of a smooth curve with a Cantor set. The fractal structure of the attractor is a consequence of the stretching and folding that created \(S_{\infty}\) in the first place.
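The exponential separation produced by stretching and re-injection is easy to check numerically. As a minimal sketch (a one-dimensional caricature of our own choosing, not the pastry map itself), consider the map \(x_{n+1} = 2x_{n} \pmod 1\), which stretches the unit interval by a factor of 2 and re-injects the overhanging piece back into \([0,1)\):

```python
# Stretch-and-fold caricature: x -> 2x (mod 1) stretches by a factor
# of 2, then "re-injects" the result into [0, 1).  Two nearby initial
# conditions separate exponentially (sensitive dependence), yet both
# orbits stay forever inside the bounded interval.

def doubling_map(x):
    return (2.0 * x) % 1.0

x, y = 0.3, 0.3 + 1e-9          # two initial conditions, 1e-9 apart
separations = []
for n in range(25):
    separations.append(abs(x - y))
    x, y = doubling_map(x), doubling_map(y)

# The gap roughly doubles at each iterate until it saturates at O(1),
# i.e., at the size of the bounded region.
print(separations[0], separations[10], separations[24])
```

The gap between the two orbits doubles at each step until it is comparable to the length of the interval itself, which is precisely the coexistence of exponential divergence and boundedness described above.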
### Terminology The transformation shown in Figure 12.1.3 is normally called a horseshoe map, but we have avoided that name because it encourages confusion with another horseshoe map (the _Smale horseshoe_), which has very different properties. In particular, Smale's horseshoe map does _not_ have a strange attractor; its invariant set is more like a strange saddle. The Smale horseshoe is fundamental to rigorous discussions of chaos, but its analysis and significance are best deferred to a more advanced course. See Exercise 12.1.7 for an introduction, and Guckenheimer and Holmes (1983) or Arrowsmith and Place (1990) for detailed treatments. Because we want to reserve the word _horseshoe_ for Smale's mapping, we have used the name _pastry map_ for the mapping above. A better name would be "the baker's map," but that name is already taken by the map in the following example. **Example 12.1.1:** The _baker's map_ \(B\) of the square \(0\leq x\leq 1\), \(0\leq y\leq 1\) to itself is given by \[(x_{n+1},y_{n+1}) = \begin{cases} (2x_{n},\;ay_{n})&\text{for }0 \leq x_{n} \leq \frac{1}{2} \\ (2x_{n} - 1,\;ay_{n} + \frac{1}{2})&\text{for }\frac{1}{2} \leq x_{n} \leq 1 \end{cases}\] where \(a\) is a parameter in the range \(0<a\leq\frac{1}{2}\). Illustrate the geometric action of \(B\) by showing its effect on a face drawn in the unit square. _Solution:_ The reluctant experimental subject is shown in Figure 12.1.4a. As we'll see momentarily, the transformation may be regarded as a product of two simpler transformations. First the square is stretched and flattened into a \(2\times a\) rectangle (Figure 12.1.4b). Then the rectangle is cut in half, yielding two \(1\times a\) rectangles, and the right half is stacked on top of the left half such that its base is at the level \(y=\frac{1}{2}\) (Figure 12.1.4c). Why is this procedure equivalent to the formulas for \(B\)? First consider the left half of the square, where \(0\leq x_{n}\leq\frac{1}{2}\).
Here \((x_{n+1},y_{n+1})=(2x_{n},\;ay_{n})\), so the horizontal direction is stretched by \(2\) and the vertical direction is contracted by \(a\), as claimed. The same is true for the right half of the square, except that the image is shifted left by \(1\) and up by \(\frac{1}{2}\), since \((x_{n+1},y_{n+1})=(2x_{n},\;ay_{n})+(-1,\frac{1}{2})\). This shift is equivalent to the stacking just claimed. The baker's map exhibits sensitive dependence on initial conditions, thanks to the stretching in the \(x\)-direction. It has many chaotic orbits--uncountably many, in fact. These and other dynamical properties of the baker's map are discussed in the exercises. The next example shows that, like the pastry map, the baker's map has a strange attractor with a Cantor-like cross section. Figure 12.1.4: **Example 12.1.2**: _Show that for \(a<\frac{1}{2}\), the baker's map has a fractal attractor \(A\) that attracts all orbits. More precisely, show that there is a set \(A\) such that for any initial condition \((x_{0},y_{0})\), the distance from \(B^{n}(x_{0},y_{0})\) to \(A\) converges to zero as \(n\to\infty\)._ _Solution:_ First we construct the attractor. Let \(S\) denote the square \(0\leq x\leq 1\), \(0\leq y\leq 1\); this includes all possible initial conditions. The first three images of \(S\) under the map \(B\) are shown as shaded regions in Figure 12.1.5. The first image \(B(S)\) consists of two strips of height \(a\), as we know from Example 12.1.1. Then \(B(S)\) is flattened, stretched, cut, and stacked to yield \(B^{2}(S)\). Now we have four strips of height \(a^{2}\). Continuing in this way, we see that \(B^{n}(S)\) consists of \(2^{n}\) horizontal strips of height \(a^{n}\). The limiting set \(A=B^{\infty}(S)\) is a fractal. Topologically, it is a Cantor set of line segments. A technical point: How can we be sure that there actually is a "limiting set"?
We invoke a standard theorem from point-set topology. Observe that the successive images of the square are _nested_ inside each other like Chinese boxes: \(B^{n+1}(S)\subset B^{n}(S)\) for all \(n\). Moreover each \(B^{n}(S)\) is a compact set. The theorem (Munkres 1975) assures us that the countable intersection of a nested family of non-empty compact sets is a _non-empty_ compact set--this set is our \(A\). Furthermore, \(A\subset B^{n}(S)\) for all \(n\). The nesting property also helps us to show that \(A\) attracts all orbits. The point \(B^{n}(x_{0},y_{0})\) lies somewhere in one of the strips of \(B^{n}(S)\), and all points in these strips are within a distance \(a^{n}\) of \(A\), because \(A\) is contained in \(B^{n}(S)\). Since \(a^{n}\to 0\) as \(n\to\infty\), the distance from \(B^{n}(x_{0},y_{0})\) to \(A\) tends to zero as \(n\to\infty\), as required. **Example 12.1.3**: _Find the box dimension of the attractor for the baker's map with \(a<\frac{1}{2}\)._ _Solution:_ The attractor \(A\) is approximated by \(B^{n}(S)\), which consists of \(2^{n}\) strips of height \(a^{n}\) and length 1. Now cover \(A\) with square boxes of side \(\varepsilon=a^{n}\) (Figure 12.1.6). Since the strips have length 1, it takes about \(a^{-n}\) boxes to cover each of them. There are \(2^{n}\) strips altogether, so \(N\approx a^{-n}\times 2^{n}=(a/2)^{-n}\). Thus \[d=\lim_{\varepsilon\to 0}\frac{\ln N}{\ln(\frac{1}{\varepsilon})}=\lim_{n\to\infty}\frac{\ln(a/2)^{-n}}{\ln(a^{-n})}=1+\frac{\ln\frac{1}{2}}{\ln a}.\] As a check, note that \(d\to 2\) as \(a\to\frac{1}{2}\); this makes sense because the attractor fills an increasingly large portion of the square \(S\) as \(a\to\frac{1}{2}\). ### The Importance of Dissipation For \(a<\frac{1}{2}\), the baker's map shrinks areas in phase space.
Given any region \(R\) in the square, \[\operatorname{area}(B(R))<\operatorname{area}(R).\] This result follows from elementary geometry. The baker's map elongates \(R\) by a factor of 2 and flattens it by a factor of \(a\), so \(\operatorname{area}(B(R))=2a\times\operatorname{area}(R)\). Since \(a<\frac{1}{2}\) by assumption, \(\operatorname{area}(B(R))<\operatorname{area}(R)\) as required. (Note that the cutting operation does not change the region's area.) Area contraction is the analog of the volume contraction that we found for the Lorenz equations in Section 9.2. As in that case, it yields several conclusions. For instance, the attractor \(A\) for the baker's map must have zero area. Also, the baker's map cannot have any repelling fixed points, since such points would expand area elements in their neighborhood. In contrast, when \(a=\frac{1}{2}\) the baker's map is _area-preserving_: \(\operatorname{area}(B(R))=\operatorname{area}(R)\). Now the square \(S\) is mapped _onto_ itself, with no gaps between the strips. The map has qualitatively different dynamics in this case. Transients never decay--the orbits shuffle around endlessly in the square but never settle down to a lower-dimensional attractor. This is a kind of chaos that we have not seen before! This distinction between \(a<\frac{1}{2}\) and \(a=\frac{1}{2}\) exemplifies a broader theme in nonlinear dynamics. In general, if a map or flow contracts volumes in phase space, it is called _dissipative_. Dissipative systems commonly arise as models of physical situations involving friction, viscosity, or some other process that dissipates energy. In contrast, area-preserving maps are associated with conservative systems, particularly with the Hamiltonian systems of classical mechanics. The distinction is crucial because _area-preserving maps cannot have attractors_ (strange or otherwise).
As defined in Section 9.3, an "attractor" should attract all orbits starting in a sufficiently small open set containing it; that requirement is incompatible with area-preservation. Several of the exercises give a taste of the new phenomena that arise in area-preserving maps. To learn more about the fascinating world of Hamiltonian chaos, see the review articles by Jensen (1987) or Henon (1983), or the books by Tabor (1989) or Lichtenberg and Lieberman (1992). ### 12.2 Henon Map In this section we discuss another two-dimensional map with a strange attractor. It was devised by the theoretical astronomer Michel Henon (1976) to illuminate the microstructure of strange attractors. According to Gleick (1987, p. 149), Henon became interested in the problem after hearing a lecture by the physicist Yves Pomeau, in which Pomeau described the numerical difficulties he had encountered in trying to resolve the tightly packed sheets of the Lorenz attractor. The difficulties stem from the rapid volume contraction in the Lorenz system: after one circuit around the attractor, a volume in phase space is typically squashed by a factor of about 14,000 (Lorenz 1963). Henon had a clever idea. Instead of tackling the Lorenz system directly, he sought a mapping that captured its essential features but which also had an adjustable amount of dissipation. Henon chose to study mappings rather than differential equations because maps are faster to simulate and their solutions can be followed more accurately and for a longer time. The _Henon map_ is given by \[x_{n+1}=y_{n}+1-ax_{n}^{2},\hskip 28.452756pty_{n+1}=bx_{n}, \tag{1}\] where \(a\) and \(b\) are adjustable parameters. Henon (1976) arrived at this map by an elegant line of reasoning. To simulate the stretching and folding that occurs in the Lorenz system, he considered the following chain of transformations (Figure 12.2.1). Start with a rectangular region elongated along the \(x\)-axis (Figure 12.2.1a).
Stretch and fold the rectangle by applying the transformation \[T^{\prime}\!\!:\ \ \ \ x^{\prime}=x,\ \ \ \ \ y^{\prime}=1+y-ax^{2}.\] (The primes denote iteration, not differentiation.) The bottom and top of the rectangle get mapped to parabolas (Figure 12.2.1b). The parameter \(a\) controls the folding. Now fold the region even more by contracting Figure 12.2.1b along the \(x\)-axis: \[T^{\prime\prime}\!\!:\ \ \ x^{\prime\prime}=bx^{\prime},\ \ \ \ \ \ y^{\prime\prime}=y^{\prime}\] where \(-1<b<1\). This produces Figure 12.2.1c. Finally, come back to the orientation along the \(x\)-axis by reflecting across the line \(y=x\) (Figure 12.2.1d): \[T^{\prime\prime\prime}\!\!:\ \ \ x^{\prime\prime\prime}=y^{\prime\prime},\ \ \ \ \ \ \ y^{\prime\prime\prime}=x^{\prime\prime}.\] Then the composite transformation \(T=T^{\prime\prime\prime}T^{\prime\prime}T^{\prime}\) yields the Henon mapping (1), where we use the notation \((x_{n},y_{n})\) for \((x,y)\) and \((x_{n+1},y_{n+1})\) for \((x^{\prime\prime\prime},y^{\prime\prime\prime})\). ### Elementary Properties of the Henon Map As desired, the Henon map captures several essential properties of the Lorenz system. (These properties will be verified in the examples below and in the exercises.) 1. _The Henon map is invertible._ This property is the counterpart of the fact that in the Lorenz system, there is a unique trajectory through each point in phase space. In particular, each point has a unique past. In this respect the Henon map is superior to the logistic map, its one-dimensional analog. The logistic map stretches and folds the unit interval, but it is not invertible since all points (except the maximum) come from _two_ pre-images. 2. _The Henon map is dissipative._ It contracts areas, and does so at the same rate everywhere in phase space. This property is the analog of constant negative divergence in the Lorenz system. 3.
_For certain parameter values, the Henon map has a trapping region._ In other words, there is a region \(R\) that gets mapped inside itself (Figure 12.2.2). As in the Lorenz system, the strange attractor is enclosed in the trapping region. The next property highlights an important difference between the Henon map and the Lorenz system. 4. _Some trajectories of the Henon map escape to infinity._ In contrast, all trajectories of the Lorenz system are bounded; they all eventually enter and stay inside a certain large ellipsoid (Exercise 9.2.2). But it is not surprising that the Henon map has some unbounded trajectories; far from the origin, the quadratic term in (1) dominates and repels orbits to infinity. Similar behavior occurs in the logistic map--recall that orbits starting outside the unit interval eventually become unbounded. Now we verify properties 1 and 2. For 3 and 4, see Exercises 12.2.9 and 12.2.10. **Example 12.2.1:** Show that the Henon map \(T\) is invertible if \(b\neq 0\), and find the inverse \(T^{-1}\). _Solution:_ We solve (1) for \(x_{n}\) and \(y_{n}\), given \(x_{n+1}\) and \(y_{n+1}\). Algebra yields \(x_{n}=b^{-1}y_{n+1}\), \(y_{n}=x_{n+1}-1+ab^{-2}(y_{n+1})^{2}\). Thus \(T^{-1}\) exists for all \(b\neq 0\). **Example 12.2.2:** Show that the Henon map contracts areas if \(-1<b<1\). _Solution:_ To decide whether an arbitrary two-dimensional map \(x_{n+1}=f(x_{n},y_{n})\), \(y_{n+1}=g(x_{n},y_{n})\) is area-contracting, we compute the determinant of its Jacobian matrix \[\mathbf{J} = \begin{pmatrix}\frac{\partial f}{\partial x}&\frac{\partial f}{\partial y}\\ \frac{\partial g}{\partial x}&\frac{\partial g}{\partial y}\end{pmatrix}.\] If \(|\det\mathbf{J}(x,y)|<1\) for all \((x,y)\), the map is area-contracting.
This rule follows from a fact of multivariable calculus: if \(\mathbf{J}\) is the Jacobian of a two-dimensional map \(T\), then \(T\) maps an infinitesimal rectangle at \((x,y)\) with area \(dx\,dy\) into an infinitesimal parallelogram with area \(|\det\mathbf{J}(x,y)|\,dx\,dy\). Thus if \(|\det\mathbf{J}(x,y)|<1\) everywhere, the map is area-contracting. For the Henon map, we have \(f(x,y)=1-ax^{2}+y\) and \(g(x,y)=bx\). Therefore \[\mathbf{J} = \begin{pmatrix}-2ax&1\\ b&0\end{pmatrix}\] and \(\det\mathbf{J}(x,y)=-b\) for all \((x,y)\). Hence the map is area-contracting for \(-1<b<1\), as claimed. In particular, the area of any region is reduced by a _constant_ factor of \(|b|\) with each iteration. ### Choosing Parameters The next step is to choose suitable values of the parameters. As Henon (1976) explains, \(b\) should not be too close to zero, or else the area contraction will be excessive and the fine structure of the attractor will be invisible. But if \(b\) is too large, the folding won't be strong enough. (Recall that \(b\) plays two roles: it controls the dissipation _and_ produces extra folding in going from Figure 12.2.1b to Figure 12.2.1c.) A good choice is \(b=0.3\). To find a good value of \(a\), Henon had to do some exploring. If \(a\) is too small or too large, all trajectories escape to infinity; there is no attractor in these cases. (This is reminiscent of the logistic map, where almost all trajectories escape to infinity unless \(0\leq r\leq 4\).) For intermediate values of \(a\), the trajectories either escape to infinity or approach an attractor, depending on the initial conditions. As \(a\) increases through this range, the attractor changes from a stable fixed point to a stable 2-cycle. The system then undergoes a period-doubling route to chaos, followed by chaos intermingled with periodic windows. Henon picked \(a=1.4\), well into the chaotic region.
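With these parameter values fixed, the map (1) takes only a few lines of code to iterate. The following minimal sketch generates the points of the orbit (the plotting itself is omitted):

```python
# Iterate the Henon map (1) at Henon's parameter values a = 1.4, b = 0.3,
# starting from the origin, and collect the points of the orbit.

def henon(x, y, a=1.4, b=0.3):
    return y + 1.0 - a * x * x, b * x

x, y = 0.0, 0.0
orbit = []
for n in range(10000):
    x, y = henon(x, y)
    orbit.append((x, y))

# The orbit remains in a bounded region (the trapping region of
# property 3), even though nearby orbits separate exponentially.
xs = [p[0] for p in orbit]
ys = [p[1] for p in orbit]
print(min(xs), max(xs), min(ys), max(ys))
```

Note that each iteration shrinks areas by the constant factor \(|\det\mathbf{J}|=|b|=0.3\), so the ten thousand plotted points rapidly collapse onto the attractor.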
### Zooming In on a Strange Attractor In a striking series of plots, Henon provided the first direct visualization of the fractal structure of a strange attractor. He set \(a=1.4\), \(b=0.3\) and generated the attractor by computing ten thousand successive iterates of (1), starting from the origin. You really must try this for yourself on a computer. The effect is eerie--the points \((x_{n},y_{n})\) hop around erratically, but soon the attractor begins to take form, "like a ghost out of the mist" (Gleick 1987, p. 150). The attractor is bent like a boomerang and is made of many parallel curves (Figure 12.2.3a). Figure 12.2.3b is an enlargement of the small square of Figure 12.2.3a. The characteristic fine structure of the attractor begins to emerge. There seem to be six parallel curves: a lone curve near the middle of the frame, then two closely spaced curves above it, and then three more. If we zoom in on those three curves (Figure 12.2.3c), it becomes clear that they are actually six curves, grouped one, two, three, exactly as before! And those curves are themselves made of thinner curves in the same pattern, and so on. The self-similarity continues to arbitrarily small scales. Figure 12.2.3: Henon (1976), pp. 74–76. ### The Unstable Manifold of the Saddle Point Figure 12.2.3 suggests that the Henon attractor is Cantor-like in the transverse direction, but smooth in the longitudinal direction. There's a reason for this. The attractor is closely related to a locally smooth object--the unstable manifold of a saddle point that sits on the edge of the attractor. To be more precise, Benedicks and Carleson (1991) have proven that the attractor is the closure of a branch of the unstable manifold; see also Simo (1979). Hobson (1993) developed a method for computing this unstable manifold to very high accuracy. As expected, it is indistinguishable from the strange attractor.
Hobson also presents some enlargements of less familiar parts of the Henon attractor, one of which looks like Saturn's rings (Figure 12.2.4). ### 12.3 Rossler System So far we have used two-dimensional maps to help us understand how stretching and folding can generate strange attractors. Now we return to differential equations. In the culinary spirit of the pastry map and the baker's map, Otto Rossler (1976) found inspiration in a taffy-pulling machine. By pondering its action, he was led to a system of three differential equations with a simpler strange attractor than Lorenz's. The _Rossler system_ has only one quadratic nonlinearity \(xz\): \[\begin{array}{l}\dot{x}=-y-z\\ \dot{y}=x+ay\\ \dot{z}=b+z(x-c).\end{array} \tag{1}\] Figure 12.2.4: Courtesy of Dana Hobson. We first met this system in Section 10.6, where we saw that it undergoes a period-doubling route to chaos as \(c\) is increased. Numerical integration shows that this system has a strange attractor for \(a=b=0.2\), \(c=5.7\) (Figure 12.3.1). A schematic version of the attractor is shown in Figure 12.3.2. Neighboring trajectories separate by spiraling out ("stretching"), then cross without intersecting _by going into the third dimension_ ("folding") and then circulate back near their starting places ("re-injection"). We can now see why three dimensions are needed for a flow to be chaotic. Let's consider the schematic picture in more detail, following the visual approach of Abraham and Shaw (1983). Our goal is to construct a geometric model of the Rossler attractor, guided by the stretching, folding, and re-injection seen in numerical integrations of the system. Figure 12.3.3a shows the flow near a typical trajectory. In one direction there's _compression toward_ the attractor, and in the other direction there's _divergence along_ the attractor. Figure 12.3.3b highlights the sheet on which there's sensitive dependence on initial conditions.
These are the expanding directions along which stretching takes place. Next the flow folds the wide part of the sheet in two and then bends it around so that it nearly joins the narrow part (Figure 12.3.4a). Overall, the flow has taken the single sheet and produced _two_ sheets after one circuit. Repeating the process, those two sheets produce four (Figure 12.3.4b) and then those produce eight (Figure 12.3.4c), and so on. Figure 12.3.2: Abraham and Shaw (1983), p. 121. In effect, the flow is acting like the pastry transformation, and the phase space is acting like the dough! Ultimately the flow generates an infinite complex of tightly packed surfaces: the strange attractor. Figure 12.3.5 shows a _Poincare section_ of the attractor. We slice the attractor with a plane, thereby exposing its cross section. (In the same way, biologists examine complex three-dimensional structures by slicing them and preparing slides.) If we take a further one-dimensional slice or _Lorenz section_ through the Poincare section, we find an infinite set of points separated by gaps of various sizes. This pattern of dots and gaps is a topological Cantor set. Since each dot corresponds to one layer of the complex, our model of the Rossler attractor is a _Cantor set of surfaces_. More precisely, the attractor is locally topologically equivalent to the Cartesian product of a ribbon and a Cantor set. This is precisely the structure we would expect, based on our earlier work with the pastry map. Figure 12.3.4: Abraham and Shaw (1983), pp. 122–123. Figure 12.3.5: Abraham and Shaw (1983), p. 123. ### 12.4 Chemical Chaos and Attractor Reconstruction In this section we describe some beautiful experiments on the Belousov-Zhabotinsky chemical reaction.
The results show that strange attractors really do occur in nature, not just in mathematics. For more about chemical chaos, see Argoul et al. (1987). In the BZ reaction, malonic acid is oxidized in an acidic medium by bromate ions, with or without a catalyst (usually cerous or ferrous ions). It has been known since the 1950s that this reaction can exhibit limit-cycle oscillations, as discussed in Section 8.3. By the 1970s, it became natural to inquire whether the BZ reaction could also become _chaotic_ under appropriate conditions. Chemical chaos was first reported by Schmitz, Graziani, and Hudson (1977), but their results left room for skepticism--some chemists suspected that the observed complex dynamics might be due instead to uncontrolled fluctuations in experimental control parameters. What was needed was some demonstration that the dynamics obeyed the newly emerging laws of chaos. The elegant work of Roux, Simoyi, Wolf, and Swinney established the reality of chemical chaos (Simoyi et al. 1982, Roux et al. 1983). They conducted an experiment on the BZ reaction in a "continuous flow stirred tank reactor." In this standard set-up, fresh chemicals are pumped through the reactor at a constant rate to replenish the reactants and to keep the system far from equilibrium. The flow rate acts as a control parameter. The reaction is also stirred continuously to mix the chemicals. This enforces spatial homogeneity, thereby reducing the effective number of degrees of freedom. The behavior of the reaction is monitored by measuring \(B(t)\), the concentration of bromide ions. Figure 12.4.1 shows a time series measured by Roux et al. (1983). At first glance the behavior looks periodic, but it really isn't--the amplitude is erratic. Roux et al. (1983) argued that this aperiodicity corresponds to chaotic motion on a strange attractor, and is not merely random behavior caused by imperfect experimental control. 
Figure 12.4.1: Roux et al. (1983), p. 258. The first step in their argument is almost magical. Put yourself in their shoes--how could you demonstrate the presence of an underlying strange attractor, given that you only measure a single time series \(B(t)\)? It seems that there isn't enough information. Ideally, to characterize the motion in phase space, you would like to simultaneously measure the varying concentrations of _all_ the other chemical species involved in the reaction. But that's virtually impossible, since there are at least twenty other chemical species, not to mention the ones that are unknown. Roux et al. (1983) exploited a surprising data-analysis technique, now known as _attractor reconstruction_ (Packard et al. 1980, Takens 1981). The claim is that for systems governed by an attractor, the dynamics in the full phase space can be reconstructed from measurements of just a _single_ time series! Somehow that single variable carries sufficient information about all the others. The method is based on time delays. For instance, define a two-dimensional vector \(\mathbf{x}(t) = (B(t),B(t + \tau))\) for some _delay_ \(\tau > 0\). Then the time series \(B(t)\) generates a trajectory \(\mathbf{x}(t)\) in a two-dimensional phase space. Figure 12.4.2 shows the result of this procedure when applied to the data of Figure 12.4.1, using \(\tau = 8.8\) seconds. The experimental data trace out a strange attractor that looks remarkably like the Rossler attractor! Roux et al. (1983) also considered the attractor in three dimensions, by defining the three-dimensional vector \(\mathbf{x}(t) = (B(t),B(t + \tau),B(t + 2\tau))\). To obtain a Poincare section of the attractor, they computed the intersections of the orbits \(\mathbf{x}(t)\) with a fixed plane approximately normal to the orbits (shown in projection as a dashed line in Figure 12.4.2). Within the experimental resolution, the data fall on a one-dimensional curve.
Hence the chaotic trajectories are confined to an approximately two-dimensional sheet. Roux et al. then constructed an approximate one-dimensional map that governs the dynamics on the attractor. Let \(X_{1}\), \(X_{2}\), ..., \(X_{n}\), \(X_{n + 1}\), ... denote successive values of \(B(t + \tau)\) at points where the orbit \(\mathbf{x}(t)\) crosses the dashed line shown in Figure 12.4.2. A plot of \(X_{n + 1}\) vs. \(X_{n}\) yields the result shown in Figure 12.4.3. The data fall on a smooth one-dimensional map, within experimental resolution. This confirms that the observed aperiodic behavior is governed by _deterministic_ laws: Given \(X_{n}\), the map determines \(X_{n + 1}\). Furthermore, the map is unimodal, like the logistic map. This suggests that the chaotic state shown in Figure 12.4.1 may be reached by a period-doubling scenario. Indeed such period-doublings were found experimentally (Coffman et al. 1987), as shown in Figure 12.4.4. The final nail in the coffin was the demonstration that the chemical system obeys the _U-sequence_ expected for unimodal maps (Section 10.6). In the regime past the onset of chaos, Roux et al. (1983) observed many distinct periodic windows. As the flow rate was varied, the periodic states occurred in precisely the order predicted by universality theory. Taken together, these results demonstrate that deterministic chaos can occur in a nonequilibrium chemical system. The most remarkable thing is that the results can be understood (to a large extent) in terms of one-dimensional maps, even though the chemical kinetics are at least twenty-dimensional. Such is the power of universality theory. But let's not get carried away. The universality theory works only because the attractor is nearly a two-dimensional surface. This low dimensionality results from the continuous stirring of the reaction, along with strong dissipation in the kinetics themselves.
Higher-dimensional phenomena like chemical turbulence remain beyond the limits of the theory. ### Comments on Attractor Reconstruction The key to the analysis of Roux et al. (1983) is the attractor reconstruction. There are at least two issues to worry about when implementing the method. Figure 12.4.4: Coffman et al. (1987), p. 123. Figure 12.4.3: Roux et al. (1983), p. 262. First, how does one choose the _embedding dimension_, i.e., the number of delays? Should the time series be converted to a vector with two components, or three, or more? Roughly speaking, one needs enough delays so that the underlying attractor can disentangle itself in phase space. The usual approach is to increase the embedding dimension and then compute the correlation dimensions of the resulting attractors. The computed values will keep increasing until the embedding dimension is large enough; then there's enough room for the attractor and the estimated correlation dimension will level off at the "true" value. Unfortunately, the method breaks down once the embedding dimension is too large; the sparsity of data in phase space causes statistical sampling problems. This limits our ability to estimate the dimension of high-dimensional attractors. For further discussion, see Grassberger and Procaccia (1983), Eckmann and Ruelle (1985), and Moon (1992). A second issue concerns the optimal value of the delay \(\tau\). For real data (which are always contaminated by noise), the optimum is typically around one-tenth to one-half the mean orbital period around the attractor. See Fraser and Swinney (1986) for details. The following simple example suggests why some delays are better than others. **Example 12.4.1:** Suppose that an experimental system has a limit-cycle attractor. Given that one of its variables has a time series \(x(t) = \sin\,t\), plot the time-delayed trajectory \(\mathbf{x}(t) = (x(t),x(t + \tau))\) for different values of \(\tau\).
Which value of \(\tau\) would be best if the data were noisy? _Solution:_ Figure 12.4.5 shows \(\mathbf{x}(t)\) for three values of \(\tau\). For \(0 < \tau < \frac{\pi}{2}\), the trajectory is an ellipse with its long axis on the diagonal (Figure 12.4.5a). When \(\tau = \frac{\pi}{2}\), \(\mathbf{x}(t)\) traces out a circle (Figure 12.4.5b). This makes sense since \(x(t) = \sin\,t\) and \(y(t) = \sin\,(t + \frac{\pi}{2}) = \cos\,t\); these are the parametric equations of a circle. For larger \(\tau\) we find ellipses again, but now with their long axes along the line \(y = -x\) (Figure 12.4.5c). Note that in each case the method gives a closed curve, which is a topologically faithful reconstruction of the system's underlying attractor (a limit cycle). For this system the optimum delay is \(\tau=\frac{\pi}{2}\), i.e., one-quarter of the natural orbital period, since the reconstructed attractor is then as "open" as possible. Narrower cigar-shaped attractors would be more easily blurred by noise. In the exercises, you're asked to do similar calibrations of the method using quasiperiodic data as well as time series from the Lorenz and Rossler attractors. Many people find it mysterious that information about the attractor can be extracted from a single time series. Even Ed Lorenz was impressed by the method. When my dynamics class asked him to name the development in nonlinear dynamics that surprised him the most, he cited attractor reconstruction. In principle, attractor reconstruction can distinguish low-dimensional chaos from noise: as we increase the embedding dimension, the computed correlation dimension levels off for chaos, but keeps increasing for noise (see Eckmann and Ruelle (1985) for examples). Armed with this technique, many optimists have asked questions like, Is there any evidence for deterministic chaos in stock market prices, brain waves, heart rhythms, or sunspots?
If such chaos is really present, there may be simple laws waiting to be discovered (and, in the case of the stock market, fortunes to be made). Beware: much of this research is dubious. For a sensible discussion, along with a state-of-the-art method for distinguishing chaos from noise, see Kaplan and Glass (1993).

### 12.5 Forced Double-Well Oscillator

So far, all of our examples of strange attractors have come from autonomous systems, in which the governing equations have no explicit time-dependence. As soon as we consider forced oscillators and other _nonautonomous_ systems, strange attractors start turning up everywhere. That is why we have ignored driven systems until now--we simply didn't have the tools to deal with them.

This section provides a glimpse of some of the phenomena that arise in a particular forced oscillator, the driven double-well oscillator studied by Francis Moon and his colleagues at Cornell. For more information about this system, see Moon and Holmes (1979), Holmes (1979), Guckenheimer and Holmes (1983), Moon and Li (1985), and Moon (1992). For introductions to the vast subject of forced nonlinear oscillations, see Jordan and Smith (1987), Moon (1992), Thompson and Stewart (1986), and Guckenheimer and Holmes (1983).

A slender steel beam is clamped in a rigid framework. Two permanent magnets at the base pull the beam in opposite directions. The magnets are so strong that the beam buckles to one side or the other; either configuration is locally stable. These buckled states are separated by an energy barrier, corresponding to the unstable equilibrium in which the beam is straight and poised halfway between the magnets. To drive the system out of its stable equilibrium, the whole apparatus is shaken from side to side with an electromagnetic vibration generator. The goal is to understand the forced vibrations of the beam as measured by \(x(t)\), the displacement of the tip from the midline of the magnets.
For weak forcing, the beam is observed to vibrate slightly while staying near one or the other magnet, but as the forcing is slowly increased, there is a sudden point at which the beam begins whipping back and forth erratically. The irregular motion is sustained and can be observed for hours--tens of thousands of drive cycles.

### Double-Well Analog

The magneto-elastic system is representative of a wide class of driven bistable systems. An easier system to visualize is a damped particle in a double-well potential (Figure 12.5.2). Here the two wells correspond to the two buckled states of the beam, separated by the hump at \(x = 0\).

**Figure 12.5.1**

Suppose the well is shaken periodically from side to side. On physical grounds, what might we expect? If the shaking is weak, the particle should stay near the bottom of a well, jiggling slightly. For stronger shaking, the particle's excursions become larger. We can imagine that there are (at least) _two_ types of stable oscillation: a small-amplitude, low-energy oscillation about the bottom of a well; and a large-amplitude, high-energy oscillation in which the particle goes back and forth over the hump, sampling one well and then the other. The choice between these oscillations probably depends on the initial conditions. Finally, when the shaking is extremely strong, the particle is always flung back and forth across the hump, for any initial conditions.

We can also anticipate an intermediate case that seems complicated. If the particle has barely enough energy to climb to the top of the hump, and if the forcing and damping are balanced in a way that keeps the system in this precarious state, then the particle may sometimes fall one way, sometimes the other, depending on the precise timing of the forcing. This case seems potentially chaotic.
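The double-well picture can be made concrete with a few lines of code. The sketch below is mine, not from the text: it uses the potential \(V(x)=\frac{1}{4}x^{4}-\frac{1}{2}x^{2}\) that reappears in the model of the next section, and classifies its equilibria by the sign of \(V''\).

```python
# Sketch (not from the text): equilibria of the double-well potential
# V(x) = x^4/4 - x^2/2 underlying the particle analogy.
def V(x):
    return 0.25 * x**4 - 0.5 * x**2

def dV(x):
    return x**3 - x          # V'(x); equilibria where this vanishes

def d2V(x):
    return 3 * x**2 - 1      # V''(x); its sign decides stability

equilibria = [-1.0, 0.0, 1.0]            # the roots of x^3 - x = 0
for x in equilibria:
    kind = "stable (well)" if d2V(x) > 0 else "unstable (hump)"
    print(f"x = {x:+.0f}: V = {V(x):+.2f}, {kind}")
```

The two wells sit at \(x=\pm 1\) with \(V=-\frac{1}{4}\), and the hump at \(x=0\) gives an energy barrier of height \(\frac{1}{4}\), matching the intuition above.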
### Model and Simulations

Moon and Holmes (1979) modeled their system with the dimensionless equation

\[\ddot{x}+\delta\dot{x}-x+x^{3}=F\cos\omega t\]

where \(\delta>0\) is the damping constant, \(F\) is the forcing strength, and \(\omega\) is the forcing frequency. Equation (I) can also be viewed as Newton's law for a particle in a double-well potential of the form \(V(x)=\frac{1}{4}x^{4}-\frac{1}{2}x^{2}\). In both cases, the force \(F\cos\omega t\) is an inertial force that arises from the oscillation of the coordinate system; recall that \(x\) is defined as the displacement relative to the _moving_ frame, not the lab frame.

The mathematical analysis of (I) requires some advanced techniques from global bifurcation theory; see Holmes (1979) or Section 2.2 of Guckenheimer and Holmes (1983). Our more modest goal is to gain some insight into (I) through numerical simulations. In all the simulations below, we fix

\[\delta = 0.25,\quad \omega = 1,\]

while varying the forcing strength \(F\).

**EXAMPLE 12.5.1:** By plotting \(x(t)\), show that (I) has several stable limit cycles for \(F=0.18\).

_Solution:_ Using numerical integration, we obtain the time series shown in Figure 12.5.3.

**Figure 12.5.3**

The solutions converge straightforwardly to periodic solutions. There are two other limit cycles in addition to the two shown here. There might be others, but they are harder to detect. Physically, all these solutions correspond to oscillations confined to a single well.

The next example shows that at much larger forcing, the dynamics become complicated.

**EXAMPLE 12.5.2:** Compute \(x(t)\) and the velocity \(y(t) = \dot{x}(t)\) for \(F = 0.40\) and initial conditions \((x_{0},y_{0}) = (0,0)\). Then plot \(x(t)\) vs. \(y(t)\).

_Solution:_ The aperiodic appearance of \(x(t)\) and \(y(t)\) (Figure 12.5.4) suggests that the system is chaotic, at least for these initial conditions.
Note that \(x\) changes sign repeatedly; the particle crosses the hump repeatedly, as expected for strong forcing.

**Figure 12.5.4**

The plot of \(x(t)\) vs. \(y(t)\) is messy and hard to interpret (Figure 12.5.5).

**Figure 12.5.5**

Note that Figure 12.5.5 is not a true phase portrait, because the system is nonautonomous. As we mentioned in Section 1.2, the state of the system is given by \((x,y,t)\), not \((x,y)\) alone, since all three variables are needed to compute the system's subsequent evolution. Figure 12.5.5 should be regarded as a two-dimensional projection of a three-dimensional trajectory. The tangled appearance of the projection is typical for nonautonomous systems.

Much more insight can be gained from a _Poincare section_, obtained by plotting \((x(t),y(t))\) whenever \(t\) is an integer multiple of \(2\pi\). In physical terms, we "strobe" the system at the same phase in each drive cycle. Figure 12.5.6 shows the Poincare section for the system of Example 12.5.2. Now the tangle resolves itself--the points fall on a fractal set, which we interpret as a cross section of a strange attractor for (I). The successive points \((x(t),y(t))\) are found to hop erratically over the attractor, and the system exhibits sensitive dependence on initial conditions, just as we'd expect.

These results suggest that the model is capable of reproducing the sustained chaos observed in the beam experiments. Figure 12.5.7 shows that there is good qualitative agreement between the experimental data (Figure 12.5.7a) and numerical simulations (Figure 12.5.7b).

### Transient Chaos

Even when (I) has no strange attractors, it can still exhibit complicated dynamics (Moon and Li 1985). For instance, consider a regime in which two or more stable limit cycles coexist. Then, as shown in the next example, there can be _transient chaos_ before the system settles down.
Furthermore, the choice of final state depends sensitively on initial conditions (Grebogi et al. 1983b).

**Figure 12.5.6:** Guckenheimer and Holmes (1983), p. 90

**EXAMPLE 12.5.3:** For \(F = 0.25\), find two nearby trajectories that both exhibit transient chaos before finally converging to _different_ periodic attractors.

_Solution:_ To find suitable initial conditions, we could use trial and error, or we could guess that transient chaos might occur near the ghost of the strange attractor of Figure 12.5.6. For instance, the point \((x_{0},y_{0})=(0.2,0.1)\) leads to the time series shown in Figure 12.5.8a. After a chaotic transient, the solution approaches a periodic state with \(x>0\). Physically, this solution describes a particle that goes back and forth over the hump a few times before settling into small oscillations at the bottom of the well on the right. But if we change \(x_{0}\) slightly to \(x_{0}=0.195\), the particle eventually oscillates in the _left_ well (Figure 12.5.8b).

### Fractal Basin Boundaries

Example 12.5.3 shows that it can be hard to predict the final state of the system, even when that state is simple. This sensitivity to initial conditions is conveyed more vividly by the following graphical method. Each initial condition in a 900 \(\times\) 900 grid is color-coded according to its fate. If the trajectory starting at \((x_{0},y_{0})\) ends up in the left well, we place a blue dot at \((x_{0},y_{0})\); if the trajectory ends up in the right well, we place a red dot. Color plate 3 shows the computer-generated result for (I). The blue and red regions are essentially cross sections of the basins of attraction for the two attractors, to the accuracy of the grid. Color plate 3 shows large patches in which all the points are colored red, and others in which all the points are colored blue. In between, however, the slightest change in initial conditions leads to alternations in the final state reached.
In fact, if we magnify these regions, we see further intermingling of red and blue, down to arbitrarily small scales. Thus _the boundary between the basins is a fractal._ Near the basin boundary, long-term prediction becomes essentially impossible, because the final state of the system is exquisitely sensitive to tiny changes in initial condition (Color plate 4).

**Figure 12.5.8**
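The numerical experiments in this section are straightforward to reproduce in outline. The following is a minimal sketch, not the authors' code (the step size and number of drive cycles are arbitrary choices): it integrates \(\ddot{x}+\delta\dot{x}-x+x^{3}=F\cos\omega t\) with \(\delta=0.25\), \(\omega=1\), \(F=0.40\) by classical fourth-order Runge-Kutta and strobes the trajectory once per drive cycle to build a Poincare section like Figure 12.5.6.

```python
import numpy as np

# Sketch of the section's numerical experiments for
# x'' + delta*x' - x + x^3 = F cos(omega*t), strobed once per drive period.
delta, omega, F = 0.25, 1.0, 0.40

def deriv(t, s):
    x, y = s
    return np.array([y, -delta * y + x - x**3 + F * np.cos(omega * t)])

def rk4_step(t, s, h):
    k1 = deriv(t, s)
    k2 = deriv(t + h / 2, s + h / 2 * k1)
    k3 = deriv(t + h / 2, s + h / 2 * k2)
    k4 = deriv(t + h, s + h * k3)
    return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def poincare_section(s0, n_periods=200, steps_per_period=256):
    T = 2.0 * np.pi / omega
    h = T / steps_per_period
    s, t = np.array(s0, float), 0.0
    pts = []
    for _ in range(n_periods):
        for _ in range(steps_per_period):
            s = rk4_step(t, s, h)
            t += h
        pts.append(s.copy())          # strobe at the same phase each cycle
    return np.array(pts)

pts = poincare_section((0.0, 0.0))
```

Plotting the columns of `pts` against each other should reveal the cross section of the strange attractor; lowering \(F\) to 0.18 or 0.25 instead gives the periodic and transient-chaos regimes of Examples 12.5.1 and 12.5.3.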
## Chapter 1 Differential and Difference Equations

###### Abstract

In this chapter we give a brief introduction to PDEs. In Section 1.1 some simple problems that arise in real-life phenomena are derived. (A more detailed derivation of such problems will follow in later chapters.) We show by a number of examples how they may often be seen as continuous analogues of discrete formulations (i.e., based on difference equations). In Section 1.2 we briefly summarize the terminology used to describe various PDEs. Thus concepts like order and linearity are introduced. In Chapter 2 we shall discuss the classification of the various types of PDEs in more detail. Finally, we introduce difference equations and notions like scheme and stencil, which play a role in numerical approximation, in Section 1.3.

### 1.1 Introduction

Many phenomena in nature may be described mathematically by functions of a small number of independent variables and parameters. In particular, if such a phenomenon is given by a function of spatial position and time, its description gives rise to a wealth of (mathematical) models, which often result in equations, usually containing a large variety of derivatives with respect to these variables.
Apart from the spatial variable(s), which are essential for the problems to be considered, the time variable will play a special role. Indeed, many events exhibit gradual or rapid changes as time proceeds. They are said to have an _evolutionary_ character, and an essential part of their modeling is therefore based on _causality_; i.e., the situation at any time is dependent on the past. As far as (mathematical) modeling leads to PDEs, the latter will be called evolutionary, i.e., involve the time \(t\) as a variable. The other type of problems are often referred to as _steady state_. We will give some examples to illustrate this background.

A typical PDE arises if one studies the flow of quantities like density, concentration, heat, etc. If there are no restoring forces, they usually have a tendency to spread out. In particular, one may, e.g., think of particles with higher velocities (or rather energy) colliding with particles with lower velocities. The former are initially rather clustered. The energy will gradually spread out, mainly because the high-velocity particles collide with other ones, thereby transferring some of the energy. This is called _dissipation_. A similar effect can be seen in the following example.

**Example 1.1**: We consider a long tube of cross section \(A\) filled with water and a dye. Initially the dye is concentrated in the middle. Let \(u(x,t)\) denote the concentration or density (mass per unit length) of the dye at position \(x\) and time \(t\); then we see that in a small volume \(A\Delta x\), positioned between \(x-\frac{1}{2}\Delta x\) and \(x+\frac{1}{2}\Delta x\) (Figure 1.1), the total amount of dye equals approximately \(u(x,t)\Delta x\). Now consider a similar neighbouring volume \(A\Delta x\) between \(x+\frac{1}{2}\Delta x\) and \(x+\frac{3}{2}\Delta x\), with a corresponding dye concentration \(u(x+\Delta x,t)\). The mass that flows per unit time through a cross section is called the mass flux.
From the physics of solutions it is known that the dye will move from the volume with higher concentration to one with lower concentration such that the mass flux \(f\) between the respective volumes is proportional to the difference in concentration between both volumes and is thus given by

\[f\left(x+\frac{1}{2}\Delta x,t\right)=\alpha\left(u\left(x+\frac{1}{2}\Delta x,t\right)\right)\frac{u(x+\Delta x,t)-u(x,t)}{\Delta x},\]

where \(\alpha\), the diffusion coefficient, usually depends on \(u\). This relation is called Fick's law for mass transport by diffusion, which is the analogue of Fourier's law for heat transport by conduction. As there is a similar flux between the centre volume and its left neighbour, the rate of change of the total amount of mass in the centre volume equals the difference between both fluxes:

\[\frac{\partial}{\partial t}u(x,t)\Delta x=f\left(x+\frac{1}{2}\Delta x,t\right)-f\left(x-\frac{1}{2}\Delta x,t\right).\]

If the diffusion coefficient \(\alpha\) is a constant, we have

\[\frac{\partial}{\partial t}u(x,t)=\alpha\frac{u(x+\Delta x,t)-2u(x,t)+u(x-\Delta x,t)}{\Delta x^{2}}. \tag{$*$}\]

By taking the limit for small volumes (i.e., \(\Delta x\to 0\)), we find

\[\frac{\partial}{\partial t}u(x,t)=\alpha\frac{\partial^{2}}{\partial x^{2}}u(x,t),\]

which is called the one-dimensional _diffusion equation_. As heat conduction satisfies the same equation, it is also called the _heat equation_ if \(u\) denotes temperature. \(\Box\)

Another kind of PDE occurs in the transport of particles. Here a flow typically has a dominant direction; mutual collision of particles (which is felt globally as a kind of internal friction, or viscosity) is neglected.

**Example 1.2**: Consider a road with heavy traffic moving in one direction, say the \(x\) direction (Figure 1.2). Let the number of cars at time \(t\) on a stretch \([x,\,x+\Delta x]\) be denoted by \(\Delta N(x,\,t)\).
Furthermore, let the number of cars passing a point \(x\) per time period \(\Delta t\) be given by \(f(x,\,t)\Delta t\). In that period the number of cars \(\Delta N(x,\,t\,+\,\Delta t)\) can only be changed by a difference between inflow at \(x\) and outflow at \(x\,+\,\Delta x\); i.e., \[\Delta N(x,\,t\,+\,\Delta t)=\Delta N(x,\,t)-\big{(}f(x+\Delta x,\,t)-f(x,\,t )\big{)}\Delta t.\] Rather than the number of cars \(\Delta N\) per interval of length \(\Delta x\), it is convenient to consider a _car density_\(n(x,\,t)\), which is defined by \[\Delta N(x,\,t)=n(x,\,t)\Delta x.\] Hence we obtain the relation \[\frac{n(x,\,t\,+\,\Delta t)-n(x,\,t)}{\Delta t}=-\frac{f(x+\Delta x,\,t)-f(x, \,t)}{\Delta x}.\] Assuming sufficient smoothness (which implies that we have to allow for fractions of cars \(\ldots\)), this leads in the limit of \(\Delta t,\,\Delta x\,\to\,0\) to \[\frac{\partial n}{\partial t}+\frac{\partial f}{\partial x}=0,\] which takes the form of a _conservation law_. We may recognize \(f\) again as a flux. If this flux only depends on the local car density, i.e., \(f=f(n)\), and \(f\) is sufficiently smooth, we obtain \[\frac{\partial n}{\partial t}+f^{\prime}(n)\frac{\partial n}{\partial x}=0,\] also known as the _transport equation_. \(\Box\) An important class of problems arises from classical mechanics, i.e., Newtonian systems. **Example 1.3**: Consider a chain consisting of elements, each with mass \(m\), and springs, with spring constant \(\beta>0\) and length \(\Delta x\); see Figure 1.3. Denote the elements by \(V_{1},\,V_{2},\,\ldots\) with position of the masses \(x=u_{1},\,u_{2},\,\ldots\). Assuming linear springs, the force necessary to increase the original length \(\Delta x\) of the spring of element \(V_{i}\) by an amount \(\delta_{i}=u_{i}-u_{i-1}-\Delta x\) is equal to \(F_{i}=\beta\delta_{i}\). Apart from the endpoints, all masses are free to move in the \(x\) direction, their inertia being balanced by the reaction forces of the springs. 
Noting that each element \(V_{i}\) (except for the endpoints) experiences a spring force from the neighbouring \(i\)th and (\(i+1\))th springs, we have from Newton's law for the \(i\)th element that

\[m\frac{\mathrm{d}^{2}u_{i}}{\mathrm{d}t^{2}}=F_{i+1}-F_{i}=\beta(u_{i+1}-2u_{i}+u_{i-1}),\qquad i=1,2,\ldots. \tag{$*$}\]

**Figure 1.2:** _Sketch of traffic flow._

If the chain elements increase in number, while the springs and masses decrease in size, it is natural and indeed more convenient not to distinguish the individual elements, but to blend the discrete description of (\(*\)) into a continuous analogue. The small masses are conveniently described by a density \(\rho\) such that \(m=\rho\,\Delta x\), while the large spring constants are best described by a stiffness \(\sigma=\beta\,\Delta x\). Then we obtain from (\(*\)) for the position function \(u(x,t)\) the PDE

\[\frac{\partial^{2}u}{\partial t^{2}}=\frac{\sigma}{\rho}\,\frac{\partial^{2}u}{\partial x^{2}}. \tag{$\dagger$}\]

As solutions of this equation are typically wave like, it is known as the _wave equation_, with a wave velocity equal to \(\sqrt{\sigma/\rho}\). In our example it describes longitudinal waves along the suspended chain of masses. In the context of pressure-density perturbations of a compressible fluid like air, the equation describes one-dimensional sound waves, e.g., as they occur in organ pipes. In that case the air stiffness is equal to \(\sigma=\gamma p\), where \(\gamma=1.4\) is a gas constant and \(p\) is the atmospheric pressure (see Section 6.8.2).

In the following example we mention the analogue in electrical circuits of the motion of coupled spring-dashpot elements.

**Example 1.4**: The time-behaviour of electric currents in a network may be described by the variables potential \(V\), current \(I\), and charge \(Q\).
If the network is made of simple wires connecting isolated nodes, resistances, capacities, and coils, and the frequencies are low, it may be modeled (a posteriori confirmed by analysis of the Maxwell equations) one dimensionally by a series of elements with the material properties resistance \(R\), capacitance \(C\), and inductance \(L\). Such a model is called an electrical circuit. If the frequencies are high, such that the wavelength is comparable with the length of the conductors, we have to be more precise. As the signal cannot change instantaneously at all locations, it propagates as a wave of voltage and current along the line. In such a case we cannot neglect the resistance and inductance properties of the wires. By considering the wires as being built up from a series of (infinitesimally) small elements, we can model the system by what is called a transmission line, leading to PDEs in time and space.

In or across each element we have the following relations. The current is defined as the change of charge in time, \(I=\frac{\mathrm{d}}{\mathrm{d}t}Q\). The capacitance of a pair of conductors is given by \(C=Q/V\), where \(V\) is the potential difference and \(Q\) is the charge difference between the conductors (Coulomb's law). The resistance between two points is given by \(R=V/I\), where \(V\) is the potential difference between these points and \(I\) is the corresponding current (Ohm's law). A changing electric current in a coil with inductance \(L\) induces a counteracting potential, given by \(V=-L\frac{\mathrm{d}}{\mathrm{d}t}I\) (Faraday's law). At a junction no charge can accumulate, and we have the condition \(\sum I=0\), while around a loop the summed potential vanishes, \(\sum V=0\) (Kirchhoff's laws). With these building blocks we can construct transmission line models.

**Figure 1.3:** _Chain of coupled springs._
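Before passing to the telegraph equation, note that the discrete chain of Example 1.3 (Figure 1.3) can be simulated directly, prior to any continuum limit. The sketch below is not from the text (the parameter values are illustrative): it integrates \(m\,\ddot{u}_{i}=\beta(u_{i+1}-2u_{i}+u_{i-1})\) with a leapfrog scheme and checks that the total energy stays nearly constant.

```python
import numpy as np

# Sketch (illustrative parameters, not from the text): the discrete chain
# m * u_i'' = beta * (u_{i+1} - 2 u_i + u_{i-1}) of Example 1.3,
# integrated with the leapfrog (velocity Verlet) scheme; endpoints stay fixed.
m, beta = 1.0, 100.0
N, dt, steps = 200, 0.005, 4000

u = np.exp(-0.01 * (np.arange(N) - N / 2.0) ** 2)  # initial displacement pulse
v = np.zeros(N)                                     # masses start at rest

def accel(u):
    a = np.zeros_like(u)
    a[1:-1] = (beta / m) * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return a                                        # endpoint accelerations are zero

def energy(u, v):
    # kinetic energy of the masses plus elastic energy of the springs
    return 0.5 * m * np.sum(v**2) + 0.5 * beta * np.sum(np.diff(u) ** 2)

E0 = energy(u, v)
for _ in range(steps):
    v += 0.5 * dt * accel(u)
    u += dt * v
    v += 0.5 * dt * accel(u)

# the symplectic integrator keeps the total energy nearly constant
print(abs(energy(u, v) / E0 - 1.0))
```

With \(m=\rho\,\Delta x\) and \(\beta=\sigma/\Delta x\), the initial pulse propagates at a speed close to \(\sqrt{\sigma/\rho}\), consistent with the wave equation (\(\dagger\)).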
A famous example is the _telegraph equation_, where an infinitesimal piece of telegraph wire is modeled (Figure 1.4) as an electrical circuit consisting of a resistance \(R\,\Delta x\) and an inductance \(L\,\Delta x\), while it is connected to the ground via a resistance \((G\,\Delta x)^{-1}\) and a capacitance \(C\,\Delta x\). Let \(i(x,t)\) and \(u(x,t)\) denote the current and voltage through the wire at position \(x\) and time \(t\). The change of voltage across the piece of wire is now given by

\[u(x+\Delta x,t)-u(x,t)=\left[-i\,R\,\Delta x-\frac{\partial i}{\partial t}L\,\Delta x\right]_{x+\Delta x}.\]

The amount of current that disappears via the ground is

\[i(x+\Delta x,t)-i(x,t)=\left[-uG\,\Delta x-\frac{\partial u}{\partial t}C\,\Delta x\right]_{x}.\]

By taking the limit \(\Delta x\to 0\), we get

\[\frac{\partial u}{\partial x}=-Ri-L\frac{\partial i}{\partial t},\quad\frac{\partial i}{\partial x}=-Gu-C\frac{\partial u}{\partial t}.\]

By eliminating \(i\), we may combine these equations into the telegraph equation for \(u\), i.e.,

\[\frac{\partial^{2}u}{\partial x^{2}}=LC\frac{\partial^{2}u}{\partial t^{2}}+(LG+RC)\frac{\partial u}{\partial t}+RGu.\]

**Example 1.5**: Consider the following crowd of \(N^{2}\) very accommodating people (Figure 1.5), for convenience ordered in a square of size \(L\times L\), while each person, labelled by (\(i\), \(j\)), is positioned at \(x_{i}=ih\), \(y_{j}=jh\), with \(h=L/N\). Each person has an opinion given by the (scalar) number \(p_{ij}\) and can only communicate with his or her immediate neighbours. Assume that each person tries to minimize any conflict with his or her neighbours and is willing to take an opinion that is the average of their opinions. So we have

\[p_{ij}=\frac{1}{4}\big{(}p_{i+1,j}+p_{i-1,j}+p_{i,j+1}+p_{i,j-1}\big{)}. \tag{$*$}\]

Only at the borders of the square are the individuals provided with information such that \(p\) is fixed.

**Figure 1.4:** _A transmission line model of a telegraph wire._
If the number of people becomes so large that we may take the limit \(N\to\infty\) (i.e., \(h\to 0\)) and \(p\) becomes a continuous function of \((x,y)\), (\(\ast\)) becomes \[p(x,y)=\frac{1}{4}(p(x+h,y)+p(x-h,y)+p(x,y+h)+p(x,y-h)).\] This may be recast into \[\big{[}p(x+h,y)-2p(x,y)+p(x-h,y)\big{]}+\big{[}p(x,y+h)-2p(x,y)+p(x,y-h)\big{]}=0.\] If this is true for any \(h\), we may divide by \(h^{2}\), and the equation becomes in the limit \[\frac{\partial^{2}p}{\partial x^{2}}+\frac{\partial^{2}p}{\partial y^{2}}=0.\] This equation is called the _Laplace equation_ and describes phenomena where, in some sense, information is exchanged in all directions until equilibrium is achieved. From the above sociological example it is not difficult to appreciate that discontinuities and sharp gradients are smoothed out, while extremes only occur at the boundary. The best-known problem described by this equation is the stationary distribution of the temperature in a heat-conducting medium. \(\Box\) ### 1.2 Nomenclature In the previous section we met a number of equations with derivatives with respect to more than one variable. In general, such equations are called _partial differential equations_. Let \(x\) and \(t\) be two independent variables and let \(u(x,t)\) denote a quantity depending on \(x\) and \(t\). Furthermore, let \[t\in[0,T],\quad 0\leq T\leq\infty,\quad x\in[a,b]\subset\mathbb{R}. \tag{1.1}\] For an integer \(n\) a general form for a scalar PDE (in two independent variables) reads \[F\left(\frac{\partial^{n}u}{\partial t^{n}}\,,\,\frac{\partial^{n}u}{\partial t \partial x^{n-1}},\ldots,\frac{\partial^{n}u}{\partial x^{n}}\,,\,\frac{ \partial^{n-1}u}{\partial t^{n-1}},\ldots,\,\frac{\partial^{n-1}u}{\partial x ^{n-1}},\ldots,\,\frac{\partial u}{\partial t},\frac{\partial u}{\partial x},u, \,x,t\right)=0. 
\tag{1.2}\]

The order of the highest derivative is called the _order_ of the PDE; not all partial derivatives (except the highest of at least one variable) need to be present.

**Figure 1.5:** _An array of accommodating individuals._

The form (1.2) is an _implicit_ formulation, i.e., the highest-order derivative(s), the _principal part_, do(es) not appear explicitly. If the latter is the case, we call it an _explicit_ PDE. The generalization to more than two independent variables is obvious.

**Example 1.6**: Some important examples of PDEs are as follows:

1. \(\frac{\partial u}{\partial t}+c\left(1+\frac{3}{2}u\right)\frac{\partial u}{\partial x}+\frac{1}{5}\varepsilon h^{2}\frac{\partial^{3}u}{\partial x^{3}}=0\) (_Korteweg-de Vries equation_). This is a third order PDE.
2. \(\frac{\partial u}{\partial t}+\frac{\partial}{\partial x}\,f(u)=0\) (_nonlinear transport equation_). If \(f\) is differentiable, we see that this is a first order PDE in \(u\).
3. \(\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}=\varepsilon\frac{\partial^{2}u}{\partial x^{2}}\) (_Burgers' equation_). If \(\varepsilon=0\), this may be referred to as the inviscid Burgers' equation, which is a special case of the transport equation.
4. \(\frac{\partial^{2}u}{\partial t^{2}}-c^{2}\frac{\partial^{2}u}{\partial x^{2}}-\frac{1}{3}h^{2}\frac{\partial^{4}u}{\partial x^{2}\partial t^{2}}=0\) (_linearized Boussinesq equation_).
5. \(EI\frac{\partial^{4}u}{\partial x^{4}}-T\frac{\partial^{2}u}{\partial x^{2}}+m\frac{\partial^{2}u}{\partial t^{2}}=0\) (_vibrating beam equation_).
6. \(\frac{\partial u}{\partial y}\frac{\partial^{2}u}{\partial y\partial x}-\frac{\partial u}{\partial x}\frac{\partial^{2}u}{\partial y^{2}}=\nu\frac{\partial^{3}u}{\partial y^{3}}\) (_Prandtl's boundary layer equation_). \(\Box\)

In quite a few cases the order can only be deduced after some (trivial) manipulation.
**Example 1.7**: \[\frac{\partial u}{\partial t}-\frac{\partial}{\partial x}\left(D(u)\frac{ \partial u}{\partial x}\right)=f(x)\qquad\mbox{(nonlinear diffusion equation).}\] It is clear that this PDE is second order. There is no analytical, numerical, or practical need to rework this and have \(\frac{\partial^{2}}{\partial x^{2}}u\) appear explicitly. \(\Box\) Usually, the variables are space and/or time. Although the variables in (1.2) are generic, we shall use the symbol \(t\) to indicate the _time_ variable in general. The variable \(x\) will refer to _space_. There are major differences between problems where time does and does not play a role. If the time is not explicitly there, the problem is referred to as a _steady state problem_. If the PDE possesses solutions that evolve explicitly with \(t\), we call it an _evolutionary problem_; i.e., there is _causality_. Most of the theory will be devoted to problems in one space variable. However, occasionally we shall encounter more than one such space variable. Fortunately, problems in more such variables often have many analogues of the one-dimensional case. We shall indicate vectors by boldface characters. So in higher-dimensional space the space variable is denoted by \(\boldsymbol{x}\), or by \((x,\,y,\,z)^{T}\). The PDE can still be scalar. We have obvious analogues for vector-dependent variables of the foregoing. **Example 1.8**: A few other examples are as follows: 1. \(\frac{\partial u}{\partial t}-\alpha\left(\frac{\partial^{2}u}{\partial x^{2}} +\frac{\partial^{2}u}{\partial y^{2}}+\frac{\partial^{2}u}{\partial z^{2}} \right)=0\qquad\) (_heat equation_ in three dimensions). We prefer to write this as \(\frac{\partial}{\partial t}u-\alpha\nabla^{2}u=0\). \(\nabla^{2}\) is referred to as the _Laplace operator_. 2. \(\frac{\partial^{2}u}{\partial t^{2}}-c^{2}\nabla^{2}u=0\qquad\) (_wave equation_ in three dimensions). 3. \(\nabla^{2}u+k^{2}u=0\qquad\) (_Helmholtz or reduced wave equation_). 4. 
\((1-M^{2})\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}+\frac{\partial^{2}u}{\partial z^{2}}=0\) (_equation for small perturbations in steady subsonic (\(M^{2}<1\)) or supersonic (\(M^{2}>1\)) flow_). \(\Box\)

Sometimes one also denotes a partial derivative with respect to a certain variable by an index:

\[u_{t}:=\frac{\partial u}{\partial t},\qquad u_{tx}:=\frac{\partial^{2}u}{\partial t\partial x}. \tag{1.3}\]

If we can write (1.2) as a linear combination of \(u\) and its derivatives with respect to \(x\) and \(t\), with coefficients depending only on \(x\) and \(t\), the PDE is called _linear_. Moreover, a linear PDE is called _homogeneous_ if it contains no term depending on \(x\) and/or \(t\) only. If the PDE is a linear combination of derivatives but the coefficients of the highest, say \(n\)th, derivatives depend on \((n-1)\)th order derivatives at most, then we call it _quasi-linear_ [29].

For any differential equation we have to prescribe certain initial conditions and boundary conditions for the time and space variable(s), respectively. In evolutionary problems they often both appear as initial boundary conditions. We shall encounter various types and combinations in later chapters.

We finally remark that we may look for solutions that satisfy the PDE in a weak sense. In particular, the derivatives may not exist everywhere on the domain of interest. Again we refer to later chapters for further details.

### 1.3 Difference Equations

Initially, the actual form of the equations we derived in the examples in Section 1.1 was that of a difference equation. Like a PDE, we may define a partial difference equation as any relation between values of \(u(x,t)\), where \((x,t)\in\mathcal{F}\subset[a,b]\times[0,T)\), \(\mathcal{F}\) being a finite set of points of the domain \([a,b]\times[0,T)\). We shall encounter difference equations when solving a PDE numerically, so they should approximate the PDE in some well-defined way.
The simplest way to describe the latter is by defining a _scheme_, i.e., a discrete analogue of the (continuous) PDE. Since we shall mainly deal with finite difference approximations in this book, we perceive a scheme as the result of replacing the differentials by finite differences. To this end we have to indicate some (generic) points in the domain \([a,b]\times[0,T)\) at which the function values \(u(x,t)\) are taken. The latter set of points is called a _stencil_. We shall clarify this with some examples.

**Example 1.9**:

1. Consider Example 1.1 again. If we replace \(\frac{\partial}{\partial t}u(x,t)\) in equation (\(\ast\)) by a straightforward discretisation, then we obtain the scheme \[\frac{u(x,t+\Delta t)-u(x,t)}{\Delta t}=\alpha\frac{u(x+\Delta x,t)-2u(x,t)+u(x-\Delta x,t)}{\Delta x^{2}},\] and the stencil is the set of bullets (\(\bullet\)) in Figure 1.6.
2. Consider the wave equation (\(\dagger\)) of Example 1.3. A discrete version may be found to be \[\frac{u(x,t+\Delta t)-2u(x,t)+u(x,t-\Delta t)}{\Delta t^{2}}=\frac{\sigma}{\rho}\frac{u(x+\Delta x,t)-2u(x,t)+u(x-\Delta x,t)}{\Delta x^{2}}.\] The stencil is given in Figure 1.7. \(\Box\)

Given the special role of time and the implication it has for the actual computation, which should be based on the causality of the problem, we may distinguish schemes according to the number of time levels involved. If \(k+1\) such time levels are involved, we call the scheme a \(k\)_-step scheme_. If the scheme involves only spatial differences at earlier time levels, it is called _explicit_; otherwise it is called _implicit_.

**Example 1.10**:

1. The schemes in Example 1.9 are both explicit, the first being a one-step and the second a two-step scheme.
2.
We could also approximate the \(u_{xx}\) term in the heat equation at time level \(t+\Delta t\) and obtain the scheme \[\frac{u(x,t+\Delta t)-u(x,t)}{\Delta t}\\ =\alpha\frac{u(x+\Delta x,t+\Delta t)-2u(x,t+\Delta t)+u(x-\Delta x,t+\Delta t)}{\Delta x^{2}}.\] Figure 1.6: _Stencil of Example 1.9_(i). Figure 1.7: _Stencil of Example 1.9_(ii). #### Exercises 1.4. Determine the order of the PDE (where \(a\) and \(b\) are parameters and \(c\) is a given function) \[\frac{\partial u}{\partial t}=a\nabla^{2}u+b\frac{\partial u}{\partial x}+c(u).\] 1.5. Verify that the solution \(u=u(x,t)\) of the transport equation (cf. Example 1.2 or 1.6(ii)) \[\frac{\partial u}{\partial t}+\frac{\partial}{\partial x}f(u)=0,\qquad u(x,0) =v(x),\] for sufficiently smooth \(f\) is implicitly given by \[u=v\big{(}x-f^{\prime}(u)t\big{)}.\] ## Clemson University TigerPrints All Theses 5-2015 Determination of Chaos in Different Dynamical Systems Sherli Koshy-Chenthittayil _Clemson University_ ## Recommended Citation Koshy-Chenthittayil, Sherli, "Determination of Chaos in Different Dynamical Systems" (2015). _All Theses_. 2115. [https://tigerprints.clemson.edu/all_theses/2115](https://tigerprints.clemson.edu/all_theses/2115) Determination of Chaos in Different Dynamical Systems A Thesis Presented to the Graduate School of Clemson University In Partial Fulfillment of the Requirements for the Degree Master of Science Mathematics by Sherli Koshy-Chenthittayil May 2015 Accepted by: Dr. Elena Dimitrova, Committee Chair Dr. Eleanor Jenkins Dr. Brian Dean ## Abstract It has been widely observed that many deterministic dynamical systems exhibit chaos for some values of their parameters. There are many ways to measure chaos. One popular way uses Lyapunov exponents.
The objective of this thesis is to find parameter values for which a given system exhibits chaos, as determined via Lyapunov exponents. The paper by Wolf et al. [2] proposed a frequently used method of calculating such exponents using the Gram-Schmidt orthonormalization process. The work in this thesis centered on coding and verifying the algorithm in [2], as well as using the code to investigate three biological models [7], [6], and [1] to find parameters/initial conditions that give chaos. Finally, it considers as future work the choice of appropriate sampling algorithms to better understand the parameter space for which we may obtain chaos. ## Acknowledgments I would like to first acknowledge God Almighty for his grace and kindness. I would also like to thank my family for their support. Next, I am really grateful to Dr. Elena Dimitrova for helping me understand chaos theory, Dr. Lea Jenkins for all the help with the Lyapunov calculator code, and Dr. Brian Dean for help with the sampling algorithms and subsequent coding. A special thank you to Dr. Oleg Yordanov for all his help in rescaling the systems. Thank you all so much. ## Table of Contents Title Page Abstract Acknowledgments List of Tables List of Figures 1 Introduction 1.1 Background information 1.2 Behavior of the dynamical systems 2 Evaluation of Lyapunov Spectrum 2.1 Definitions 2.2 Procedure for calculation of Lyapunov Exponents 3 Systems under consideration 3.1 Kot System 3.2 Kravchenko System 3.3 System based on Becks' paper [1] 4 Metropolis-Hastings Algorithm 4.1 The Algorithm 5 Conclusions Appendices A MATLAB code for determining Lyapunov Spectrum B MATLAB code for the mathematical models Bibliography List of Tables * 2.1 Lyapunov Spectrum in [2] vs Lyapunov Spectrum obtained through the MATLAB code * 3.1 Values of parameters for microbial model presented in Kot et al. [7] * 3.2 Table of values for model equations (3.11) - (3.14) used in the numerical simulations.
List of Figures * 1.1 Lorenz attractor * 3.1 Manifold plot of forced model in [7] when \(\omega=\frac{5\pi}{6}\) and \(\epsilon=0.6\) * 3.2 3-D plot depicting chaos and non-chaos with changes in \(\epsilon\) and \(\omega\) * 3.3 3-D plot depicting chaos and non-chaos with changes in \(D\) and \(S_{i}\) * 3.4 Time series plot of the solutions to the system in [6] * 3.5 3-D plot of the Lyapunov exponent when \(X\) and \(Z\) were varied. (The purple denotes the z-plane at 0) * 3.6 3-D plot of the Lyapunov exponent when \(K_{SX}\) and \(Z\) were varied * 3.7 3-D plot depicting chaos and non-chaos with changes in \(D\) and \(N\) * 3.8 3-D plot depicting chaos and non-chaos with changes in \(D\) and \(P\) * 4.1 Parallel Coordinates Plot of \(\epsilon,\omega\), initial values of the variables \(x,y,z\) of the Forced System in [7] and the Maximum Lyapunov Exponent * 4.2 Parallel Coordinates Plot of \(\rho,\beta,\sigma\), initial values of the variables \(x,y,z\) of the Lorenz system and the Maximum Lyapunov Exponent ## Chapter 1 Introduction It is an indisputable fact that chaos exists not just in theory. The objective of this thesis is to find parameter values for which a given system exhibits chaos, as determined via Lyapunov exponents. Before we delve into chaos, let us go through the background needed for it. ### 1.1 Background information * **Dynamical systems** A dynamical system consists of a set of possible states, together with a rule determining the present state based on the previous state [3]. For example, consider a simple dynamical system given by \(x_{n+1}=2x_{n}\). Here the variable \(n\) stands for time and \(x_{n}\) denotes the population at time \(n\). * **Deterministic Dynamical Systems** A deterministic dynamical system is one in which the present state is determined **uniquely** from the past states. In our previous example, the present population is completely determined by the previous one.
If randomness occurs in the prediction of the new state, then the system is no longer deterministic but a _random_ or _stochastic_ process. An example of such a process is flipping a fair coin to determine if it will rain or not. A coin has no predictive power over rain. **Types of Dynamical Systems** * _Discrete-time Dynamical Systems_: If the rule is applied at discrete times, the system is called a discrete-time dynamical system. Our example is a discrete system. * _Continuous-time Dynamical Systems_: A continuous-time system is essentially the limit of a discrete system with smaller and smaller updating times. In this case, the governing rule becomes a set of differential equations. Instead of expressing the current state as a function of the previous state, the differential equation expresses the **rate of change** of the current state as a function of the previous state [3]. We will be considering continuous dynamical systems with ordinary differential equations. An ordinary differential equation is one in which the solutions are functions of a single independent variable. In our case the independent variable will be time, denoted by \(t\). Such equations come in two types: * An _autonomous differential equation_ is one in which \(t\) does not appear explicitly. An example of this would be the equation of a pendulum given by: \[\frac{dx}{dt}=-\sin x.\]* A _nonautonomous differential equation_ is one where \(t\) explicitly appears. The equation of the forced damped pendulum: \[(1+c)\frac{dx}{dt}=-\sin x+\rho\sin t\] is an example of such an equation. Any nonautonomous system can be transformed into an autonomous system by introducing a new variable \(y\) and setting it equal to \(t\). This conversion requires an additional differential equation.
For the above example the autonomous version would be: \[(1+c)\frac{dx}{dt} =-\sin x+\rho\sin y\] \[\frac{dy}{dt} =1\] ### 1.2 Behavior of the dynamical systems We shall describe the behavior of the dynamical systems in terms of equilibrium solutions, limit cycles and chaos. * **Equilibrium solutions:** A constant solution of the autonomous differential equation \(\frac{dx}{dt}=f(x)\) is called an _equilibrium_ of the equation[3]. In other words, it is a solution which satisfies \(f(x)=0\). The solutions either converge to the equilibrium or diverge away from it. * **Periodic orbits:** If there exists a \(T>0\) such that \(F(t+T,v_{0})=F(t,v_{0}),\forall t\) and if \(v_{0}\) is not an equilibrium, then the solution \(F(t,v_{0})\) is called a _periodic orbit_ or _cycle_. Here \(F(t,v_{0})\) denotes the value of the solution at time \(t\) with initial value \(v_{0}\). Also the periodic orbit traces out a simple closed curve. * **Chaotic orbit:** An orbit that exhibits an unstable behavior that is not itself fixed or periodic is called a _chaotic orbit_. At any point in such an orbit, there are points arbitrarily near that will move away from the point during further iteration. In terms of solutions, it means they are very sensitive to small perturbations in the initial conditions and almost all of them do not appear to be either periodic or converge to equilibrium solutions. For autonomous differential equations on the real line, bounded solutions must converge to an equilibrium. For planar autonomous systems, solutions that are bounded may instead converge to periodic orbits or cycles. In this case solutions cannot be chaotic. There is no such restriction in three-dimensional cases. These results follow from the Poincare-Bendixson Theorem. 
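The nonautonomous-to-autonomous conversion described above can be checked numerically. The sketch below (Python; the parameter values \(c=0.05\) and \(\rho=2.5\) and all names are our own choices) integrates the autonomous form of the forced damped pendulum with simple Euler steps and confirms that the auxiliary variable \(y\) does nothing more than track the time \(t\).

```python
import math

# Autonomous form of the forced damped pendulum:
#   (1 + c) x' = -sin x + rho * sin y,   y' = 1,
# so y plays the role of the explicit time t (a hypothetical sketch).
def rhs(state, c=0.05, rho=2.5):
    x, y = state
    return ((-math.sin(x) + rho * math.sin(y)) / (1 + c), 1.0)

def euler(state, dt, steps):
    for _ in range(steps):
        dx, dy = rhs(state)
        state = (state[0] + dt * dx, state[1] + dt * dy)
    return state

# integrate from (x, y) = (0, 0) for total time 1.0
x_end, y_end = euler((0.0, 0.0), dt=0.01, steps=100)
```

After the integration, `y_end` equals the elapsed time, which is exactly the point of the conversion.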
A classic three-dimensional system which displays stable equilibria and chaotic behavior for different values of a parameter is the Lorenz model given below: \[\dot{x} =\sigma(y-x)\] \[\dot{y} =x(\rho-z)-y\] \[\dot{z} =xy-\beta z.\] For \(\sigma=10,\beta=8/3\), Lorenz found that the system behaved chaotically for \(\rho\geq 24.74\). The chaotic attractor is shown below. Figure 1.1: Lorenz attractor. This figure depicts the orbit of a single set of initial conditions. This is a numerically observed attractor since the choice of almost any initial condition in a neighborhood of the chosen set results in a similar figure [3]. A chaotic attractor is dissipative (volume-decreasing) and locally unstable (orbits do not settle down to stationary, periodic, or quasiperiodic motion), yet stable at large scale (i.e., orbits remain trapped in the strange attractor). In the next chapter, we discuss ways to measure chaos using the Lyapunov spectrum. In the third chapter, we go on to evaluate the spectrum for three continuous-time dynamical systems based on biological populations. The final chapter considers as future work the choice of appropriate sampling algorithms to better understand the parameter space for which we may obtain chaos. ## Chapter 2 Evaluation of Lyapunov Spectrum ### 2.1 Definitions A usual measure of chaos is the Lyapunov spectrum of the system. If at least one of the Lyapunov exponents is positive, then the bounded aperiodic orbit is said to be chaotic [3]. As the systems investigated in this thesis are continuous, the definition of the exponent will be given in terms of such a dynamical system [2]. Consider a continuous dynamical system in an \(n\)-dimensional phase space. We observe the long-term behavior of an _infinitesimal n-sphere_ (i.e., a sphere of very small radius) of initial conditions. Due to the locally deforming nature of the flow, the sphere eventually becomes an _n-ellipsoid_.
The Lyapunov exponent is calculated for each dimension, and it depends on the length of the corresponding principal axis of the ellipsoid. It is given by: \[\lambda_{i}=\lim_{t\rightarrow\infty}\frac{1}{t}\log_{2}\frac{p_{i}(t)}{p_{i}(0)} \tag{2.1}\] where \(p_{i}(t)\) denotes the length of the \(i\)th ellipsoidal principal axis at time \(t\) and \(p_{i}(0)\) denotes its length at time \(t=0\). The exponents are generally given in decreasing order, i.e., \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}\). The exponents give us an idea of whether a specific direction in the phase space is contracting or expanding. An expanding direction signifies a positive exponent and a contracting direction a negative one. As the orientation of the ellipsoid varies continuously, we cannot associate a fixed direction with a given exponent. For a dissipative dynamical system, we will have at least one negative Lyapunov exponent. If an exponent is positive, we would not expect a bounded attractor unless some folding of widely separated trajectories takes place. So for that particular direction, the system goes through repeated stretching and folding processes. As a result, we cannot predict the long-term behavior of the system given the initial conditions, which is the very definition of chaos. For a one-dimensional system, the Lyapunov spectrum clearly consists of one value. For a discrete dynamical system, it is positive for a chaotic regime, zero for a marginally stable orbit, and negative for a periodic orbit [2]. For a continuous one-dimensional dynamical system, the Lyapunov exponent will always be negative. For a continuous three-dimensional system which is dissipative (i.e., volume-decreasing), the possible spectra are as follows: \((+,0,-)\) denotes a strange attractor, \((0,0,-)\) denotes a two-torus, \((0,-,-)\) a limit cycle, and finally \((-,-,-)\) a fixed point. This can be extended to n-dimensions.
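The sign-pattern classification for a dissipative three-dimensional flow can be encoded in a small helper. This is a hypothetical sketch: the function name and the tolerance used to decide when a numerically computed exponent counts as zero are our own choices.

```python
def classify_spectrum(lams, tol=1e-2):
    """Classify a dissipative 3-D flow by the signs of its Lyapunov spectrum."""
    # sort into decreasing order and map each exponent to '+', '0', or '-'
    signs = tuple('+' if l > tol else ('0' if l > -tol else '-')
                  for l in sorted(lams, reverse=True))
    return {('+', '0', '-'): 'strange attractor',
            ('0', '0', '-'): 'two-torus',
            ('0', '-', '-'): 'limit cycle',
            ('-', '-', '-'): 'fixed point'}.get(signs, 'unclassified')

label = classify_spectrum([2.16, 0.00, -32.4])   # Lorenz row of Table 2.1
```

Here `label` comes out as `'strange attractor'`, matching the \((+,0,-)\) case described above.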
The magnitude of the Lyapunov exponent quantifies the timescale of the attractor's dynamics; i.e., it tells us the number of orbits after which we can no longer predict the future behavior of an initial condition [2]. ### 2.2 Procedure for calculation of Lyapunov Exponents The definition of Lyapunov exponents requires us to define principal axes with initial conditions. These axes need to evolve with the equations of the system. The issue is that we cannot guarantee the condition of small separations for times on the order of the hundreds of orbital periods needed for convergence in a chaotic system. To overcome this, the authors of [2] use a phase-space together with a tangent-space approach. A _fiducial trajectory_ (the center of the sphere) is obtained by the action of the non-linear system on some initial conditions. To obtain the trajectories of points on the surface of the sphere, we consider the action of the _linearized_ system on points very close to the fiducial trajectory. In fact, the principal axes are defined by the evolution, via the linearized equations, of an initially orthonormal vector frame anchored to the fiducial trajectory [2]. To define the trajectories of the points on the sphere we need the concept of a _linearized system_ or _variational equations_. Consider a dynamical system of the form \(\vec{x}^{\prime}=\vec{F}(\vec{x})\), where \(\vec{x}=(x_{1},x_{2},\ldots,x_{n})\), \(\vec{F}=(f_{1},f_{2},\ldots,f_{n})\). It is easy to generate the state-space trajectory \(\phi(\vec{x}_{0})\) by using any numerical ODE solver. But what happens if there are small perturbations in \(\vec{x}\)? The formal way to describe how these perturbations react is with partial derivatives. For instance, \(\frac{\partial f_{1}}{\partial x_{2}}\) is how much the slope of the first variable (\(f_{1}\)) changes if you perturb the second variable \(x_{2}\).
Consider the Lorenz system [5]: \[\dot{x} =\sigma(y-x)\] \[\dot{y} =x(\rho-z)-y\] \[\dot{z} =xy-\beta z\] To set up the linearized system corresponding to the above equations we need the Jacobian of the right-hand side, which is given by: \[J=\begin{bmatrix}\frac{\partial f_{1}}{\partial x}&\frac{\partial f_{1}}{ \partial y}&\frac{\partial f_{1}}{\partial z}\\ \frac{\partial f_{2}}{\partial x}&\frac{\partial f_{2}}{\partial y}&\frac{ \partial f_{2}}{\partial z}\\ \frac{\partial f_{3}}{\partial x}&\frac{\partial f_{3}}{\partial y}&\frac{ \partial f_{3}}{\partial z}\end{bmatrix}\] where \(f_{i}\) is the right-hand side of the \(i^{th}\) differential equation. For an \(n\)-dimensional system we would have an \(n\times n\) matrix. For the Lorenz system the Jacobian is \[\begin{bmatrix}-\sigma&\sigma&0\\ \rho-z&-1&-x\\ y&x&-\beta\end{bmatrix}\] To set up the variational equations we need to describe the variations. For this, consider the following matrix: \[[\delta]=\begin{bmatrix}\delta_{x1}&\delta_{y1}&\delta_{z1}\\ \delta_{x2}&\delta_{y2}&\delta_{z2}\\ \delta_{x3}&\delta_{y3}&\delta_{z3}\end{bmatrix}\] where \(\delta_{xi}\) is the component of the \(x\) variation that came from the \(i^{th}\) equation. The column sums of this matrix are the lengths of the \(x\), \(y\), and \(z\) coordinates of the evolved variation. The rows are the coordinates of the vectors into which the original \(x\), \(y\), and \(z\) components of the variation have evolved.
The linearized equations are: \[\begin{bmatrix}\dot{\delta}_{x1}&\dot{\delta}_{y1}&\dot{\delta}_{z1}\\ \dot{\delta}_{x2}&\dot{\delta}_{y2}&\dot{\delta}_{z2}\\ \dot{\delta}_{x3}&\dot{\delta}_{y3}&\dot{\delta}_{z3}\end{bmatrix}=\begin{bmatrix} \frac{\partial f_{1}}{\partial x}&\frac{\partial f_{1}}{\partial y}&\frac{ \partial f_{1}}{\partial z}\\ \frac{\partial f_{2}}{\partial x}&\frac{\partial f_{2}}{\partial y}&\frac{ \partial f_{2}}{\partial z}\\ \frac{\partial f_{3}}{\partial x}&\frac{\partial f_{3}}{\partial y}&\frac{ \partial f_{3}}{\partial z}\end{bmatrix}\begin{bmatrix}\delta_{x1}&\delta_{y1} &\delta_{z1}\\ \delta_{x2}&\delta_{y2}&\delta_{z2}\\ \delta_{x3}&\delta_{y3}&\delta_{z3}\end{bmatrix}\] For the Lorenz system, it would be: \[\begin{bmatrix}\dot{\delta}_{x1}&\dot{\delta}_{y1}&\dot{\delta}_{z1}\\ \dot{\delta}_{x2}&\dot{\delta}_{y2}&\dot{\delta}_{z2}\\ \dot{\delta}_{x3}&\dot{\delta}_{y3}&\dot{\delta}_{z3}\end{bmatrix}=\begin{bmatrix} -\sigma&\sigma&0\\ \rho-z&-1&-x\\ y&x&-\beta\end{bmatrix}\begin{bmatrix}\delta_{x1}&\delta_{y1}&\delta_{z1}\\ \delta_{x2}&\delta_{y2}&\delta_{z2}\\ \delta_{x3}&\delta_{y3}&\delta_{z3}\end{bmatrix}\] So in addition to the original system of \(n\) non-linear equations we have an additional \(n^{2}\) linearized equations; the system now has \(n+n^{2}=n(n+1)\) equations. To implement the procedure mentioned initially for creating the fiducial trajectory, we solve the new system of \(n(n+1)\) differential equations with any numerical ODE algorithm, e.g., fourth-order Runge-Kutta, for some initial conditions and a time range \([tstart,tstart+ts]\), where \(tstart\) denotes the initial time and \(ts\) denotes the time step. In a chaotic system, each vector tends to fall along the local direction of most rapid growth. In addition, owing to the finite precision of computer arithmetic, the collapse towards a common direction causes the tangent-space orientation of all axis vectors to become indistinguishable.
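For the Lorenz case, the augmented system of \(n+n^{2}=12\) equations can be sketched as follows. This is Python rather than the MATLAB of the appendices, and the row-by-row flattening convention and all names are ours.

```python
# Right-hand side of the augmented Lorenz system: 3 nonlinear equations for
# (x, y, z) plus 9 linearized equations  [delta'] = J(x, y, z) [delta].
# The 3x3 variation matrix is flattened row by row (convention ours).
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def augmented_rhs(state):
    x, y, z = state[:3]
    delta = [state[3:6], state[6:9], state[9:12]]      # rows of [delta]
    jac = [[-SIGMA, SIGMA, 0.0],
           [RHO - z, -1.0, -x],
           [y, x, -BETA]]
    d_xyz = [SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z]
    d_delta = [[sum(jac[i][k] * delta[k][j] for k in range(3))
                for j in range(3)] for i in range(3)]  # J @ delta
    return d_xyz + [e for row in d_delta for e in row]

# 3 + 9 = 12 equations; the variation starts out as the identity frame.
state0 = [1.0, 1.0, 1.0,
          1.0, 0.0, 0.0,
          0.0, 1.0, 0.0,
          0.0, 0.0, 1.0]
deriv = augmented_rhs(state0)
```

With the identity frame, the variational part of the derivative is just the Jacobian evaluated at \((x,y,z)\), which makes the convention easy to check by hand.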
To overcome this, Wolf et al. [2] use a repeated Gram-Schmidt reorthonormalization (GSR) procedure on the vector frame. Let the linearized equations act on the initial frame of orthonormal vectors to give a set of vectors \(\{v_{1},v_{2},\ldots,v_{n}\}\). In other words, after we solve the system of \(n(n+1)\) equations, we consider the components corresponding to the variational equations. Then GSR provides the following orthonormal set \(\{v^{\prime}_{1},v^{\prime}_{2},\ldots,v^{\prime}_{n}\}\): \[v^{\prime}_{1} =\frac{v_{1}}{\|v_{1}\|},\] \[v^{\prime}_{2} =\frac{v_{2}-\langle v_{2},v^{\prime}_{1}\rangle v^{\prime}_{1}}{ \|v_{2}-\langle v_{2},v^{\prime}_{1}\rangle v^{\prime}_{1}\|}\] \[\vdots\] \[v^{\prime}_{n} =\frac{v_{n}-\langle v_{n},v^{\prime}_{n-1}\rangle v^{\prime}_{n- 1}-\cdots-\langle v_{n},v^{\prime}_{1}\rangle v^{\prime}_{1}}{\|v_{n}-\langle v _{n},v^{\prime}_{n-1}\rangle v^{\prime}_{n-1}-\cdots-\langle v_{n},v^{\prime}_ {1}\rangle v^{\prime}_{1}\|}\] where \(\langle,\rangle\) denotes the Euclidean inner product. The orthonormal set thus obtained serves as the new initial conditions for our linearized system. We then solve the system again with these new initial conditions and a new time range \([tstart,tstart+ts]\), where \(tstart\) has now been advanced to \(tstart+ts\) and \(ts\) denotes the time step. This procedure is repeated for as many time steps as are needed for the exponent estimates to converge. It is seen that GSR never affects the direction of the first vector in the frame, so this vector tends to seek out the most rapidly growing direction in the tangent space [2]. The length of vector \(v_{1}\) is proportional to \(2^{\lambda_{1}t}\), so in this way we can obtain the first Lyapunov exponent \(\lambda_{1}\). By the construction of \(v_{2}^{\prime}\), we are changing the direction of \(v_{2}\); because its direction is being changed, \(v_{2}\) is not free to chase after the most rapidly growing direction.
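The GSR step itself is short enough to write out. The sketch below (our naming; classical Gram-Schmidt with projections taken against the already-orthonormalized vectors) also returns the norms of the orthogonalized components, since the mean growth rates of exactly these norms feed the exponent estimates.

```python
import math

def gram_schmidt(vectors):
    """Orthonormalize `vectors` in order; return (orthonormal frame, norms).

    Each norm is the length of the component of a vector orthogonal to all
    earlier ones -- the quantities whose mean log-growth rates estimate the
    Lyapunov exponents in the procedure of Wolf et al.
    """
    basis, norms = [], []
    for v in vectors:
        w = list(v)
        for b in basis:                      # subtract the projection <v, b> b
            coeff = sum(vi * bi for vi, bi in zip(v, b))
            w = [wi - coeff * bi for wi, bi in zip(w, b)]
        n = math.sqrt(sum(wi * wi for wi in w))
        basis.append([wi / n for wi in w])
        norms.append(n)
    return basis, norms

frame, norms = gram_schmidt([[3.0, 4.0, 0.0],
                             [1.0, 0.0, 0.0],
                             [0.0, 0.0, 2.0]])
```

In the example, the first vector is only normalized (norm 5), while the second is stripped of its component along the first before normalization (remaining norm 0.8), illustrating why GSR never changes the direction of \(v_{1}\).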
Also note that the vectors \(v_{1}^{\prime}\) and \(v_{2}^{\prime}\) span the same subspace as \(v_{1}\) and \(v_{2}\). The area defined by the vectors \(v_{1}\) and \(v_{2}\) is proportional to \(2^{(\lambda_{1}+\lambda_{2})t}\). As \(v_{1}^{\prime}\) and \(v_{2}^{\prime}\) are orthogonal, we may determine \(\lambda_{2}\) directly from the mean rate of growth of the projection of vector \(v_{2}\) on vector \(v_{2}^{\prime}\) [2]. Extending this line of thought to \(n\) dimensions, we conclude that the subspace spanned by the \(n\) vectors is not affected by the GSR process. The long-term evolution of the \(n\)-volume defined by these vectors is proportional to \(2^{\sum_{i=1}^{n}\lambda_{i}t}\). Projection of the evolved vectors onto the new orthonormal frame correctly updates the rates of growth of each of the principal axes in turn, providing estimates of the Lyapunov exponents. The code from the Wolf paper [2] was verified on standard systems, such as the Lorenz and Rössler systems. \begin{table} \begin{tabular}{|c|l|l|l|l|l|} \hline System & Equations & Parameters & Initial & Lyapunov & Lyapunov \\ & & & Conditions & Spectrum & Spectrum \\ & & & & (in [2]) & obtained \\ \hline & \(\dot{x}=\sigma(y-x)\) & \(\sigma=10.0\), & \(x=10.0\), & \(\lambda_{1}=2.16\), & \(\lambda_{1}=2.1676\), \\ Lorenz & \(\dot{y}=x(\rho-z)-y\) & \(\rho=45.92\), & \(y=1\), & \(\lambda_{2}=0.00\), & \(\lambda_{2}=0.0001\), \\ & \(\dot{z}=xy-\beta z\) & \(\beta=4.0\) & \(z=0\) & \(\lambda_{3}=-32.4\) & \(\lambda_{3}=-32.4644\) \\ \hline & \(\dot{x}=-(y+z)\) & \(a=0.15\), & \(x=10.0\), & \(\lambda_{1}=0.13\), & \(\lambda_{1}=0.1309\), \\ Rössler & \(\dot{y}=x+ay\) & \(b=0.20\), & \(y=1\), & \(\lambda_{2}=0.00\), & \(\lambda_{2}=0.0013\), \\ & \(\dot{z}=b+z(x-c)\) & \(c=10.0\) & \(z=0\) & \(\lambda_{3}=-14.1\) & \(\lambda_{3}=-14.1669\) \\ \hline \end{tabular} \end{table} Table 2.1: Lyapunov Spectrum in [2] vs Lyapunov Spectrum obtained through the MATLAB code ## Chapter 3 Systems under consideration In this
chapter we shall consider different dynamical systems and study some parameters in each system which may give chaos. The systems considered are all motivated by biological experiments. ### 3.1 Kot System #### The Unforced System The first system we consider was analyzed by Kot, Sayler, and Schulz [7]. It is a double-Monod system with a prey (bacteria) and a predator (protozoan). The system initially analyzed in the work did not consider a forced inflowing nutrient and did not exhibit chaotic behavior. The unforced nutrient system [7] is given by: \[\frac{dS}{dt} =D\left[S_{i}-S\right]-\frac{\mu_{1}}{Y_{1}}\frac{SH}{K_{1}+S}\] (3.1) \[\frac{dH}{dt} =\mu_{1}\frac{SH}{K_{1}+S}-DH-\frac{\mu_{2}}{Y_{2}}\frac{HP}{K_{2} +H}\] (3.2) \[\frac{dP}{dt} =\mu_{2}\frac{HP}{K_{2}+H}-DP\] (3.3) where 1. \(S\) represents the concentration of limiting substrate. 2. \(H\) represents the concentration of the prey. 3. \(P\) represents the predator concentration. 4. \(D\) is the dilution rate. 5. \(\mu_{1}\) and \(\mu_{2}\) represent the maximum specific growth rates of the prey and predator respectively. 6. \(Y_{1}\) is the yield of the prey per unit mass of substrate. Similarly, \(Y_{2}\) is the biomass yield of the predator per unit mass of prey. For ease of calculations, the authors of [7] re-scaled the substrate by the inflowing substrate concentration and the prey and predator by their yield constants, i.e. \[x=\frac{S}{S_{i}},y=\frac{H}{Y_{1}S_{i}},z=\frac{P}{Y_{1}Y_{2}S_{i}},\tau=Dt.\] The resulting re-scaled equations are as follows: \[\frac{dx}{d\tau} =1-x-\frac{Axy}{a+x} \tag{3.4}\] \[\frac{dy}{d\tau} =\frac{Axy}{a+x}-y-\frac{Byz}{b+y}\] (3.5) \[\frac{dz}{d\tau} =\frac{Byz}{b+y}-z \tag{3.6}\] Here \(A=\frac{\mu_{1}}{D}\), \(a=\frac{K_{1}}{S_{i}}\), \(B=\frac{\mu_{2}}{D}\), and \(b=\frac{K_{2}}{Y_{1}S_{i}}\).
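The rescaled unforced system (3.4)-(3.6) fits in a compact right-hand side. The sketch below is Python rather than the MATLAB of Appendix B; the names are ours, and the numerical values assume the Table 3.1 parameters together with \(D=0.1\) and \(S_{i}=115\) quoted in the surrounding text.

```python
# Rescaled unforced Kot system, equations (3.4)-(3.6).
# Parameter values assumed from Table 3.1 with D = 0.1 and S_i = 115:
MU1, K1, Y1 = 0.5, 8.0, 0.4      # prey growth rate, affinity, yield
MU2, K2 = 0.2, 9.0               # predator growth rate, affinity
D, S_I = 0.1, 115.0

A, a = MU1 / D, K1 / S_I         # A = mu1/D = 5,  a = K1/S_i
B, b = MU2 / D, K2 / (Y1 * S_I)  # B = mu2/D = 2,  b = K2/(Y1*S_i)

def kot_unforced(x, y, z):
    growth_y = A * x * y / (a + x)   # prey feeding on substrate
    growth_z = B * y * z / (b + y)   # predator feeding on prey
    return (1.0 - x - growth_y,
            growth_y - y - growth_z,
            growth_z - z)
```

As a sanity check, the washout state \((x,y,z)=(1,0,0)\) (substrate at its inflow level, no organisms) is an equilibrium: the right-hand side vanishes there.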
The following parameters were used for the calculations: \(D=0.1,S_{i}=115\) As mentioned before, on analysis of the model, the authors Kot et.al.,[7] observed equilibrium points and no chaos. Upon calculation of Lyapunov spectrum we obtain (0,-,-) which agrees with the nature of the model. The zero exponent is due to the system being autonomous. #### The Forced System In this case they consider when the nutrient is being forced into the system out of phase with internal substrate. The equations used to model that system were \begin{table} \begin{tabular}{c c c c} \hline & \(Y_{i}\) & \(\mu_{i}\) h\({}^{-1}\) & \(K_{i}\) mg/l \\ \hline Prey & 0.4 & 0.5 & 8 \\ Predator & 0.6 & 0.2 & 9 \\ \hline \end{tabular} \end{table} Table 3.1: Values of parameters for microbial model presented in Kot, et.al. [7]as follows [7] \[\frac{dS}{dt} =D\left[S_{i}\left(1+\epsilon\sin\left(\frac{2\pi}{T}t\right)\right) -S\right]-\frac{\mu_{1}}{Y_{1}}\frac{SH}{K_{1}+S}\] \[\frac{dH}{dt} =\mu_{1}\frac{SH}{K_{1}+S}-DH-\frac{\mu_{2}}{Y_{2}}\frac{HP}{K_{2 }+H}\] \[\frac{dP}{dt} =\mu_{2}\frac{HP}{K_{2}+H}-DP\] where the parameters and variables are as in the unforced model (Eqn. 3.1). For ease of calculations, the authors of [7] again re-scaled the concentrations as they did in the unforced model. The resulting re-scaled equations are as follows: \[\frac{dx}{d\tau} =1+\epsilon\sin\left(\omega\tau\right)-x-\frac{Axy}{a+x} \tag{3.7}\] \[\frac{dy}{d\tau} =\frac{Axy}{a+x}-y-\frac{Byz}{b+y}\] (3.8) \[\frac{dz}{d\tau} =\frac{Byz}{b+y}-z \tag{3.9}\] where \(\omega=\frac{2\pi}{DT}\). The parameters \(A,a,B,b\) were calculated as before. The values in Table 3.1 were applied to this model as well. #### Results of Simulation In [7], the authors vary the value of \(\omega\) to observe the behavior of the model. The choice of this parameter was because it drives the periodic forcing of the inflowingsubstrate, which in turn causes chaos for certain values of \(\omega\). 
For \(\omega=\frac{5\pi}{6}\) and \(\epsilon=0.6\) they observed chaotic behavior, which was simulated in the figure below. Accordingly, varying \(\omega\) over a range of \([0,6\pi]\) indicated that the system goes in and out of chaos. As \(\epsilon\) is the coefficient of the sinusoidal term in Eqn. (3.7), it was also varied in a range of \([0,1]\) to observe the dynamics. In the figure below, the maximum Lyapunov exponent is plotted against \(\omega\) and \(\epsilon\). A positive maximum exponent depicts chaos, a negative maximum exponent depicts a fixed point, and if the maximum exponent is zero it could mean either a two-torus or a limit cycle. Another pair of parameters which seemed worthwhile to investigate was the dilution rate \(D\) and the inflowing substrate concentration \(S_{i}\). These two parameters can be controlled by the chemostat's operator. Varying them would give us an idea of whether the system exhibits chaos or not. Figure 3.2: 3-D plot depicting chaos and non-chaos with changes in \(\epsilon\) and \(\omega\) As can be seen in both figures, \(\epsilon\), \(\omega\), \(D\), \(S_{i}\) are interesting parameters which can be investigated further to understand the model's behavior. Figure 3.3: 3-D plot depicting chaos and non-chaos with changes in \(D\) and \(S_{i}\) ### 3.2 Kravchenko System Nikolay S. Strigul and Lev V. Kravchenko in their paper [6] consider a model based on Monod kinetics. The variables under consideration are the concentrations of PGPR (aerobic non-nitrogen-fixing bacteria), resident micro-organisms, oxygen, and soluble substrate.
The mathematical model consists of four non-linear ordinary differential equations, which are as follows: \[\frac{dX}{dT} =X\left(\mu_{X}\left[S,P,N\right]+F\left[Z\right]-\alpha X-D_{1}\right)\] \[\frac{dZ}{dT} =Z\left(\mu_{Z}\left[S,P,N\right]+G\left[X\right]-\beta Z-D_{2}\right)\] \[\frac{dS}{dT} =W(t)+L-D_{S}(S-S_{0})-\frac{X\mu_{X}\left[S,P,N\right]}{Y_{XS}} -\frac{Z\mu_{Z}\left[S,P,N\right]}{Y_{ZS}}\] \[\frac{dP}{dT} =D_{P}(P_{0}-P)-\frac{X\mu_{X}\left[S,P,N\right]}{Y_{XP}}-\frac{Z \mu_{Z}\left[S,P,N\right]}{Y_{ZP}}\] A brief description of the parameters and variables is as follows [6]: * \(X\) stands for the concentration of PGPR. * \(Z\) is the concentration of micro-organisms. * \(S\) denotes the concentration of soluble organic compounds. * \(P\) is the amount of molecular oxygen. * \(T\) stands for time. * \(\mu_{X}\left[S,P,N\right]\) is the specific growth rate, given by the Monod formula for several limiting resources: \[\mu_{X}\left[S,P,N\right]=\mu_{mX}\frac{S}{S+\theta K_{SX}}\frac{P}{P+K_{PX}}\frac{ N}{N+\theta K_{NX}}\] where \(\mu_{mX}\) is the maximal specific growth rate, \(K_{SX},K_{PX},K_{NX}\) stand for the affinity constants for the organic substrate, molecular oxygen, and mineral nitrogen compounds respectively, and \(\theta\) is the soil's water content. * \(\mu_{Z}\left[S,P,N\right]\) is derived in the same manner as above. The form is \[\mu_{Z}\left[S,P,N\right]=\mu_{mZ_{1}}\frac{S}{S+\theta K_{SZ_{1}}}\frac{P}{P +K_{PZ_{1}}}\frac{N}{N+\theta K_{NZ_{1}}}+\] \[\mu_{mZ_{2}}\frac{S}{S+\theta K_{SZ_{2}}}\frac{K_{PZ_{2}}}{P+K_{PZ_{ 2}}}\frac{N}{N+\theta K_{NZ_{2}}}\] where \(\mu_{mZ_{1}}\), \(\mu_{mZ_{2}}\) are the growth rates for the aerobic and anaerobic parts of the microbes, and \(K_{SZ_{1}},K_{SZ_{2}},K_{PZ_{1}},K_{PZ_{2}},K_{NZ_{1}},K_{NZ_{2}}\), as before, are affinity constants for the organic substrate, oxygen, and nitrogen, with 1 and 2 denoting the aerobic and anaerobic parts, respectively.
* \(F(Z)=H_{1}Z\), \(G(X)=H_{2}X\) stand for the inter-species interaction between the microflora and PGPR. * \(\alpha X,\beta Z\) denote intra-specific interactions. * \(D_{1},D_{2}\) are the death coefficients. * \(W(t)\) is a root exudation function which is maximal during daytime (i.e., the first 12 hours) and minimal during nighttime (i.e., the last 12 hours). For our simulation purposes \(W(t)\) has been estimated using a Fourier series calculation. * \(L\) is the rate of decomposition of insoluble carbon compounds. * \(Y_{XS},Y_{ZS},Y_{XP},Y_{ZP}\) are growth yield constants for the substrate and oxygen. * \(D_{S}\) is the rate of diffusion of soluble carbon from the rhizosphere. * \(D_{P}\) stands for the diffusion rate of oxygen into the rhizosphere. * \(S_{0}\) is the substrate concentration outside the rhizosphere. * \(P_{0}\) is the concentration of oxygen in the surrounding area. The authors developed the model to match experimental results. For the parameters mentioned in the paper, no chaos was detected, as can be seen from the time-series portrait of the solutions [6]. Figure 3.4: Time series plot of the solutions to the system in [6] #### Simulation Results Further analysis of the model in [6] suggests the possibility of chaos. Based on that observation, the initial conditions of the PGPR and micro-organism concentrations were varied in the ranges \([0.1,60]\) and \([0.1,10]\) respectively. For those values, the maximum Lyapunov exponent was calculated. As can be seen in the figure, the maximum Lyapunov exponent stays negative and thus no chaos is seen. Another choice was the parameter \(K_{SX}\in[0.1,60]\) together with the micro-organism concentration. It was seen that though the exponent was still negative, it was closer to zero. Figure 3.5: 3-D plot of the Lyapunov exponent when \(X\) and \(Z\) were varied. (The purple denotes the z-plane at 0) For efficiency we could re-scale the model.
Using the transformations \(S=\theta K_{SX}s;\;P=K_{PX}p;\;N=\theta K_{NX}n;\;X=Y_{XS}x;\;Z=Y_{ZS}z;\;T=\frac{1}{\mu_{mX}}t\) we get the following equations: \[\begin{split}\frac{dx}{dt}&=x\left[\mu_{x}(s,p,n)-h_ {1}z-\alpha_{s}x-d_{1}\right]\\ \frac{dz}{dt}&=z\left[\mu_{z}(s,p,n)-h_{2}x-\beta_{s }z-d_{2}\right]\\ \frac{ds}{dt}&=\nu_{s}w(t/\mu_{mX})+l-d_{s}(s-s_{0}) -\frac{x}{y_{xs}}\mu_{x}(s,p,n)-\frac{z}{y_{zs}}\mu_{z}(s,p,n)\\ \frac{dp}{dt}&=d_{p}(p_{0}-p)-\frac{x}{y_{xp}}\mu_{x }(s,p,n)-\frac{z}{y_{zp}}\mu_{z}(s,p,n)\end{split} \tag{3.10}\] where \(\mu_{x}(s,p,n)=\frac{s}{s+1}\cdot\frac{p}{p+1}\cdot\frac{n}{n+1}\), \(\mu_{z}(s,p,n)=\frac{\mu_{mZ_{1}}}{\mu_{mX}}\cdot\frac{s}{s+k_{sz_{1}}}\cdot \frac{p}{p+k_{pz_{1}}}\cdot\frac{n}{n+k_{nz_{1}}}+\frac{\mu_{mZ_{2}}}{\mu_{mX} }\cdot\frac{s}{s+k_{sz_{2}}}\cdot\frac{k_{pz_{2}}}{p+k_{pz_{ 2}}}\cdot\frac{n}{n+k_{nz_{2}}}\), and \(h_{1}=\frac{Y_{ZS}H_{1}}{\mu_{mX}}\), \(h_{2}=\frac{Y_{XS}H_{2}}{\mu_{mX}}\), \(\alpha_{s}=\frac{Y_{XS}\alpha}{\mu_{mX}}\), \(\beta_{s}=\frac{Y_{ZS}\beta}{\mu_{mX}}\), \[d_{1}=\frac{D_{1}}{\mu_{mX}},\,d_{2}=\frac{D_{2}}{\mu_{mX}},\,d_{s}= \frac{D_{S}}{\mu_{mX}},\,d_{p}=\frac{D_{P}}{\mu_{mX}},\] \[\nu_{s}=\frac{1}{\mu_{mX}\theta K_{SX}},\,l=\frac{L}{\mu_{mX} \theta K_{SX}},\,s_{0}=\frac{S_{0}}{K_{SX}},\,p_{0}=\frac{P_{0}}{K_{PX}},\] \[k_{sz_{1}}=\frac{K_{SZ_{1}}}{K_{SX}},\,k_{sz_{2}}=\frac{K_{SZ_{2 }}}{K_{SX}},\,\,\,\,\,\,y_{xs}=y_{zs}=\theta K_{SX},\] \[k_{pz_{1}}=\frac{K_{PZ_{1}}}{K_{PX}},\,k_{pz_{2}}=\frac{K_{PZ_{2} }}{K_{PX}},\,\,\,\,\,\,y_{xp}=\frac{K_{PX}Y_{XP}}{Y_{XS}},\,y_{zp}=\frac{K_{PX} Y_{ZP}}{Y_{ZS}},\] \[k_{nz_{1}}=\frac{K_{NZ_{1}}}{K_{NX}},\,k_{nz_{2}}=\frac{K_{NZ_{2} }}{K_{NX}}\] Figure 3.6: 3-D plot of the Lyapunov exponent when \(K_{SX}\) and \(Z\) were varied It was noticed that changing just two parameters or initial conditions did not produce chaos.
This raises the question of whether more parameters need to be investigated to get a clear idea of whether chaos truly exists in this model.

System based on the Becks paper [1]

In a paper by Becks et al. [1], experimental data from a chemostat experiment involving two prey species, a predator, and a nutrient were presented, and chaotic states were observed for varying levels of the dilution parameter. The model given below was motivated by the results of that paper. We describe the data using a similar kind of kinetics as in the previous two models. The general description of the system is given by

\[\frac{dR}{dt} =R\left[\mu_{NR}\left(\frac{N}{K_{NR}+N}\right)-\delta_{R}\right]-\frac{\mu_{PR}}{Y_{PR}}\left(\frac{R}{K_{PR}+R}\right)P-DR \tag{3.11}\]
\[\frac{dC}{dt} =C\left[\mu_{NC}\left(\frac{N}{K_{NC}+N}\right)-\delta_{C}\right]-\frac{\mu_{PC}}{Y_{PC}}\left(\frac{C}{K_{PC}+C}\right)P-DC \tag{3.12}\]
\[\frac{dP}{dt} =P\left[\mu_{PR}\left(\frac{R}{K_{PR}+R}\right)+\mu_{PC}\left(\frac{C}{K_{PC}+C}\right)-\delta_{P}\right]-DP \tag{3.13}\]
\[\frac{dN}{dt} =DN_{0}-R\left[\frac{\mu_{NR}}{Y_{NR}}\left(\frac{N}{K_{NR}+N}\right)\right]-C\left[\frac{\mu_{NC}}{Y_{NC}}\left(\frac{N}{K_{NC}+N}\right)\right]-DN. \tag{3.14}\]

The variables \(R\) and \(C\) represent the prey species of rods and cocci, respectively. We let \(P\) represent the predator species, and \(N\) represents a nutrient source to the system. The parameter \(D\) represents the dilution rate for input of nutrients to the system. The parameters in the model determine the feeding habits of the predator and prey, as well as death and growth rates for each species. These parameters may be used to specify particular behaviors of the organisms, e.g., growth rates due to feeding on nutrient sources rather than prey.
* \(\mu_{N*}\) denotes the maximum growth rate for the associated species based on consumption of nutrients.
* \(\mu_{P*}\) denotes the maximum growth rate for the predator based on consumption of the associated prey species.
* \(K_{N*}\) is the half-saturation constant for the species on the nutrient.
* \(K_{P*}\) is the half-saturation constant for the predator on the associated species. Note these latter constants may determine a "preference" for one prey over the other.
* \(Y_{N*}\) denotes the yield coefficient for the species on the nutrient.
* \(Y_{P*}\) denotes the yield coefficient for the predator associated with the prey species.
* Death rates for each species are given by \(\delta_{*}\).

The model equations and system parameters were estimated by Molz as part of personal correspondence. The intent was to derive a mathematical model whose dynamics closely resembled the experimental dynamics seen in Becks et al. [1]. Specific values of the parameters used for the model are provided in Table 3.2.

\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
 & \multicolumn{3}{|c|}{Species} \\
\hline
Parameter & R & C & P \\
\hline
\(\mu_{N*}\) & 12 / day & 6 / day & \\
\(\mu_{P*}\) & 2.2 / day & 2.2 / day & \\
\(K_{N*}\) & 8e-6 gm/cc & 8e-6 gm/cc & \\
\(K_{P*}\) & 1e-6 gm/cc & 1e-6 gm/cc & \\
\(Y_{N*}\) & 0.1 gm R / gm N & 0.1 gm C / gm N & \\
\(Y_{P*}\) & 0.12 gm P / gm R & 0.12 gm P / gm C & \\
\(\delta_{*}\) & 0.5 / day & 0.25 / day & 0.08 / day \\
\hline
\end{tabular}
\end{table}

Table 3.2: Values for the model equations (3.11) - (3.14) used in the numerical simulations.

#### Results of Simulation

In [1], the authors found that on varying only the dilution rate, the system went into and out of chaos. So the dilution rate \(D\) is a suitable choice for studying the dynamics of the system. Another choice is the initial nutrient concentration. They were varied in the ranges \([0.3,2]\) and \([0.1,1]\), respectively, and the maximum Lyapunov exponent was calculated.
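The system (3.11)-(3.14) with the Table 3.2 values can be integrated directly. The following Python sketch uses a hand-rolled RK4 step (the thesis computations used MATLAB's ode45); the dilution rate `D`, inflow nutrient `N0`, and the initial densities are illustrative assumptions, since those were precisely the quantities being varied.

```python
import numpy as np

# Parameters from Table 3.2 (units: 1/day and gm/cc).
mu_nr, mu_nc, mu_pr, mu_pc = 12.0, 6.0, 2.2, 2.2
k_nr = k_nc = 8e-6
k_pr = k_pc = 1e-6
y_nr = y_nc = 0.1
y_pr = y_pc = 0.12
d_r, d_c, d_p = 0.5, 0.25, 0.08
D, N0 = 1.0, 2e-5          # assumed dilution rate and inflow nutrient level

def rhs(x):
    R, C, P, N = x
    fr, fc = N / (k_nr + N), N / (k_nc + N)   # Monod terms on the nutrient
    gr, gc = R / (k_pr + R), C / (k_pc + C)   # Monod terms for predation
    return np.array([
        R * (mu_nr * fr - d_r) - (mu_pr / y_pr) * gr * P - D * R,          # (3.11)
        C * (mu_nc * fc - d_c) - (mu_pc / y_pc) * gc * P - D * C,          # (3.12)
        P * (mu_pr * gr + mu_pc * gc - d_p) - D * P,                       # (3.13)
        D * N0 - R * (mu_nr / y_nr) * fr - C * (mu_nc / y_nc) * fc - D * N # (3.14)
    ])

def rk4_step(x, h):
    k1 = rhs(x); k2 = rhs(x + 0.5 * h * k1)
    k3 = rhs(x + 0.5 * h * k2); k4 = rhs(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.array([1e-6, 1e-6, 1e-7, 2e-5])   # assumed initial R, C, P, N
for _ in range(2000):                     # 20 days with h = 0.01 day
    x = rk4_step(x, 0.01)
```

Sweeping `D` and the initial nutrient over the stated ranges, with a Lyapunov-exponent computation on top of such an integrator, reproduces the kind of parameter scan described here.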
We observe that the system is in chaos for those choices of the parameters. Another choice was the dilution rate and the concentration of the predator population. It was again observed that the system stays in chaos.

Figure 3.7: 3-D plot depicting chaos and non-chaos with changes in \(D\) and \(N\)

Figure 3.8: 3-D plot depicting chaos and non-chaos with changes in \(D\) and \(P\)

## Chapter 4 Metropolis-Hastings Algorithm

In all the models that were investigated, a common thread was the choice of parameters that lead into chaos. They were chosen based on their significance to the model. But the simulations could only investigate at most two at a time; the entire parameter space was not investigated. For some of the models, that would involve exploring a 28-dimensional space. An efficient way of doing this would be to use the Metropolis-Hastings algorithm. All the notations that follow are taken from Chib and Greenberg [4].

### 4.1 The Algorithm

Our objective is to generate those parameter values which would give us a positive Lyapunov exponent. Thus, we need an efficient algorithm for accepting or rejecting a possible combination of parameters. Suppose we have a starting point \(x\) and we wish to move to another point \(y\) in the space. We do so by introducing a probability that the move is made. If the move is not made, we remain at our previous point. Thus transitions from \(x\) to \(y\) are governed by

\[q(x,y)\alpha(x,y)\]

where \(q(x,y)\) is the _candidate generating density_ and \(\alpha(x,y)\) is the _probability of move_. Here \(\pi(*)\) denotes the target distribution from which the points are to be sampled. Now we define the probability of the move as follows:

\[\alpha(x,y)=\left\{\begin{array}{ll}\min\left[\frac{\pi(y)q(y,x)}{\pi(x)q(x,y)},1\right]&\mbox{if $\pi(x)q(x,y)>0$}\\ 1&\mbox{otherwise}\end{array}\right. \tag{4.1}\]

The choice of the candidate generating density is ours. It could be based on the Lyapunov exponents.
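As an illustration of the accept/reject rule (4.1), here is a minimal Python sketch of a random-walk Metropolis-Hastings sampler. The Gaussian target is a toy stand-in; in the application described here, `log_pi` would instead score a parameter vector by, say, its maximum Lyapunov exponent. Since the Gaussian proposal is symmetric, the \(q\)-ratio in (4.1) cancels.

```python
import math, random

def metropolis_hastings(log_pi, x0, n, step=1.0, seed=0):
    # Random-walk Metropolis: q(x, .) is a symmetric Gaussian step, so
    # alpha(x, y) = min(pi(y)/pi(x), 1), evaluated here in log space.
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        y = x + rng.gauss(0.0, step)        # candidate drawn from q(x, .)
        if math.log(rng.random()) <= min(log_pi(y) - log_pi(x), 0.0):
            x = y                           # move accepted
        out.append(x)                       # otherwise stay at x
    return out

# Toy target: a standard normal, log pi(v) = -v^2/2 up to a constant.
samples = metropolis_hastings(lambda v: -0.5 * v * v, 0.0, 20000)
```

Note that only the ratio of target values enters, so the (possibly unknown) normalizing constant of the target never needs to be computed.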
The algorithm is as follows:

* Initialize with \(x^{0}\).
* For \(j=1,2,\ldots,N\):
* Generate \(y\) from \(q(x^{j},*)\) and \(u\) from Uniform(0,1).
* Set \[x^{j+1}=\left\{\begin{array}{ll}y&\mbox{if $u\leq\alpha(x^{j},y)$}\\ x^{j}&\mbox{otherwise}\end{array}\right. \tag{4.2}\]
* Return the values \(\left\{x^{1},x^{2},\ldots,x^{N}\right\}\).

It should be noted that we do not need knowledge of the normalizing constant of \(\pi(*)\), because it appears in both the numerator and the denominator of (4.1) and therefore cancels. Note also that \(q(x,y)\) need not equal \(q(y,x)\); i.e., the candidate generating density need not be symmetric.

## Chapter 5 Conclusions

Many deterministic dynamical systems become chaotic for some values of their parameters. There are many ways to measure chaos; one popular way uses Lyapunov exponents. The paper by Wolf et al. [2] proposed the frequently used approach of calculating such exponents using the Gram-Schmidt orthonormalization process. The work in this thesis centered on coding and verifying the algorithm, as well as using the code to investigate three biological models to find parameters/initial conditions that give chaos. It was also noted that the Metropolis-Hastings algorithm can be used as an effective way of investigating the parameter space to obtain chaotic behavior. This can be done by using the Metropolis-Hastings sampler to move from one point in the parameter space to another in an efficient way. A possible way to depict higher-dimensional results would be to use the parallel-coordinates plot. This would help us to understand the interaction between the parameters and the initial conditions. Implementation of all of these components could help in analyzing further models. An example of such a plot for the Kot system is given below. The figure helps us understand the values of the parameters for which we get a positive Lyapunov exponent and thus chaos.
Another example, for the Lorenz system with all the parameters varied, is shown below:

Figure 5.1: Parallel Coordinates Plot of \(\epsilon,\omega\), initial values of the variables \(x,y,z\) of the Forced System in [7] and the Maximum Lyapunov Exponent

Further mathematical analysis of the models may give us a proper starting value for the Metropolis sampling so that we are not taking a shot in the dark. Another possible avenue of research would be to consider whether chaos is a proper indicator of the health of the system. In other words, if we can properly convert a dynamical system into an optimization/game-theoretic model, we can analyze whether chaotic behavior is an optimal solution. To conclude, the calculation of Lyapunov exponents and sampling of the parameter space would give us a better understanding of some models which may be difficult to analyze otherwise.

Figure 5.2: Parallel Coordinates Plot of \(\rho,\beta,\sigma\), initial values of the variables \(x,y,z\) of the Lorenz system and the Maximum Lyapunov Exponent

## Appendix A MATLAB code for determining Lyapunov Spectrum

This appendix includes the MATLAB code for determining the Lyapunov spectrum. It is based on the paper by Wolf et al. [2].
%Code from Wolf paper
%Before passing f make sure f=@system where system is the function
%corresponding to your system
function lyapOut = myLyap(f,p,Initial,t,ts)
%f is the ode system, p is the parameter set, Initial is the initial
%conditions, t is the time interval for the ode solver, ts is the timestep
%N = number of nonlinear equations, NN = total number of equations
N = length(Initial); %length(Initial) gives us the size of original system
NN = N*(N+1);
% initialize arrays
Y = zeros(NN,1); CUM = zeros(N,1); GSC = zeros(N,1); znorm = zeros(N,1);
y0 = Y; lyap = zeros(1,N); S = zeros(N,1);
len = round((t(2)-t(1))/ts);
for i = 1:N
    Y(i,1) = Initial(i);
end
%Initial conditions for the linear system (orthonormal frame)
for i = 1:N
    Y((N+1)*i,1) = 1.0;
end
tstart = t(1);
for iterLyap = 1:len
    [tvals,y] = ode45(@(t,y)(f(t,y,p)), [tstart,tstart+ts], Y);
    Y = y(size(y,1),:)';
    for i = 1:N
        for j = 1:N
            y0(N*i+j,1) = Y(N*i+j,1);
        end
    end
    tstart = tstart+ts;
    %Construct a new orthonormal basis by the Gram-Schmidt method
    %Normalize the first vector
    znorm(1,1) = 0.0;
    for j = 1:N
        znorm(1,1) = znorm(1,1) + y0(N*j+1,1)^2;
    end
    znorm(1,1) = sqrt(znorm(1,1));
    for j = 1:N
        y0(N*j+1,1) = y0(N*j+1,1)/znorm(1,1);
    end
    %Generate the new orthonormal set of vectors
    for j = 2:N
        %Generate the j-1 GSR coefficients
        for k = 1:j-1
            GSC(k,1) = 0.0;
            for l = 1:N
                GSC(k,1) = GSC(k,1) + y0(N*l+j,1)*y0(N*l+k,1);
            end
        end
        %Construct a new vector
        for k = 1:N
            for l = 1:j-1
                y0(N*k+j,1) = y0(N*k+j,1) - GSC(l,1)*y0(N*k+l,1);
            end
        end
        %Calculate the vector's norm
        znorm(j,1) = 0.0;
        for k = 1:N
            znorm(j,1) = znorm(j,1) + y0(N*k+j,1)^2;
        end
        znorm(j,1) = sqrt(znorm(j,1));
        %Normalize the new vector
        for k = 1:N
            y0(N*k+j,1) = y0(N*k+j,1)/znorm(j,1);
        end
    end
    %Update the running vector magnitudes
    for k = 1:N
        CUM(k,1) = CUM(k,1) + log(znorm(k,1))/log(2.0);
    end
    %Normalize the exponents (and, if desired, print every 10 iterations)
    %if (rem(iterLyap,10) == 0)
    for k = 1:N
        lyap(1,k) = CUM(k,1)/(tstart-t(1));
    end
    if iterLyap == 1
        lyapExp = lyap;
    else
        lyapExp = [lyapExp; lyap];
    end
    for i = 1:N
        for j = 1:N
            Y(N*j+i,1) = y0(N*j+i,1);
        end
    end
end
lyapOut = lyap(1,1:N);
end

## Appendix B MATLAB code for the mathematical models

### The Kot system

%Kot Equations
function YPRIME = kotfn1(t,x,p)
YPRIME = zeros(12,1);
D = p(1); si = p(2); mu1 = p(3); mu2 = p(4); y1 = p(5); y2 = p(6);
k1 = p(7); k2 = p(8); eps = p(9); om = p(10);
A = mu1/D; a = k1/si; B = mu2/D; b = k2/y1/si;
YPRIME(1,1) = 1 + eps*sin(om*t) - x(1,1) -...
    (A*x(1,1)*x(2,1)/(a+x(1,1)));
YPRIME(2,1) = (A*x(1,1)*x(2,1))/(a+x(1,1)) -...
    x(2,1) - (B*x(2,1)*x(3,1))/(b+x(2,1));
YPRIME(3,1) = (B*x(2,1)*x(3,1))/(b+x(2,1)) - x(3,1);
% Copies of linearized equations of motion
for j = 0:2
    YPRIME(4+j,1) = x(4+j,1)*(-1-(A*x(2,1))/(a+x(1,1)) +...
        (A*x(2,1)*x(1,1))/(a+x(1,1))^2) -...
        x(7+j,1)*((A*x(1,1))/(a+x(1,1)));
    YPRIME(7+j,1) = x(4+j,1)*((A*x(2,1))/(a+x(1,1))-...
        (A*x(1,1)*x(2,1))/((a+x(1,1))^2))+...
        ((A*x(1,1))/(a+x(1,1))-1-(B*x(3,1))/(b+x(2,1))+...
        B*x(2,1)*x(3,1)/(b+x(2,1))^2)*x(7+j,1)-...
        ((B*x(2,1))/(b+x(2,1)))*x(10+j,1);
    YPRIME(10+j,1) = (B*x(3,1)/(b+x(2,1))-...
        (B*x(3,1)*x(2,1))/(b+x(2,1))^2)*x(7+j,1)+...
        ((B*x(2,1))/(b+x(2,1))-1)*x(10+j,1);
end
end

### The Kravchenko system

%Kravchenko Equations
function xp = Krav_original(t,x,p)
xp = zeros(20,1);
H1 = p(1); % original
H2 = p(2); % original
alpha = p(3); % original
beta = p(4); % original
D1 = p(5); D2 = p(6);
R = p(7); r = p(8);
DS = 2*R*1e-6/(R-r)^2/(R+r);
DP = 2*R*1e-3/(R-r)^2/(R+r);
% DS = p(7); % DP = p(8);
L = p(9); S0 = p(10); P0 = p(11);
KSZ1 = p(12); % original
KSZ2 = p(13); % original
YXS = p(14); YZS = p(15);
KPZ1 = p(16); KPZ2 = p(17);
YXP = p(18); YZP = p(19);
KNZ1 = p(20); KNZ2 = p(21);
MUZ1 = p(22); MUZ2 = p(23); MUX = p(24);
N = p(25); theta = p(26);
KSX = p(27); KPX = p(28); KNX = p(29);
%Defining the function Mu_x(s,p,n)
Mu_x = MUX*((x(3,1)/(x(3,1)+theta*KSX))*...
    (x(4,1)/(x(4,1)+KPX))*(N/(N+theta*KNX)));
%Defining the functions Mu_z1(s,p,n) and Mu_z2(s,p,n)
Mu_z1 = (MUZ1)*((x(3,1)/(x(3,1)+KSZ1*theta))*...
    (x(4,1)/(x(4,1)+KPZ1))*(N/(N+KNZ1*theta)));
Mu_z2 = (MUZ2)*((x(3,1)/(x(3,1)+KSZ2*theta))*...
    (KPZ2/(x(4,1)+KPZ2))*(N/(N+KNZ2*theta)));
%Evaluating the root exudation function
f = 4;
for n = 1:100
    f = f + 8/n/pi*( sin(18.5*n*pi/12) - sin(6.5*n*pi/12) )*cos(n*pi*t/12);
    f = f + 8/n/pi*( cos(6.5*n*pi/12) - cos(18.5*n*pi/12) )*sin(n*pi*t/12);
end
%Original system of equations
xp(1,1) = x(1,1)*(Mu_x+H1*x(2,1)-alpha*x(1,1)-D1);
xp(2,1) = x(2,1)*(Mu_z1+Mu_z2+H2*x(1,1)-beta*x(2,1)-D2);
xp(3,1) = f+L - DS*(x(3,1)-S0)-((x(1,1)/YXS)*Mu_x)-(x(2,1)/YXS)*(Mu_z1+Mu_z2);
xp(4,1) = DP*(P0-x(4,1)) -((x(1,1)/YXP)*Mu_x)-(x(2,1)/YZP)*(Mu_z1+Mu_z2);
% Copies of linearized equations of motion
for j = 0:3
    xp(5+j,1) = (Mu_x-2*alpha*x(1,1)+H1*x(2,1)-D1)*x(5+j,1)+H1*x(1,1)*x(9+j,1)+...
        (x(1,1)*((Mu_x*theta*KSX)/(x(3,1)*(x(3,1)+theta*KSX))))*x(13+j,1)+...
        (x(1,1)*((Mu_x*KPX)/(x(4,1)*(x(4,1)+KPX))))*x(17+j,1);
    xp(9+j,1) = H2*x(2,1)*x(5+j,1)+(Mu_z1+Mu_z2-2*beta*x(2,1)+H2*x(1,1)-D2)*x(9+j,1)+...
        x(2,1)*(((theta*KSZ1*Mu_z1)/(x(3,1)*(x(3,1)+KSZ1*theta)))+...
        (((KSZ2*Mu_z2*theta)/(x(3,1)*(x(3,1)+KSZ2*theta)))))*x(13+j,1)+...
        x(2,1)*(((KPZ1*Mu_z1)/(x(4,1)*(x(4,1)+KPZ1)))-...
        (((Mu_z2)/((x(4,1)+KPZ2)))))*x(17+j,1);
    xp(13+j,1) = -(Mu_x/YXS)*x(5+j,1)-((Mu_z1+Mu_z2)/YXS)*x(9+j,1)-...
        (DS+x(1,1)*(Mu_x*theta*KSX)/(YXS*x(3,1)*(x(3,1)+theta*KSX))+...
        (x(2,1)/YZS)*(((theta*KSZ1*Mu_z1)/(x(3,1)*(x(3,1)+theta*KSZ1)))+...
        (((theta*KSZ2*Mu_z2)/(x(3,1)*(x(3,1)+KSZ2*theta))))))*x(13+j,1)-...
        (x(1,1)*(KPX*Mu_x)/(YXS*x(4,1)*(x(4,1)+KPX))+...
        (x(2,1)/YZS)*(((KPZ1*Mu_z1)/(x(4,1)*(x(4,1)+KPZ1)))-...
        (((Mu_z2)/((x(4,1)+KPZ2))))))*x(17+j,1);
    xp(17+j,1) = -(Mu_x/YXP)*x(5+j,1)-((Mu_z1+Mu_z2)/YZP)*x(9+j,1)-...
        (x(1,1)*(Mu_x*theta*KSX)/(YXP*x(3,1)*(x(3,1)+theta*KSX))+...
        (x(2,1)/YZP)*(((theta*KSZ1*Mu_z1)/(x(3,1)*(x(3,1)+KSZ1*theta)))+...
        (((theta*KSZ2*Mu_z2)/(x(3,1)*(x(3,1)+KSZ2*theta))))))*x(13+j,1)-...
        (DP+x(1,1)*(KPX*Mu_x)/(YXP*x(4,1)*(x(4,1)+KPX))+(x(2,1)/YZP)*...
        (((KPZ1*Mu_z1)/(x(4,1)*(x(4,1)+KPZ1)))-...
        (((Mu_z2)/((x(4,1)+KPZ2))))))*x(17+j,1);
end
end

### The Becks System

%Becks Equations
function xp = Becks(t,x,p)
xp = zeros(20,1);
% growth rates (1/sec)
% values below are in 1/day; divide by 24*3600 to get seconds
munr = p(1); % original
munc = p(2); % original
mupr = p(3); % original
mupc = p(4); % original
% mass (g)
mr = p(5); mc = p(6); mp = p(7);
% half-saturation constants (g/cc)
knr = p(8); knc = p(9); kpr = p(10); kpc = p(11);
Kpr = kpr/mr; Kpc = kpc/mc;
% death rates (1/sec)
% values below are in 1/day; divide by 24*3600 to get seconds
dr = p(12); % original
dc = p(13); % original
dp = p(14); % original
% initial nutrient (g/cc)
N0 = p(15); % original
% N0 = 2.3e-5 ;
% yield coefficients
ypr = p(16); ypc = p(17); ynr = p(18); ync = p(19);
% dilution rate
% values below are in 1/day; divide by 24*3600 to get seconds
D = p(20); % original
xp(1,1) = x(1,1)*(munr*x(4,1)/(x(4,1)+knr) - dr) -...
    mupr/ypr*mp/mr*x(1,1)/(Kpr+x(1,1))*x(3,1) - D*x(1,1);
xp(2,1) = x(2,1)*(munc*x(4,1)/(x(4,1)+knc) - dc) -...
    mupc/ypc*mp/mc*x(2,1)/(Kpc+x(2,1))*x(3,1) - D*x(2,1);
xp(3,1) = x(3,1)*(mupr*x(1,1)/(Kpr+x(1,1)) +...
    mupc*x(2,1)/(Kpc+x(2,1)) - dp) - D*x(3,1);
xp(4,1) = D*N0 - x(1,1)*munr*mr/ynr*x(4,1)/(knr+x(4,1)) -...
    x(2,1)*munc*mc/ync*x(4,1)/(knc+x(4,1)) - D*x(4,1);
% Copies of linearized equations of motion
for j = 0:3
    xp(5+j,1) = ((munr*x(4,1))/(x(4,1)+knr)-dr-mupr/ypr*mp/mr*x(3,1)/(Kpr+x(1,1))+...
        mupr/ypr*mp/mr*x(1,1)/((Kpr+x(1,1))^2)*x(3,1)-D)*x(5+j,1)-...
        (mupr/ypr*mp/mr*x(1,1)/(Kpr+x(1,1)))*x(13+j,1)+...
        x(1,1)*x(17+j,1)*(munr/(x(4,1)+knr)-munr*x(4,1)/(x(4,1)+knr)^2);
    xp(9+j,1) = x(9+j,1)*(munc*x(4,1)/(x(4,1)+knc) - dc-...
        mupc/ypc*mp/mc*x(3,1)/(Kpc+x(2,1))+...
        mupc/ypc*mp/mc*x(2,1)*x(3,1)/(Kpc+x(2,1))^2-D)-...
        (mupc/ypc*mp/mc*x(2,1)/(Kpc+x(2,1)))*x(13+j,1)+...
        (x(2,1)*(munc/(x(4,1)+knc)-munc*x(4,1)/(x(4,1)+knc)^2))*x(17+j,1);
    xp(13+j,1) = x(3,1)*x(5+j,1)*((mupr/(Kpr+x(1,1))-mupr*x(1,1)/(Kpr+x(1,1))^2)) +...
        x(3,1)*x(9+j,1)*((mupc/(Kpc+x(2,1))-mupc*x(2,1)/(Kpc+x(2,1))^2))+...
        x(13+j,1)*(mupr*x(1,1)/(Kpr+x(1,1)) + mupc*x(2,1)/(Kpc+x(2,1)) - dp-D);
    xp(17+j,1) = -x(5+j,1)*(munr*mr/ynr*x(4,1)/(knr+x(4,1))) -...
        x(9+j,1)*munc*mc/ync*x(4,1)/(knc+x(4,1)) +...
        x(17+j,1)*(- x(1,1)*munr*mr/ynr/(knr+x(4,1))+...
        x(1,1)*munr*mr/ynr*x(4,1)/(knr+x(4,1))^2-...
        x(2,1)*munc*mc/ync/(knc+x(4,1))+ x(2,1)*munc*mc/ync*x(4,1)/(knc+x(4,1))^2-D);
end
end

## Bibliography

* [1] Lutz Becks, Frank M. Hilker, Horst Malchow, Klaus Jürgens, and Hartmut Arndt. Experimental demonstration of chaos in a microbial food web. _Nature_, 435, 2005.
* [2] Alan Wolf, Jack B. Swift, Harry L. Swinney, and John A. Vastano. Determining Lyapunov exponents from a time series. _Physica D_, 16:285-317, 1985.
* [3] Kathleen T. Alligood, Tim D. Sauer, and James A. Yorke. _Chaos: An Introduction to Dynamical Systems_. Springer-Verlag New York Inc., 1997.
* [4] Siddhartha Chib and Edward Greenberg. Understanding the Metropolis-Hastings algorithm. _The American Statistician_, 1995.
* [5] Liz Bradley, Dept. of Computer Science, University of Colorado. The variational equation: notes for a course.
* [6] Nikolay S. Strigul and Lev V. Kravchenko. Mathematical modeling of PGPR inoculation into the rhizosphere. _Environmental Modeling and Software_, 21:1158-1171, 2006.
* [7] Mark Kot, Gary S. Sayler, and Terry W. Schultz. Complex dynamics in a model microbial system. _Bulletin of Mathematical Biology_, 54(4):619-648, 1992.

Received January 16, 2002

###### Abstract

In the rest of this Introduction we review basic concepts and algorithms. In Section 2 we propose Jacobian free discrete and continuous QR first order methods to approximate the LEs. In Section 3 we give second order methods for both discrete and continuous QR approaches, and point out how generalizations to higher order techniques may be made. In Section 4 we discuss implementation issues and show the numerical performance of the new methods on a couple of examples.
Given the differential equation \[\dot{x}=f(x),\qquad x(0)=x_{0}, \tag{1.1}\] the LEs are a characterization of the asymptotic properties of the solution \(x(t,x_{0})\) via analysis of the linearized problem (for ease of notation, the dependence of the solution on \(x_{0}\) is suppressed) \[\dot{y}=f_{x}(x(t))\ y. \tag{1.2}\] Formally, the LEs associated to (1.2) are defined as follows. Let \(\{p_{i}\}\) be the columns of an initial conditions (full rank) matrix \(Y_{0}\), and define the numbers \(\lambda_{i}\), \(i=1\),..., \(n\), \[\lambda_{i}(p_{i})=\limsup_{t\to\infty}\frac{1}{t}\log\|Y(t)\ p_{i}\|, \tag{1.3}\] where \(Y(t)\) is the solution of \[\dot{Y}=f_{x}(x(t))\ Y,\qquad Y(0)=Y_{0}. \tag{1.4}\] When the sum of these numbers is minimized as we vary over all possible ICs (initial conditions) \(Y_{0}\), the numbers are called Lyapunov exponents of the system. Naturally, there are \(n\) LEs, \(\{\lambda_{i}\}\), counted with their multiplicity, for a given \(n\)-dimensional system. However, in many circumstances, one does not need to approximate all \(n\) LEs of a system. Often, only the \(p\) most dominant ones are needed (and \(p\) can be much smaller than \(n\)). For example, see [12]: in order to approximate the entropy of a system only the positive LEs are needed, and to estimate the dimension of an attractor only the most dominant (positive and negative) having positive sum are needed. Also, a commonly adopted criterion to assess chaotic dynamics on an attractor rests upon detection of a positive Lyapunov exponent, and thus only the largest LE needs to be tracked. Finally, there are situations where one knows beforehand that the LEs enjoy special symmetries (see [15, 11], and also [7]), which reduce the number of LEs one needs to approximate.
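The limsup definition above can be checked in closed form on a small linear example; the following Python snippet is purely illustrative.

```python
import math

# Linear test problem y' = A y with A = diag(1, -2): the exponents are 1
# and -2, and Y(t) p = (e^t p_1, e^{-2t} p_2) is available exactly.  For
# the generic choice p = (1, 1) the growth rate tends to the largest LE.
def lambda_estimate(t):
    norm = math.hypot(math.exp(t), math.exp(-2.0 * t))
    return math.log(norm) / t

est = lambda_estimate(50.0)   # already very close to 1 at moderate t
```

Note that a generic initial direction picks up the dominant exponent; recovering the full spectrum requires the minimization over initial-condition matrices described above, which is what the QR-based methods accomplish in practice.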
From the practical point of view, in case one needs only the \(p\) most dominant LEs of the system, the matrix \(Y_{0}\) in (1.4) is made up of just \(p\) columns, which are typically chosen to be \(\left[\begin{smallmatrix}I_{p}\\ 0\end{smallmatrix}\right]\). With this in mind, in (1.4), \(Y\colon t\to\mathbb{R}^{n\times p}\). To approximate the LEs, the most widely adopted techniques rest on the QR factorization of the solution \(Y(t)\) of (1.4) \[Y(t)=Q(t)\ R(t),\] where \(Q\) and \(R\) are as smooth as \(Y\), \(Q\colon t\to\mathbb{R}^{n\times p}\) is orthogonal, \(Q^{T}Q=I\) for all \(t\), and \(R\colon t\to\mathbb{R}^{p\times p}\) is upper triangular with positive diagonal entries. Then, one extracts approximations to the LEs as time averages of the logarithms of the diagonal of \(R\): \[\lambda_{i}=\lim_{t\to\infty}\frac{1}{t}\log(R_{ii}(t)),\qquad i=1,...,p. \tag{1.5}\] The existence of smooth factors in the QR factorization of a fundamental matrix solution has been known at least since Perron, [21], and the role of the QR factorization in the study of Lyapunov exponents has also been known for a long time; see [17]. From the numerical point of view, there are two ways in which the QR factorization of \(Y\) is traditionally found, and these lead to the so-called discrete and continuous QR approaches, respectively. We recall them next. _Discrete QR approach_: [2, 11, 13, 22]. Say we want the QR factorization of \(Y(t_{k+1})\in\mathbb{R}^{n\times p}\).
With \(t_{0}=0\), the idea is to write \(Y(t_{k+1})\) as a composition of transition matrices: \[Y(t_{k+1})=Y(t_{k+1},\,t_{k})\ Y(t_{k},\,t_{k-1})\cdots Y(t_{2},\,t_{1})\ Y(t_{1},\,0)\ Y_{0},\] where \(Y(t,\,t_{j})\), \(j=0,\)..., \(k\), is the solution of \[\left\{\begin{array}{l}\dot{Y}(t,\,t_{j})=f_{x}(x(t))\ Y(t,\,t_{j}),\qquad t_{j}\leqslant t\leqslant t_{j+1}\\ Y(t_{j},\,t_{j})=I_{n}.\end{array}\right.\] Now, let \(Y_{0}=Q_{0}R_{0}\), and progressively update the QR factorizations as \[Y(t_{j+1},\,t_{j})\ Q_{j}=Q_{j+1}R_{j+1},\qquad j=0,...,k,\] so that \[Y(t_{k+1})=Q_{k+1}[\,R_{k+1}R_{k}\cdots R_{1}R_{0}\,] \tag{1.6}\] gives the sought QR factorization of \(Y(t_{k+1})\). In (1.6), \(Q_{k+1}\in\mathbb{R}^{n\times p}\) and \(\prod_{j=k+1}^{0}R_{j}\in\mathbb{R}^{p\times p}\). Of course, there is no need to compute the full transition matrices \(Y(t_{j+1},\,t_{j})\), and the following compact reformulation is to be preferred. With the above notation, we let \(Y_{j+1}(t)=Y(t,\,t_{j})\,Q_{j}\), \(t\in[t_{j},\,t_{j+1}]\to\mathbb{R}^{n\times p}\). For \(j=0\),..., \(k\), we solve \[\begin{cases}\dot{Y}_{j+1}=f_{x}(x(t))\,Y_{j+1},\qquad t_{j}\leqslant t\leqslant t_{j+1}\\ Y_{j+1}(t_{j})=Q_{j},\end{cases} \tag{1.7}\] and then let \[Y_{j+1}(t_{j+1})=Q_{j+1}R_{j+1}, \tag{1.8}\] leading to (1.6) as before. _Continuous \(QR\) approach_: [3, 8, 10, 13]. Here one derives--and integrates--the differential equations governing the evolution of the \(Q\) and \(R\) factors in the QR factorization of \(Y\). Differentiating the relation \(Y=QR\) and using (1.4) one gets \(\dot{Q}R+Q\dot{R}=f_{x}(x(t))\,QR\), and multiplying by \(Q^{T}\) on the left one gets the equation for \(R\): \[\dot{R}=(Q^{T}f_{x}(x(t))\,Q-S)\,\,R,\qquad R(0)=R_{0}, \tag{1.9}\] where we have set \(S:=Q^{T}\dot{Q}\). Observe that since \(Q^{T}Q=I\), \(S\) is skew-symmetric. Further, since \(R\) is triangular, the coefficient \(Q^{T}f_{x}(x(t))\,Q-S\) in (1.9) must be upper triangular.
This fact, and skew-symmetry, then give the following form for \(S\): \[S_{ij}=\begin{cases}(Q^{T}(t)\,f_{x}(x(t))\,Q(t))_{ij},&i>j,\\ 0,&i=j,\\ -S_{ji},&i<j.\end{cases} \tag{1.10}\] Next, multiplying \(\dot{Q}R+Q\dot{R}=f_{x}(x(t))\,QR\) by \(R^{-1}\) on the right, and using (1.9), we get the differential equation for \(Q\): \[\dot{Q}=(I-QQ^{T})\,f_{x}(x(t))\,Q+QS,\qquad Q(0)=Q_{0}. \tag{1.11}\] Finally, observe that no explicit integration for \(R\) needs to be done in order to approximate the LEs. In fact, from (1.9) and (1.5), one has \[\lambda_{i}=\lim_{t\to\infty}\frac{1}{t}\int_{0}^{t}\big{(}Q^{T}(s)\;f_{x}(x(s))\;Q(s)\big{)}_{ii}\;ds,\qquad i=1,\ldots,p. \tag{1.12}\] From (1.12), we further define the new functions (as in [10]) for \(i=1,\ldots,\,p\), \[v_{i}(t)=\int_{0}^{t}\big{(}Q^{T}(s)\;f_{x}(x(s))\;Q(s)\big{)}_{ii}\;ds\quad \mbox{ so that }\;\;\dot{v}_{i}=(Q^{T}(t)\;f_{x}(x(t))\;Q(t))_{ii}, \tag{1.13}\] and these can be "integrated" directly along with (1.11). [Of course, the result is a quadrature rule on (1.12).] **Remark 1.1.** Of course, the original differential equation (1.1) must be integrated along with (1.7) or (1.11), and the needed values for \(f_{x}(\,\cdot\,)\) must be supplied at the appropriate order of accuracy. The simplest way to do this in practice is to use the same integration rule for all differential equations involved, say the same Runge-Kutta scheme. **Remark 1.2.** In general, the LEs of (1.1) will depend on the initial conditions \(x_{0}\). Further, it is generally not clear why the limits in (1.5) exist (and equal the LEs) and to what extent they depend on the initial conditions \(Y_{0}\) for (1.4). Extensive theoretical studies address these concerns. For example, use of limits in (1.5) is justified for so-called _regular_ systems (see [1]), which further are prevalent in a measure theoretic sense; see [19] and [20].
The dependence of the LEs upon the initial conditions is the domain of ergodic theory, and we again refer to the work of Oseledec, [20], for statements in this case. **Remark 1.3.** There is numerical and theoretical evidence (e.g., see [8]) that the continuous QR approach is preferable to the discrete QR approach. While this is generally true, a couple of considerations are in order. First, the analysis showing the shortcoming of the discrete QR approach in [8] highlights that difficulties are chiefly caused by the QR factorization at the end of each step, and affect the (large) negative LEs; thus, if only a few dominant LEs are needed, this should not be a concern. Second, the numerical evidence highlights that controlling the local error (for either the Q-factor or the transition matrix itself) while integrating (1.7) tends to require (much) smaller stepsizes than for the continuous QR approach. This is certainly a concern--and a true drawback--in case the original problem is linear (i.e., (1.1) is really \(\dot{x}=A(t)\,x\), and thus \(f_{x}(x(t))\) in (1.7) is just \(A(t)\)), because no other simple mechanism of error control is in place if one wants to approximate the LEs. However, for nonlinear problems, controlling the error on the trajectory should enforce error control on the linearized problem as well, and no further need to control local errors while integrating (1.7) ought to be required. ## 2 Jacobian Free Exponents: 1st Order Methods Here we present our approach in the simplest setting possible. This will facilitate understanding of the simple idea we exploited, and will lead to appropriate ways to generalize to higher order methods. We again find it convenient to separate between discrete and continuous QR approaches. 
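Before modifying the schemes, it may help to see the plain (Jacobian-based) discrete QR approach in action. The following Python sketch takes one forward Euler step of (1.7) per subinterval (this is in fact the basic step (2.1) of the next section) on a constant-coefficient linear problem; the matrix `A` is an illustrative choice, taken upper triangular so that the exact exponents are simply its diagonal entries.

```python
import numpy as np

# Constant-coefficient linear problem Y' = A Y with A upper triangular,
# so the exact LEs are the diagonal entries: 1 and -2.
A = np.array([[1.0, 5.0],
              [0.0, -2.0]])
h, n_steps = 0.01, 20000
Q = np.eye(2)
cum = np.zeros(2)
for _ in range(n_steps):
    Y = Q + h * (A @ Q)            # one Euler step of (1.7) from Y(t_j) = Q_j
    Q, R = np.linalg.qr(Y)         # (1.8): Y_{j+1}(t_{j+1}) = Q_{j+1} R_{j+1}
    # numpy's qr may return negative diagonal entries in R; flip signs so
    # that R has a positive diagonal, as the theory assumes
    s = np.sign(np.diag(R)); s[s == 0] = 1.0
    Q, R = Q * s, s[:, None] * R
    cum += np.log(np.diag(R))      # accumulate log R_ii, cf. (1.6)
les = cum / (h * n_steps)          # time averages, as in (1.5)
```

With step \(h=0.01\) the recovered values carry the O(h) bias expected of the Euler discretization, but clearly identify the exponents 1 and -2.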
**Remark 2.1.** We call our methods "Jacobian free" in analogy with the so-called _matrix free_ methods which are used in implicit schemes for stiff systems of differential equations to bypass the need for a formal Newton iteration (e.g., see [4, 5, 6]). In that context, however, the "matrix free" formulation is conceptually only a means to solve the nonlinear system, and has no impact on the order of the scheme used, nor on the formulation itself (i.e., the scheme itself is not changed). Instead, as we will clarify below, our "Jacobian free" reformulation effectively modifies the scheme one is using and may alter its order if one is not careful.

### Discrete QR: Forward Euler

Suppose we use the forward Euler method with step \(h\) to approximate (1.7). Thus, the basic step is \[Y_{j+1}=Q_{j}+hf_{x}(x_{j})\;Q_{j},\qquad\mbox{then}\quad Y_{j+1}=Q_{j+1}\,R_{j+1}. \tag{2.1}\] Here, \(x_{j}\) is the approximation to \(x(t_{j})\), which may have been obtained with the forward Euler method or any other method of order at least one. Clearly, the resulting method is of first order of accuracy (second order locally). The expensive part of this scheme is the need for \(f_{x}(x_{j})\) and the matrix multiplication \(f_{x}(x_{j})\;Q_{j}\). To avoid these, we reason as follows. Let \(Q_{j}=[q_{1}^{j},\ldots,\;q_{p}^{j}]\). Then, for \(k=1,\ldots,p\), we approximate the action \(hf_{x}(x_{j})\;q_{k}^{j}\) as \[hf_{x}(x_{j})\;q_{k}^{j}\approx f(x_{j}+hq_{k}^{j})-f(x_{j}),\qquad k=1,\ldots,\;p.\] The resulting scheme becomes \[Z_{j+1}=Q_{j}+B_{j},\qquad B_{j}:=[f(x_{j}+hq_{1}^{j})-f(x_{j}),...,\,f(x_{j}+hq_{p}^{j})-f(x_{j})],\] \[\text{then}\quad Z_{j+1}=Q_{j+1}R_{j+1}. \tag{2.2}\]

##### Expense Comparison.

We delay until Section 4 a careful comparison of the cost of each scheme. For the moment, we observe that computing \(Z_{j+1}\) from (2.2) rather than \(Y_{j+1}\) from (2.1) avoids the Jacobian evaluation and the multiplication \(f_{x}(x_{j})\ Q_{j}\).
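The quality of the directional-difference replacement can be checked numerically. The snippet below uses a toy pendulum field (an assumption for illustration, not a system from the text) and confirms that the error in approximating \(hf_{x}(x)\,q\) by \(f(x+hq)-f(x)\) decays like \(h^{2}\).

```python
import numpy as np

def f(x):
    # illustrative nonlinear vector field: the pendulum
    return np.array([x[1], -np.sin(x[0])])

def jac(x):
    # its exact Jacobian, for comparison only
    return np.array([[0.0, 1.0], [-np.cos(x[0]), 0.0]])

x = np.array([0.7, -0.3])
q = np.array([1.0, 0.0])
errs = []
for h in (1e-2, 1e-3):
    b = f(x + h * q) - f(x)                     # Jacobian free action
    errs.append(np.linalg.norm(b - h * (jac(x) @ q)))
ratio = errs[0] / errs[1]   # should be roughly 100, i.e. O(h^2) decay
```

The observed ratio of roughly 100 between the two step sizes is exactly the \(\tfrac{1}{2}h^{2}f_{xx}(x)(q,q)\) leading error term derived below.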
Upon obtaining \(R_{j+1}\), we can update the approximation of the LEs. If \(\lambda_{i}^{j}\), \(i=1,...,\,p\), are the approximations of the LEs at \(t_{j}\), then from (1.5) and (1.6) the approximate values \(\lambda_{i}^{j+1}\) at \(t_{j+1}\) are easily obtained as \[\lambda_{i}^{j+1}=\frac{t_{j}}{t_{j}+h}\lambda_{i}^{j}+\frac{1}{t_{j}+h}\log(R_{j+1})_{ii},\qquad i=1,...,\,p. \tag{2.3}\] It is trivial to assess the error caused by the Jacobian free replacement, and to appreciate that (2.2) is a first order scheme. However, since we will need the explicit error expression in the next section, we now derive it. Because of the linearity of the problem (1.7), it suffices to look at the error on a single column. Further, we look at the local error on a single step, and thus can consider the first step. So, let \(y_{0}\) be the initial condition (that is, \(y_{0}\) is any of the \(q_{k}^{0}\), \(k=1,...,\,p\)). For later use, recall that the local error for (2.1) is given by \[y(t_{1})-y_{1}=\tfrac{1}{2}h^{2}\ddot{y}_{0}+O(h^{3}). \tag{2.4}\] For (2.2), instead, we have \[f(x_{0}+hy_{0})-f(x_{0})=hf_{x}(x_{0})\,y_{0}+\tfrac{1}{2}h^{2}f_{xx}(x_{0})(y_{0},y_{0})+O(h^{3}),\] so that \[z_{1}=y_{1}+\tfrac{1}{2}h^{2}f_{xx}(x_{0})(y_{0},y_{0})+O(h^{3})\] and thus \[y(t_{1})-z_{1}=\tfrac{1}{2}h^{2}\ddot{y}_{0}-\tfrac{1}{2}h^{2}f_{xx}(x_{0})(y_{0},y_{0})+O(h^{3}). \tag{2.5}\] Therefore, (2.2) and (2.3) constitute a first order scheme. ### Continuous QR: Forward Euler The issue here is to integrate (1.11) and approximate the integral in (1.12). Integration of (1.11) has received a lot of attention in recent times (e.g., see [3] and [9]), and sophisticated choices exist for this task. All of these choices can be applied in the present context, but for the sake of simplicity here we consider the simplest possibility: integrate (1.11) with a forward Euler step \(h\) and then orthogonalize the solution (in the current terminology, a first order projected integrator).
Orthogonalization is handily carried out by the QR factorization (though the polar factorization may also be used). In other words, the basic step is \[P_{1}=Q_{0}+h[f_{x}(x_{0})\,Q_{0}-Q_{0}(Q_{0}^{T}f_{x}(x_{0})\,Q_{0}-S_{0})], \qquad\mbox{then}\quad Q_{1}\colon P_{1}=Q_{1}R_{1}. \tag{2.6}\] Clearly, the resulting method is of first order of accuracy (second order locally), and the expensive part is again given by the need for \(f_{x}(x_{0})\) and the matrix multiplication \(f_{x}(x_{0})\,Q_{0}\). The same reasoning as in the discrete QR case can however be applied. So, if \(Q_{0}=[q_{1}^{0},\ldots,\,q_{p}^{0}]\), we approximate the action \(hf_{x}(x_{0})\,Q_{0}\) as \[hf_{x}(x_{0})\,Q_{0}\approx B_{0}:=[f(x_{0}+hq_{1}^{0})-f(x_{0}),\ldots,\,f(x_ {0}+hq_{p}^{0})-f(x_{0})],\] and further use this approximation in (2.6) also to approximate \(S_{0}\) (recall (1.10)); in other words, we form \(Q_{0}^{T}B_{0}\) and use it to approximate \(hS_{0}\), call it \(H_{0}\), which is thus defined as \((H_{0})_{ij}=(Q_{0}^{T}B_{0})_{ij}\), \(i>j\), and by skew-symmetry. The resulting scheme becomes \[V_{1}=Q_{0}+B_{0}-Q_{0}(Q_{0}^{T}B_{0}-H_{0}),\qquad\mbox{then}\quad Q_{1} \colon V_{1}=Q_{1}R_{1}. \tag{2.7}\] It is again immediate to appreciate that (2.7) is a first order scheme. To approximate the LEs, one can replace the integral in (1.12) by a simple quadrature rule. Since we are using the scheme (2.7) to approximate \(Q\), a rectangle rule is adequate (i.e., a forward Euler step for (1.13)). That is, if \(\lambda_{i}^{j}\), \(i=1,\ldots,\,p\), are the approximation of the LEs at \(t_{j}\), then from (1.12) we update the approximations at \(t_{j+1}\) as \[\lambda_{i}^{j+1}=\frac{t_{j}}{t_{j}+h}\,\lambda_{i}^{j}+\frac{1}{t_{j}+h}\,(Q _{j}^{T}B_{j})_{ii},\qquad i=1,\ldots,\,p, \tag{2.8}\] and obtain a first order approximation for the LEs. 
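As a minimal illustration of (2.7) together with the update (2.8), the following Python sketch runs the Jacobian free continuous QR scheme on the linear field \(f(x)=Ax\) with \(A=\mathrm{diag}(1,-2)\); for a linear \(f\) the difference quotients reproduce \(hf_{x}\,Q\) exactly, so the known exponents should be recovered.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, -2.0]])
f = lambda x: A @ x            # linear test field: f_x is constant, LEs are 1, -2

h, n = 0.01, 2000
x = np.array([0.3, 0.5])       # base trajectory point (irrelevant for linear f_x)
Q = np.eye(2)
lam, t = np.zeros(2), 0.0
for _ in range(n):
    # B approximates h f_x(x) Q column by column (exact here, since f is linear)
    B = np.column_stack([f(x + h * Q[:, k]) - f(x) for k in range(2)])
    QtB = Q.T @ B
    H = np.tril(QtB, -1)
    H = H - H.T                # approximation to h S_0, built as in (1.10)
    V = Q + B - Q @ (QtB - H)  # scheme (2.7)
    Q, _ = np.linalg.qr(V)
    lam = (t * lam + np.diag(QtB)) / (t + h)   # update (2.8)
    t += h
    # for a nonlinear f, the trajectory x would be advanced here as well
les = lam
```

For a nonlinear problem the only change is that `x` is advanced by some (at least first order) integrator, and `B` then varies along the trajectory.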
Computing \(V_{1}\) from (2.7), rather than \(P_{1}\) from (2.6), avoids the Jacobian evaluation and the multiplication \(f_{x}(x_{0})\,Q_{0}\). ## 3 Second Order Methods Often, the LEs are used to infer qualitative properties of the differential system (1.1), and high accuracy approximation of the LEs may not really be needed. However, there are situations where one needs more accurate approximations than those delivered by the first order schemes of the previous section.2 Here we extend the simple methods of the previous section to second order techniques, and point out how to generalize to higher order. Footnote 2: Still, we stress that the trajectory itself can be approximated at any—higher—order of accuracy. In the literature of numerical differential equations there are many ways to produce high order schemes (see [16]). We consider two of them here: second order Runge-Kutta (RK) schemes, and extrapolation. As before, we differentiate between discrete and continuous QR approaches. ### Discrete QR: 2nd Order We look at two ways to obtain second order schemes: (a) using the explicit midpoint rule, or (b) extrapolating forward Euler approximations. Neither of these choices can be implemented naively in a Jacobian free way, and some care must be taken in order to obtain order 2. #### (a) Explicit Midpoint Rule The basic scheme on one step \(h\) from \(Q_{0}\) to \(Q_{1}\) is \[\begin{array}{l}Y_{1/2}=Q_{0}+\frac{h}{2}\,f_{x}(x_{0})\,Q_{0},\\ Y_{1}=Q_{0}+hf_{x}(x_{1/2})\,Y_{1/2},\qquad\text{then}\quad Q_{1}\colon Y_{1}=Q_{1}\,R_{1}.\end{array} \tag{3.1}\] Above, \(x_{0}\) and \(x_{1/2}\) are approximations to the solution of (1.1) which may have been obtained by the midpoint rule, so that \(x_{1/2}=x_{0}+\frac{h}{2}\,f(x_{0})\), or also by some other scheme, as long as they have the appropriate order (e.g., we can use \(x_{1/2}=\frac{1}{2}(x_{0}+x_{1})\) if we are using a high order scheme to integrate (1.1)). Naturally, (3.1) is a 2nd order scheme ([16]). 
To make (3.1) a Jacobian free method, we must approximate the actions \(f_{x}(x_{0})\,Q_{0}\) and \(f_{x}(x_{1/2})\,Y_{1/2}\) by appropriate directional derivatives. For \(f_{x}(x_{0})\,Q_{0}\), this is as before, since an \(O(h^{2})\) approximation to \(Y_{1/2}\) is all that is needed. Thus, if \(Q_{0}=[\,q_{1}^{0},\ldots,\,q_{p}^{0}\,]\), we let \[\frac{h}{2}\,f_{x}(x_{0})\,Q_{0}\approx B_{0}:=\left[\,f\left(x_{0}+{h\over 2}\,q_{1}^{0}\,\right)-f(x_{0}),\ldots,\,f\left(x_{0}+{h\over 2}\,q_{p}^{0}\,\right)-f(x_{0})\,\right],\] and notice that \(\frac{h}{2}\,f_{x}(x_{0})\,Q_{0}-B_{0}=O(h^{2})\). We thus obtain \[Z_{1/2}=Q_{0}+B_{0}\] instead of \(Y_{1/2}\). Next, we need to approximate, Jacobian free, the term \(hf_{x}(x_{1/2})\,Z_{1/2}\). We cannot use the simple difference quotient above to approximate this term, since we would get stuck with terms of \(O(h^{2})\). For this reason, we use a higher order centered difference approximation. So, if we let \(Z_{1/2}=[z_{1}^{1/2},...,z_{p}^{1/2}]\), we then use \[hf_{x}(x_{1/2})\,Z_{1/2}\approx M_{0},\qquad\hbox{where}\] \[M_{0}:={\frac{1}{2}}[f(x_{1/2}+hz_{1}^{1/2})-f(x_{1/2}-hz_{1}^{1/2}),...,f(x_{1/2}+hz_{p}^{1/2})-f(x_{1/2}-hz_{p}^{1/2})].\] With the above notation, the Jacobian free midpoint scheme is \[Z_{1/2}=Q_{0}+B_{0},\qquad Z_{1}=Q_{0}+M_{0},\qquad\hbox{then}\quad Q_{1}\colon Z_{1}=Q_{1}R_{1}. \tag{3.2}\] It is simple to appreciate that the scheme (3.2) is a second order scheme, since the local error is \(O(h^{3})\). This is because \(Z_{1/2}\) is an \(O(h^{2})\) approximation to \(Y_{1/2}\) and \(M_{0}-hf_{x}(x_{1/2})\,Z_{1/2}=O(h^{3})\). 
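As a concrete illustration, the Jacobian free midpoint step can be written down directly. The sketch below (NumPy, our own naming, not the paper's code) takes a precomputed second order midpoint approximation \(x_{1/2}\) as an argument, as in the text:

```python
import numpy as np

def jf_midpoint_step(f, x0, x_half, Q0, h):
    """Jacobian free explicit midpoint step.

    B0 uses forward differences with offsets h/2, so Q0 + B0 approximates
    Y_{1/2} to O(h^2); M0 uses centered differences, approximating
    h f_x(x_{1/2}) Z_{1/2} to O(h^3)."""
    f0 = f(x0)
    B0 = np.column_stack([f(x0 + 0.5 * h * q) - f0 for q in Q0.T])
    Z_half = Q0 + B0
    M0 = np.column_stack([0.5 * (f(x_half + h * z) - f(x_half - h * z))
                          for z in Z_half.T])
    Z1 = Q0 + M0
    Qn, R = np.linalg.qr(Z1)
    s = np.sign(np.diag(R)); s[s == 0] = 1.0   # enforce diag(R) > 0
    return Qn * s, R * s[:, None]
```

Note the \(3p+1\) evaluations of \(f\) per step, consistent with the three-evaluations-per-LE count discussed in Remark 3.1 below. For a linear field \(f(x)=Ax\), the differences are exact and one step reproduces \(I+hA+\tfrac{h^{2}}{2}A^{2}\) on the columns.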
The latter follows by a straightforward Taylor expansion; for \(j=1,\ldots,\,p\), one has \[f(x_{1/2}+hz_{j}^{1/2})-f(x_{1/2}-hz_{j}^{1/2})=2hf_{x}(x_{1/2})\,z_{j}^{1/2}+\frac{h^{3}}{3}\,f_{xxx}(x_{1/2})(z_{j}^{1/2},z_{j}^{1/2},z_{j}^{1/2})+O(h^{5}).\] Clearly, computing \(Z_{1}\) from (3.2), rather than \(Y_{1}\) from (3.1), avoids two Jacobian evaluations and the matrix multiplications with the Jacobians. **Remark 3.1.** The above second order scheme (3.2) requires three evaluations per step of the vector field \(f\) for each LE. We are accustomed to second order RK-like schemes requiring just two \(f\) evaluations. Whether or not this saving is actually possible in the present context is not clear to us, but in our--admittedly, limited--efforts we have not succeeded. **Remark 3.2.** It is in principle possible to extend the above reasoning to obtain Jacobian free methods of even higher order. For example, for explicit RK schemes, one needs to replace all actions \(hf_{x}(\,\cdot\,)\,Z_{j}\) by suitable difference quotients so that the order of the RK scheme is retained. However, for our purposes, second order schemes are sufficient, and we did not attempt to obtain high order analogs of (3.2), leaving this task to future work. **(b) Extrapolating Forward Euler.** The basic extrapolation scheme on one step \(h\) from \(Q_{0}\) to \(Q_{1}\) is \[\begin{array}{l}Y_{1}=Q_{0}+hf_{x}(x_{0})\ Q_{0}\qquad\text{forward Euler step}\\ Y_{1/2}=Q_{0}+\frac{h}{2}\,f_{x}(x_{0})\ Q_{0},\qquad\hat{Y}_{1}=Y_{1/2}+\frac{h}{2}\,f_{x}(x_{1/2})\ Y_{1/2},\qquad\text{two half steps}\\ \qquad\qquad\text{then}\quad Y_{1}^{*}\gets Y_{1}+2(\hat{Y}_{1}-Y_{1}),\qquad\text{and}\quad Q_{1}\colon Y_{1}^{*}=Q_{1}\,R_{1}.\end{array} \tag{3.3}\] It is well known that (3.3) is a second order scheme, see [16]. Our aim here is to derive a Jacobian free second order scheme from (3.3). 
In what follows, we let \(Q_{0}=[\,q_{1}^{0},\ldots,\,q_{p}^{0}\,]\) and \(Z_{1/2}=[\,z_{1}^{1/2},\ldots,\,z_{p}^{1/2}\,]\). We propose the following scheme: \[\begin{array}{l}Z_{1}=Q_{0}+[\,f(x_{0}+hq_{1}^{0}\,)-f(x_{0}),\ldots,\,f(x_{0}+hq_{p}^{0}\,)-f(x_{0})\,]\\ Z_{1/2}=Q_{0}+\left[\,f\left(\,x_{0}+\frac{h}{2}\,q_{1}^{0}\,\right)-f(x_{0}),\ldots,\,f\left(\,x_{0}+\frac{h}{2}\,q_{p}^{0}\,\right)-f(x_{0})\,\right]\\ \tilde{Z}_{1}=Z_{1/2}+\left[\,f\left(\,x_{1/2}+\frac{h}{2}\,z_{1}^{1/2}\,\right)-f(x_{1/2}),\ldots,\,f\left(\,x_{1/2}+\frac{h}{2}\,z_{p}^{1/2}\,\right)-f(x_{1/2})\,\right]\\ \qquad\text{then}\quad Z_{1}^{*}\gets Z_{1}+2(\tilde{Z}_{1}-Z_{1}),\qquad\text{and}\quad Q_{1}\colon Z_{1}^{*}=Q_{1}\,R_{1}.\end{array} \tag{3.4}\] It is not obvious that this is a second order scheme, so we prove it. **Theorem 3.3.**_The scheme (3.4) is a second order scheme. That is, if \(Y(h)\) is the exact solution at \(h\) of \(\dot{Y}=f_{x}(x(t))\,Y\), \(Y(0)=Q_{0}\), and \(x_{1/2}=x_{0}+\frac{h}{2}\,f(x_{0})+O(h^{2})\),3 then \(Y(h)-Z_{1}^{*}=O(h^{3})\)._ Footnote 3: i.e., \(x_{1/2}\) is a second order approximation to \(x(h/2)\). **Proof.** Since it suffices to consider one single column, we just use lower case letters \(q_{0}\), \(y_{1/2}\), \(z_{1/2}\), etc., where the notation is inherited from (3.3) and (3.4). 
Recall that (see (2.5)) \[y(h)-z_{1}=\frac{h^{2}}{2}\,\ddot{y}_{0}-\frac{h^{2}}{2}\,f_{xx}(x_{0})(q_{0},q_{0})+O(h^{3}),\qquad y(h)-y_{1}=\frac{h^{2}}{2}\,\ddot{y}_{0}+O(h^{3}).\] We have \[z_{1/2}=y_{1/2}+\frac{h^{2}}{8}\,f_{xx}(x_{0})(q_{0},q_{0})+O(h^{3}),\] and thus \[\hat{z}_{1} =y_{1/2}+\frac{h^{2}}{8}\,f_{xx}(x_{0})(q_{0},\,q_{0})+\frac{h}{2}\,f_{x}(x_{1/2})\,z_{1/2}+\frac{h^{2}}{8}\,f_{xx}(x_{1/2})(z_{1/2},z_{1/2})+O(h^{3})\] \[=y_{1/2}+\frac{h}{2}\,f_{x}(x_{1/2})\,y_{1/2}+\frac{h^{2}}{8}\,f_{xx}(x_{0})(q_{0},\,q_{0})+\frac{h^{2}}{8}\,f_{xx}(x_{1/2})(\,y_{1/2},\,y_{1/2})+O(h^{3}).\] But \[\frac{h^{2}}{8}\,f_{xx}(x_{1/2})(\,y_{1/2},\,y_{1/2})=\frac{h^{2}}{8}\,f_{xx}(x_{0})(q_{0},\,q_{0})+O(h^{3}),\] and so \[\hat{z}_{1}=\hat{y}_{1}+\frac{h^{2}}{4}\,f_{xx}(x_{0})(q_{0},\,q_{0})+O(h^{3}).\] But also \[z_{1}=y_{1}+\frac{h^{2}}{2}\,f_{xx}(x_{0})(q_{0},\,q_{0})+O(h^{3}),\] and therefore \[\hat{z}_{1}-z_{1}=\hat{y}_{1}-y_{1}-\frac{h^{2}}{4}\,f_{xx}(x_{0})(q_{0},\,q_{0})+O(h^{3}).\] Now, since \(\hat{y}_{1}-y_{1}=\frac{h^{2}}{4}\,\ddot{y}_{0}+O(h^{3})\), we have that \[2(\hat{z}_{1}-z_{1})=\frac{h^{2}}{2}\,\ddot{y}_{0}-\frac{h^{2}}{2}\,f_{xx}(x_{0})(q_{0},\,q_{0})+O(h^{3}),\] and so \[z_{1}^{*}=z_{1}+2(\hat{z}_{1}-z_{1})=y(h)+O(h^{3}).\qed\] **Remark 3.4.** In (3.4), it is tempting to replace the terms \(f(x_{0}+\frac{h}{2}q_{j}^{0})-f(x_{0})\), \(j=1,\ldots,\,p\), in the definition of \(Z_{1/2}\) by \(\frac{1}{2}[f(x_{0}+hq_{j}^{0})-f(x_{0})]\), \(j=1,\ldots,\,p\), which are of the same order of accuracy, and would save us \(p\) function evaluations (the term in brackets was already computed to obtain \(Z_{1}\)). However, if we do so, the extrapolation procedure does not increase the order. Exactly as for (3.2) and (3.1), computing \(Z_{1}^{*}\) from (3.4) rather than \(Y_{1}^{*}\) from (3.3) avoids the Jacobian evaluations and the matrix multiplications with the Jacobians. 
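The extrapolated step admits a sketch in the same style as before (NumPy, our own naming; as in the proof of Theorem 3.3, the second half step is taken from the first half-step result):

```python
import numpy as np

def jf_extrapolated_euler_step(f, x0, x_half, Q0, h):
    """Jacobian free extrapolated forward Euler step: one full Euler step
    and two half steps, combined by Richardson extrapolation."""
    f0, fh = f(x0), f(x_half)
    diff = lambda x, fx, step, V: np.column_stack(
        [f(x + step * v) - fx for v in V.T])
    Z1 = Q0 + diff(x0, f0, h, Q0)                           # full step
    Z_half = Q0 + diff(x0, f0, 0.5 * h, Q0)                 # first half step
    Z1_tilde = Z_half + diff(x_half, fh, 0.5 * h, Z_half)   # second half step
    Z1_star = Z1 + 2.0 * (Z1_tilde - Z1)                    # extrapolation
    Qn, R = np.linalg.qr(Z1_star)
    s = np.sign(np.diag(R)); s[s == 0] = 1.0                # enforce diag(R) > 0
    return Qn * s, R * s[:, None]
```

Per Remark 3.4, the half-step differences must be fresh evaluations at offsets \(h/2\); reusing half of the full-step differences would destroy the order gain from extrapolation. For a linear field \(f(x)=Ax\), one step again reproduces \(I+hA+\tfrac{h^{2}}{2}A^{2}\) on the columns.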
**Remark 3.5.** Regardless of whether one uses (3.2) or (3.4), of course the LEs are still updated as in (2.3), which now gives a 2nd order approximation for the LEs. ### Continuous QR: 2nd Order We adapt to this case the projected integrator based on the second order midpoint rule. **(a) Explicit Midpoint Rule.** The procedure is the same as what we did for the explicit midpoint in the case of the discrete QR method. The true midpoint rule projected integrator would be \[\begin{array}{c}P_{1/2}=Q_{0}+\frac{h}{2}\,[f_{x}(x_{0})\;Q_{0}-Q_{0}(Q_{0}^{T}f_{x}(x_{0})\;Q_{0}-S_{0})],\\ P_{1}=Q_{0}+h[f_{x}(x_{1/2})\;P_{1/2}-P_{1/2}(P_{1/2}^{T}f_{x}(x_{1/2})\,P_{1/2}-S_{1/2})],\\ \text{then orthogonalize }P_{1}\text{ to get }\quad Q_{1}\colon P_{1}=Q_{1}R_{1}.\end{array} \tag{3.5}\] To make this scheme Jacobian free, and to retain second order, we proceed as follows. The terms \(hf_{x}(x_{0})\;Q_{0}\) and \(hS_{0}\) in the definition of \(P_{1/2}\) can be approximated as we did to get to (2.7); that is, we let \(B_{0}\) be our approximation to \(hf_{x}(x_{0})\;Q_{0}\) and \(H_{0}\) the approximation to \(hS_{0}\colon B_{0}:=[f(x_{0}+hq_{1}^{0})-f(x_{0}),\ldots,\,f(x_{0}+hq_{p}^{0})-f(x_{0})]\), and \((H_{0})_{ij}=(Q_{0}^{T}B_{0})_{ij}\), \(i>j\), plus skew-symmetry. In so doing, we obtain a value \(V_{1/2}\) which is an \(O(h^{2})\) approximation to \(P_{1/2}\): \[V_{1/2}=Q_{0}+\tfrac{1}{2}[B_{0}-Q_{0}(Q_{0}^{T}B_{0})+Q_{0}H_{0}].\] Next, we need to approximate to third order the action \(hf_{x}(x_{1/2})\,V_{1/2}\) and further use this in forming \(hS_{1/2}\) (here, \(S_{1/2}\) is defined by using \(V_{1/2}\)). This is accomplished by centered differences. 
We let \(V_{1/2}=[v_{1}^{1/2},\ldots,\,v_{p}^{1/2}]\), and let \[\begin{array}{c}hf_{x}(x_{1/2})\,V_{1/2}\,\approx\,B_{1/2},\\ B_{1/2}:=\tfrac{1}{2}[f(x_{1/2}+hv_{1}^{1/2})-f(x_{1/2}-hv_{1}^{1/2}),\ldots,\,f(x_{1/2}+hv_{p}^{1/2})-f(x_{1/2}-hv_{p}^{1/2})],\end{array}\] and also use this to approximate \(hS_{1/2}\) by \(H_{1/2}\), which is thus given by \((H_{1/2})_{ij}=(V^{T}_{1/2}B_{1/2})_{ij}\), \(i>j\), and then using skew-symmetry. In the end, the resulting scheme is \[\begin{array}{l}V_{1/2}=Q_{0}+{\frac{1}{2}}[B_{0}-Q_{0}(Q_{0}^{T}B_{0})+Q_{0}H_{0}],\\ V_{1}=Q_{0}+B_{1/2}+V_{1/2}(H_{1/2}-V^{T}_{1/2}B_{1/2}),\\ \qquad\text{then orthogonalize $V_{1}$ to get}\quad Q_{1}\colon V_{1}=Q_{1}R_{1}.\end{array} \tag{3.6}\] By construction, this is a second order Jacobian free discretization of (1.11). To take advantage of the increased accuracy, we now update the LEs using the midpoint rule rather than the forward Euler as in (2.8). Thus, if \(\lambda_{i}^{j}\), \(i=1,\ldots,\,p\), are the approximations of the LEs at \(t_{j}\), we update these approximations at \(t_{j+1}\) as \[\lambda_{i}^{j+1}=\frac{t_{j}}{t_{j}+h}\,\lambda_{i}^{j}+\frac{1}{t_{j}+h}\,(Q_{j+1/2}^{T}B_{j+1/2})_{ii},\qquad i=1,\ldots,p, \tag{3.7}\] which--given the way we approximate \(B_{j+1/2}\)--gives a 2nd order approximation to the LEs. Again, computing \(V_{1}\) from (3.6) rather than \(P_{1}\) from (3.5) avoids the Jacobian evaluations and multiplications with the Jacobians. ## 4 Implementation and Examples We implemented all first and second order schemes given in the previous sections. For simplicity, we report on results where integration for the trajectory was done with the same scheme used to approximate the LEs. 
However, we also made experiments in which integration for the trajectory was carried out by a fourth order RK scheme; this had _no_ impact on the quality of the answers obtained for the LEs, which continued to be approximated at first or second order, according to the schemes adopted for their approximation. ### Expense Before reporting on our experiments, we give a breakdown of the computational costs of the different methods. We monitor the cost per integration step in terms of the number of required flops,4 and the number of required function evaluations (i.e., evaluations of the vector field \(f\) in (1.1)). As is customary, we will consider the cost of evaluating the Jacobian \(f_{x}(\bar{x})\) to be \(n\) function evaluations. This is appropriate in general, as evidenced by the common case in which the Jacobian is approximated by divided differences, whereby for each of its columns one uses \[f_{x}(\bar{x})\,e_{j}\approx\frac{f(\bar{x}+\epsilon e_{j})-f(\bar{x})}{\epsilon},\qquad j=1,\ldots,\,n,\] with \(\epsilon\neq 0\) sufficiently small (typically, \(\epsilon\) is the square root of the machine precision). We will use the following naming conventions for the schemes we implemented: * FE is the matrix ("Jacobian") Free Euler method, JE is the Euler method using the Jacobian. We further differentiate between FED and FEC for the Discrete or Continuous QR approaches. * Similarly, FEXD is the matrix free "EXtrapolation" method for the discrete QR approach, JMPC is the "MidPoint" method using the Jacobian for the continuous QR approach, etc. 
To arrive at the results summarized in Table 1, we have adopted the following choices: (i) the schemes have been implemented in the form in which they have been written in the previous sections; (ii) to perform the QR factorization of a \((n,\,p)\)-matrix (required by all schemes), we implemented the modified Gram-Schmidt algorithm, whose cost is reported as \(2np^{2}\) flops in [14]; (iii) the cost of adding two \((n,\,p)\)-matrices is \(np\) flops, and of multiplying a \((m,\,q)\)-matrix by a \((q,\,s)\)-matrix is \(2mqs\) flops (again, see [14]); (iv) for the continuous methods, we have made use of the form of \(S\) in (1.10) (and similarly for the Jacobian free versions), to save on arithmetic operations; (v) we have not counted the costs of updating the LEs, which is the same for all methods of the same order, and is negligible for \(p\) small. \begin{table} \begin{tabular}{c c c} \hline \hline Method & Flops & \(f\)-evaluations \\ \hline JED, (2.1) & \(2n^{2}p+2np^{2}+np\) & \(n\) \\ FED, (2.2) & \(2np^{2}+2np\) & \(p\) \\ JEC, (2.6) & \(2n^{2}p+5np^{2}+3np\) & \(n\) \\ FEC, (2.7) & \(5np^{2}+4np\) & \(p\) \\ JMPD, (3.1) & \(4n^{2}p+2np^{2}+2np\) & \(2n\) \\ FMPD, (3.2) & \(2np^{2}+4np\) & \(3p\) \\ JEXD, (3.3) & \(4n^{2}p+2np^{2}+7np\) & \(2n\) \\ FEXD, (3.4) & \(2np^{2}+7np\) & \(3p\) \\ JMPC, (3.5) & \(4n^{2}p+8np^{2}+6np\) & \(2n\) \\ FMPC, (3.6) & \(6np^{2}+9np\) & \(3p\) \\ \hline \hline \end{tabular} \end{table} Table 1: Computational Costs Quite clearly, in the important case of \(p\ll n\), the Jacobian-free methods have a cost per step proportional to \(O(np^{2})\) flops, and \(O(p)\) function evaluations, while the methods requiring the Jacobian have a cost of \(O(n^{2}p)\) flops and \(O(n)\) function evaluations. ### Examples All experiments were made with FORTRAN codes, without any optimization option, on a workstation with a clock speed of 300 MHz. Notation used in the tables below is as follows. * Method is the method used. 
* \(h\) is the stepsize used. * CPU are the computing times. * \(\lambda_{i}\), \(i=1,\ldots\), are the approximated exponents. #### Example 4.1 This is a problem adapted from one in [11], for which in [10] we computed accurate approximations of the LEs, and thus we will use it to test the order of convergence of the schemes, and perform a moderate comparison. We have a ring of oscillators with an external force proportional to the position component of the limit cycle of the van der Pol oscillator: \[\begin{array}{l}\ddot{y}+\alpha(y^{2}-1)\,\dot{y}+\omega^{2}y=0\\ \ddot{x}_{i}+d_{i}\dot{x}_{i}+\gamma[\Phi^{\prime}(x_{i}-x_{i-1})-\Phi^{\prime}(x_{i+1}-x_{i})]=\sigma y\delta_{i1},\qquad i=1,...,m.\end{array} \tag{4.1}\] Above, we set \(\alpha=1\), \(\omega=1.82\), \(\gamma=1\) and \(\sigma=4\). Also, \(\Phi(x)=(x^{2}/2)+(x^{4}/4)\) is the single well Duffing potential, \(x_{i}\) is the displacement of the \(i\)th particle, \(d_{i}\) is the damping coefficient, and we have periodic boundary conditions to be used in the expressions for \(\Phi^{\prime}\) (\(x_{0}=x_{m}\) and \(x_{m+1}=x_{1}\)). For the present set of experiments, we take 5 oscillators, and \(d_{i}=0.25\) for \(i\) odd and \(d_{i}=0.15\) for \(i\) even. Initial conditions are taken as \(y(0)=0\), \(\dot{y}(0)=-2\), \(x_{i}(0)=\dot{x}_{i}(0)=1\), \(i=1,...,m\). We integrate to \(t=1000\), approximating 4 Lyapunov exponents (i.e., in the previous notation, \(n=12\) and \(p=4\)). The following values of these first 4 approximate exponents are believed to be accurate to the two digits shown (cf. [10]): \[1.7\mathrm{E}-3,\qquad 8.7\mathrm{E}-4,\qquad-9.7\mathrm{E}-2,\qquad-1.0\mathrm{E}-1.\] As can be observed from Table II, the results of the Jacobian free versions agree very closely with those using the Jacobian. The first order scheme requires \(h\) to be too small to deliver decent accuracy, while the second order schemes are all fairly accurate. 
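The vector field of (4.1), written as a first order system of dimension \(n=2m+2\), is straightforward to code. The sketch below (NumPy; the state ordering \(u=(y,\dot y,x_{1},\ldots,x_{m},\dot x_{1},\ldots,\dot x_{m})\) is our own choice, not the paper's) can be passed to any of the steppers sketched earlier:

```python
import numpy as np

def oscillator_ring_rhs(u, m=5, alpha=1.0, omega=1.82, gamma=1.0, sigma=4.0,
                        d=None):
    """Vector field of (4.1): a van der Pol oscillator driving a ring of m
    Duffing-coupled particles, with periodic boundary conditions.
    Assumed state ordering: u = (y, y', x_1..x_m, x_1'..x_m')."""
    if d is None:  # damping of Example 4.1: 0.25 for odd i, 0.15 for even i
        d = np.where(np.arange(1, m + 1) % 2 == 1, 0.25, 0.15)
    y, yd = u[0], u[1]
    x, xd = u[2:2 + m], u[2 + m:]
    phi_p = lambda s: s + s**3          # Phi'(s) for Phi(s) = s^2/2 + s^4/4
    xl = np.roll(x, 1)                  # x_{i-1}, periodic (x_0 = x_m)
    xr = np.roll(x, -1)                 # x_{i+1}, periodic (x_{m+1} = x_1)
    force = np.zeros(m)
    force[0] = sigma * y                # forcing acts on the first particle only
    ydd = -alpha * (y**2 - 1.0) * yd - omega**2 * y
    xdd = -d * xd - gamma * (phi_p(x - xl) - phi_p(xr - x)) + force
    return np.concatenate(([yd, ydd], xd, xdd))
```

With \(m=5\) this gives the \(n=12\) system of the experiments; requesting \(p=4\) columns reproduces the setting of Table II.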
On such a small problem, the savings achieved with the Jacobian free versions are a bit more than 50%. From Table III, we again observe that the results of the Jacobian free versions agree very closely with those using the Jacobian, the first order scheme requires \(h\) to be too small to become accurate, and the savings in the Jacobian free version are a bit less than 50%. The continuous QR approach is slightly less accurate than the discrete QR counterpart, and 2 to 3 times as expensive. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Method & \(h\) & CPU & \(\lambda_{1}\) & \(\lambda_{2}\) & \(\lambda_{3}\) & \(\lambda_{4}\) \\ \hline FED & \(1.0\mathrm{E}-2\) & \(6^{\ast}\) & \(2.9\mathrm{E}-2\) & \(1.45\mathrm{E}-2\) & \(3.5\mathrm{E}-3\) & \(1.3\mathrm{E}-4\) \\ JED & \(1.0\mathrm{E}-2\) & \(12^{\ast}\) & \(2.8\mathrm{E}-2\) & \(1.6\mathrm{E}-2\) & \(3.3\mathrm{E}-3\) & \(1.2\mathrm{E}-3\) \\ FED & \(1.0\mathrm{E}-4\) & \(10^{\prime}8^{\ast}\) & \(1.85\mathrm{E}-3\) & \(8.8\mathrm{E}-4\) & \(-9.9\mathrm{E}-2\) & \(-9.7\mathrm{E}-2\) \\ JED & \(1.0\mathrm{E}-4\) & \(20^{\ast}\) & \(1.85\mathrm{E}-3\) & \(8.8\mathrm{E}-4\) & \(-9.9\mathrm{E}-2\) & \(-9.7\mathrm{E}-2\) \\ FEXD & \(1.0\mathrm{E}-2\) & \(8^{\ast}\) & \(1.6\mathrm{E}-3\) & \(8.6\mathrm{E}-4\) & \(-9.7\mathrm{E}-2\) & \(-1.0\mathrm{E}-1\) \\ JEXD & \(1.0\mathrm{E}-2\) & \(19.5^{\ast}\) & \(1.6\mathrm{E}-3\) & \(8.6\mathrm{E}-4\) & \(-9.7\mathrm{E}-2\) & \(-1.0\mathrm{E}-1\) \\ FMPD & \(1.0\mathrm{E}-2\) & \(7.4^{\ast}\) & \(1.6\mathrm{E}-3\) & \(8.6\mathrm{E}-4\) & \(-9.7\mathrm{E}-2\) & \(-1.0\mathrm{E}-1\) \\ JMPD & \(1.0\mathrm{E}-2\) & \(20^{\ast}\) & \(1.6\mathrm{E}-3\) & \(8.6\mathrm{E}-4\) & \(-9.7\mathrm{E}-2\) & \(-1.0\mathrm{E}-1\) \\ \hline \hline \end{tabular} \end{table} Table II: Example 1. Discrete QR Method In Table IV, we report the CPU times when we approximate all 12 exponents at \(t=1000\) by the midpoint rule methods. 
Notice that the Jacobian free continuous QR method scales poorly with increasing \(p\) (here, \(p=n\)), because of the need for the matrix multiplications \(Q^{T}B\) (recall (2.7)). **Example 4.2.** This is the same as Example 4.1 except that we now increase the number of oscillators, taking \(m=15\) and \(m=150\), respectively. Comparison values in this case are not known. However, we perform this experiment by taking the following parameter values (as in [11]): \(\alpha=1\), \(\omega=1.6\), \(\gamma=1\), \(\sigma=2\), and \(d_{i}=0.4\) for all \(i=1,\ldots,\,m\). With these values, we expect one LE equal to 0, and all other LEs negative. Initial conditions are as in Example 4.1. In all runs below, integration is done up to \(t=1000\), and \(p=4\) LEs are approximated. From Table V, observe that the first order method for the discrete QR approach delivers qualitatively correct answers, while the one for the continuous QR approach does not. The Jacobian free and Jacobian based methods give nearly identical results, with the Jacobian free methods costing about 20 to 25% as much as the methods requiring the Jacobian. The Jacobian free second order methods for the discrete and continuous QR approaches give nearly identical results in terms of accuracy, with the continuous approach being 3 times as expensive. Finally, in Table VI we report on the CPU times for the methods when \(m=150\). 
\begin{table} \begin{tabular}{c c c c c c} \hline \hline Method & CPU & \(\lambda_{1}\) & \(\lambda_{2}\) & \(\lambda_{3}\) & \(\lambda_{4}\) \\ \hline FED & 15.3†& 9.3E\(-\)4 & \(-\)7.6E\(-\)4 & \(-\)9.1E\(-\)2 & \(-\)9.4E\(-\)2 \\ JED & 1’2†& 1.4E\(-\)3 & \(-\)7.6E\(-\)4 & \(-\)9.3E\(-\)2 & \(-\)9.4E\(-\)2 \\ FMPD & 18.2†& 1.6E\(-\)3 & \(-\)7.3E\(-\)4 & \(-\)8.6E\(-\)2 & \(-\)8.75E\(-\)2 \\ JMPD & 1’47†& 1.6E\(-\)3 & \(-\)7.3E\(-\)4 & \(-\)8.6E\(-\)2 & \(-\)8.75E\(-\)2 \\ FEC & 15.6†& \(-\)2.7E\(-\)2 & \(-\)1.8E\(-\)3 & \(-\)1.3E\(-\)1 & \(-\)1.4E\(-\)1 \\ JEC & 1’26†& \(-\)2.7E\(-\)2 & \(-\)1.8E\(-\)3 & \(-\)1.3E\(-\)1 & \(-\)1.4E\(-\)1 \\ FMPC & 55†& 1.4E\(-\)3 & \(-\)7.3E\(-\)4 & \(-\)8.6E\(-\)2 & \(-\)8.75E\(-\)2 \\ JMPC & 3’11†& 1.5E\(-\)3 & \(-\)7.3E\(-\)4 & \(-\)8.6E\(-\)2 & \(-\)8.75E\(-\)2 \\ \hline \hline \end{tabular} \end{table} Table V: Example 2, \(m=15\) (\(n=32\)), \(h=1.0\)E\(-\)2 The approximate LEs show the same quality as in Table V: those of FED and JED are qualitatively correct (\(7.3\mathrm{E}-4,\ -1.9\mathrm{E}-3,\ -1.1\mathrm{E}-2,\ -2.8\mathrm{E}-2\)), those of FEC and JEC are all negative (and seemingly inaccurate), and those for FMPD, JMPD, FMPC, and JMPC are all nearly identical to one another (\(1.5\mathrm{E}-3,\ -1.9\mathrm{E}-3,\ -1.2\mathrm{E}-2,\ -2.8\mathrm{E}-2\)). ## 5 Conclusions We have proposed schemes to approximate LEs of nonlinear differential equations which do not need the Jacobian matrix. The basic idea is really very simple, and consists of replacing the product of the Jacobian matrix \(f_{x}\) times a matrix \(Z\) by approximate directional derivatives. We gave first order and second order schemes for both discrete and continuous QR methods, and showed the reliability and effectiveness of our choices on two examples. 
It is our hope that these choices will prove valuable to those interested in approximating LEs of large dimensional systems, but who have been reluctant to do so because of the computational expense. Based on the results of our experiments, we can draw the following conclusions and recommendations for future work. 1. The Jacobian free version of both discrete and continuous QR approaches is considerably less expensive than the counterpart requiring the Jacobian, and it is at least equally accurate. Savings depend on whether one uses the discrete or continuous QR approach, on the dimension of the problem, and on the number of LEs one needs to monitor. In general, savings will also depend on how expensive it is to compute the Jacobian, and on its structure (which we have not taken into account in Examples 4.1 and 4.2). From our experiments, we observed that for small problems, and in case we need all LEs, savings are on the order of 50%, but already for moderate size problems (i.e., dimension 300), to compute a few LEs, the Jacobian free version may take as little as 1.5% of the time taken by the methods requiring the Jacobian. Needless to say, the Jacobian free versions have the major advantage of not requiring the derivative of the vector field in the first place. 2. For the fixed stepsize low order schemes we have considered, the Jacobian free implementation of the discrete QR approach appears superior to the continuous QR analog. Clearly, the discrete QR approach is consistently less expensive, and this could have been easily anticipated just by looking at the differential equations one needs to solve with the two different approaches; but, perhaps more importantly, it is also at least equally accurate. For example, the Jacobian free "Euler" method works poorly for the continuous QR approach, while it is rather reliable for the discrete QR approach (see especially Example 4.2). 
Furthermore, the expense needed for the Jacobian free discrete QR method scales favorably with respect to the number \(p\) of desired LEs, while--as \(p\) increases--the continuous QR approach becomes progressively penalized by (dense) matrix multiplications of \((n,p)\times(p,n)\) matrices. Recalling also Remark 1.3, we lean towards recommending the Jacobian free discrete QR approach for large nonlinear systems, at least when a modest number of LEs are desired. In fact, either of the two second order Jacobian free schemes we introduced for the discrete QR approach would be our recommended choice. 3. Our basic schemes to approximate the LEs have been simple low order explicit RK methods, but extensions to other schemes are certainly possible and perhaps warranted. In future work, we may investigate higher order schemes as well as adapt our choices to different basic discretizations. We stress once more that our choices pertain exclusively to the approximation of the LEs, and integration for the trajectory can be performed with any other appropriate scheme. 4. Finally, we observe that all Jacobian free second order schemes we derived can be implemented with variable stepsizes, by monitoring local errors with respect to the Jacobian free Euler method (and no extra evaluations of \(f\) are required). We also leave to future development the implementation of error control and variable time stepping strategies. ## Acknowledgments This work was supported in part under NSF Grant DMS-9973266. The author would like to thank Erik Van Vleck for many fruitful discussions on Lyapunov exponents. ## References * [1] Adrianova, L. Ya. (1995). _Introduction to Linear Systems of Differential Equations_, Translations of Mathematical Monographs, Vol. 146, AMS, Providence, R.I. * [2] Benettin, G., Galgani, L., Giorgilli, A., and Strelcyn, J.-M. (1980). Lyapunov exponents for smooth dynamical systems and for Hamiltonian systems; a method for computing all of them. Part 1: Theory, and... 
Part 2: Numerical applications. _Meccanica_**15**, 9-20, 21-30. * [3] Bridges, T., and Reich, S. (2001). Computing Lyapunov exponents on a Stiefel manifold. _Physica D_**156**, 219-238. * [4] Brown, P., and Hindmarsh, A. (1986). Matrix free methods for stiff systems of ODEs. _SIAM J. Numer. Anal._**23**, 610-638. * [5] Brown, P., and Hindmarsh, A. (1989). Reduced storage matrix methods in stiff ODE systems. _Appl. Math. Comp._**31**, 40-91. * [6] Cliffe, K. A., Spence, A., and Tavener, S. (2000). The numerical analysis of bifurcation problems with application to fluid mechanics. _Acta Numerica_, 39-131. * [7] Dieci, L., and Lopez, L. Lyapunov exponents on quadratic groups, submitted. * [8] Dieci, L., Russell, R. D., and Van Vleck, E. S. (1997). On the computation of Lyapunov exponents for continuous dynamical systems. _SIAM J. Numer. Anal._**34**, 402-423. * [9] Dieci, L., and Van Vleck, E. S. (2001). Orthonormal integrators based on Householder and Givens transformations, submitted. * [10] Dieci, L., and Van Vleck, E. S. (2002). Lyapunov spectral intervals: Theory and computation, to appear in _SIAM J. Numer. Anal._ * [11] Dressler, U. (1988). Symmetry property of the Lyapunov spectra of a class of dissipative dynamical systems with viscous damping. _Phys. Rev. A_**38**(4), 2103-2109. * [12] Eckmann, J. P., and Ruelle, D. (1985). Ergodic theory of chaos and strange attractors. _Rev. Modern Phys._**57**, 617-656. * [13] Geist, K., Parlitz, U., and Lauterborn, W. (1990). Comparison of different methods for computing Lyapunov exponents. _Prog. Theor. Phys._**83**, 875-893. * [14] Golub, G. H., and Van Loan, C. F. (1989). _Matrix Computations_, The Johns Hopkins University Press, 2nd ed. * [15] Gupalo, D., Kaganovich, A. S., and Cohen, E. G. D. (1994). Symmetry of Lyapunov spectrum. _J. Statist. Phys._**74**, 1145-1159. * [16] Hairer, E., Nørsett, S. P., and Wanner, G. (1993). _Solving Ordinary Differential Equations I_, Springer-Verlag, Berlin/Heidelberg, 2nd ed. 
* [17] Johnson, R. A., Palmer, K. J., and Sell, G. R. (1987). Ergodic properties of linear dynamical systems. _SIAM J. Math. Anal._**18**, 1-33. * [18] Lyapunov, A. (1949). _Problème général de la stabilité du mouvement_, Annals of Mathematics Studies, Vol. 17, Princeton University Press. * [19] Millionshchikov, V. M. (1971). Linear systems of ordinary differential equations. _Differ. Uravn._**7**-3, 387-390. * [20] Oseledec, V. I. (1968). A multiplicative ergodic theorem. Lyapunov characteristic numbers for dynamical systems. _Trans. Moscow Math. Soc._**19**, 197-231. * [21] Perron, O. (1930). Die Ordnungszahlen linearer Differentialgleichungssysteme. _Math. Z._**31**, 748-766. * [22] Wolf, A., Swift, J. B., Swinney, H. L., and Vastano, J. A. (1985). Determining Lyapunov exponents from a time series. _Physica D_**16**, 285-317. # Exponential Time Differencing for Stiff Systems S. M. Cox and P. C. Matthews (stephen.cox@nottingham.ac.uk, paul.matthews@nottingham.ac.uk) In problems where the boundary conditions are not periodic, a basis other than Fourier modes may be appropriate (e.g., Chebyshev polynomials) and the linearized system may no longer be diagonal; in this case, the stiffness problem is exacerbated [20]. The numerical method treated in this paper is the so-called "exponential time differencing" (ETD) scheme [9, 17, 18], which involves exact integration of the governing equations followed by an approximation of an integral involving the nonlinear terms. It arose originally in the field of computational electrodynamics [19], where the problem of computing the electric and magnetic fields in a box with absorbing boundaries is stiff (essentially because of the large value of \(c\), the speed of light). 
For that problem, standard, explicit time-stepping techniques require an extremely small time step in order to be stable, and although implicit schemes with less stringent constraints on the time step are available, they are costly (or infeasible) to implement in three dimensions [9]. Explicit "exponential time differencing" [9, 17, 18] with first-order accuracy has been used widely in computational electrodynamics. This original first-order explicit scheme has since been extended to implicit and explicit schemes of arbitrary order, and the stability of such schemes has been discussed in some detail [1]. To develop ETD methods further, in this paper we derive new, more accurate (Runge-Kutta) ETD methods and provide a more succinct derivation of ETD methods than previously given. We also apply ETD methods to various PDEs and compare them with alternative time-stepping methods to illustrate the superior performance of ETD schemes, and show how ETD methods can be applied to systems of ODEs whose linear part is not diagonal. All ETD schemes discussed in this paper are explicit. The structure of this paper is as follows. In Section 2, we describe the first- and second-order-accurate exponential time differencing scheme and give a derivation of ETD schemes of arbitrary order. We then describe the new Runge-Kutta ETD methods. Other schemes for stiff systems are also described for comparison, and in Section 3 we discuss the stability of the various schemes. In Section 4, we compare the second- and fourth-order ETD schemes with other methods, for several stiff problems. Our primary interest lies in solving PDEs, and we give comparisons for both dissipative and dispersive PDEs. However, to illustrate the behavior of the method, we also provide simpler and more detailed examples in the form of a single, linear, inhomogeneous ODE with either a real or an imaginary linear part. 
A final set of examples concerns problems in which the linearized system is nondiagonal, and shows how the method can readily be adapted to this important case. We summarize our results in Section 5.

## 2 Derivation of Methods

When a PDE is discretized using a Fourier spectral method, a stiff system of coupled ODEs for the Fourier coefficients is obtained. The linear part of this system is diagonal, while the nonlinear terms are usually evaluated by transforming to physical space, evaluating the nonlinear terms at grid points and then transforming back to spectral space. In stiff systems, solutions evolve on two time scales. If the stiffness is due to rapid exponential decay of some modes (as with a dissipative PDE), then there is a rapid approach to a "slow manifold," followed by slower evolution along the slow manifold itself [2]. If, by contrast, the stiffness is due to rapid oscillations of some modes (as with a dispersive PDE), then the solution rapidly oscillates about its projection on the slow manifold; it is this projection which evolves slowly. In general, stiffness may have features of both rapid decay and rapid oscillation. Although our primary interest lies in solving PDEs, it is clearer and more instructive first to describe ETD methods in the context of a simple model ODE for the evolution of a single Fourier mode. Since the linear operator in a Fourier basis is diagonal, the extension of the method to the system of ODEs for the mode amplitudes is then immediate. The model ODE is \[\dot{u}=cu+F(u,t), \tag{1}\] where \(c\) is a constant and \(F(u,t)\) represents nonlinear and forcing terms. For the high-order Fourier modes, \(c\) is large and negative (for dissipative PDEs) or large and imaginary (for dispersive PDEs). A suitable time-stepping method for (1) should be able to handle the stiffness caused by the large values of \(|c|\) without requiring time steps of order \(|1/c|\). 
However, since the coefficients \(c\) span a wide range of values when all Fourier modes are considered, the time-stepping method should also be applicable to small values of \(|c|\). Finally, we require that the term \(F(u,t)\) be handled explicitly, since fully implicit methods are too costly for large-scale PDE simulations. When \(|c|\gg 1\), solutions of (1) generally consist of two elements: a fast phase in which \(u=O(1)\) and \(\mathrm{d}/\mathrm{d}t=O(c)\), and a "slow manifold" on which \(u=O(1/c)\) and \(\mathrm{d}/\mathrm{d}t=O(1)\); if \(Re(c)<0\), solutions are attracted to the slow manifold. The slow manifold can be expressed as an asymptotic series in powers of \(1/c\) as \[u\sim-\frac{F}{c}-\frac{1}{c^{2}}\frac{\mathrm{d}F}{\mathrm{d}t}-\frac{1}{c^{ 3}}\frac{\mathrm{d}^{2}F}{\mathrm{d}t^{2}}\cdots, \tag{2}\] and forms the basis of "nonlinear Galerkin" methods [3, 13]. Since initial conditions do not generally lie on the slow manifold, a numerical method should ideally give stable, highly accurate solutions during both the fast and slow phases.

### Exponential Time Differencing

To derive the exponential time differencing (ETD) methods [1], we begin by multiplying (1) through by the integrating factor \(e^{-ct}\), then integrating the equation over a single time step from \(t=t_{n}\) to \(t=t_{n+1}=t_{n}+h\) to give \[u(t_{n+1})=u(t_{n})e^{ch}+e^{ch}\int_{0}^{h}e^{-c\tau}F(u(t_{n}+\tau),t_{n}+ \tau)\,\mathrm{d}\tau. \tag{3}\] This formula is _exact_, and the essence of the ETD methods is in deriving approximations to the integral in this expression. We denote the numerical approximation to \(u(t_{n})\) by \(u_{n}\) and write \(F(u_{n},t_{n})\) as \(F_{n}\). The simplest approximation to the integral in (3) is that \(F\) is constant, \(F=F_{n}+O(h)\), between \(t=t_{n}\) and \(t=t_{n+1}\), so that (3) becomes the scheme **ETD1**, given by \[u_{n+1}=u_{n}e^{ch}+F_{n}(e^{ch}-1)/c, \tag{4}\] which has a local truncation error \(h^{2}\dot{F}/2\). 
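As a concrete illustration, ETD1 can be implemented in a few lines; the sketch below is our own (the function name `etd1_solve` and the stiff test problem \(\dot{u}=cu+\sin t\) with \(c=-100\) are choices made here purely for illustration, not part of the scheme itself).

```python
import numpy as np

def etd1_solve(c, F, u0, h, nsteps):
    """March du/dt = c*u + F(u,t) from t = 0 with the ETD1 scheme
    u_{n+1} = u_n e^{ch} + F_n (e^{ch} - 1)/c."""
    ech = np.exp(c*h)
    coef = np.expm1(c*h)/c      # (e^{ch} - 1)/c; expm1 limits cancellation
    u, t = u0, 0.0
    for _ in range(nsteps):
        u = u*ech + coef*F(u, t)
        t += h
    return u

# Stiff test problem: du/dt = c*u + sin(t), whose exact solution is
# u(t) = u0 e^{ct} + (e^{ct} - c sin t - cos t)/(1 + c^2).
c, u0, T = -100.0, 1.0, 1.0
exact = u0*np.exp(c*T) + (np.exp(c*T) - c*np.sin(T) - np.cos(T))/(1 + c**2)
err_h  = abs(etd1_solve(c, lambda u, t: np.sin(t), u0, 1e-2, 100) - exact)
err_h2 = abs(etd1_solve(c, lambda u, t: np.sin(t), u0, 5e-3, 200) - exact)
```

Even though the step \(h=10^{-2}\) is far larger than \(|1/c|\), the scheme remains stable, and the error shrinks as \(h\) is reduced.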
This version of the exponential time differencing method has been applied in computational electrodynamics [9, 17, 19], but (4) is rarely mentioned outside of this field in the numerical analysis literature (the notable exception being [1]). Note that for small \(|c|\), (4) approaches the forward Euler method, while for large \(|c|\) the first term in the series (2) is recovered. If instead of assuming that \(F\) is constant over the interval \(t_{n}\leq t\leq t_{n+1}\), we use the higher-order approximation that \[F=F_{n}+\tau(F_{n}-F_{n-1})/\,h+O(h^{2}), \tag{5}\] we arrive at the numerical scheme **ETD2** (cf. [1]) given by \[u_{n+1}=u_{n}e^{ch}+F_{n}((1+hc)e^{ch}-1-2hc)/\,hc^{2}+F_{n-1}(-e^{ch}+1+hc)/\, hc^{2}, \tag{6}\] which has a local truncation error of \(5h^{3}\ddot{F}/12\). Note that the apparent divergence of the coefficients in (6) as \(c\to 0\) is illusory; in fact (6) becomes the second-order Adams-Bashforth method in this limit. For large \(|c|\), (6) gives the first two terms of (2). For completeness we derive concisely in the next section ETD schemes of arbitrary order (cf. [1]). The availability of high-order ETD schemes represents an important advantage of these methods over standard linearly implicit methods, for which the order is limited by stability constraints. We now note two practical points that must be borne in mind when applying the ETD methods to PDEs. First, there is often some mode or modes with \(c=0\) (as is the case with (58) and (59) later). In that event, the explicit formulae for the coefficients of \(F_{n}\), \(F_{n-1}\), etc., in the equivalent of (4) or (6) cannot be used directly, since they involve division by zero. Instead, the limiting form of the coefficients as \(c\to 0\) must be used for such a mode. 
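The limiting form can be selected mode by mode; the helper below is our own minimal sketch (the name `etd1_coef` and the sample values are illustrative) of evaluating the ETD1 coefficient \((e^{ch}-1)/c\) over an array of mode coefficients, substituting the limit \(h\) where \(c=0\).

```python
import numpy as np

def etd1_coef(c, h):
    """ETD1 coefficient (e^{ch} - 1)/c of F_n, elementwise over an
    array of linear coefficients c, with the c -> 0 limit h used
    for any mode having c = 0."""
    c = np.asarray(c, dtype=float)
    out = np.empty_like(c)
    zero = (c == 0.0)
    out[zero] = h                               # lim_{c->0} (e^{ch}-1)/c = h
    out[~zero] = np.expm1(c[~zero]*h)/c[~zero]  # expm1 limits cancellation
    return out

h = 0.1
coefs = etd1_coef(np.array([0.0, -1e-8, -100.0]), h)
```

The coefficient varies smoothly through \(c=0\): the \(c=0\) entry equals \(h\) exactly, and the \(c=-10^{-8}\) entry is indistinguishable from it to many digits.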
Another practical consideration is that care must be taken in the evaluation of the coefficients for modes where \(|ch|\) is small, to avoid rounding errors arising from the large amount of cancellation in the coefficients. This becomes increasingly important as the order of the method is raised; in some cases the Taylor series for the coefficients should be used rather than the explicit formulae themselves, when \(|ch|\ll 1\).

#### 2.1.1 Exponential Time Differencing for Arbitrary Order

Explicit and implicit ETD schemes of arbitrary order have been derived elsewhere [1], but the derivation is rather involved and does not give explicit formulae for the coefficients. Here we give a more straightforward derivation of the explicit methods, based on a polynomial approximation of the integrand in (3). Let \(G_{n}(\tau)=F(u(t_{n}+\tau),\,t_{n}+\tau)\). We have used above two approximations for \(G_{n}\), one constant and one linear in \(\tau\). In general we seek approximations to \(G_{n}\) that are polynomials in \(\tau\), valid on the interval \(0\leq\tau\leq h\), using information about \(F\) at the \(n\)th and previous time steps. To derive a numerical scheme with local truncation error of order \(h^{s+1}\), we note [8] that the approximating polynomial \(\sum_{m=0}^{s-1}G_{n}^{(m)}\tau^{m}\) of degree \(s-1\) can be written as \[\left[1-\binom{-\tau/\,h}{1}\nabla+\binom{-\tau/\,h}{2}\nabla^{2}+\cdots+(-1) ^{s-1}\binom{-\tau/\,h}{s-1}\nabla^{s-1}\right]G_{n}(0), \tag{7}\] where \(\nabla\) is the backwards difference operator, such that \[\nabla G_{n}(0)=G_{n}(0)-G_{n-1}(0),\quad\nabla^{2}G_{n}(0)=G_{n}(0)-2G_{n-1}(0)+G _{n-2}(0), \tag{8}\] etc., and \[m!\binom{-\tau/h}{m}=(-\tau/h)(-\tau/h-1)\cdots(-\tau/h-m+1), \tag{9}\] for \(m=1,\ldots,s-1\). 
It then follows from (3) that our approximation to \(u_{n+1}\) satisfies \[u_{n+1}-u_{n}e^{ch} = e^{ch}\int_{0}^{h}e^{-c\tau}\left[1-\binom{-\tau/h}{1}\nabla+ \cdots+(-1)^{s-1}\binom{-\tau/h}{s-1}\nabla^{s-1}\right]G_{n}(0)\,\mathrm{d}\tau \tag{10}\] \[= e^{ch}\sum_{m=0}^{s-1}\int_{0}^{h}(-1)^{m}e^{-c\tau}\binom{-\tau/h}{m}\mathrm{d}\tau\,\nabla^{m}G_{n}(0)\] \[= h\sum_{m=0}^{s-1}(-1)^{m}\int_{0}^{1}e^{ch(1-\lambda)}\binom{-\lambda}{m}\mathrm{d}\lambda\,\nabla^{m}G_{n}(0)\] \[= h\sum_{m=0}^{s-1}g_{m}\nabla^{m}G_{n}(0),\] where \[g_{m}=(-1)^{m}\int_{0}^{1}e^{ch(1-\lambda)}\binom{-\lambda}{m}\,\mathrm{d}\lambda. \tag{11}\] It remains then to calculate the \(g_{m}\). This is straightforwardly accomplished by introducing the generating function \[\Gamma(z)=\sum_{m=0}^{\infty}g_{m}z^{m}, \tag{12}\] which is readily found to be \[\Gamma(z) = \int_{0}^{1}e^{ch(1-\lambda)}\sum_{m=0}^{\infty}\binom{-\lambda}{ m}(-z)^{m}\,\mathrm{d}\lambda \tag{13}\] \[= \int_{0}^{1}e^{ch(1-\lambda)}(1-z)^{-\lambda}\,\mathrm{d}\lambda\] \[= \frac{e^{ch}(1-z-e^{-ch})}{(1-z)(ch+\log(1-z))},\] provided the order of the sum and integral may be interchanged. A recurrence relation for the \(g_{m}\) can be found by rearranging (13) to the form \[(ch+\log(1-z))\Gamma(z)=e^{ch}-(1-z)^{-1} \tag{14}\] and expanding as a power series in \(z\) to give \[\left(ch-z-\frac{1}{2}z^{2}-\frac{1}{3}z^{3}-\cdots\right)(g_{0}+g_{1}z+g_{2}z^{2 }+\cdots)=e^{ch}-1-z-z^{2}-z^{3}-\cdots. \tag{15}\] Thus, by equating like powers of \(z\) we find that \[chg_{0}=e^{ch}-1, \tag{16}\] \[chg_{m+1}+1=g_{m}+\frac{1}{2}g_{m-1}+\frac{1}{3}g_{m-2}+\cdots+\frac{g_{0}}{m+ 1}=\sum_{k=0}^{m}\frac{g_{k}}{m+1-k}, \tag{17}\] for \(m\geq 0\). For example, \[g_{0}=\frac{e^{ch}-1}{ch}\quad\text{and}\quad g_{1}=\frac{g_{0}-1}{ch}=\frac{e ^{ch}-1-ch}{c^{2}h^{2}}, \tag{18}\] which give rise to the scheme (6). 
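The recurrence (16)-(17) is convenient to evaluate in practice. The sketch below (our own; the name `g_coeffs` is illustrative) generates \(g_{0},g_{1},\ldots\) from the recurrence and checks the first two against the closed forms (18); for \(|ch|\ll 1\) the Taylor series of the coefficients should be used instead, as discussed above.

```python
from math import exp

def g_coeffs(ch, s):
    """First s coefficients g_0, ..., g_{s-1} from the recurrence
    (16)-(17):  ch*g_0 = e^{ch} - 1,
                ch*g_{m+1} + 1 = sum_{k=0}^{m} g_k/(m + 1 - k)."""
    g = [(exp(ch) - 1.0)/ch]
    for m in range(s - 1):
        g.append((sum(g[k]/(m + 1 - k) for k in range(m + 1)) - 1.0)/ch)
    return g

ch = -1.0
g = g_coeffs(ch, 4)
g0_exact = (exp(ch) - 1.0)/ch          # first closed form in (18)
g1_exact = (exp(ch) - 1.0 - ch)/ch**2  # second closed form in (18)
```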
Having obtained the \(g_{m}\), the ETD scheme (10) is given explicitly as \[u_{n+1}=u_{n}e^{ch}+h\sum_{m=0}^{s-1}g_{m}\sum_{k=0}^{m}(-1)^{k}\binom{m}{k}F_ {n-k}. \tag{19}\] ### Exponential Time Differencing Method with Runge-Kutta Time Stepping The ETD methods described above are of multistep type, requiring \(s\) previous evaluations of the nonlinear term \(F\). Such methods are often inconvenient to use, since initially only one value is available. This problem can be avoided by the use of Runge-Kutta (RK) methods, which also typically have the advantages of smaller error constants and larger stability regions than multistep methods. In this section we obtain ETD methods of RK type of orders 2, 3, and 4. As usual for RK methods, these are not unique. #### 2.2.1 Second-Order Runge-Kutta ETD Method A second-order ETD method of RK type, analogous to the "improved Euler" method, is as follows. First, the step (4) is taken to give \[a_{n}=u_{n}e^{ch}+F_{n}(e^{ch}-1)/c. \tag{20}\] Then the approximation \[F=F(u_{n},t_{n})+(t-t_{n})(F(a_{n},t_{n}+h)-F(u_{n},t_{n}))/h+O(h^{2}) \tag{21}\] is applied on the interval \(t_{n}\leq t\leq t_{n+1}\), and is substituted into (3) to yield the scheme **ETD2RK** given by \[u_{n+1}=a_{n}+(F(a_{n},t_{n}+h)-F_{n})(e^{ch}-1-hc)/hc^{2}. \tag{22}\] The truncation error per step for this method is \(-h^{3}\ddot{F}/12\); note that this is smaller by a factor of 5 than that of ETD2. 
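A minimal implementation of ETD2RK is sketched below (our own illustration; the stiff test problem \(\dot{u}=cu+\sin t\) and its exact solution are used here only to measure the error, and the function name `etd2rk` is a choice made for this sketch).

```python
import numpy as np

c, u0 = -100.0, 1.0
F = lambda u, t: np.sin(t)
exact = lambda t: u0*np.exp(c*t) + (np.exp(c*t) - c*np.sin(t) - np.cos(t))/(1 + c**2)

def etd2rk(u_init, h, n):
    """ETD2RK: ETD1 predictor (20) followed by the corrector (22)."""
    ech = np.exp(c*h)
    c1 = np.expm1(c*h)/c                 # (e^{ch} - 1)/c
    c2 = (np.expm1(c*h) - c*h)/(h*c**2)  # (e^{ch} - 1 - hc)/(hc^2)
    u, t = u_init, 0.0
    for _ in range(n):
        Fn = F(u, t)
        a = u*ech + c1*Fn                # Eq. (20)
        u = a + c2*(F(a, t + h) - Fn)    # Eq. (22)
        t += h
    return u

T = 1.5
err_h  = abs(etd2rk(u0, 1e-2, 150) - exact(T))
err_h2 = abs(etd2rk(u0, 5e-3, 300) - exact(T))
```

Being a one-step method, it needs no special start-up, and the error decreases as the step is refined.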
#### 2.2.2 Third- and Fourth-Order Runge-Kutta ETD Methods

A third-order ETD RK scheme can be constructed in a similar way, analogous to the classical third-order RK method (see, for example, [10]): **ETD3RK** is given by \[a_{n} = u_{n}e^{ch/2}+\left(e^{ch/2}-1\right)F(u_{n},t_{n})/c, \tag{23}\] \[b_{n} = u_{n}e^{ch}+(e^{ch}-1)(2F(a_{n},t_{n}+h/2)-F(u_{n},t_{n}))/c,\] (24) \[u_{n+1} = u_{n}e^{ch}+\{F(u_{n},t_{n})[-4-hc+e^{ch}(4-3hc+h^{2}c^{2})]\] (25) \[+4F(a_{n},t_{n}+h/2)[2+hc+e^{ch}(-2+hc)]\] \[+F(b_{n},t_{n}+h)[-4-3hc-h^{2}c^{2}+e^{ch}(4-hc)]\}/h^{2}c^{3}.\] The terms \(a_{n}\) and \(b_{n}\) approximate the values of \(u\) at \(t_{n}+h/2\) and \(t_{n}+h\), respectively. The final formula (25) is the quadrature formula for (3) derived from quadratic interpolation through the points \(t_{n}\), \(t_{n}+h/2\), and \(t_{n}+h\). A straightforward extension of the standard fourth-order RK method yields a scheme which is only third order. However, by varying the scheme and introducing further parameters, a fourth-order scheme **ETD4RK** is obtained: \[a_{n} = u_{n}e^{ch/2}+\left(e^{ch/2}-1\right)F(u_{n},t_{n})/c, \tag{26}\] \[b_{n} = u_{n}e^{ch/2}+\left(e^{ch/2}-1\right)F(a_{n},t_{n}+h/2)/c,\] (27) \[c_{n} = a_{n}e^{ch/2}+\left(e^{ch/2}-1\right)(2F(b_{n},t_{n}+h/2)-F(u_{n},t_{n}))/c,\] (28) \[u_{n+1} = u_{n}e^{ch}+\{F(u_{n},t_{n})[-4-hc+e^{ch}(4-3hc+h^{2}c^{2})]\] (29) \[+2(F(a_{n},t_{n}+h/2)+F(b_{n},t_{n}+h/2))[2+hc+e^{ch}(-2+hc)]\] \[+F(c_{n},t_{n}+h)[-4-3hc-h^{2}c^{2}+e^{ch}(4-hc)]\}/h^{2}c^{3}.\] The computer algebra package Maple was used to confirm that this method is indeed fourth order.

### Standard Integrating Factor Methods: IFAB2 and IFRK2

Standard integrating factor (IF) methods [3, 4, 6, 21] are obtained by rewriting (1) as \[\frac{\mathrm{d}}{\mathrm{d}t}\left(ue^{-ct}\right)=F(u,t)e^{-ct} \tag{30}\] and then applying a time-stepping scheme to this equation. 
If we use the second-order Adams-Bashforth method we obtain the IF method \[\textbf{IFAB2}\quad u_{n+1}=u_{n}e^{ch}+\frac{3h}{2}F_{n}e^{ch}-\frac{h}{2}F_{ n-1}e^{2ch}. \tag{31}\] The local truncation error for this method is \[\frac{5}{12}h^{3}\frac{\mathrm{d}^{2}(Fe^{-ct})}{\mathrm{d}t^{2}}\sim\frac{5} {12}h^{3}c^{2}F\quad\text{as $|c|\to\infty$}. \tag{32}\]Applying instead the second-order Runge-Kutta method to (30), we find \[\mathbf{IFRK2}\quad u_{n+1}=u_{n}e^{ch}+\frac{h}{2}(F_{n}e^{ch}+F((u_{n}+hF_{n})e^{ ch},t_{n}+h)), \tag{33}\] with a truncation error of the same order as IFAB2. Although they are commonly used, integrating factor methods have a number of weaknesses. Unlike most standard methods and the ETD methods described above, the fixed points of IF methods are not the same as the fixed points of the original ODE. A second important drawback [3, 7] is the large error constants; the errors in the above IF methods are greater than those of the ETD methods by a factor of order \(c^{2}\gg 1\). ### Linearly Implicit Schemes Finally, we consider two "linearly implicit" (LI) or "semi-implicit" schemes--these are generally regarded as being very well suited to systems such as (1) [3, 6, 7, 22]. Treating the linear terms with the second-order Adams-Moulton method (trapezium rule) and the nonlinear terms with the second-order Adams-Bashforth method gives \[\mathbf{AB2AM2}\quad u_{n+1}=u_{n}+\frac{h}{2}(cu_{n}+cu_{n+1})+\frac{3h}{2}F_ {n}-\frac{h}{2}F_{n-1}. \tag{34}\] Note that in this case the formulae for the two methods can simply be added together. The second-order backward difference formula can also be combined with AB2, although not in such a straightforward way. The resulting method, obtained by the method of undetermined coefficients and Taylor expansion, is \[\mathbf{AB2BD2}\quad u_{n+1}=(4u_{n}-u_{n-1}+4hF_{n}-2hF_{n-1})/(3-2hc). \tag{35}\] This method has been used previously for a spectral simulation of the Cahn-Hilliard equation [22]. 
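To make the contrast between a linearly implicit scheme and a high-order ETD scheme concrete, the sketch below (our own illustration, not code from the paper) applies AB2BD2, Eq. (35), and ETD4RK, Eqs. (26)-(29), to the stiff scalar problem \(\dot{u}=cu+\sin t\); the starting values for AB2BD2 are seeded from the exact solution, a choice made here purely for the test.

```python
import numpy as np

c, u0, T, h = -100.0, 1.0, 1.5, 0.01
nsteps = int(round(T/h))
F = lambda u, t: np.sin(t)
# Exact solution of du/dt = c*u + sin(t), u(0) = u0.
exact = lambda t: u0*np.exp(c*t) + (np.exp(c*t) - c*np.sin(t) - np.cos(t))/(1 + c**2)

def ab2bd2(u_init, h, n):
    """AB2BD2, Eq. (35); u_{-1} and F_{-1} seeded from the exact solution."""
    u, um1, Fm1, t = u_init, exact(-h), F(0.0, -h), 0.0
    for _ in range(n):
        Fn = F(u, t)
        u, um1 = (4*u - um1 + 4*h*Fn - 2*h*Fm1)/(3 - 2*h*c), u
        Fm1, t = Fn, t + h
    return u

def etd4rk(u_init, h, n):
    """ETD4RK, Eqs. (26)-(29), for a scalar c != 0."""
    e1, e2 = np.exp(c*h), np.exp(c*h/2)
    u, t = u_init, 0.0
    for _ in range(n):
        Fn = F(u, t)
        a = u*e2 + (e2 - 1)*Fn/c                   # Eq. (26)
        Fa = F(a, t + h/2)
        b = u*e2 + (e2 - 1)*Fa/c                   # Eq. (27)
        Fb = F(b, t + h/2)
        cn = a*e2 + (e2 - 1)*(2*Fb - Fn)/c         # Eq. (28)
        Fc = F(cn, t + h)
        u = u*e1 + (Fn*(-4 - h*c + e1*(4 - 3*h*c + (h*c)**2))   # Eq. (29)
                    + 2*(Fa + Fb)*(2 + h*c + e1*(-2 + h*c))
                    + Fc*(-4 - 3*h*c - (h*c)**2 + e1*(4 - h*c)))/(h**2*c**3)
        t += h
    return u

err_ab2bd2 = abs(ab2bd2(u0, h, nsteps) - exact(T))
err_etd4rk = abs(etd4rk(u0, h, nsteps) - exact(T))
```

At this step size the fourth-order ETD scheme is far more accurate than the second-order linearly implicit scheme, as the orders alone would suggest.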
The truncation errors for the linearly implicit methods (34) and (35) have the same scaling. For solutions that lie off the slow manifold, the error per step is of order \(c^{3}h^{3}\) for large \(|c|\), but for solutions on the slow manifold the error is smaller, of order \(h^{3}\). ## 3 Stability In this section we compare the stability of several of the second-order methods described above. The general approach for stability analysis of a numerical method that uses different methods for the linear and nonlinear parts of the equation is as follows (see for example, [1, 6]). For the nonlinear, autonomous ODE \[\dot{u}=cu+F(u), \tag{36}\] we suppose that there is a fixed point \(u_{0}\), so that \(cu_{0}+F(u_{0})=0\). Linearizing about this fixed point leads to \[\dot{u}=cu+\lambda u, \tag{37}\] where \(u\) is now the perturbation to \(u_{0}\) and where \(\lambda=F^{\prime}(u_{0})\). If (36) represents a system of ODEs, then \(\lambda\) is a diagonal or block diagonal matrix containing the eigenvalues of \(F\). The fixed point \(u_{0}\) is stable if \(Re(c+\lambda)<0\) for all eigenvalues \(\lambda\). When a second-order numerical method, for example, ETD2, is applied to (36), the linearization of the nonlinear term in the numerical method leads to a recurrence relation involving \(u_{n+1}\), \(u_{n}\), and \(u_{n-1}\). This is equivalent to applying the method to (37), with the term \(\lambda u\) regarded as the nonlinear term. Note that an implicit assumption in this approach is that the fixed points of the numerical method are the same as those of the ODE. This is true for ETD and LI methods, but not for the IF methods; it follows that the meaning of the stability analysis for IF methods is not clear. In general, both \(c\) and \(\lambda\) in (37) are complex, so the stability region for these methods is four dimensional. 
In order to plot two-dimensional stability regions, previous authors have used the complex \(\lambda\)-plane, assuming \(c\) to be fixed and real [1], or have assumed that both \(c\) and \(\lambda\) are purely imaginary [6]. An alternative approach, used below, is to concentrate on the case in which \(c\) and \(\lambda\) are real. Consider first the method AB2AM2 applied to (37). This leads to \[u_{n+1}=u_{n}+\frac{h}{2}(cu_{n}+cu_{n+1})+\frac{3h}{2}\lambda u_{n}-\frac{h}{ 2}\lambda u_{n-1}. \tag{38}\] After defining \(r=u_{n+1}/u_{n}\), \(x=\lambda h\), \(y=ch\), we find the following quadratic equation for the factor \(r\) by which the solution is multiplied after each step: \[(2-y)r^{2}-(2+3x+y)r+x=0. \tag{39}\] With the assumption that \(x\) and \(y\) are real, it can be shown that \(r\) is real, so that the stability boundaries correspond to \(r=1\) and \(r=-1\) in (39). These correspond to the lines \(x+y=0\) and \(x=-1\) in the \(x\), \(y\) plane, respectively. Similarly, for the method AB2BD2, the quadratic for \(r\) is \[(3-2y)r^{2}-4(1+x)r+2x+1=0 \tag{40}\] and the stability boundaries are the lines \(x+y=0\) and \(y=4+3x\). For ETD2, the equation for \(r\) is \[y^{2}r^{2}-(y^{2}e^{y}+x[(1+y)e^{y}-1-2y])r+x(e^{y}-1-y)=0 \tag{41}\] and the stability boundaries are the lines \(x+y=0\) and \[x=\frac{-y^{2}(1+e^{y})}{ye^{y}+2e^{y}-2-3y}. \tag{42}\] Finally, for the ETD2RK method, we have \[r=e^{y}+\frac{x}{y}(e^{y}-1)+\frac{x}{y^{3}}(x+y)(e^{y}-1)(e^{y}-1-y) \tag{43}\] and the stability region is bounded by two lines on which \(r=1\), which are \(x+y=0\) and \(x=-y^{2}/(e^{y}-1-y)\). The stability regions for these four methods are shown in Fig. 1. Note that for all four methods the stability region includes the negative \(y\)-axis, and the width of this region increases as \(|y|\) increases. The right-hand boundary is the same for all four methods, corresponding simply to the true stability boundary of (37). 
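The boundary lines quoted for AB2AM2 can be confirmed directly from the characteristic polynomial; the short check below (our own) verifies that \(r=1\) is a root of (39) on the line \(x+y=0\) and \(r=-1\) is a root on the line \(x=-1\).

```python
def ab2am2_charpoly(r, x, y):
    """Characteristic polynomial (39) of AB2AM2 applied to (37),
    with x = lambda*h and y = c*h."""
    return (2 - y)*r**2 - (2 + 3*x + y)*r + x

# r = 1 lies on the root locus along x + y = 0, and r = -1 along x = -1.
checks = [ab2am2_charpoly(1.0, -y, y) for y in (-0.5, -5.0, -50.0)]
checks += [ab2am2_charpoly(-1.0, -1.0, y) for y in (-0.5, -5.0, -50.0)]
```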
For both ETD2 and AB2BD2 the left-hand boundary is parallel to \(y=3x\) as \(y\to-\infty\), while for ETD2RK, which has the largest stability region, it is parallel to \(y=x\). An alternative way of presenting the stability regions is in the complex \(x\) plane, at a fixed value of \(y\) that is real and negative. The boundaries of the stability regions are obtained by substituting \(r=e^{i\theta}\) into the formulae (39)-(41) and (43) and then by solving for \(x\). Figure 2 shows the stability regions plotted in this way for the same four methods, with \(y=-20\). For each method, the boundary of the stability region passes through the point \(x=-y\). As in the purely real case, AB2AM2 has the smallest stability region and ETD2RK has the largest. In the limit \(y\to-\infty\), the stability region for ETD2RK simplifies to the disc \(|x|<|y|\). In the same limit, the boundaries of the stability regions for ETD2 and AB2BD2 become \(x=ye^{2i\theta}/(1-2e^{i\theta})\) and that of AB2AM2 becomes \(x=-y(e^{2i\theta}+e^{i\theta})/(3e^{i\theta}-1)\). Note that for AB2AM2 the radius of the stability region does not grow linearly with \(y\) at \(\theta=\pi\), as is also apparent from Fig. 1.

Figure 1: Stability regions (shaded) in the \(x\), \(y\) plane for four methods.

## 4 Numerical Examples and Experiments

In this section the second- and fourth-order ETD methods and the new Runge-Kutta ETD methods described above in Sections 2.1 and 2.2 are compared with the standard methods described in Sections 2.3 and 2.4. Such tests do not appear to have been carried out previously. Our primary goal is to test the ETD methods on dissipative and dispersive partial differential equations, in which the high-wavenumber modes experience rapid decay and rapid oscillation, respectively. As a precursor to these PDE tests, we begin by examining model ordinary differential equations for which the linear part gives either rapid decay or oscillation. 
In these simple models more analytical progress is possible. In order to make a fair comparison between different methods, most examples will focus on the second-order formulae; however, in applications, higher-order schemes may be more appropriate.

### A Model Ordinary Differential Equation with Rapid Decay

For our first comparison between the different methods, we consider the model ODE \[\dot{u}=cu+\sin t,\quad u(0)=u_{0}, \tag{44}\] where we choose \(c=-100\) to generate the rapid linear decay characteristic of stiff systems, compared with the \(O(1)\) time scale of the forcing. For subsequent evaluation of the different numerical schemes, we note that the exact solution is \[u(t)=u_{0}e^{ct}+\frac{e^{ct}-c\sin t-\cos t}{1+c^{2}}. \tag{45}\] Equation (44) is one of the simplest possible ODEs of the required form (1), with the key properties of a stiff linear part and a forcing term that does not vary rapidly. For the differential equation (44), the methods under consideration are ETD2, ETD2RK, IFAB2, IFRK2, AB2AM2, and AB2BD2. Note that if the forcing term \(F\) in (1) is a constant, then the ETD methods are exact, which makes comparisons rather unfair! When \(c<0\) and \(|c|\gg 1\), the behavior of (44) can be split into a fast phase, during which the solution rapidly approaches the slow manifold on which \(u\sim-(\sin t)/c-(\cos t)/c^{2}+\cdots\), and then a phase in which the solution moves along this slow manifold.

Figure 2: Stability regions (interior of closed curves) in the complex \(x\) plane with \(y=-20\). The four methods are AB2AM2 (dashed), ETD2 (solid), AB2BD2 (dash-dot), and ETD2RK (dotted).

It is necessary to seek numerical methods that capture both of these phases accurately, and work well when the time step \(h\) is of the order of \(1/c\). In evaluating the different numerical schemes, a useful feature of (44) is that the recurrence relations resulting from the various numerical methods can be solved exactly (see Section 4.1.1 below). 
However, a disadvantage of this equation from the point of view of distinguishing between the various numerical schemes is that initial conditions tend to get lost as the solution becomes phase-locked to the forcing term. So poor numerical schemes are "helped along" by the forcing, and can recover from a bad start. Below, in Sections 4.2 and 4.5, we treat systems with self-sustained oscillations, which do not have this drawback; these later problems provide a more demanding test of the various numerical schemes. #### 4.1.1 Analytical Solution to the Numerical Schemes Used to Solve (44) In evaluating the various numerical schemes, it is helpful to write down and solve the recurrence relations corresponding to each scheme. In solving exactly the various numerical schemes, it is useful to adopt a formulation in complex variables. The numerical schemes ETD2, ETD2RK, IFAB2, IFRK2, and AB2AM2 for (44) then all give rise to recurrence relations of the form \[w_{n+1}=\alpha w_{n}+i\beta\gamma^{n}, \tag{46}\] where \(u_{n}=\text{Re}(w_{n})\). Table I gives the values of \(\alpha\), \(\beta\), and \(\gamma\) for each numerical scheme. The solution to (46), subject to the initial value \(w_{0}=u_{0}\), is \[w_{n}=\alpha^{n}w_{0}+\frac{i\beta(\gamma^{n}-\alpha^{n})}{\gamma-\alpha}. \tag{47}\] It is possible to use this formula to evaluate the accuracy of the various numerical schemes (apart from AB2BD2). To do so, we note that the exact solution (45) may be written as \(u(t)=\text{Re}(w(t))\), where \[w(t)=e^{ct}w_{0}-\frac{i(e^{-it}-e^{ct})}{i+c}. 
\tag{48}\]

\begin{table} \begin{tabular}{l c c c} \hline & \(\alpha\) & \(\beta\) & \(\gamma\) \\ \hline ETD2 & \(e^{ch}\) & \(\frac{(1+ch)e^{ch}-(1+2ch)}{hc^{2}}+\frac{-e^{ch}+(1+ch)}{hc^{2}}e^{ih}\) & \(e^{-ih}\) \\ ETD2RK & \(e^{ch}\) & \(\frac{(ch-1)e^{ch}+1}{hc^{2}}+\frac{e^{ch}-(1+ch)}{hc^{2}}e^{-ih}\) & \(e^{-ih}\) \\ IFAB2 & \(e^{ch}\) & \(\frac{1}{2}he^{ch}(3-e^{ch}e^{ih})\) & \(e^{-ih}\) \\ IFRK2 & \(e^{ch}\) & \(\frac{1}{2}h(e^{ch}+e^{-ih})\) & \(e^{-ih}\) \\ AB2AM2 & \(\frac{2+ch}{2-ch}\) & \(\frac{h(3-e^{ih})}{2-ch}\) & \(e^{-ih}\) \\ \hline \end{tabular} \end{table} Table I: The Values of \(\alpha\), \(\beta\), and \(\gamma\) in the Recurrence Relation (46) Corresponding to Various Numerical Schemes

For AB2BD2, the recurrence relation (35) may be written as \[w_{n+1}=\frac{4w_{n}-w_{n-1}}{3-2ch}+i\beta\gamma^{n}, \tag{49}\] where as above \(u_{n}=\mbox{Re}(w_{n})\), the parameters in this equation being \(\beta=2h(2-e^{ih})/(3-2ch)\) and \(\gamma=e^{-ih}\). The solution to (49) is then \[w_{n}=X\xi^{n}+Y\eta^{n}+Z\gamma^{n}, \tag{50}\] where \[\xi=\frac{2+\sqrt{1+2ch}}{3-2ch}\quad\mbox{and}\quad\eta=\frac{2-\sqrt{1+2ch}} {3-2ch}. \tag{51}\] The remaining coefficients in (50) satisfy \[(\gamma-\xi)(\gamma-\eta)Z=i\beta\gamma \tag{52}\] with \[X = (\eta(w_{0}-Z)+\gamma Z-w_{1})/(\eta-\xi) \tag{53}\] \[Y = (\xi(Z-w_{0})-\gamma Z+w_{1})/(\eta-\xi). \tag{54}\] Note that for AB2BD2, two starting values are needed. In situations where an exact solution is known, this may be used to generate the second starting value; otherwise a Runge-Kutta scheme, for instance, may be used at start-up. Figure 3 shows the magnitude of the relative error \((u_{\rm num}-u_{\rm exact})/u_{\rm exact}\) in the numerical solution to (44), with \(u_{0}=1\) (off the slow manifold), at \(t=\pi/2\) for the six methods above. 
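The closed-form solution (47) can be verified directly by iterating (46); in the sketch below (our own check) the parameter values are illustrative only and correspond to no particular scheme.

```python
import numpy as np

alpha, beta, gamma, w0 = 0.9, 0.3, np.exp(-0.05j), 1.0 + 0.0j

# Iterate the recurrence (46): w_{n+1} = alpha*w_n + i*beta*gamma^n,
# starting from w_0, for 20 steps.
w = w0
for n in range(20):
    w = alpha*w + 1j*beta*gamma**n

# Closed form (47) evaluated at n = 20.
n = 20
closed = alpha**n*w0 + 1j*beta*(gamma**n - alpha**n)/(gamma - alpha)
```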
Figure 3: Magnitude of the relative error at \(t=\pi/2\) in (44) with \(u_{0}=1\) for six methods.

It is a straightforward matter to calculate, using the results above, this relative error in the limit as \(h\to 0\). In this limit there is a clear ranking of the various schemes--the results are shown in Table II. Note that for each method the relative error is smaller by a factor of \(c\) than that expected from the truncation error per step; this is because the errors introduced are exponentially damped as the calculation proceeds. The most striking feature of Fig. 3 and Table II is the very poor performance of the integrating factor methods IFAB2 and IFRK2; the error in these methods is a factor of approximately \(10^{3}\)-\(10^{4}\) greater than that of the other methods. This is because the truncation error per step in these methods is of order \(c^{2}h^{3}\), while for the other methods it is of order \(h^{3}\ll c^{2}h^{3}\). Other authors have pointed out the weakness of integrating factor methods [3, 6, 7]. The relative error for small \(h\) is shown in Fig. 4 for different values of \(c\): The poor performance of the integrating factor methods is again illustrated in this figure.

Figure 4: In the limit as \(h\to 0\), the relative error at \(t=\pi/2\) in the numerical solution to (44) is given by \(kh^{2}\). The constant \(|k|\) is plotted as a function of \(c\) for various numerical schemes. Initial conditions on (\(u_{0}=0\)) and off (\(u_{0}=1\)) the slow manifold give the first and second figures, respectively. When \(-c\gg 1\), the value of \(k\) for the various numerical schemes is, in both cases: IFAB2 (\(-5c^{2}/12\)); IFRK2 (\(c^{2}/12\)); AB2BD2 (1); AB2AM2 (\(1/2\)); ETD2 (\(5/12\)); ETD2RK (\(-1/12\)).

The LI methods are almost as accurate as the ETD methods. 
However, this is due to fortuitous damping of the errors that arise in the initial fast phase; if the results are compared at an earlier time, the ETD methods show considerably greater accuracy than the LI methods. The best of the methods considered, for all values of \(h\), is ETD2RK. The error is smaller than that of ETD2 by a factor of 5, which is consistent with the truncation errors for the two methods. However, it should be remembered that this method requires twice as much CPU time, so when this is taken into account the accuracy is improved only by a factor of 5/4. The advantages of ETD methods become greater if we consider methods of fourth order. Because of the second Dahlquist stability barrier, LI methods are not A-stable if their order is greater than two (see, for example, [5] or [10]). The stability region for the fourth-order Adams-Moulton method AM4 does not include the entire negative real axis, so this method is not suitable for (44) except for very small \(h\). For the fourth-order backward difference method BD4, the stability region does include the negative real axis, so a fourth-order linearly implicit method AB4BD4 can be constructed for (1): \[{\bf AB4BD4}\quad u_{n+1}=(48u_{n}-36u_{n-1}+16u_{n-2}-3u_{n-3}+48hF_{n} \tag{55}\] \[-72hF_{n-1}+48hF_{n-2}-12hF_{n-3})/(25-12hc).\] A comparison of four fourth-order methods is shown in Fig. 5. The methods considered are ETD4, obtained from (19), ETD4RK (29), the integrating factor method IFAB4 using the standard fourth-order Adams-Bashforth formula, and AB4BD4. The integrating factor method has errors larger than those of the other methods by a factor of \(c^{4}\). The errors of AB4BD4 are about twice those of ETD4.

Figure 5: Magnitude of the relative error at \(t=\pi/2\) in (44) with \(u_{0}=1\) for four fourth-order methods. Accuracy is ultimately limited by machine precision (here double-precision arithmetic is used).

A further advantage of ETD4 over AB4BD4 is that it requires 
half the storage (a significant factor for PDE applications) since it uses only previous values of \(F\), whereas AB4BD4 requires previous values of both \(F\) and \(u\). But the most accurate method is ETD4RK, with an error smaller than that of ETD4 by a factor of almost 400. ### A Model Ordinary Differential Equation with Rapid Oscillation Stiffness may also be the result of rapid oscillations generated by the linear terms. As an illustrative example we consider the initial-value problem \[\dot{u}=icu+e^{it},\quad u(0)=u_{0}, \tag{56}\] which has the exact solution \[u(t)=u_{0}e^{ict}+\frac{e^{it}-e^{ict}}{i(1-c)}, \tag{57}\] where our interest is in the stiff case for which the real parameter \(c\) satisfies \(c\gg 1\). Our analysis of the various numerical schemes follows that given above in Section 4.1. By solving exactly the difference equations that correspond to the various numerical schemes identified above, we are able to calculate the absolute and relative errors in advancing the solution to time \(t=T\) using the time step \(h\). For IFAB2, IFRK2, ETD2, and ETD2RK, the results for large \(c\) are shown in Table III, for initial conditions on and off the slow manifold (this manifold corresponds to \(u_{0}=-i/(1-c)\), so that the rapid oscillations in (57) are removed). The absolute error is then \(k_{1}h^{2}(e^{iT}-e^{iTc})\), where \(k_{1}\) is given in the table. The relative error for initial conditions on the slow manifold is \(k_{2}h^{2}(e^{iT}-e^{iTc})e^{-iT}\) and off the slow manifold is \(k_{3}h^{2}(e^{iT}-e^{iTc})e^{-iTc}\); again \(k_{2}\) and \(k_{3}\) are given in Table III. The corresponding large-\(c\) results for AB2AM2 and AB2BD2 are shown in Table IV. The absolute error and relative error are denoted by \(\epsilon_{a}h^{2}\) and \(\epsilon_{r}h^{2}\), respectively, with superscripts "on" or "off" indicating whether the initial condition lies on or off the slow manifold. 
For initial conditions on the slow manifold, the results are very similar to those for (44): The ETD methods are slightly more accurate than the LI methods, which are more accurate than the IF methods by a factor of \(c^{2}\). However, for initial conditions off the slow manifold, the ETD methods are more accurate by a factor of \(c^{2}\) than IF methods, which in turn are more accurate than LI methods by a factor \(c^{2}\) (this strongly situation-dependent performance of LI and IF methods is discussed by Boyd [3], p. 269).

\begin{table} \begin{tabular}{c c c c c} \hline & IFAB2 & IFRK2 & ETD2 & ETD2RK \\ \hline \(k_{1}\) & \(\frac{5}{12}ic\) & \(-\frac{1}{12}ic\) & \(5i/(12c)\) & \(-i/(12c)\) \\ \(k_{2}\) & \(\frac{5}{12}c^{2}\) & \(-\frac{1}{12}c^{2}\) & \(\frac{5}{12}\) & \(-\frac{1}{12}\) \\ \(k_{3}\) & \(5ic/(12u_{0})\) & \(-ic/(12u_{0})\) & \(5i/(12u_{0}c)\) & \(-i/(12u_{0}c)\) \\ \hline \end{tabular} _Note_. The absolute error is proportional to \(k_{1}\). The relative errors on and off the slow manifold are proportional to \(k_{2}\) and \(k_{3}\), respectively. \end{table} Table III: **Errors in the Numerical Solution to (56) for Large \(c\)**

The relative errors for (56) with \(c=100\) and the initial condition \(u_{0}=1\) (off the slow manifold) are shown in Fig. 6. Note that the ETD methods are the most accurate, by a factor of order \(c^{2}=10^{4}\), for a very wide range of \(h\).

Figure 6: Magnitude of the relative error at \(t=\pi/2\) in (56) with \(u_{0}=1\) for six methods.

### A Dissipative Partial Differential Equation

In this section we apply exponential time differencing methods to the (dissipative) PDE \[\frac{\partial u}{\partial t}=-2\frac{\partial^{2}u}{\partial x^{2}}-\frac{ \partial^{4}u}{\partial x^{4}}-u\,\frac{\partial u}{\partial x}, \tag{58}\] which is the well-known Kuramoto-Sivashinsky equation [12]. The boundary conditions are periodic, with spatial period \(2\pi\); the initial condition chosen is \(u(x,\,0)=0.03\sin x\). The 
fourth-derivative term makes the linear part of (58) extremely stiff, with rapid linear decay of the high-wavenumber modes, so that standard explicit methods are impractical. Indeed, the motivation for our interest in ETD methods came from PDEs similar to (58) but with six derivatives in the linear term [14, 15]. The PDE (58) was solved with a pseudospectral method using 32 grid points without dealiasing, and using double precision arithmetic for \(0\leq t\leq 6\). The time stepping was carried out in spectral space, so that the linear parts of the evolution equations for each Fourier mode are uncoupled and the ETD, IF, and LI methods can be straightforwardly applied. The relative error in \(\int u^{2}\,\mathrm{d}x\) at \(t=6\) is plotted in Fig. 7 for four second-order methods. The error was measured by comparison with the "true" solution determined numerically using a fourth-order method with a very small time step. In applying the ETD methods to (58), we have used the limiting form of the coefficients for the mode with zero wavenumber, in order to avoid division by zero. However, we did not find it necessary to replace the explicit formulae for the coefficients by their Taylor series for the modes with \(|ch|\ll 1\). The results in Fig. 7 are qualitatively similar to those for (44), showing that (44) is a good model problem for dissipative PDEs. The most accurate method is ETD2, with errors lower than those of AB2AM2 by a factor of 1.7 for a wide range of \(h\). The least accurate method is the standard integrating factor method IFRK2. The time-dependence of the relative error is shown in Fig. 8, for the same four methods, with a fixed time step of \(h=0.01\). In the initial phase of exponential growth, both of the LI methods perform poorly, compared with the IFRK2 and ETD2 methods. In the later stage of nonlinear equilibration, IFRK2 is the least accurate, while the other three methods have exponentially decaying errors. 
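The numerical setup just described (32-point pseudospectral discretization of (58) with time stepping in spectral space) can be sketched as follows. This is our own minimal ETD2 illustration, not the authors' code: the zero-wavenumber coefficients are replaced by their \(ch\to 0\) limits, and the first step is bootstrapped with ETD1:

```python
import numpy as np

# ETD2 time stepping for the Kuramoto-Sivashinsky equation (58),
# pseudospectral in x on [0, 2*pi) with 32 grid points, no dealiasing.
N, h, T = 32, 0.01, 6.0
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)       # integer wavenumbers
c = 2 * k**2 - k**4                    # linear symbol of -2*d_xx - d_xxxx

E = np.exp(c * h)
small = np.abs(c * h) < 1e-8           # the k = 0 mode, where the formulae are 0/0
cs = np.where(small, 1.0, c)
f1 = np.where(small, 1.5 * h, ((1 + cs * h) * E - 1 - 2 * cs * h) / (cs**2 * h))
f2 = np.where(small, -0.5 * h, (-E + 1 + cs * h) / (cs**2 * h))
M1 = np.where(small, h, (E - 1) / cs)  # ETD1 coefficient, used to start the scheme

def nonlinear(v):
    u = np.real(np.fft.ifft(v))
    return -0.5j * k * np.fft.fft(u * u)   # -u*u_x = -(u^2/2)_x in spectral space

v = np.fft.fft(0.03 * np.sin(x))
F_old = nonlinear(v)
v = E * v + M1 * F_old                     # one ETD1 step to prime the two-step ETD2
for _ in range(int(round(T / h)) - 1):
    F_new = nonlinear(v)
    v = E * v + f1 * F_new + f2 * F_old
    F_old = F_new

u = np.real(np.fft.ifft(v))
energy = np.sum(u**2) * (2 * np.pi / N)    # quadrature approximation to int u^2 dx
```

By \(t=6\) the solution has passed through the linear-growth phase and equilibrated nonlinearly, so the energy \(\int u^{2}\,\mathrm{d}x\) is far above its initial value.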
At all times the ETD2 method is the most accurate.

Figure 7: Magnitude of the relative error at \(t=6\) for the Kuramoto–Sivashinsky equation (58) for four methods.

### A Dispersive Partial Differential Equation

We now turn to the KdV equation \[\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}+\frac{\partial^{3}u} {\partial x^{3}}=0, \tag{59}\] which is becoming a standard test for spectral solvers [6, 16, 21]. The stiffness results from the term \(u_{xxx}\) and manifests itself in rapid linear oscillation of the high-wavenumber modes. The computations are spatially \(2\pi\)-periodic and follow a soliton solution, \(u=f(x-ct)\), where \(f(x)=3c\;\mathrm{sech}^{2}(c^{1/2}x/2)\), for one period, i.e., up to \(t=2\pi/c\), with \(c=625\). The method uses 256 grid points and is de-aliased using the usual \(2/3\) rule, since in (59), unlike (58), there are no dissipation terms to remove the aliasing errors. The relative error plotted in Fig. 9 is \((\Sigma(u_{j}-f_{j})^{2}/\Sigma f_{j}^{2})^{1/2}\), i.e., the scaled 2-norm of the error. The results for the second-order methods are qualitatively similar to those for the previous example, with the ETD2 method giving the most accurate results. To avoid the rounding errors discussed in Section 4.3, the coefficients in (6) were computed using the four-term Taylor series for those Fourier modes with \(|ch|<10^{-4}\). If higher-order methods are required for a dispersive problem such as (59), the backward difference methods and Adams-Moulton methods cannot be used because their stability regions do not include the entire imaginary axis. This problem can be overcome by more elaborate methods such as using AM2 for the linear terms of high wavenumber, and AB4 for the nonlinear terms and the linear terms of low wavenumber [6]. If fourth-order methods are to be used for all modes, the only possibilities are IF or ETD methods.
Figure 8: Magnitude of the relative error as a function of \(t\) for the Kuramoto–Sivashinsky equation (58), with \(h=0.01\).

In view of the initialization problem referred to in Section 2.2, RK methods are most suitable for this problem, so results for ETD4RK (29) and IFRK4 (the integrating factor method combined with the standard fourth-order RK scheme) are shown in Fig. 9. The ETD method shows a tenfold improvement over the IF method.

### Nondiagonal Systems

Exponential time differencing schemes have hitherto been applied only to systems whose linear parts are diagonal (such as arise from solving partial differential equations using a Fourier spectral method). In this case, the expressions for time stepping the amplitude of each mode can be determined independently. However, ETD methods can also readily be generalized to nondiagonal systems (such as arise from solving PDEs using finite differences or Chebyshev methods, for instance). The details of the generalization depend crucially on whether the linear operator has any zero eigenvalues, as demonstrated below in Sections 4.5.1 and 4.5.3. Since ETD methods require the calculation of the matrix exponential \(e^{Lh}\), and since the calculation of this matrix is nontrivial when \(L\) is nondiagonal, the question arises as to whether ETD methods are suitable in the case of nondiagonal \(L\). However, the matrix exponential need be calculated only once, at the start of the integration, and so the computational overhead involved is not large. We begin our discussion of nondiagonal systems with a model system to illustrate the concepts; in particular we discuss the additional difficulties associated with zero eigenvalues. We then apply ETD to a nondiagonal problem arising in fluid mechanics.
#### 4.5.1 Nondiagonal System with No Zero Eigenvalues To illustrate how ETD can be generalized to a system whose linear part is nondiagonal with no zero eigenvalues, we consider the following autonomous system of two ordinary Figure 9: Error after one soliton period for the KdV equation (59) for six methods. differential equations \[\dot{u} = -v(1-\lambda r^{2})+cu(1-r^{2}), \tag{60}\] \[\dot{v} = u(1-\lambda r^{2})+cv(1-r^{2}), \tag{61}\] where \(r^{2}=u^{2}+v^{2}\) and \(c>0\). The behavior of this system is more readily observed by using the amplitude \(r\) and phase \(\theta\), such that \(u=r\cos\theta\) and \(v=r\sin\theta\). Then \[\dot{r} = cr\,(1-r^{2}), \tag{62}\] \[\dot{\theta} = 1-\lambda r^{2}, \tag{63}\] so that (unless \(r\) is initially zero) \(r\,\rightarrow\,1\) and \(\dot{\theta}\,\rightarrow\,1-\lambda\) at large time. The exact solution to (62) and (63), i.e., \[r^{2}(t) = \frac{r_{0}^{2}}{r_{0}^{2}(1-e^{-2ct})+e^{-2ct}}, \tag{64}\] \[\theta(t) = \theta_{0}+(1-\lambda)t\,-\,\frac{\lambda}{2c}\,\log\bigl{(}r_{0 }^{2}(1-e^{-2ct})+e^{-2ct}\bigr{)}, \tag{65}\] where \(r(0)=r_{0}\) and \(\theta(0)=\theta_{0}\), is useful in calculating the amplitude and phase errors in the numerical solutions. To implement the nondiagonal ETD methods, we first write (60) and (61) in the vector form \[\dot{\boldsymbol{u}}=L\boldsymbol{u}+\boldsymbol{F}, \tag{66}\] where \[\boldsymbol{u}=\begin{pmatrix}u\\ v\end{pmatrix},\qquad L=\begin{pmatrix}c&-1\\ 1&c\end{pmatrix},\qquad\boldsymbol{F}=\begin{pmatrix}(\lambda v-cu)r^{2}\\ -(\lambda u+cv)r^{2}\end{pmatrix}. \tag{67}\] Note that the matrix \(L\) is nondiagonal. Although in this case we could diagonalize \(L\) by a suitable change of variables, in applications this may not be a desirable procedure and it may be more convenient to solve the problem in the original variables, for which the linear operator is nondiagonal. 
By analogy with (3), we then multiply (66) by \(e^{-Lt}\) and integrate from \(t=t_{n}\) to \(t_{n+1}\) to give the exact result \[\boldsymbol{u}_{n+1}=e^{Lh}\boldsymbol{u}_{n}+e^{Lh}\int_{0}^{h}e^{-L\tau} \boldsymbol{F}(t_{n}+\tau)\,\mathrm{d}\tau. \tag{68}\] Then for ETD1 we approximate the final term in (68) as \[e^{Lh}\int_{0}^{h}e^{-L\tau}\boldsymbol{F}(t_{n}+\tau)\,\mathrm{d}\tau=e^{Lh} \int_{0}^{h}e^{-L\tau}\boldsymbol{F}_{n}\,\mathrm{d}\tau+O(h^{2})=M_{1} \boldsymbol{F}_{n}+O(h^{2}), \tag{69}\] where \[M_{1}=L^{-1}(e^{Lh}-I), \tag{70}\]and where \(I\) is the \(2\times 2\) identity matrix. For ETD2 we adopt the approximation \[\int_{0}^{h}e^{-L\tau}\mathbf{F}(t_{n}+\tau)\,\mathrm{d}\tau=\int_{0}^{h}e^{-L\tau}( \mathbf{F}_{n}+(\mathbf{F}_{n}-\mathbf{F}_{n-1})\tau/h)\,\mathrm{d}\tau+O(h^{3}) \tag{71}\] and note that \[e^{Lh}\int_{0}^{h}\tau e^{-L\tau}\,\mathrm{d}\tau=M_{2}\equiv L^{-2}[e^{Lh}-(I+ Lh)]. \tag{72}\] Thus, \[e^{Lh}\int_{0}^{h}e^{-L\tau}\mathbf{F}(t_{n}+\tau)\,\mathrm{d}\tau=M_{1}\mathbf{F}_{n}+ M_{2}(\mathbf{F}_{n}-\mathbf{F}_{n-1})/h+O(h^{3}). \tag{73}\] In this particular example, all the necessary matrices can be computed exactly. The useful matrices are \[e^{Lh} =\begin{pmatrix}e^{ch}\cos h&-e^{ch}\sin h\\ e^{ch}\sin h&e^{ch}\cos h\end{pmatrix}, \tag{74}\] \[M_{1} =\frac{1}{1+c^{2}}\begin{pmatrix}-c+e^{ch}(c\cos h+\sin h)&-1+e^ {ch}(\cos h-c\sin h)\\ 1-e^{ch}(\cos h-c\sin h)&-c+e^{ch}(c\cos h+\sin h)\end{pmatrix},\] (75) \[M_{2} =\begin{pmatrix}\mu_{d}&\mu_{o}\\ -\mu_{o}&\mu_{d}\end{pmatrix}, \tag{76}\] where \[\mu_{d} =\frac{1-c^{2}-hc-hc^{3}+(2c\sin h-(1-c^{2})\cos h)e^{hc}}{(c^{2}+ 1)^{2}}, \tag{77}\] \[\mu_{o} =\frac{-2c-h-hc^{2}+(2c\cos h+(1-c^{2})\sin h)e^{hc}}{(c^{2}+1)^ {2}}. \tag{78}\] The scheme ETD1 is thus \[\mathbf{u}_{n+1}=e^{Lh}\mathbf{u}_{n}+M_{1}\mathbf{F}_{n} \tag{79}\] and ETD2 is \[\mathbf{u}_{n+1}=e^{Lh}\mathbf{u}_{n}+M_{1}\mathbf{F}_{n}+M_{2}(\mathbf{F}_{n}-\mathbf{F}_{n-1})/h. 
\tag{80}\] The scheme ETD2 may be extended to give a Runge-Kutta scheme ETD2RK by first calculating \[\mathbf{a}_{n}=e^{Lh}\mathbf{u}_{n}+M_{1}\mathbf{F}_{n} \tag{81}\] and then taking the time step using \[\mathbf{u}_{n+1}=\mathbf{a}_{n}+M_{2}(\mathbf{F}(\mathbf{a}_{n},t_{n}+h)-\mathbf{F}_{n})/h. \tag{82}\]In order to evaluate the schemes ETD1, ETD2, and ETD2RK, we also introduce the more successful competitors from above: AB2AM2 and IFRK2. The AB2AM2 scheme is derived from the formula \[\boldsymbol{u}_{n+1}-\boldsymbol{u}_{n}=\frac{1}{2}hL(\boldsymbol{u}_{n+1}+ \boldsymbol{u}_{n})+\frac{3}{2}h\boldsymbol{F}_{n}-\frac{1}{2}h\boldsymbol{F}_ {n-1}, \tag{83}\] from which it follows that \[\boldsymbol{u}_{n+1}=\bigg{(}I-\frac{1}{2}hL\bigg{)}^{-1}\bigg{(}I+\frac{1}{2} hL\bigg{)}\boldsymbol{u}_{n}+\frac{1}{2}h\bigg{(}I-\frac{1}{2}hL\bigg{)}^{-1}(3 \boldsymbol{F}_{n}-\boldsymbol{F}_{n-1}). \tag{84}\] For the standard Runge-Kutta integrating factor method IFRK2 we first calculate \[\boldsymbol{b}=e^{Lh}(\boldsymbol{u}_{n}+h\boldsymbol{F}_{n}) \tag{85}\] and then take the time step with \[\boldsymbol{u}_{n+1}=e^{Lh}\bigg{(}\boldsymbol{u}_{n}+\frac{1}{2}h\boldsymbol{ F}_{n}\bigg{)}+\frac{1}{2}h\boldsymbol{F}(\boldsymbol{b},t_{n}+h). \tag{86}\] #### 4.5.2 Evaluation of Numerical Schemes In this section we describe numerical solutions to (66) using ETD1, ETD2, ETD2RK, IFRK2, and AB2AM2. For the computations we take the initial condition \[u(0)=2,\quad v(0)=1 \tag{87}\] and the parameter values \(c=100\), \(\lambda=\frac{1}{2}\). We evaluate the various schemes by comparing their predicted amplitude and phase with the exact values of these quantities at time \(t=1\), although other parameter values and end-times give similar results. The results are summarized in Fig. 10, where it is seen that ETD2, ETD2RK, and AB2AM2 are the best schemes. 
The integrating factor scheme IFRK2 is particularly poor in its calculation of the amplitude (being out-performed by even the first-order scheme ETD1 in the range of time steps considered). While ETD2RK is the best at capturing the amplitude of the evolving solution, it is rather poorer at calculating the phase. An error analysis reveals that there is no simple factor-of-5 difference between the schemes ETD2 and ETD2RK in this system.

#### 4.5.3 Nondiagonal System with Zero Eigenvalues

When a nondiagonal linearized system has one or more zero eigenvalues, the methods ETD1, ETD2, and ETD2RK cannot be used as presented above, because the matrix \(L\) has no inverse. However, they may be readily generalized as follows. When \(L\) has one or more zero eigenvalues, we define a pseudo-inverse of \(L\) to be \(L^{\dagger}=V^{-1}\Lambda^{\dagger}(U^{T})^{-1}\), where \(L=U^{T}\Lambda V\) is the singular value decomposition of \(L\). Here \(\Lambda^{\dagger}\) is the diagonal matrix obtained from \(\Lambda\) by taking the reciprocal of all the nonzero diagonal elements (and leaving all the zero diagonal elements as they stand). The expression \(e^{Lh}\int_{0}^{h}e^{-L\tau}\,\mathrm{d}\tau\) is then given by \[M_{1}\equiv L^{\dagger}(e^{Lh}-I)+he^{Lh}(I-L^{\dagger}L), \tag{88}\] while \(e^{Lh}\int_{0}^{h}\tau e^{-L\tau}\,\,{\rm d}\tau\) is given by \[M_{2}\equiv L^{\,\dagger 2}[e^{Lh}-(I+Lh)]+\frac{1}{2}h^{2}e^{Lh}(I-L^{\,\dagger}L). \tag{89}\] The numerical schemes ETD1, ETD2, and ETD2RK are then given by (79), (80), and (82), with \(M_{1}\) and \(M_{2}\) as in (88) and (89), respectively.
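To make the matrix formulation concrete, the following sketch (our own illustration, not the authors' code) applies ETD2RK, Eqs. (81)-(82), to the system (66)-(67), computing \(e^{Lh}\) from the closed form (74) and \(M_{1}\), \(M_{2}\) from (70) and (72), and compares the result with the exact amplitude and phase (64)-(65):

```python
import numpy as np

c, lam = 100.0, 0.5
h, T = 2e-4, 1.0
L = np.array([[c, -1.0], [1.0, c]])
I = np.eye(2)

def F(w):
    # Nonlinear term from Eq. (67)
    u, v = w
    r2 = u * u + v * v
    return np.array([(lam * v - c * u) * r2, -(lam * u + c * v) * r2])

# e^{Lh} from the closed form (74); M1, M2 from (70) and (72)
E = np.exp(c * h) * np.array([[np.cos(h), -np.sin(h)], [np.sin(h), np.cos(h)]])
Linv = np.linalg.inv(L)
M1 = Linv @ (E - I)
M2 = Linv @ Linv @ (E - I - L * h)

w = np.array([2.0, 1.0])                       # initial condition (87)
for _ in range(int(round(T / h))):
    a = E @ w + M1 @ F(w)                      # Eq. (81)
    w = a + M2 @ (F(a) - F(w)) / h             # Eq. (82)

r, theta = np.hypot(w[0], w[1]), np.arctan2(w[1], w[0])
# Exact amplitude and phase at t = T from (64)-(65), with r0^2 = 5
denom = 5.0 * (1 - np.exp(-2 * c * T)) + np.exp(-2 * c * T)
r_exact = np.sqrt(5.0 / denom)
theta_exact = np.arctan2(1.0, 2.0) + (1 - lam) * T - lam / (2 * c) * np.log(denom)
```

The fast initial relaxation of \(r\) toward 1 is stiff, so the small step size here is chosen conservatively; the matrix coefficients are computed once, before the loop.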
Figure 10: The magnitude of the relative error at \(t=1\) in the numerical solution to (66), with \(u_{0}=2\) and \(v_{0}=1\), and with \(c=100\) and \(\lambda=\frac{1}{2}\). The first figure shows the amplitude error (i.e., the error in \(r\)), and the second the corresponding phase error (in \(\theta\)).

#### 4.5.4 Application to a Partial Differential Equation

A good example of a physical partial differential equation that gives rise to a nondiagonal matrix \(L\) is the Berman equation \[f_{yyt}=R^{-1}f_{yyyy}+f_{y}f_{yy}-ff_{yyy}, \tag{90}\] subject to the boundary conditions \[f(-1,t)=f_{y}(-1,t)=f_{y}(1,t)=0,\quad f(1,t)=1\quad(t\geq 0) \tag{91}\] and the initial condition \(f(y,0)=f_{0}(y)\). This initial boundary-value problem arises in calculating the flow of a viscous fluid in the channel \(-1\leq y\leq 1\), driven by uniform withdrawal of fluid through the porous walls of the channel [11]; the Reynolds number \(R\) is a dimensionless measure of the withdrawal speed. The solution to (90) and (91), after transients have decayed, can exhibit self-sustained oscillations with intricate spatial and temporal behavior and a rich analytical structure. Our numerical scheme to solve (90) and (91) proceeds by first rendering the boundary conditions homogeneous by writing \(f(y,t)=p(y)+g(y,t)\), where \(p(y)=-(y-2)(y+1)^{2}/4\) satisfies \(p(-1)=p^{\prime}(-1)=p^{\prime}(1)=p(1)-1=0\) and \(p^{\prime\prime\prime\prime}(y)=0\). We then solve the resulting forced PDE for \(g\) by Chebyshev collocation (although clearly a variety of other methods could be used).
With this method, the vector \(\boldsymbol{g}=(g_{1},\ldots,g_{N-1})^{t}\) of values of \(g\) at the interior collocation points satisfies an evolution equation of the form \[\dot{\boldsymbol{g}}=L\boldsymbol{g}+\boldsymbol{n}, \tag{92}\] where \(L\) is a nondiagonal matrix, readily found in terms of Chebyshev differentiation matrices, and \(\boldsymbol{n}\) represents terms generated by \((p^{\prime}+g_{y})(p^{\prime\prime}+g_{yy})-(p+g)(p^{\prime\prime\prime}+g_{yyy})\). The matrix \(L\) is full, which makes computation of \(e^{Lh}\), required for the implementation of ETD, time consuming. However, \(e^{Lh}\) need be computed only _once_, prior to the time-stepping loop, and in our numerical simulations of (90) we have found the computational expense of calculating \(e^{Lh}\) to be trivial compared with that of generating the time evolution of the solution.

## 5 Conclusions

We have developed and tested a class of numerical methods for systems with stiff linear parts, based on combining exponential time differencing for the linear terms with a method similar to Adams-Bashforth for the nonlinear terms. The ETD method is straightforward to apply and can be extended to arbitrary order (cf. [1]). As the stiffness parameter tends to zero, the ETD method approaches the Adams-Bashforth method of the same order; as the stiffness parameter tends to infinity, the nonlinear Galerkin method [13] is recovered. In addition to these multistep ETD methods, we have derived new Runge-Kutta forms of the ETD method, of second, third, and fourth order. These are easier to use than the high-order multistep forms, since they do not require initialization, and are more accurate. These ETD methods have good stability properties and are widely applicable to dissipative PDEs and nonlinear wave equations. They are particularly well suited to Fourier spectral methods, which have a diagonal linear part.
We have carried out extensive tests of ETD methods, comparing them with linearly implicit and integrating factor methods. For all the examples tested, the ETD methods are more accurate than either LI or IF methods. For solutions which follow a slow manifold, the second-order ETD method is slightly more accurate than the LI methods, and has the advantage that it readily generalizes to higher order, whereas LI methods do not. But for solutions off the slow manifold, the results of Section 4.2 show that second-order ETD methods are more accurate than LI methods by the fourth power of the stiffness parameter \(c\gg 1\), and more accurate than integrating factor methods by a factor \(c^{2}\). Like integrating factor methods, the ETD methods solve the linear parts exactly. However, ETD methods avoid the major drawback of IF methods, which is the introduction of the fast time scale into the nonlinear terms, leading to large error constants. ## References * [1] G. Beylkin, J. M. Keiser, and L. Vozovoi, A new class of time discretization schemes for the solution of nonlinear PDEs, _J. Comput. Phys._**147**, 362 (1998). * [2] J. P. Boyd, Eight definitions of the slow manifold: Seiches, pseudoseiches and exponential smallness, _Dyn. Atmos. Oceans_**22**, 49 (1995). * [3] J. P. Boyd, _Chebyshev and Fourier Spectral Methods_ (Dover, New York, 2001). * [4] C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. A. Zang, _Spectral Methods in Fluid Dynamics_, Springer Series in Computational Physics (Springer-Verlag, Berlin, 1988). * [5] B. Fornberg, _A Practical Guide to Pseudospectral Methods_ (Cambridge Univ. Press, Cambridge, UK, 1995). * [6] B. Fornberg and T. A. Driscoll, A fast spectral algorithm for nonlinear wave equations with linear dispersion, _J. Comput. Phys._**155**, 456 (1999). * [7] B. Garcia-Archilla, Some practical experience with the time integration of dissipative equations, _J. Comput. Phys._**122**, 25 (1995). * [8] P. 
Henrici, _Discrete Variable Methods in Ordinary Differential Equations_ (Wiley, New York, 1962). * [9] R. Holland, Finite-difference time-domain (FDTD) analysis of magnetic diffusion, _IEEE Trans. Electromagn. Compat._**36**, 32 (1994). * [10] A. Iserles, _A First Course in the Numerical Analysis of Differential Equations_ (Cambridge Univ. Press, Cambridge, UK, 1996). * [11] J. R. King and S. M. Cox, Asymptotic analysis of the steady-state and time-dependent Berman problem, _J. Eng. Math._**39**, 87 (2001). * [12] Y. Kuramoto and T. Tsuzuki, Persistent propagation of concentration waves in dissipative media far from thermal equilibrium, _Prog. Theor. Phys._**55**, 356 (1976). * [13] M. Marion and R. Temam, Nonlinear Galerkin methods, _SIAM J. Numer. Anal._**26**, 1139 (1989). * [14] P. C. Matthews and S. M. Cox, Pattern formation with a conservation law, _Nonlinearity_**13**, 1293 (2000). * [15] P. C. Matthews and S. M. Cox, One-dimensional pattern formation with Galilean invariance near a stationary bifurcation, _Phys. Rev. E_**62**, R1473 (2000). * [16] P. A. Milewski and E. Tabak, A pseudospectral procedure for the solution of nonlinear wave equations with examples from free-surface flows, _SIAM J. Sci. Comp._**21**, 1102 (1999). * [17] P. G. Petropoulos, Analysis of exponential time-differencing for FDTD in lossy dielectrics, _IEEE Trans. Antennas Propagation_**45**, 1054 (1997). * [18] C. Schuster, A. Christ, and W. Fichtner, Review of FDTD time-stepping for efficient simulation of electric conductive media. _Microwave Optical Technol. Lett._**25**, 16 (2000). * [19] A. Taflove, _Computational Electrodynamics: The Finite-Difference Time-Domain Method_, Artech House Antenna Library (Artech House, London, 1995). * [20] L. N. Trefethen, Lax-stability vs. eigenvalue stability of spectral methods, in _Numerical Methods for Fluid Dynamics III_, edited by K. W. Morton and M. J. Baines (Clarendon Press, Oxford, 1988), pp. 237-253. * [21] L. N. 
Trefethen, _Spectral Methods in Matlab_ (Soc. for Industr. & Appl. Math., Philadelphia, 2000). * [22] J. Z. Zhu, L.-Q. Chen, J. Shen, and V. Tikare, Coarsening kinetics from a variable-mobility Cahn-Hilliard equation: Application of a semi-implicit Fourier spectral method, _Phys. Rev. E_**60**, 3564 (1999).

# Transition to chaos in high control parameter of Swift-Hohenberg equation

Hanif Nata Wijaya\({}^{1}\)

\({}^{1}\)Department of Physics, Faculty of Mathematics and Natural Sciences, Universitas Gadjah Mada, Bul

Apart from equation (1), the Swift-Hohenberg equation also has a complex form [3]: \[\partial_{t}u=\epsilon u-(1+\partial_{x}^{2})^{2}u-(1+ib)u^{3}, \tag{3}\] where \(b\) is a constant setting the imaginary part of the nonlinear coefficient, which plays a role in the dynamics that emerge. For the real version of the equation, there are two possibilities for the solution \(u\): it either decays or grows over time. Linear stability analysis was carried out to determine the behavior of the solution \(u\). First, note that the ground state solution is a steady state with no spatial structure, namely \(u_{b}=0\). We then apply a small perturbation \(u_{p}=u-u_{b}\) to check whether \(u\) decays or grows. Let \(u_{p}\) evolve based on the equation [2]: \[\partial_{t}u_{p}=\hat{N}(u_{b}+u_{p})-\hat{N}(u_{b}), \tag{4}\] where \(\hat{N}\) is an operator that is a function of \(u(x,t)\) and is defined from the right-hand side of equation (1): \[\hat{N}(u)=(\epsilon-1)u-2\partial_{x}^{2}u-\partial_{x}^{4}u-u^{3}.
\tag{5}\] Equation (4) becomes: \[\partial_{t}u_{p} = (\epsilon-1)(u_{b}+u_{p})-\partial_{x}^{4}(u_{b}+u_{p})-2 \partial_{x}^{2}(u_{b}+u_{p})-(u_{b}+u_{p})^{3} \tag{6}\] \[-\left[(\epsilon-1)u_{b}-\partial_{x}^{4}u_{b}-2\partial_{x}^{2}u_{b}-u_{b}^{3}\right].\] By linearizing equation (6) and using \(u_{b}=0\), the above equation becomes: \[\partial_{t}u_{p}=(\epsilon-1-2\partial_{x}^{2}-\partial_{x}^{4})u_{p}, \tag{7}\] which can be rewritten as: \[\partial_{t}u_{p}=\epsilon u_{p}-(1+\partial_{x}^{2})^{2}u_{p}. \tag{8}\] The above equation is analogous to the linear form of the Swift-Hohenberg equation (1), with a solution of exponential form in time and space: \[u_{p}=Ae^{(\sigma t+\alpha x)}, \tag{9}\] where \(A\) is a constant and \(\sigma\) is the growth rate. Equation (9) is substituted into equation (8) to obtain the growth rate as a function of the control parameter \(\epsilon\) and \(\alpha\): \[\sigma=\epsilon-(\alpha^{2}+1)^{2}. \tag{10}\] The constant \(\alpha\) is determined by assuming that \(u_{p}\) has periodic boundary conditions with period \(L\). Equation (9) will be periodic if: \[e^{\alpha x}=e^{\alpha(x+L)}. \tag{11}\] Equation (11) is fulfilled if \(e^{\alpha L}=1\), so \(\alpha=\frac{i2\pi m}{L}\), where \(m\) is an integer. This can be simplified by setting \(\alpha=iq\) with \(q=\frac{2\pi m}{L}\), which is the wave number. The relationship between growth rate \(\sigma\) and wave number \(q\) is: \[\sigma=\epsilon-(1-q^{2})^{2}. \tag{12}\] The growth rate \(\sigma\) is maximum when the wave number \(q\) is equal to the critical wave number \(q_{c}=1\). When the control parameter is negative, the growth rate will also be negative, indicating system stability. When the control parameter is zero, the growth rate is zero at \(q=q_{c}=1\), indicating marginal stability of the system. When the control parameter is positive, the growth rate is positive for a certain range of wave numbers, leading to system instability.
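The dispersion relation (12) is easy to explore numerically (a small illustrative sketch; the sample values of \(\epsilon\) are our own):

```python
import numpy as np

# Growth rate sigma(q) = eps - (1 - q^2)^2 from Eq. (12)
q = np.linspace(0.0, 2.0, 4001)

def sigma(eps):
    return eps - (1.0 - q**2) ** 2

s_neg, s_zero, s_pos = sigma(-0.2), sigma(0.0), sigma(0.2)
q_max = q[np.argmax(s_pos)]     # fastest-growing wavenumber, expected near q_c = 1
band = q[s_pos > 0]             # band of linearly unstable wavenumbers for eps = 0.2
```

For negative \(\epsilon\) every wavenumber decays, at \(\epsilon=0\) the growth rate just touches zero at \(q_{c}=1\), and for \(\epsilon=0.2\) a narrow band of wavenumbers around \(q_{c}\) grows.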
The plot of equation (12) for negative, zero, and positive \(\epsilon\) is shown in Figure 1. For \(\epsilon=0.2\), there exists a small range of wave numbers around the critical wave number that contribute a positive growth rate. An initial disturbance will grow, leading to instability. The existence of several unstable wave numbers will result in a solution in the form of a spatially periodic pattern (rolls) at the onset of the instability [4]. In general, the transition from an ordered state to a chaotic state in various experimental systems and mathematical models occurs through a scenario of intermittency or spatiotemporal intermittency [5, 6, 7]. The idea of the intermittency scenario was coined by Pomeau and Manneville (1980) when studying the Lorenz model [6]. They varied a control parameter \(r\) of the Lorenz model. When the control parameter is below the critical value \(r_{c}=166.06\), the dynamics show periodic (regular) behavior; slightly above \(r_{c}\), the regular behavior is interrupted by intermittent chaotic bursts. In the spatiotemporal intermittency (STI) scenario, the previously orderly or laminar state will become a chaotic state preceded by regular and chaotic states that appear simultaneously or coexist at certain control parameter values. On a time series graph, this is characterized by the presence of regular dynamics, such as periodic dynamics, which are then punctuated by bursts. This scenario towards chaos has been observed in the modified Swift-Hohenberg equation [7] \[\partial_{t}u=\epsilon u-(\partial_{xx}+1)^{2}\,u-u\partial_{x}u, \tag{13}\] that is equivalent to the damped Kuramoto-Sivashinsky equation [12, 13, 14] \[\partial_{t}u+\eta u+\partial_{xx}u+\partial_{xxxx}u+2u\partial_{x}u=0. \tag{14}\] The dynamics of the above equation are determined by two parameters, namely the system size \(D\) and the parameter \(\eta\). For a fixed value of \(D\), the control parameter is \(\eta\). For equation (14), an ordered state occurs at large \(\eta\).
As \(\eta\) is decreased, the system becomes chaotic. Apart from modifications to the nonlinear term, there is also research that adds a noise term, from which additional bifurcations can be obtained [9, 10]. Research on the original Swift-Hohenberg equation has mainly been carried out at low values of the control parameter. This work focuses on the influence of high control parameter values, and in particular on the transition to chaotic dynamics.

Figure 1. Dispersion relation of growth rate \(\sigma\) as a function of \(q\) obtained by linear stability analysis of the Swift-Hohenberg equation, which fulfills equation (12). The unstable region \(\sigma>0\) occurs when the control parameter is positive [2].

## 2. Implementation of the Exponential Time Differencing (ETD) Scheme in the Swift-Hohenberg equation

To obtain the solution \(u\), we first multiply the evolution equation by the integrating factor \(e^{-\pounds t}\) and then integrate between the limits \(t_{n}\) and \(t_{n+1}\), where the time step is \(h=t_{n+1}-t_{n}\). The product rule gives \[d_{t}e^{-\pounds t}u=e^{-\pounds t}\dot{u}-\pounds ue^{-\pounds t}, \tag{15}\] and since \(\dot{u}=\pounds u+H(u,t)\), integrating yields \[\int_{t_{n}}^{t_{n+1}}d\;e^{-\pounds t}u=\int_{t_{n}}^{t_{n+1}}e^{-\pounds t}H(u,t) \;dt. \tag{16}\] By evaluating the integral on the left side of the equation we get \[\left[ue^{-\pounds t}\right]_{t_{n}}^{t_{n+1}}=\int_{t_{n}}^{t_{n+1}}e^{-\pounds t }H(u,t)\;dt. \tag{17}\] Taking the origin of time so that \(t_{n}=0\) and \(t_{n+1}=h\), and introducing the new time variable \(\tau=t-t_{n}\) in the integration (so that \(H(u,t)\) becomes \(H(u(t_{n}+\tau),t_{n}+\tau)\)), we get \[e^{-\pounds t_{n+1}}u_{t_{n+1}}-e^{-\pounds 0}u_{t_{n}}=\int_{0}^{h}e^{-\pounds \tau}H(u(t_{n}+\tau),t_{n}+\tau)d\tau.
\tag{18}\] Considering \(e^{-\pounds 0}=1\) and rearranging the above equation, we get \[u_{t_{n+1}}=\frac{u_{t_{n}}+\int_{0}^{h}e^{-\pounds\tau}H(u(t_{n}+\tau),t_{n} +\tau)d\tau}{e^{-\pounds t_{n+1}}} \tag{19}\] \[u_{t_{n+1}}=u_{t_{n}}e^{\pounds h}+e^{\pounds h}\int_{0}^{h}e^{-\pounds\tau}H( u(t_{n}+\tau),t_{n}+\tau)\;d\tau \tag{20}\] Equation (20) is a recurrence relation between \(u_{t_{n+1}}\) and \(u_{t_{n}}\). The order of the ETD scheme depends on the approximation of the integrand \(H(u(t_{n}+\tau),t_{n}+\tau)\) used in the calculation. The first-order scheme (ETD1) is obtained by assuming that the value of \(H(u(t_{n}+\tau),t_{n}+\tau)\) in equation (20) is constant, denoted as \(H_{n}\). Thus, we obtain the ETD1 scheme \[u_{n+1}=e^{\pounds h}u_{n}+e^{\pounds h}\int_{0}^{h}e^{-\pounds\tau}H_{n}\;d\tau \tag{21}\] \[=e^{\pounds h}u_{n}+e^{\pounds h}\left[\frac{H_{n}e^{-\pounds h}}{-\pounds}- \left(\frac{H_{n}}{-\pounds}\right)e^{-\pounds 0}\right]=e^{\pounds h}u_{n}+ \left[\frac{H_{n}e^{\pounds h}}{\pounds}-\frac{H_{n}}{\pounds}\right] \tag{22}\] \[u_{n+1}=e^{\pounds h}u_{n}+\frac{H_{n}}{\pounds}\left(e^{\pounds h}-1\right) \tag{23}\] A higher-order ETD scheme can be obtained by using an approximation to the integrand that is not constant along the interval \(t_{n}\leq t\leq t_{n+1}\). The second-order exponential time differencing (ETD2) scheme is obtained by assuming that the value of \(H(u(t_{n}+\tau),t_{n}+\tau)\) is \[H=H_{n}+\frac{\tau}{h}\left(H_{n}-H_{n-1}\right). \tag{24}\] So we get the ETD2 scheme \[u_{n+1}=u_{n}e^{\pounds h}+\frac{H_{n}}{h\pounds^{2}}\left[(h\pounds+1)e^{ \pounds h}-2h\pounds-1\right]+\frac{H_{n-1}}{h\pounds^{2}}(-e^{\pounds h}+h \pounds+1). \tag{25}\] To get a smaller relative error, we combine the ETD approach with the fourth-order Runge-Kutta (RK4) method [8].
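Before moving to fourth order, the schemes (23) and (25) can be sanity-checked on a scalar stiff test problem (our own illustrative choice, \(\dot{u}=\pounds u+\cos t\) with \(\pounds=-50\), which has a simple exact solution):

```python
import numpy as np

# Compare ETD1 (23) and ETD2 (25) on u' = c*u + cos(t), u(0) = 1, with stiff c = -50
c, u0, T = -50.0, 1.0, 1.0

def exact(t):
    # particular solution a*cos(t) + b*sin(t) plus the decaying homogeneous part
    a, b = -c / (1 + c**2), 1 / (1 + c**2)
    return (u0 - a) * np.exp(c * t) + a * np.cos(t) + b * np.sin(t)

def solve(h, order):
    n = int(round(T / h))
    E = np.exp(c * h)
    u, H_old = u0, np.cos(0.0)
    u = E * u + H_old * (E - 1) / c              # first step with ETD1, Eq. (23)
    for j in range(1, n):
        H_new = np.cos(j * h)
        if order == 1:                            # ETD1, Eq. (23)
            u = E * u + H_new * (E - 1) / c
        else:                                     # ETD2, Eq. (25)
            u = (E * u
                 + H_new * ((1 + c * h) * E - 1 - 2 * c * h) / (c**2 * h)
                 + H_old * (-E + 1 + c * h) / (c**2 * h))
        H_old = H_new
    return u

h = 0.01
err1 = abs(solve(h, 1) - exact(T))
err2 = abs(solve(h, 2) - exact(T))
```

Both schemes treat the stiff linear part exactly; the second-order approximation (24) of the integrand makes the ETD2 error substantially smaller at the same step size.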
The ETDRK4 equations are as follows [15]: \[u_{n+1}=u_{n}e^{ch}+\{H(u_{n},t_{n})[-4-hc+e^{ch}(4-3hc+h^{2}c^{2})]\] \[+2(H(a_{n},t_{n}+h/2)+H(b_{n},t_{n}+h/2))[2+hc+e^{ch}(-2+hc)] \tag{26}\] \[+H(c_{n},t_{n}+h)[-4-3hc-h^{2}c^{2}+e^{ch}(4-hc)]\}/(h^{2}c^{3})\] \[a_{n}=u_{n}e^{ch/2}+(e^{ch/2}-1)H(u_{n},t_{n})/c \tag{27}\] \[b_{n}=u_{n}e^{ch/2}+(e^{ch/2}-1)H(a_{n},t_{n}+h/2)/c \tag{28}\] \[c_{n}=a_{n}e^{ch/2}+(e^{ch/2}-1)(2H(b_{n},t_{n}+h/2)-H(u_{n},t_{n}))/c \tag{29}\]

## 3. Solving the Swift-Hohenberg Equation Using the ETDRK4 Scheme and Spectral Method

Equation (1) is a periodic equation whose solution will be calculated by approximating the solution \(u\) using the Fourier series. The approximate solution of \(u\) using the Fourier series is \[u(x,t)=\sum_{m=1}^{N}\hat{u}_{m}e^{ik_{m}x},\quad k_{m}=\frac{m\pi}{L} \tag{30}\] \[\frac{\partial u(x,t)}{\partial t}=\sum_{m=1}^{N}\frac{\partial\hat{u}_{m}}{ \partial t}e^{ik_{m}x} \tag{31}\] \[\frac{\partial^{2}u(x,t)}{\partial x^{2}}=-\sum_{m=1}^{N}k_{m}^{2}\hat{u}_{m}e ^{ik_{m}x} \tag{32}\] \[\frac{\partial^{4}u(x,t)}{\partial x^{4}}=\sum_{m=1}^{N}k_{m}^{4}\hat{u}_{m}e ^{ik_{m}x} \tag{33}\] Substituting equations (30) to (33) into equation (1), we get \[\partial_{t}\hat{u}_{m}=\left[(\epsilon-1)+2k_{m}^{2}-k_{m}^{4}\right]\hat{u}_ {m}-\widehat{(u^{3})}_{m} \tag{34}\] \[\partial_{t}\hat{u}_{m}=\left[(\epsilon-1)+2k_{m}^{2}-k_{m}^{4}\right]\hat{u}_ {m}-\hat{\omega}_{m}, \tag{35}\] where \(\hat{\omega}_{m}=\widehat{(u^{3})}_{m}\) denotes the \(m\)th Fourier coefficient of \(u^{3}\). So we obtain ordinary differential equations in Fourier space. Equation (35) has the form of equation (2), so it can be solved using the ETDRK4 method with the value \(\pounds=(\epsilon-1+2k_{m}^{2}-k_{m}^{4})\) and \(H(u,t)=-\hat{\omega}_{m}=-\mathrm{FFT}(\mathrm{IFFT}(\hat{u})^{3})\), where FFT is the fast Fourier transform and IFFT is the inverse fast Fourier transform. Using ETDRK4, the nonlinear term in any functional form, including the cubic term, is evaluated pseudospectrally and advanced in the wavenumber space.
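A compact sketch of this combination (our own illustrative implementation, not the code of [16]; the resolution, \(\epsilon\), and run time are scaled down for a quick check, and coefficients with \(|\pounds h|\) near zero are replaced by their limiting values):

```python
import numpy as np

# ETDRK4 + Fourier spectral method for the Swift-Hohenberg equation
# u_t = eps*u - (1 + d_xx)^2 u - u^3 on a 20*pi periodic domain.
N, D, eps = 128, 20 * np.pi, 0.5
h, T = 0.05, 60.0
x = D * np.arange(N) / N
k = 2 * np.pi * np.fft.fftfreq(N, d=D / N)        # wavenumbers
c = eps - (1 - k**2) ** 2                         # linear coefficients, cf. Eq. (12)

ch = c * h
E, E2 = np.exp(ch), np.exp(ch / 2)
small = np.abs(ch) < 1e-4                         # replace near-singular formulae by limits
cs = np.where(small, 1.0, c)
Q  = np.where(small, h / 2, (E2 - 1) / cs)
f1 = np.where(small, h / 6, (-4 - ch + E * (4 - 3 * ch + ch**2)) / (cs**3 * h**2))
f2 = np.where(small, h / 3, 2 * (2 + ch + E * (-2 + ch)) / (cs**3 * h**2))
f3 = np.where(small, h / 6, (-4 - 3 * ch - ch**2 + E * (4 - ch)) / (cs**3 * h**2))

def H(v):
    # nonlinear term -u^3, evaluated pseudospectrally
    u = np.real(np.fft.ifft(v))
    return -np.fft.fft(u**3)

v = np.fft.fft(0.03 * np.cos(x))                  # small initial disturbance at q = 1
for _ in range(int(round(T / h))):
    Hv = H(v)
    a = E2 * v + Q * Hv
    b = E2 * v + Q * H(a)
    cn = E2 * a + Q * (2 * H(b) - Hv)
    v = E * v + f1 * Hv + f2 * (H(a) + H(b)) + f3 * H(cn)

u = np.real(np.fft.ifft(v))
```

Starting from a small disturbance at the critical wavenumber, the solution grows and saturates into rolls whose amplitude is close to the weakly nonlinear estimate \(\sqrt{4\epsilon/3}\).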
Essentially, we combine ETDRK4 and the spectral method to solve the Swift-Hohenberg equation by transforming the equation from physical space \(x\) to the wavenumber space \(k\) using the spectral method, and then solving the time \(t\) dependence of the resulting equation using ETDRK4. The details of the implementation of ETDRK4 combined with the spectral method on the Swift-Hohenberg equation have been previously described [16].

## 4. Results and discussions

### Real Swift-Hohenberg Equation

In this paper, the size of the system \(D\) and the number of truncations \(W\) in equation (1) remain constant across all simulations, with \((D,W)=(20\pi,512)\), ensuring that the dynamics of equation (1) are solely influenced by the control parameter \(\epsilon\). To observe the dynamic behavior of the Swift-Hohenberg equation, a maximum time of \(t_{\text{max}}=200000\) is utilized, with a time discretization \(h=0.05\). The control parameter values used are in the range \(0\leq\epsilon\leq 23.9\). The dynamics of the Swift-Hohenberg equation are illustrated by the spatiotemporal plot in Figure 2. It demonstrates increasingly irregular and complex behavior as the control parameter is increased. Figure 2 (A) shows regular and constant dynamics for \(\epsilon=1.0\). When the control parameter is increased to \(\epsilon=22.0\), the dynamics remain regular and exhibit a spatially periodic solution. This is predictable considering the contribution of finite wavenumbers, as shown by the range of \(q\) with positive growth rate in Figure 1. As the control parameter is increased further, the dynamics transition into a periodic state. The spatiotemporal plot in Figure 2 (C) for \(\epsilon=22.4\) shows periodic changes in solution values over time, forming a transverse strip pattern. The Swift-Hohenberg equation continues to exhibit periodic dynamics until the control parameter reaches \(\epsilon=22.6\). More complex dynamics emerge with further increases in the control parameter.
Figure 3 shows the spatiotemporal dynamics for the following control parameter values: (A) \(\epsilon=22.7\), (B) \(\epsilon=22.8\), (C) \(\epsilon=23.0\), and (D) \(\epsilon=23.6\). It can be observed that the dynamics become irregular for control parameter values \(\epsilon\geq 23.0\). The dynamics of the Swift-Hohenberg equation for a fixed control parameter value can also be described through a time series at one point in space. Figure 4 illustrates the dynamics produced by the Swift-Hohenberg equation. The time series shows that for a control parameter of \(\epsilon=22.0\), the dynamics are regular and constant. Figures 4 (B) and (C) show the time series for control parameters \(\epsilon=22.4\) and \(\epsilon=22.7\), respectively, exhibiting regular dynamics that appear periodic and quasiperiodic, consistent with the spatiotemporal plots. This is because the control parameter has reached a critical value at which a Hopf bifurcation produces periodic dynamics. As the control parameter increases further, the time series becomes irregular and displays chaotic dynamics, as shown in Figure 4 (D) for \(\epsilon=23.6\). The transition from regular to chaotic dynamics in the Swift-Hohenberg equation at high control parameters occurs through a scenario of intermittency or spatiotemporal intermittency, similar to that in other systems, as reported in previous research [5, 6, 7]. To further characterize the dynamics of the Swift-Hohenberg equation, a power spectrum analysis is performed. The temporal power spectrum of a discrete time series \(u(t_{q})\) is defined as [11]

\[S_{j}\equiv S_{\omega_{j}}=|\hat{u}_{x}(\omega_{j})|^{2} \tag{36}\]

where \(\hat{u}_{x}(\omega_{j})\) is the temporal discrete Fourier transform, which gives the amplitude of each harmonic that composes the time series. The temporal discrete Fourier transform is defined as

\[\hat{u}_{x}(\omega_{j})=\frac{1}{Q}\sum_{q=0}^{Q-1}u_{x}(t_{q})e^{-i\omega_{j}t_{q}} \tag{37}\]

where \(\omega_{j}=\frac{2\pi j}{T}\) with \(j=0,1,...,Q-1\), \(t_{q}=\frac{qT}{Q}\) with \(q=0,1,...,Q-1\), and \(T\) is the length of the record.

Figure 2. Spatiotemporal plot of the solution \(u\) of the Swift-Hohenberg equation with spatial discretization \(W=512\), \(\Delta x=20\pi/512\), \(t_{max}=200000\), for control parameter (A) \(\epsilon=1\), (B) \(\epsilon=22.0\), (C) \(\epsilon=22.4\), and (D) \(\epsilon=22.6\). The color legend to the right of each plot indicates the value of \(u\).

Figure 3. Spatiotemporal plot of the solution of the Swift-Hohenberg equation with spatial discretization \(W=512\), \(\Delta x=20\pi/512\), \(t_{max}=200000\), for control parameter (A) \(\epsilon=22.7\), (B) \(\epsilon=22.8\), (C) \(\epsilon=23.0\), and (D) \(\epsilon=23.6\).

Figure 4. Fluctuation of \(u\) obtained by solving the Swift-Hohenberg equation with spatial discretization \(W=512\), \(\Delta x=20\pi/512\), \(t_{max}=200000\), for control parameter (A) \(\epsilon=22.0\), (B) \(\epsilon=22.4\), (C) \(\epsilon=22.7\), and (D) \(\epsilon=23.6\).

Figure 5(A) illustrates the result of the power spectrum analysis of the Swift-Hohenberg equation with a control parameter of \(\epsilon=22.4\). The power spectrum contains several peaks, indicating that the dynamics generated by the Swift-Hohenberg equation contain several frequencies. The spatiotemporal diagrams for control parameter values below 22.4 also exhibit similar behavior, as shown in Figures 2(A) and (B). Similarly, the time series shows constant dynamics, as shown in Figure 4(A). Within a certain range of the control parameter, the Swift-Hohenberg equation generates periodic and quasiperiodic dynamics. Periodic dynamics are characterized by the presence of a fundamental frequency along with its harmonic frequencies in the power spectrum.
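The spectrum of equations (36) and (37) amounts to one FFT. A minimal sketch (the function name is our own): with the \(1/Q\) normalization, a pure sinusoid puts all its power into a single pair of peaks of height \(1/4\), whereas chaotic data spreads power into a broadband spectrum.

```python
import numpy as np

def power_spectrum(u):
    """Temporal power spectrum S_j = |u_hat(omega_j)|^2 of a discrete time
    series u, with the DFT normalized by 1/Q as in eq. (37)."""
    Q = len(u)
    u_hat = np.fft.fft(u) / Q
    return np.abs(u_hat) ** 2

# sanity check: a sine at the 5th harmonic concentrates the spectrum in
# one peak pair (at j = 5 and its mirror image j = Q - 5)
Q = 256
q = np.arange(Q)
S = power_spectrum(np.sin(2 * np.pi * 5 * q / Q))
```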
The existence of fundamental frequencies can be seen in the low-frequency peaks (\(q<0.05\)) of the power spectrum of the pre-chaotic state (see Figure 5(B)), which persist in the chaotic state, as suggested by the Ruelle-Takens-Newhouse route to chaos [17]. The transition to periodic dynamics occurs when the control parameter reaches a critical value, leading to a Hopf bifurcation and the emergence of periodic dynamics. Periodic dynamics occur within the control parameter range \(22.4\leq\epsilon<22.7\). When the control parameter is increased to \(\epsilon=22.4\), the power spectrum in Figure 5(A) displays one fundamental frequency and its harmonic frequencies. In Figure 5(B), with the control parameter set to \(\epsilon=22.7\), two fundamental frequencies appear, indicating quasiperiodic dynamics. This suggests that \(\epsilon=22.7\) is a critical point at which a second Hopf bifurcation occurs, resulting in quasiperiodic dynamics. Quasiperiodic dynamics persist in the Swift-Hohenberg equation for control parameters in the range 22.8 to 23.0, with several frequencies visible in the power spectrum.

Figure 5. Power spectrum graphs of Swift-Hohenberg equation solutions for (A) \(\epsilon=22.4\), (B) \(\epsilon=22.7\), (C) \(\epsilon=22.8\), and (D) \(\epsilon=23.6\).

Quasiperiodic dynamics eventually evolve into chaotic dynamics, characterized by a broadband power spectrum. By varying the control parameter from \(\epsilon=0\) up to \(\epsilon=25\), we observed that chaotic dynamics manifest within the control parameter range \(23.1\leq\epsilon\leq 23.9\). The power spectrum of the chaotic dynamics exhibits a broadband spectrum, as illustrated in Figure 5 (D) for \(\epsilon=23.6\).
To verify the dynamics at \(\epsilon=23.6\), we calculated the Lyapunov exponent [18] (the Python program is available in [19]). The chaotic dynamics at \(\epsilon=23.6\) are confirmed by a positive Lyapunov exponent (\(\lambda>0\)). In the chaotic state, the power spectrum shows a power-law behavior \(S(\omega)\propto\omega^{-0.81}\), as shown in Figure 6. Thus, we demonstrate that the power spectrum behavior of chaotic dynamics in the Swift-Hohenberg equation is similar to that observed in turbulent flow.

### The Complex Swift-Hohenberg Equation

As mentioned earlier, the Swift-Hohenberg equation with complex terms is expressed as follows [3]:

\[\partial_{t}u(x,t)=\epsilon u(x,t)-(1+\partial_{x}^{2})^{2}u(x,t)-(1+ib)u(x,t)^{3}. \tag{38}\]

In this study, the parameter \(b\) is varied while \(\epsilon\) is kept constant at \(\epsilon=1\). The system size for the simulation is \(D=300\) with a spatial discretization of \(W=1024\). The integration is carried out using the ETDRK4 method over a period of \(t_{max}=200000\) with a time discretization of \(\delta t=0.05\). The resulting dynamics are observed as the control parameter \(b\) is varied in the range \(-8\leq b\leq 0\). The most straightforward case occurs when \(b=0\), because the Swift-Hohenberg equation then reverts to its real form, as depicted in Figure 7 (A). For certain values of the parameter \(b\), the complex Swift-Hohenberg equation exhibits regular dynamics. When \(b=-1\), the spatiotemporal plot remains regular. However, as \(b\) is changed to \(-4\), a small chaotic region emerges alongside predominantly ordered dynamics. Further increasing the magnitude of \(b\) to \(b=-5\) results in a larger chaotic region with diminishing regular dynamics. Intermittent behavior becomes evident along the spatial coordinate, particularly at \(b=-5.5\), where chaotic areas dominate and regular areas dwindle.
Ultimately, for \(b\leq-7\), the complex Swift-Hohenberg system transitions to spatiotemporal chaos, as depicted in Figure 7, where regular dynamics become difficult to discern and chaos prevails. The temporal dynamics at a single point in space, \(x=0\), obtained from the complex Swift-Hohenberg equation for parameter values around \(b\approx-5.5\), are illustrated in Figure 8. At a control parameter value of \(b=-5\), the time series at \(x=0\) exhibits regular dynamics. As \(|b|\) increases, an irregular region of the time series emerges, as seen in Figures 8 (B), (C), and (D). Within this range of \(b\), regular dynamics coexist with irregular or chaotic dynamics. With a further increase in \(|b|\), the time series becomes fully irregular, i.e., chaotic, as evidenced at \(b=-8\).

Figure 6. Power spectrum \(S_{j}\) of the solution of the Swift-Hohenberg equation for \(\epsilon=23.6\) in a log-log plot. The red line follows \(S(\omega)\propto\omega^{-0.81}\).

The spatiotemporal plot in Figure 7 demonstrates spatial intermittency when \(b=-4\), and as \(b\) decreases, the intermittency becomes more pronounced. Power spectrum analysis within the \(-8\leq b\leq-6\) parameter range confirms the presence of chaotic spatiotemporal dynamics, evident from the broadband noise in the power spectrum, a hallmark of chaotic behavior. The Lyapunov exponent of the solution of the complex Swift-Hohenberg equation for \(-8\leq b\leq-6\) is positive.

## 5. Conclusions

Numerical solutions of the original Swift-Hohenberg equation have been obtained at high control parameter values. Chaotic dynamics are observed when the control parameter is sufficiently high, specifically at \(\epsilon=23.6\), as evidenced by a power spectrum with significant broadband noise.
Additionally, a quantitative analysis of the dynamics using Lyapunov exponents consistently shows positive values, indicating the chaotic behavior of the field \(u\). Furthermore, we have investigated the Swift-Hohenberg equation with complex terms, primarily at a relatively low control parameter value of \(\epsilon=1.0\), allowing us to focus on variations in the complex constant. The results indicate that for specific ranges of the complex constant, chaotic dynamics also emerge. Consequently, our study elucidates the transition to chaos in both the real and complex variants of the Swift-Hohenberg equation.

Figure 7. Spatiotemporal plot of the solution of equation (38) with spatial discretization \(W=512\), \(\Delta x=20\pi/512\), \(t_{max}=200000\), control parameter \(\epsilon=1\), and for \(b=0,-1,-4,-5,-5.5,-8\) from (A) to (F), respectively.

Figure 8. Fluctuation of \(u\) obtained by solving equation (38) with \(b=-5.0,-5.3,-5.4,-5.5,-8\) in (A)-(E), respectively.

Figure 9. Power spectrum of the solution of equation (38) with \(b=-6\) (A), \(b=-7\) (B), and \(b=-8\) (C).

### Acknowledgments

This work was partially supported by The Directorate General of Higher Education of Indonesia, Contract Number PUPT 119/LPPM/2015 and Number PDUPT 74/LPPM/2018.

## References

* [1] Swift, J., Hohenberg, P.C., (1977), Hydrodynamic fluctuations at the convective instability, Phys. Rev. A, 15 (1), pp. 319-328.
* [2] Cross, M.C. and Greenside, H., (2009), Pattern Formation and Dynamics in Nonequilibrium Systems, Cambridge University Press, New York.
* [3] Gelens, L., Knobloch, E., (2011), Travelling Waves and Defects in the Swift-Hohenberg Equation, Physical Review E, 84.
* [4] Cross, M.C. and Hohenberg, P.C., (1993), Pattern Formation Outside Equilibrium, Rev. Mod. Phys., 65: 851-1112.
* [8] Burden, R.L. and Faires, J.D., (2011), Numerical Analysis, Ninth Edition, Brooks/Cole Cengage Learning, Boston.
* [9] Elphick, C., Tirapegui, E., Brachet, M. E., Coullet, P., and Iooss, G.
(1987). A simple global characterization for normal forms of singular vector fields. Physica D: Nonlinear Phenomena, 29(1-2), 95-127.
* Statistical, Nonlinear, and Soft Matter Physics, 87(4).
* [11] Argyris, J., Faust, G., Haase, M., Friedrich, R., (2015), An Exploration of Dynamical Systems and Chaos, Springer-Verlag, Berlin.
* [12] Kuramoto, Y., (1978), Progress of Theoretical Physics Supplement, 64, 346-367.
* [13] Sivashinsky, G.I., (1980), SIAM Journal on Applied Mathematics, 39 (1), 67-82.
* [14] Sivashinsky, G.I., (1977), Acta Astronautica, 4(11), 1177-1206.
* [15] Cox, S.M., Matthews, P.C., (2002), Exponential Time Differencing for Stiff Systems, Journal of Computational Physics, 176, pp. 430-455.
* [16] Wijaya, H.N., (2018), Numerical Simulation of the Swift-Hohenberg Equation Using Exponential Time Differencing Runge-Kutta 4 and Pseudospectral Methods, Bachelor Thesis (unpublished).
* [17] Schuster, H. G. and Just, W., (2005), Deterministic Chaos: An Introduction, WILEY-VCH Verlag GmbH & Co. KGaA, 129-130.
* [18] Rosenstein, M. T., Collins, J. J., and De Luca, C. J. (1993). A practical method for calculating largest Lyapunov exponents from small data sets. Physica D, 65(1-2), 117-134.
* [19] https://cshoel.github.io/nolds/nolds.html

\begin{tabular}{c|c} & Hanif Nata Wijaya graduated from Gadjah Mada University, Yogyakarta, Indonesia, in 2018. His research topic was the dynamics of nonlinear equations. Additionally, he pursued professional teacher education organized by the Ministry of Education, Culture, Research, and Technology and graduated in 2023. He is currently a teacher at a public high school in Temanggung, Central Java, Indonesia. \\ \end{tabular}

\begin{tabular}{c c} & Fahrudin Nugroho received B.Sc. and M.Sc. degrees from Gadjah Mada University, Yogyakarta, Indonesia, in 2004 and 2007, respectively. He received his Ph.D. from Kyushu University, Japan, in 2012. He is a lecturer at Gadjah Mada University.
His main research interests are chaos and nonlinear dynamics. \\ \end{tabular}

# Lyapunov Exponents for Continuous-Time Dynamical Systems

T. M. Janaki and Govindan Rangarajan Department of Mathematics Indian Institute of Science Bangalore 560 012, India Also at Centre for Theoretical Studies; e-mail address: rangaraj@math.iisc.ernet.in December 9, 2003

###### Abstract

In this article, different methods of computing Lyapunov exponents for continuous-time dynamical systems are briefly reviewed. The relative merits and demerits of these methods are pointed out.

**1. Preliminaries**

The problem of detecting and quantifying chaos in a wide variety of systems is an ongoing and important activity. In this context, computing the spectrum of Lyapunov exponents has proven to be the most useful dynamical diagnostic for chaotic systems. The Lyapunov exponents give the average exponential rates of divergence or convergence of nearby orbits in phase space. In systems exhibiting exponential orbital divergence, small initial differences that we may not be able to resolve get magnified rapidly, leading to loss of predictability. Any system containing at least one positive Lyapunov exponent is defined to be chaotic, with the magnitude of the exponent reflecting the time scale on which the system dynamics become unpredictable. For systems whose equations of motion are explicitly known, there exist several methods for computing Lyapunov exponents. In this paper, we briefly describe these various methods, along with their advantages and disadvantages. Let us consider an \(n\)-dimensional continuous-time dynamical system

\[{dz\over dt}=F(z,t), \tag{1}\]

where \(z=(z_{1},z_{2},...,z_{n})\) and \(F\) is an \(n\)-dimensional vector field. Let \(Z(t)~{}=~{}z(t)-z_{0}(t)\) denote deviations from the fiducial trajectory \(z_{0}(t)\). Linearizing eq(1) around \(z_{0}(t)\), we have

\[{dZ\over dt}~{}=~{}DF(z_{0}(t),t)~{}Z, \tag{2}\]

where \(DF\) denotes the \(n\times n\) Jacobian matrix.
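Since every method below consumes the Jacobian \(DF\), a hand-derived Jacobian is worth verifying against central finite differences before integrating anything. A minimal sketch, using the Rössler system purely as an assumed example vector field (it is not one of the systems discussed in this review):

```python
import numpy as np

def F(z, a=0.2, b=0.2, c=5.7):
    """Rossler vector field dz/dt = F(z)."""
    x, y, w = z
    return np.array([-y - w, x + a * y, b + w * (x - c)])

def DF(z, a=0.2, b=0.2, c=5.7):
    """Jacobian matrix entering the linearized equation dZ/dt = DF Z."""
    x, y, w = z
    return np.array([[0.0, -1.0, -1.0],
                     [1.0,    a,  0.0],
                     [  w,  0.0, x - c]])

# central-difference check of DF at an arbitrary point
z0 = np.array([1.0, 2.0, 3.0])
eps = 1e-6
J_fd = np.column_stack([(F(z0 + eps * e) - F(z0 - eps * e)) / (2 * eps)
                        for e in np.eye(3)])
```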
The linearized equations are integrated along the fiducial trajectory to yield the tangent map \(M(z_{0}(t),t)\), which takes the set of initial deviations \(Z^{in}\) into the time-evolved deviations \(Z(t)\):

\[Z(t)~{}=~{}M(z_{0}(t),t)~{}Z^{in}. \tag{3}\]

The evolution equation of \(M\) is given by

\[{dM\over dt}~{}=~{}DF~{}M. \tag{4}\]

Let \(\Lambda\) be the \(n\times n\) matrix given by

\[\Lambda~{}=~{}\lim_{t\rightarrow\infty}(~{}M~{}M^{t}~{})^{1/2t}, \tag{5}\]

where \(M^{t}\) denotes the transpose of \(M\). The Lyapunov exponents are the logarithms of the eigenvalues of \(\Lambda\) [1]. All the methods of computing Lyapunov exponents are based on either the QR or the singular value decomposition. In the following sections, we will describe some of these methods.

**2. Singular Value Decomposition method**

Let

\[M\ =\ U\ F\ V^{t} \tag{6}\]

be the singular value decomposition (SVD) of \(M\) into the product of the orthogonal matrices \(U\), \(V\) and the diagonal matrix \(F\ =\ {\rm diag}(\ \sigma_{1}(t),\ \sigma_{2}(t),...,\ \sigma_{n}(t)\ )\). The diagonal elements of \(F\) are called the singular values of \(M\). The SVD is unique up to permutations of the corresponding columns, rows and diagonal elements of the matrices \(U\), \(V\) and \(F\). A unique decomposition can be achieved by requiring the singular value spectrum to be strictly monotonically decreasing, i.e., \(\sigma_{1}(t)>\sigma_{2}(t)>...>\sigma_{n}(t)\). Postmultiplying eq(6) with \(M^{t}\ =\ V\ F\ U^{t}\) shows that the squares of the singular values \(\sigma_{i}(t)\) of \(M\) are the eigenvalues of the matrix \(M\ M^{t}\) [2].
Therefore, from eq(5), we have the following relation between the Lyapunov exponents \(\lambda_{i}\), the eigenvalues \(\mu_{i}\) of \(\Lambda\) and the singular values \(\sigma_{i}(t)\), \(i=1,2,...,n\):

\[\lambda_{i}\ =\ \log\mu_{i}\ =\ \lim_{t\rightarrow\infty}\log\ (\ \sigma_{i}^{2}(t)\ )^{1/2t}=\lim_{t\rightarrow\infty}\frac{1}{t}\log\sigma_{i}. \tag{7}\]

The geometric interpretation of this method is explained in reference [3]. Following Ref. [3], we will now formulate the differential equations for the quantities that are needed to compute the Lyapunov spectrum in terms of the singular value decomposition. Let us introduce a matrix \(E\), where

\[E\ =\ \log F\ =\ {\rm diag}(\ \epsilon_{1},\ \epsilon_{2},...,\epsilon_{n}\ ), \tag{8}\]

with elements \(\epsilon_{i}\ =\ \log\sigma_{i}\ \ (i=1,2,..,n)\). Differentiating \(E\) with respect to time yields

\[E^{\prime}\ =\ F^{-1}\ F^{\prime}, \tag{9}\]

where

\[F^{\prime}\ =\ U^{t}\ DF\ U\ F-U^{t}\ U^{\prime}\ F-F\ (V^{\prime})^{t}\ V. \tag{10}\]

This is obtained by substituting eq(6) into eq(4) and differentiating with respect to time. Due to the orthogonality of \(U\) and \(V\), we have

\[V^{t}\ V^{\prime}\ +\ (V^{\prime})^{t}\ V = \ 0, \tag{11}\]
\[U^{t}\ U^{\prime}\ +\ (U^{\prime})^{t}\ U = \ 0. \tag{12}\]

Let us denote

\[A = \ U^{t}\ U^{\prime}, \tag{13}\]
\[B = -F^{-1}\ A\ F, \tag{14}\]
\[C = \ U^{t}\ DF\ U, \tag{15}\]
\[D = \ F^{-1}\ C\ F. \tag{16}\]

Also, \(E^{\prime}+(E^{\prime})^{t}=2E^{\prime}\) yields

\[2\ E^{\prime}\ =\ B\ +\ B^{t}\ +\ D\ +\ D^{t}. \tag{17}\]

To compute the Lyapunov exponents, the diagonal elements of \(E^{\prime}\) need to be calculated. For this, we see from the above equation that the elements of the matrices \(B\) and \(D\) are required. They are given by

\[B_{ij} = -A_{ij}\ \frac{\sigma_{j}}{\sigma_{i}}, \tag{18}\]
\[D_{ij} = \ C_{ij}\ \frac{\sigma_{j}}{\sigma_{i}}. \tag{19}\]

Since \(U\) is orthogonal, \(A\) is skew-symmetric and \(B_{ii}=0,\ i=1,2,..,n\).
The diagonal elements \(\epsilon^{\prime}_{i}\) of \(E^{\prime}\) therefore satisfy the equation:

\[\epsilon^{\prime}_{i}\ =\ C_{ii}. \tag{20}\]

The above equation can be used to compute the Lyapunov exponents \(\lim_{t\rightarrow\infty}\epsilon_{i}(t)/t,\ i=1,2,..,n\), provided \(U\) is known as a function of time. To determine \(U(t)\), consider the off-diagonal elements in eq(17), i.e., the \(n(n-1)/2\) equations

\[-A_{ij}\ \frac{\sigma_{j}}{\sigma_{i}}-A_{ji}\ \frac{\sigma_{i}}{\sigma_{j}}\ +\ C_{ij}\ \frac{\sigma_{j}}{\sigma_{i}}\ +\ C_{ji}\ \frac{\sigma_{i}}{\sigma_{j}}\ =\ 0,\ \ i>j. \tag{21}\]

To get rid of the exponentially growing quantities, eq(21) is multiplied by \(\sigma_{i}/\sigma_{j}\). Let

\[h_{ij}\ =\ \sigma_{i}^{2}/\sigma_{j}^{2}\ =\ \exp{(2(\epsilon_{i}-\epsilon_{j}))},\ i\neq j. \tag{22}\]

Therefore, we have

\[A_{ij}\ =\ \left\{\begin{array}{l}\frac{C_{ji}\ +\ C_{ij}\ h_{ji}}{h_{ji}-1},\ i<j\\ 0,\ \ \ \ \ \ \ \ \ \ \ \ \ i=j\\ \frac{C_{ij}\ +\ C_{ji}\ h_{ij}}{1-h_{ij}},\ i>j\end{array}\right. \tag{23}\]

The time evolution of \(U\) can now be determined by integrating the differential equation

\[U^{\prime}\ =\ U\ A. \tag{24}\]

In the case of a non-degenerate spectrum, the singular values constitute a strictly monotonically decreasing sequence for large time. When the above differential equation for \(U\) is solved numerically, the orthogonality of \(U\) is quickly lost, and one has to perform reorthogonalization periodically. In the case of a degenerate Lyapunov spectrum, the matrix \(A\) becomes singular. This is another disadvantage of this method. Also, it requires more operations than the QR method, which will be described in the following section. Further, evaluation of a partial Lyapunov spectrum can be computationally costly beyond a certain threshold [3].

**3. 
QR Decomposition method**

We know that any non-singular matrix can be uniquely decomposed into a product of an orthogonal matrix and an upper-triangular matrix with positive diagonal elements. Using this fact, we decompose the tangent map \(M\) as

\[M\ =\ Q\ R, \tag{25}\]

where \(Q\) is an \(n\times n\) orthogonal matrix and \(R\) is an \(n\times n\) upper-triangular matrix with positive diagonal elements \(R_{ii}\). The Lyapunov exponents are given by

\[\lambda_{i}\ =\ \lim_{t\rightarrow\infty}\frac{1}{t}\log{(R_{ii})}. \tag{26}\]

In general, in the limit \(t\rightarrow\infty\), the Lyapunov exponents constitute a monotonically decreasing sequence [4]. Substituting eq(25) into eq(4), we have

\[Q^{\prime}\ R\ +\ Q\ R^{\prime}\ =\ DF\ Q\ R. \tag{27}\]

Premultiplying and postmultiplying the above equation by \(Q^{-1}=Q^{t}\) and \(R^{-1}\), respectively, we have

\[Q^{t}\ Q^{\prime}-Q^{t}\ DF\ Q\ =\ -R^{\prime}\ R^{-1}. \tag{28}\]

The right hand side is an upper-triangular matrix with diagonal elements \(-R^{\prime}_{ii}/R_{ii}\), while \(Q^{t}\ Q^{\prime}\) is a skew-symmetric matrix. Let

\[S\ =\ Q^{t}\ Q^{\prime}. \tag{29}\]

Therefore, the differential equation for \(Q\) is given by

\[Q^{\prime}\ =\ Q\ S. \tag{30}\]

The equations for the diagonal elements of \(R\) are given by

\[\frac{R^{\prime}_{ii}}{R_{ii}}\ =\ (\ Q^{t}\ DF\ Q\ )_{ii},\ (1\leq i\leq n). \tag{31}\]

Using the above equations, the Lyapunov exponents can be computed. This method is discussed in detail in reference [3]. It also suffers from most of the disadvantages of the previous method. In the following section, we shall see how things get simplified by using group-theoretical representations of the orthogonal matrix.

**4. \(MM^{t}\) method**

In this section, we describe a method utilizing representations of orthogonal matrices applied to decompositions of the tangent map product \(MM^{t}\). In this method [4], a matrix \(A\) is introduced [5], where

\[A\ =\ MM^{t}. 
\tag{32}\]

The time evolution of \(A\) is given by the following equation:

\[\frac{dA}{dt}\ =\ DF\ A\ +\ A\ DF^{t}. \tag{33}\]

Since this matrix is symmetric and positive definite, it can be written as an exponential of a symmetric matrix \(B\). Moreover, any symmetric matrix can be diagonalised by an orthogonal matrix. Therefore, we have

\[A = \ \exp(B) \tag{34}\]
\[= \exp(\ O\ D\ O^{t}) \tag{35}\]
\[= O\ \exp(D)\ O^{t}, \tag{36}\]

where \(O\) is an \(n\times n\) orthogonal matrix, and \(D\) is an \(n\times n\) diagonal matrix whose diagonal elements are the Lyapunov exponents multiplied by time. Since \(D\) is already in the exponent, there is no need for rescaling. An easily obtained group-theoretical representation of the orthogonal matrix is used for \(O\) [6]. This ensures that the number of variables used to characterize the system is minimal. The numbers of parameters needed to characterize \(O\) and \(D\) are \(n(n-1)/2\) and \(n\), respectively, giving a total of \(n(n+1)/2\). This method also maintains orthogonality without any need for rescaling. Hence, numerical errors can never lead to loss of orthogonality. The working of this method can be explained by taking the example of the \(n=2\) case. \(O\) is represented by the following matrix:

\[\left(\begin{array}{cc}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{array}\right). \tag{37}\]

\(D\) is given by

\[\left(\begin{array}{cc}\lambda_{1}&0\\ 0&\lambda_{2}\end{array}\right). \tag{38}\]

The Jacobian matrix \(DF\) is given by

\[\left(\begin{array}{cc}df_{11}&df_{12}\\ df_{21}&df_{22}\end{array}\right). \tag{39}\]

Substituting these expressions for \(A\) in eq(33), we have

\[\frac{d\lambda_{1}}{dt} = df_{11}+df_{22}+(df_{11}-df_{22})\ \cos\ 2\theta-(df_{12}+df_{21})\ \sin\ 2\theta, \tag{40}\]
\[\frac{d\lambda_{2}}{dt} = df_{11}+df_{22}-(df_{11}-df_{22})\ \cos\ 2\theta+(df_{12}+df_{21})\ \sin\ 2\theta. \tag{41}\]

Similarly, the differential equation for \(\theta\) can also be obtained.
The next method to be discussed is a variant of the above method with further advantages.

**5. Continuous QR method using representations of orthogonal matrices**

In this method [4], the orthogonal matrix \(Q\) is represented as a product of \(n(n-1)/2\) orthogonal matrices, each of which corresponds to a simple rotation in the \(i\)-\(j\)th plane \((i<j)\). Denoting the matrix corresponding to this rotation by \(Q^{(ij)}\), its matrix elements are given by:

\[Q^{(ij)}_{kl}\ =\ \left\{\begin{array}{l}1\ \ \ \mbox{if}\ k=l\neq i,j;\\ \cos\theta\ \mbox{if}\ k=l=i\ \mbox{or}\ k=l=j;\\ \sin\theta\ \mbox{if}\ k=i,\ l=j;\\ -\sin\theta\ \mbox{if}\ k=j,\ l=i;\\ 0\ \ \ \ \mbox{otherwise,}\end{array}\right. \tag{42}\]

where \(\theta\) is an angle variable. The matrix \(Q\) is then represented by:

\[Q\ =\ Q^{(12)}\ Q^{(13)}...Q^{(1n)}\ Q^{(23)}...Q^{(n-1,n)}. \tag{43}\]

So, we have \(n(n-1)/2\) angle variables denoted by \(\theta_{i},\ i=1,..,n(n-1)/2\). Here, \(Q\) is represented by a special orthogonal matrix because of the choice of initial conditions. We choose the identity matrix as the initial orthogonal matrix. Since we start with a matrix from the \(SO(n)\) component of the group of orthogonal matrices, by continuity we remain in the same component for all time. Hence, we are justified in choosing \(Q\) to be an \(SO(n)\) matrix. Since the upper-triangular matrix has positive diagonal elements, it can be represented as follows:

\[\left(\begin{array}{ccccc}\exp\lambda_{1}&r_{12}&...&...&r_{1n}\\ 0&\exp\lambda_{2}&r_{23}&...&r_{2n}\\ \vdots&\vdots&\vdots&\vdots&\vdots\\ 0&0&...&0&\exp\lambda_{n}\end{array}\right). 
\tag{44}\]

Using this representation of \(Q\), the matrix \(Q^{t}\ Q^{\prime}\) is given by

\[\left(\begin{array}{cccc}0&-f_{1}(\theta^{\prime})&...&-f_{n-1}(\theta^{\prime})\\ f_{1}(\theta^{\prime})&0&...&-f_{2n-3}(\theta^{\prime})\\ \vdots&\vdots&\vdots&\vdots\\ f_{n-1}(\theta^{\prime})&...&f_{n(n-1)/2}(\theta^{\prime})&0\end{array}\right), \tag{45}\]

where \(\theta^{\prime}\ =\ (\theta^{\prime}_{1},\theta^{\prime}_{2},..,\theta^{\prime}_{n(n-1)/2})\). Substituting the above matrices in eq(27), we have

\[\lambda^{\prime}_{i}\ =\ (\ Q^{t}\ DF\ Q\ )_{ii}. \tag{46}\]

The equations for the angles are given by

\[f_{1}(\theta^{\prime})=(Q^{t}\ DF\ Q)_{21}\ ;\ f_{2}(\theta^{\prime})=(Q^{t}\ DF\ Q)_{31}\ ;\ ...;\ f_{n(n-1)/2}(\theta^{\prime})=(Q^{t}\ DF\ Q)_{n,n-1}.\]

The Lyapunov exponents are given by

\[\lim_{t\rightarrow\infty}\frac{\lambda_{i}}{t},\ \ \ i\ =\ 1,2,...,n.\]

Here again, we need a minimal number of parameters to characterize the system, and there is no need for rescaling. Furthermore, numerical errors can never lead to loss of orthogonality. This method has other advantages over the previous ones. The equations for \(\theta_{i}\) are decoupled from the equations for \(\lambda_{i}\). Hence, we need not worry about degenerate spectra. Another very interesting feature of this method is the dependence of \(\lambda_{1}{}^{\prime}\) on only the first \((n-1)\)\(\theta_{i}\)'s, \(\lambda_{2}{}^{\prime}\) on the first \((2n-3)\)\(\theta_{i}\)'s, and so on. Therefore, to obtain the first two \(\lambda_{i}\)'s, one needs to solve only \((2n-1)\) equations. In general, to solve for the first \(m\) Lyapunov exponents, one has to solve \(m(2n-m+1)/2\) equations, which is always less than \(n(n+1)/2\) for \(m<n\). Therefore, a partial spectrum can be easily calculated, unlike in the methods listed above. This is a major advantage of this method.
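A widely used discrete-time relative of the QR-based methods above is the Benettin-type procedure: evolve an orthonormal set of tangent vectors with the linearized flow, re-extract \(Q\) and \(R\) by a QR factorization at every step, and accumulate \(\log|R_{ii}|\). The sketch below is our own illustration (not the representation-based method of this section), using the Lorenz system as an assumed example; for the standard parameters the exponent sum must equal the constant trace \(-(\sigma+1+\beta)\approx-13.67\), and the leading exponent is commonly quoted as \(\approx 0.906\).

```python
import numpy as np

def lorenz_rhs(z, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, w = z
    return np.array([sigma * (y - x), x * (rho - w) - y, x * y - beta * w])

def lorenz_jac(z, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, w = z
    return np.array([[-sigma, sigma, 0.0],
                     [rho - w, -1.0, -x],
                     [y, x, -beta]])

def lyapunov_spectrum(z, dt=0.01, n_steps=20000, n_burn=2000):
    """Benettin-style Lyapunov spectrum: RK4 for trajectory and tangent
    matrix, with QR reorthonormalization at every step."""
    z = np.asarray(z, dtype=float)
    for _ in range(n_burn):                      # relax onto the attractor
        k1 = lorenz_rhs(z); k2 = lorenz_rhs(z + dt / 2 * k1)
        k3 = lorenz_rhs(z + dt / 2 * k2); k4 = lorenz_rhs(z + dt * k3)
        z = z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    Q = np.eye(3)
    sums = np.zeros(3)
    for _ in range(n_steps):
        k1 = lorenz_rhs(z); k2 = lorenz_rhs(z + dt / 2 * k1)
        k3 = lorenz_rhs(z + dt / 2 * k2); k4 = lorenz_rhs(z + dt * k3)
        # RK4 for M' = DF(z(t)) M, using the same trajectory substages
        m1 = lorenz_jac(z) @ Q
        m2 = lorenz_jac(z + dt / 2 * k1) @ (Q + dt / 2 * m1)
        m3 = lorenz_jac(z + dt / 2 * k2) @ (Q + dt / 2 * m2)
        m4 = lorenz_jac(z + dt * k3) @ (Q + dt * m3)
        M = Q + dt / 6 * (m1 + 2 * m2 + 2 * m3 + m4)
        z = z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        Q, R = np.linalg.qr(M)                   # reorthonormalize
        sums += np.log(np.abs(np.diag(R)))       # accumulate log stretching
    return sums / (n_steps * dt)

lam = lyapunov_spectrum([1.0, 1.0, 1.0])
```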
In the \(n=2\) case, \(Q\) is parametrized as

\[\left(\begin{array}{cc}\cos\theta_{1}&\sin\theta_{1}\\ -\sin\theta_{1}&\cos\theta_{1}\end{array}\right). \tag{47}\]

\(R\) is written as

\[\left(\begin{array}{cc}\exp\lambda_{1}&r_{12}\\ 0&\exp\lambda_{2}\end{array}\right). \tag{48}\]

The Jacobian matrix \(DF\) may be written as:

\[\left(\begin{array}{cc}df_{11}&df_{12}\\ df_{21}&df_{22}\end{array}\right). \tag{49}\]

Substituting the above into eq(27), we have

\[\frac{d\lambda_{1}}{dt}\ =\ df_{11}\cos^{2}\theta_{1}\ +\ df_{22}\sin^{2}\theta_{1}-\frac{1}{2}\ (df_{12}+df_{21})\ \sin\ 2\theta_{1}, \tag{50}\]
\[\frac{d\lambda_{2}}{dt}\ =\ df_{11}\sin^{2}\theta_{1}\ +\ df_{22}\cos^{2}\theta_{1}+\frac{1}{2}\ (df_{12}+df_{21})\ \sin\ 2\theta_{1}. \tag{51}\]

The equation for \(\theta_{1}\) is given by

\[\frac{d\theta_{1}}{dt}\ =\ -\ \frac{1}{2}\ (df_{11}-df_{22})\ \sin\ 2\theta_{1}\ +\ df_{12}\ \sin^{2}\theta_{1}-df_{21}\ \cos^{2}\theta_{1}. \tag{52}\]

The above equations are numerically integrated until the desired convergence of the Lyapunov exponents \(\lambda_{1}/t\) and \(\lambda_{2}/t\) is achieved. This method also preserves the global invariances of the Lyapunov spectrum. It is discussed in detail in reference [4].

**6. Conclusion**

In this paper, we have briefly reviewed some of the methods for computing the Lyapunov exponents of continuous-time dynamical systems. The advantages accrued by using a group-theoretical representation of orthogonal matrices were brought out. It should also be noted that the methods reviewed can be applied to discrete maps with appropriate modifications [3,7].

**Acknowledgement**

GR's work was supported in part by a research grant from the Department of Science and Technology.

## References:

1. J. P. Eckmann and D. Ruelle, _Rev. Mod. Phys._ **57**, 617 (1985) and references therein.
2. K. Geist, U. Parlitz and W. Lauterborn, _Prog. Theor. Phys._ **83**, 875 (1990).
3. G. Benettin, L. Galgani, A. Giorgilli and J. A. 
Vastano, _Physica_ **D16**, 285 (1985).
4. G. Rangarajan, S. Habib and R. D. Ryne, _Phys. Rev. Lett._ **80**, 3747 (1998).
5. S. Habib and R. D. Ryne, _Phys. Rev. Lett._ **74**, 70 (1995).
6. I. M. Gelfand, R. A. Minlos and Z. Ya. Shapiro, _Representations of the Rotation and Lorentz Groups and their Applications_ (Pergamon, NY, 1963) pp. 353-354.
7. T. M. Janaki, G. Rangarajan, S. Habib and R. D. Ryne (in preparation) (1998).

# Fourth-order time-stepping for stiff PDEs+

Footnote †: Received by the editors July 8, 2002; accepted for publication (in revised form) December 8, 2003; published electronically March 11, 2005. This work was supported by the Engineering and Physical Sciences Research Council (UK) and by MathWorks, Inc. [http://www.siam.org/journals/sisc/26-4/41063.html](http://www.siam.org/journals/sisc/26-4/41063.html)

Aly-Khan Kassam Oxford University Computing Laboratory, Wolfson Bldg., Parks Road, Oxford OX1 3QD, UK. Lloyd N. Trefethen Oxford University Computing Laboratory, Wolfson Bldg., Parks Road, Oxford OX1 3QD, UK (LNT@comlab.ox.ac.uk).

###### Abstract

A modification of the exponential time-differencing fourth-order Runge-Kutta method for solving stiff nonlinear PDEs is presented that solves the problem of numerical instability in the scheme as proposed by Cox and Matthews and generalizes the method to nondiagonal operators. A comparison is made of the performance of this modified exponential time-differencing (ETD) scheme against the competing methods of implicit-explicit differencing, integrating factors, time-splitting, and Fornberg and Driscoll's "sliders" for the KdV, Kuramoto-Sivashinsky, Burgers, and Allen-Cahn equations in one space dimension. Implementation of the method is illustrated by short Matlab programs for two of the equations. It is found that for these applications with fixed time steps, the modified ETD scheme is the best.
Key words. ETD, exponential time-differencing, KdV, Kuramoto-Sivashinsky, Burgers, Allen-Cahn, implicit-explicit, split step, integrating factor

AMS subject classifications. Primary, 65M70; Secondary, 65L05, 65M20

DOI. 10.1137/S1064827502410633

## 1 Introduction

Many time-dependent PDEs combine low-order nonlinear terms with higher-order linear terms. Examples include the Allen-Cahn, Burgers, Cahn-Hilliard, Fisher-KPP, FitzHugh-Nagumo, Gray-Scott, Hodgkin-Huxley, Kuramoto-Sivashinsky (KS), Navier-Stokes, nonlinear Schrödinger, and Swift-Hohenberg equations. To obtain accurate numerical solutions of such problems, it is desirable to use high-order approximations in space and time. Yet because of the difficulties introduced by the combination of nonlinearity and stiffness, most computations heretofore have been limited to second order in time. Our subject in this paper is fourth-order time-differencing. We shall write the PDE in the form \[u_{t}=\mathcal{L}u+\mathcal{N}(u,t), \tag{1}\] where \(\mathcal{L}\) and \(\mathcal{N}\) are linear and nonlinear operators, respectively. Once we discretize the spatial part of the PDE, we get a system of ODEs, \[u_{t}=\mathbf{L}u+\mathbf{N}(u,t). \tag{2}\] There seem to be five principal competing methods for solving problems of this kind, which we will abbreviate by IMEX, SS, IF, SL, and ETD. Of course these are not the only schemes that are being used. Noteworthy schemes that we ignore are the exponential Runge-Kutta schemes [26] and deferred correction [37] or semi-implicit deferred correction [6, 45]. _IMEX \(=\) Implicit-explicit._ These are a well-studied family of schemes that have an established history in the solution of stiff PDEs. Early work looking at some stability issues dates to the beginning of the 1980s [61]. Schemes have been proposed for specific examples, e.g., the KdV equation [13] and the Navier-Stokes equations [11, 32, 34], as well as certain classes of problems, for example, reaction-diffusion problems [51] and atmospheric modelling problems [62].
An overview of the stability properties and derivations of implicit-explicit schemes can be found in [2]. Implicit-explicit schemes consist of using an explicit multistep formula, for example, the second-order Adams-Bashforth formula, to advance the nonlinear part of the problem and an implicit scheme, for example, the second-order Adams-Moulton formula, to advance the linear part. Other kinds of formulations also exist; for developments based on Runge-Kutta rather than Adams-Bashforth formulae, for example, again see work by Ascher, Ruuth, and Spiteri [3], as well as very recent work by Calvo, Frutos, and Novo [10] and Kennedy and Carpenter [33]. In this report, we use a scheme known either as AB4BD4 (in [14]) or SBDF4 (in [2]), which consists of combining a fourth-order Adams-Bashforth and a fourth-order backward differentiation scheme. The formula for this scheme is \[u_{n+1}=(25-12h\mathbf{L})^{-1}\big(48u_{n}-36u_{n-1}+16u_{n-2}-3u_{n-3}+48h\mathbf{N}_{n}-72h\mathbf{N}_{n-1}+48h\mathbf{N}_{n-2}-12h\mathbf{N}_{n-3}\big)\quad(\text{AB4BD4}). \tag{3}\] _SS \(=\) Split step._ The idea of split step methods seems to have originated with Bagrinovskii and Godunov in the late 1950s [4] and to have been independently developed by Strang for the construction of finite difference schemes [57] (the simplest of these is often called "Strang splitting"). The idea has been widely used in modelling Hamiltonian dynamics, with the Hamiltonian of a system split into its potential and kinetic energy parts. Some early work on this was done by Ruth [50]. Yoshida [63] developed a technique to produce split step methods of arbitrary even order. McLachlan and Atela [41] studied the accuracy of such schemes, and McLachlan [42] made some further comparisons of different symplectic and nonsymplectic schemes.
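As an aside, the AB4BD4/SBDF4 formula above is easy to exercise on a scalar test problem. The NumPy sketch below (ours, not from the paper) applies it to the logistic equation \(u'=u-u^{2}\), i.e. \(\mathbf{L}=1\) and \(\mathbf{N}(u)=-u^{2}\), with the first three steps started from the known exact solution:

```python
import numpy as np

def sbdf4_logistic(h=0.01, t_end=1.0, u0=0.1):
    """Advance u' = u - u^2 (L = 1, N(u) = -u^2) with the AB4BD4/SBDF4
    formula, starting from exact values for the first three steps.
    Returns (numerical value, exact value) at t_end."""
    L = 1.0
    N = lambda u: -u**2
    exact = lambda t: u0 * np.exp(t) / (1.0 - u0 + u0 * np.exp(t))
    nsteps = int(round(t_end / h))
    u = [exact(i * h) for i in range(4)]   # exact startup values u_0..u_3
    for n in range(3, nsteps):
        Nn, Nn1, Nn2, Nn3 = N(u[n]), N(u[n-1]), N(u[n-2]), N(u[n-3])
        # the AB4BD4/SBDF4 update, with h multiplying every N term
        u_new = (48*u[n] - 36*u[n-1] + 16*u[n-2] - 3*u[n-3]
                 + h*(48*Nn - 72*Nn1 + 48*Nn2 - 12*Nn3)) / (25.0 - 12.0*h*L)
        u.append(u_new)
    return u[-1], exact(nsteps * h)
```

Halving the step size should reduce the error by roughly a factor of 16, which is a convenient check that the coefficients have been transcribed correctly.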
Overviews of these methods can be found in Sanz-Serna and Calvo [53] and Boyd [7], and a recent discussion of the relative merits of operator splitting in general can be found in a paper by Schatzman [54]. In essence, with the split step method, we want to write the solution as a composition of linear and nonlinear steps: \[u(t)\approx\exp(c_{1}t\mathbf{L})F(d_{1}t\mathbf{N})\exp(c_{2}t\mathbf{L})F(d_{2}t\mathbf{N})\cdots\exp(c_{k}t\mathbf{L})F(d_{k}t\mathbf{N})u(0), \tag{4}\] where \(c_{i}\) and \(d_{i}\) are real numbers representing _fractional_ time steps (the exponential notation is only formal here: the nonlinear substeps \(F(d_{i}t\mathbf{N})\) are nonlinear maps, not true exponentials). Generating split step methods becomes a matter of finding the appropriate sets of real numbers, \(\{c_{i}\}\) and \(\{d_{i}\}\), such that this composition matches the exact evolution operator to high order. The time-stepping for such a scheme can be either a multistep or a Runge-Kutta formula. We use a fourth-order Runge-Kutta formula for the time-stepping in this experiment. _IF \(=\) Integrating factor._ Techniques that multiply both sides of a differential equation by some integrating factor and then make a relevant change of variable are well known in the theory of ODEs (see, for example, [38]). A similar method has been developed for the study of PDEs. The idea is to make a change of variable that allows us to solve for the linear part exactly, and then use a numerical scheme of our choosing to solve the transformed, nonlinear equation. This technique has been used for PDEs by Milewski and Tabak [44], Maday, Patera, and Ronquist [40], Smith and Waleffe [55, 56], Fornberg and Driscoll [20], Trefethen [60], Boyd [7], and Cox and Matthews [14]. Starting with our generic discretized PDE, we define \[v=e^{-\mathbf{L}t}u. \tag{5}\] The term \(e^{-\mathbf{L}t}\) is known as the _integrating factor_. In many applications we can work in Fourier space and render \(\mathbf{L}\) diagonal, so that scalars rather than matrices are involved.
Differentiating (5) gives \[v_{t}=-e^{-\mathbf{L}t}\mathbf{L}u+e^{-\mathbf{L}t}u_{t}.\] Now, multiplying (2) by the integrating factor gives \[v_{t}=e^{-\mathbf{L}t}u_{t}-e^{-\mathbf{L}t}\mathbf{L}u=e^{-\mathbf{L}t}\mathbf{N}(u),\] that is, \[v_{t}=e^{-\mathbf{L}t}\mathbf{N}(e^{\mathbf{L}t}v).\] This has the effect of ameliorating the stiff linear part of the PDE, and we can use a time-stepping method of our choice (for example, a fourth-order Runge-Kutta formula) to advance the transformed equation. In practice, one doesn't use the equation exactly as written above, but rather replaces actual time, \(t\), with the time step, \(\Delta t\), and incrementally updates the formula from one time step to the next. This greatly improves the stability. In both the split step method and the integrating factor method, we use a fourth-order Runge-Kutta method for the time-stepping. The fourth-order Runge-Kutta algorithm that we used to perform the time integration for this method was \[\begin{split} a&=hf(v_{n},t_{n}),\\ b&=hf(v_{n}+a/2,t_{n}+h/2),\\ c&=hf(v_{n}+b/2,t_{n}+h/2),\\ d&=hf(v_{n}+c,t_{n}+h),\\ v_{n+1}&=v_{n}+\frac{1}{6}(a+2b+2c+d)\quad\text{(Fourth-order RK)},\end{split}\] where \(h\) is the time step and \(f\) is the nonlinear functional on the right-hand side of the transformed equation. For the split step method, we simply replace \(f\) with \(F\) from (4).
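To make the incremental-update idea concrete, here is a scalar NumPy sketch (ours, not from the paper; the paper's codes are in Matlab). Each step integrates the transformed variable over one interval of length \(h\) only, so the exponential arguments never exceed \(h\). The test problem \(u'=-10u+10\cos t-\sin t\), with the forcing taken as \(\mathbf{N}\), has exact solution \(u=\cos t\):

```python
import numpy as np

def if_rk4(L, N, u0, h, nsteps):
    """Integrating-factor RK4 for u' = L*u + N(u, t) with scalar L.
    Per step, v(s) = exp(-L*s) * u(t_n + s) is advanced over s in [0, h]
    by classical RK4, then transformed back; this is the incremental
    update described in the text."""
    u, t = u0, 0.0
    for _ in range(nsteps):
        # RHS of the transformed equation, with time measured from t_n
        g = lambda s, v: np.exp(-L * s) * N(np.exp(L * s) * v, t + s)
        a = h * g(0.0,   u)
        b = h * g(h / 2, u + a / 2)
        c = h * g(h / 2, u + b / 2)
        d = h * g(h,     u + c)
        # RK4 update for v, then back to u:  u(t+h) = exp(L*h) * v(h)
        u = np.exp(L * h) * (u + (a + 2 * b + 2 * c + d) / 6)
        t += h
    return u

# Test: u' = -10u + (10*cos(t) - sin(t)), u(0) = 1, exact u = cos(t).
u_num = if_rk4(L=-10.0, N=lambda u, t: 10 * np.cos(t) - np.sin(t),
               u0=1.0, h=0.01, nsteps=200)
```

The same loop with a diagonal array `L` would handle a Fourier-transformed PDE, which is the setting the text has in mind.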
\begin{tabular}{|c|c|c|} \hline Low \(|k|\) & Medium \(|k|\) & High \(|k|\) \\ \hline AB4/AB4 & AB4/AM6 & AB4/AM2* \\ \hline \end{tabular} Here \(k\) is the wavenumber, AB4 denotes the fourth-order Adams-Bashforth formula, AM6 denotes the sixth-order Adams-Moulton formula, and AM2\({}^{*}\) denotes a modified second-order Adams-Moulton formula specified by \[u^{n+1}=u^{n}+\frac{h}{2}\left(\frac{3}{2}{\bf L}u^{n+1}+\frac{1}{2}{\bf L}u^{n -1}\right),\] where \(h\) is the time step. Unfortunately, this scheme is stable only for purely dispersive equations. In order to generalize the concept, Driscoll has developed a very similar idea using Runge-Kutta time-stepping [17]. Again, the idea is to make use of different schemes for "fast" and "slow" modes. In this case, he uses the fourth-order Runge-Kutta formula to deal with the slow, nonlinear modes, and an implicit-explicit third-order Runge-Kutta method to advance the "fast" linear modes. This is the method that we explore in this paper. _ETD = Exponential time-differencing._ This method is the main focus of this paper, and we will describe it in section 2. One might imagine that extensive comparisons would have been carried out of the behavior of these methods for various PDEs such as those listed in the first paragraph, but this is not so. One reason is that SL and ETD are quite new; but even the other three methods have not been compared as systematically as one might expect. Our aim in beginning this project was to make such a comparison. However, we soon found that further development of the ETD schemes was first needed. As partly recognized by their originators Cox and Matthews, these methods as originally proposed encounter certain problems associated with eigenvalues equal to or close to zero, especially when the matrix \({\bf L}\) is not diagonal. If these problems are not addressed, ETD schemes prove unsuccessful for some PDE applications. 
In section 2 we propose a modification of the ETD schemes that solves these numerical problems. The key idea is to make use of complex analysis and evaluate certain coefficient matrices or scalars via contour integrals in the complex plane. Other modifications would very possibly also achieve the same purpose, but so far as we know, this is the first fully practical ETD method for general use. In section 3 we summarize the results of experimental comparison of the five fourth-order methods listed above for four PDEs: the Burgers, KdV, Allen-Cahn, and KS equations. We find that the ETD scheme outperforms the others. We believe it is the best method currently in existence for stiff PDEs, at least in one space dimension. In making such a bold statement, however, we should add the caveat that we are considering only fixed time steps. Our ETD methods do not extend cheaply to variable time-stepping; an IMEX scheme, for example, is a more natural candidate for such problems. Sections 4 and 5 illustrate the methods in a little more detail for a diagonal example (KS) and a nondiagonal example (Allen-Cahn). They also provide brief Matlab codes for use by our readers as templates. ## 2 A modified ETD scheme Low-order ETD schemes arose originally in the field of computational electrodynamics [59]. They have been independently derived several times [5, 12, 14, 21, 46, 48]--indeed Iserles has pointed out to us that in the ODE context, related ideas go back as far as Filon in 1928 [18, 30]--but the most comprehensive treatment, and in particular the exponential time-differencing fourth-order Runge-Kutta (ETDRK4) formula, is in the paper by Cox and Matthews [14], and it is from this paper that we take details of the scheme. Cox and Matthews argue that ETD schemes outperform IMEX schemes because they treat transient solutions better (where the linear term dominates), and outperform IF schemes because they treat nontransient solutions better (where the nonlinear term dominates). 
Algebraically, ETD is similar to the IF method. The difference is that we do not make a complete change of variable. If we proceed as in the IF approach, apply the same integrating factor, and then integrate over a _single_ time step of length \(h\), we get \[u_{n+1}=e^{\mathbf{L}h}u_{n}+e^{\mathbf{L}h}\int_{0}^{h}e^{-\mathbf{L}\tau}\mathbf{N}(u(t_{n}+\tau),t_{n}+\tau)d\tau. \tag{2.1}\] This equation is exact, and the ETD schemes of various orders come from how one approximates the integral. In their paper Cox and Matthews first present a sequence of recurrence formulae that provide higher- and higher-order approximations of a multistep type. They propose a generating formula \[u_{n+1}=e^{\mathbf{L}h}u_{n}+h\sum_{m=0}^{s-1}g_{m}\sum_{k=0}^{m}(-1)^{k}{m\choose k}\mathbf{N}_{n-k}, \tag{2.2}\] where \(s\) is the order of the scheme. The coefficients \(g_{m}\) are given by the recurrence relation \[\mathbf{L}hg_{0}=e^{\mathbf{L}h}-\mathbf{I},\] \[\mathbf{L}hg_{m+1}+\mathbf{I}=g_{m}+\frac{1}{2}g_{m-1}+\frac{1}{3}g_{m-2}+\cdots+\frac{g_{0}}{m+1},\quad m\geq 0. \tag{2.3}\] Cox and Matthews also derive a set of ETD methods based on Runge-Kutta time-stepping, which they call ETDRK schemes. In this report we consider only the fourth-order scheme of this type, known as ETDRK4. According to Cox and Matthews, the derivation of this scheme is not at all obvious and requires a symbolic manipulation system. The Cox and Matthews ETDRK4 formulae are \[\begin{split} a_{n}&=e^{\mathbf{L}h/2}u_{n}+\mathbf{L}^{-1}(e^{\mathbf{L}h/2}-\mathbf{I})\mathbf{N}(u_{n},t_{n}),\\ b_{n}&=e^{\mathbf{L}h/2}u_{n}+\mathbf{L}^{-1}(e^{\mathbf{L}h/2}-\mathbf{I})\mathbf{N}(a_{n},t_{n}+h/2),\\ c_{n}&=e^{\mathbf{L}h/2}a_{n}+\mathbf{L}^{-1}(e^{\mathbf{L}h/2}-\mathbf{I})\left(2\mathbf{N}(b_{n},t_{n}+h/2)-\mathbf{N}(u_{n},t_{n})\right),\\ u_{n+1}&=e^{\mathbf{L}h}u_{n}+h^{-2}\mathbf{L}^{-3}\big\{[-4-\mathbf{L}h+e^{\mathbf{L}h}(4-3\mathbf{L}h+(\mathbf{L}h)^{2})]\mathbf{N}(u_{n},t_{n})\\ &\qquad+2[2+\mathbf{L}h+e^{\mathbf{L}h}(-2+\mathbf{L}h)](\mathbf{N}(a_{n},t_{n}+h/2)+\mathbf{N}(b_{n},t_{n}+h/2))\\ &\qquad+[-4-3\mathbf{L}h-(\mathbf{L}h)^{2}+e^{\mathbf{L}h}(4-\mathbf{L}h)]\mathbf{N}(c_{n},t_{n}+h)\big\}.\end{split}\] Unfortunately, in this form, ETDRK4 (and indeed any of the ETD schemes of order higher than two) suffers from numerical instability. To understand why this is the case, consider the expression \[g(z)=\frac{e^{z}-1}{z}. \tag{2.4}\] The accurate computation of this function is a well-known problem in numerical analysis and is discussed, for example, in the monograph by Higham [25], as well as the paper by Friesner et al. [21]. The reason it is not straightforward is that for small \(z\), (2.4) suffers from cancellation error.
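The cancellation is easy to reproduce. In the NumPy snippet below (ours, not from the paper), the direct formula loses roughly half its digits at \(z=10^{-9}\), while `np.expm1`, a standard library remedy for the scalar case, retains full accuracy:

```python
import numpy as np

z = 1e-9
# exp(z) rounds to a double near 1 + z with absolute error ~1e-16,
# which the subtraction of 1 then amplifies by a factor 1/z:
direct = (np.exp(z) - 1.0) / z
# expm1 computes exp(z) - 1 without forming exp(z) first:
stable = np.expm1(z) / z
# Taylor value 1 + z/2 + z^2/6, essentially exact for such tiny z:
taylor = 1.0 + z / 2.0 + z**2 / 6.0
```

Printing `direct` reproduces the kind of corrupted value that appears in the small-\(z\) rows of the table in this section, while `stable` agrees with the Taylor value to machine precision.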
We illustrate this in Table 1 by comparing the true value of \(g(z)\) to values computed directly from (2.4) and from five terms of a Taylor expansion. For small \(z\), the direct formula is no good because of cancellation, but the Taylor polynomial is excellent. For large \(z\), the direct formula is fine, but the Taylor polynomial is inaccurate. For one value of \(z\) in the table, neither method gives full precision. The connection between (2.4) and the scheme ETDRK4 becomes apparent when we consider the coefficients in square brackets in the update formula for ETDRK4: \[\begin{split}\alpha&=h^{-2}\mathbf{L}^{-3}[-4-\mathbf{L}h+e^{\mathbf{L}h}(4-3\mathbf{L}h+(\mathbf{L}h)^{2})],\\ \beta&=h^{-2}\mathbf{L}^{-3}[2+\mathbf{L}h+e^{\mathbf{L}h}(-2+\mathbf{L}h)],\\ \gamma&=h^{-2}\mathbf{L}^{-3}[-4-3\mathbf{L}h-(\mathbf{L}h)^{2}+e^{\mathbf{L}h}(4-\mathbf{L}h)].\end{split}\] These three coefficients are higher-order analogues of (2.4). The cancellation errors are even more pronounced in these higher-order variants, and all three suffer disastrous cancellation when \(\mathbf{L}\) has eigenvalues close to zero. This vulnerability can render the higher-order ETD and ETDRK schemes effectively useless for problems whose discretized linear operator has small eigenvalues. Cox and Matthews were aware of this problem, and in their paper they use a cutoff point for small eigenvalues. In particular, as they work mainly with linear operators that are diagonal, they use a Taylor series representation of the coefficients for diagonal elements below the cutoff, much as in Table 1. This approach, however, entails some problems. One is that, as the table illustrates, one must be careful to ensure that there is no overlap region where neither formula is accurate. Another, more serious, problem is how the method generalizes to nondiagonal problems, i.e., matrices rather than scalars.
To handle such cases gracefully one would like a single formula that is simultaneously accurate for all values of \(z\). We have found that this can be achieved by making use of ideas of complex analysis. First let us describe the accuracy problem in general terms. We have a function \(f(z)\) to evaluate that is analytic except for a removable singularity at \(z=0\). For values of \(z\) close to that singularity, though the formula given for \(f(z)\) is mathematically exact, it is numerically inaccurate because of cancellation errors. We seek a uniform procedure to evaluate \(f(z)\) accurately for values of \(z\) that may or may not lie in this difficult region.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \(z\) & Formula (2.4) & 5-term Taylor & Exact \\ \hline 1 & 1.71828182845905 & 1.71666666666667 & 1.71828182845905 \\ 1e-1 & 1.05170918075648 & 1.05170916666667 & 1.05170918075648 \\ 1e-2 & 1.00501670841679 & 1.00501670841667 & 1.00501670841681 \\ 1e-3 & 1.00050016670838 & 1.00050016670834 & 1.00050016670834 \\ 1e-4 & 1.00005000166714 & 1.00005000166671 & 1.00005000166671 \\ 1e-5 & 1.00000500000696 & 1.00000500001667 & 1.00000500001667 \\ 1e-6 & 1.00000049996218 & 1.00000050000017 & 1.00000050000017 \\ 1e-7 & 1.00000004943368 & 1.00000005000000 & 1.00000005000000 \\ 1e-8 & 0.99999999392253 & 1.00000000500000 & 1.00000000500000 \\ 1e-9 & 1.00000008274037 & 1.00000000050000 & 1.00000000050000 \\ 1e-10 & 1.00000008274037 & 1.00000000005000 & 1.00000000005000 \\ 1e-11 & 1.00000008274037 & 1.00000000000500 & 1.00000000000500 \\ 1e-12 & 1.00008890058234 & 1.00000000000050 & 1.00000000000050 \\ 1e-13 & 0.99920072216264 & 1.00000000000005 & 1.00000000000005 \\ \hline \end{tabular} \end{table} Table 1: Computation of \(g(z)\) by two different methods. The direct formula (2.4) is inaccurate for small \(z\), and the order-5 partial sum of the Taylor series is inaccurate for larger \(z\). For \(z=\)1e-2, neither is fully accurate.
All computations are done in IEEE double precision arithmetic. The solution we have found is to evaluate \(f(z)\) via an integral over a contour \(\Gamma\) in the complex plane that encloses \(z\) and is well separated from \(0\): \[f(z)=\frac{1}{2\pi i}\int_{\Gamma}\frac{f(t)}{t-z}\,dt.\] When \(z\) becomes a matrix \(\mathbf{L}\) instead of a scalar, the same approach works, with the term \(1/(t-z)\) becoming the resolvent matrix \((t\mathbf{I}-\mathbf{L})^{-1}\): \[f(\mathbf{L})=\frac{1}{2\pi i}\int_{\Gamma}f(t)(t\mathbf{I}-\mathbf{L})^{-1}\,dt.\] Here \(\Gamma\) can be any contour that encloses the eigenvalues of \(\mathbf{L}\). Contour integrals of analytic functions (scalar or matrix) in the complex plane are easy to evaluate by means of the trapezoid rule, which converges exponentially [15, 16, 24, 60]. In practice we take \(\Gamma\) to be a circle and usually find that 32 or 64 equally spaced points are sufficient. When \(\mathbf{L}\) is real, we can exploit the symmetry and evaluate only at equally spaced points on the upper half of a circle centered on the real axis, then take the real part of the result. The scalars or eigenvalues of \(\mathbf{L}\) that arise in a discretized PDE typically lie in or near the left half of the complex plane and may cover a wide range, which grows with the spatial discretization parameter \(N\). For diffusive problems they are close to the negative real axis (e.g., the KS equation), and for dispersive problems they are close to the imaginary axis (KdV). Suitable contours \(\Gamma\) may accordingly vary from problem to problem. Our experience shows that many different choices work well, so long as one is careful to ensure that the eigenvalues are indeed enclosed by \(\Gamma\). For some diffusive problems, it might be advantageous to use a parabolic contour extending to \(\mathrm{Re}\,z=-\infty\), taking advantage of exponential decay deep in the left half-plane, but we have not used this approach for the problems treated in this paper.
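For the scalar case the whole idea fits in a few lines. The NumPy sketch below (ours, not from the paper, which uses Matlab) evaluates the ETDRK4 coefficient function \(\gamma(z)=z^{-3}[-4-3z-z^{2}+e^{z}(4-z)]\) by the trapezoid rule on a circle of radius 2 about the origin; the contour points stay well away from \(0\), so the direct formula is accurate there even when \(z\) itself is tiny:

```python
import numpy as np

def gamma_direct(t):
    """gamma(t) = t^{-3} [-4 - 3t - t^2 + e^t (4 - t)]: fine for |t| = O(1),
    catastrophically inaccurate for tiny |t| because of cancellation."""
    return (-4.0 - 3.0*t - t*t + np.exp(t)*(4.0 - t)) / t**3

def gamma_contour(z, radius=2.0, M=64):
    """Evaluate gamma(z) by the Cauchy integral over a circle of the given
    radius centered at the origin, using the M-point trapezoid rule."""
    theta = np.pi * (2.0 * np.arange(M) + 1.0) / M   # equally spaced angles
    t = radius * np.exp(1j * theta)                  # points on the contour
    # f(z) = (1/2 pi i) * integral of f(t)/(t-z) dt; with t = R e^{i theta}
    # the trapezoid rule becomes a plain mean of f(t) * t/(t - z).
    return np.mean(gamma_direct(t) * t / (t - z)).real
```

As \(z\to 0\), \(\gamma(z)\to 1/6\); the contour evaluation reproduces this limit to near machine precision at \(z=10^{-6}\), where the direct formula has already lost most of its digits.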
For diagonal problems, we have the additional flexibility of being able to choose a contour \(\Gamma\) that depends on \(z\), such as a circle centered at \(z\). In this special case, the contour integral reduces simply to a mean of \(f(t)\) over \(\Gamma\), which we approximate to full accuracy by a mean over equally spaced points along \(\Gamma\) (or again just the upper half of \(\Gamma\), followed by taking the real part). For the details of exactly how this contour integral approach can be implemented, see the Matlab codes listed in sections 4 and 5. There is considerable flexibility in this procedure, and we do not claim that our particular implementations are optimal, merely that they work and are easy to program. For nondiagonal problems, quite a bit of computation is involved--say, 32 matrix inverses--but as this is done just once before the time-stepping begins (assuming that the time steps are of a fixed size), the impact on the total computing time is small. It would be greater for some problems in multiple space dimensions, but in some cases one could ameliorate the problem with a preliminary Schur factorization to bring the matrix to triangular form. Contour integrals and Taylor series are not the only solutions that have been proposed for this problem. Both Beylkin [5] and Friesner et al. [21], for example, use a method that is based on scaling and squaring. That method is also effective, but the contour integral method appeals to us because of its greater generality for dealing with arbitrary functions. To demonstrate the effectiveness of our stabilization method, Table 2.2 considers the computation of the coefficient \(\gamma\) defined above (here \(z=\mathbf{L}\) and \(h=1\)), both directly from that formula and by a contour integral method. Here we follow the simplest approach in which the contour is a circle of radius 2 sampled at equally spaced points at angles \(\pi/64,3\pi/64,\ldots,127\pi/64\).
The integral becomes just a mean over these points, and because of the \(\pm i\) symmetry, it is enough to compute the mean over the 32 points in the upper half-plane and then take the real part of the result. The table shows accuracy in all digits printed. (In fact, for this example the same is achieved with as few as 12 sample points in the upper half-plane.) Another way to demonstrate the effectiveness of our method is to see it in action. Figures 1-4 of the next section demonstrate fast fourth-order convergence achieved in application to four PDEs. An additional figure in that section, Figure 5, shows what happens to the ETDRK4 method if instead of using contour integrals it is implemented directly from the formulas as written. _Burgers equation_ with periodic boundary conditions, \[\begin{split} u_{t}&=-\left(\frac{1}{2}u^{2}\right)_{x}+ \epsilon u_{xx},\quad x\in[-\pi,\pi],\\ u(x,t=0)&=\exp(-10\sin^{2}(x/2)),\end{split} \tag{10}\] with \(\epsilon=0.03\) and the simulation running to \(t=1\). _KS equation_ with periodic boundary conditions, \[\begin{split} u_{t}&=-uu_{x}-u_{xx}-u_{xxxx},\quad x \in[0,32\pi],\\ u(x,t=0)&=\cos\left(\frac{x}{16}\right)\left(1+\sin \left(\frac{x}{16}\right)\right),\end{split} \tag{11}\] with the simulation running to \(t=30\). _Allen-Cahn equation_ with constant Dirichlet boundary conditions, \[\begin{split} u_{t}&=\epsilon u_{xx}+u-u^{3},\quad x \in[-1,1],\\ u(x,t=0)&=.53x+.47\sin(-1.5\pi x),\quad u(-1,t)=-1, \quad u(1,t)=1,\end{split} \tag{12}\] with \(\epsilon=0.001\) and the simulation running to \(t=3\). To impose the boundary conditions we define \(u=w+x\) and work with homogeneous boundary conditions in the \(w\) variable; the spatial discretization is by an 80-point Chebyshev spectral method (see section 5). 
We emphasize that the first three problems, because of the periodic boundary conditions, can be reduced to diagonal form by Fourier transformation, whereas the fourth cannot be reduced this way and thus forces us to work with matrices rather than scalars. Our results are summarized in Figures 1-4. The first plot in each figure compares accuracy against step size and thus should be reasonably independent of machine and implementation. Ultimately, of course, it is computer time that matters, and this is what is displayed in the second plot in each figure, based on our Matlab implementations on an 800 MHz Pentium 3 machine. Other implementations and other machines would give somewhat different results. Before considering the differences among methods revealed in Figures 1-4, let us first highlight the most general point: our computations show that it is entirely practical to solve these difficult nonlinear PDEs to high accuracy by fourth-order time-stepping. Most simulations in the past have been of lower order in time, typically second order, but we believe that for most purposes, a fourth-order method is superior. Turning now to the differences between methods revealed in Figures 1-4, the first thing we note is that the differences are very considerable. The methods differ in efficiency by factors as great as 10 or higher. We were not able to make every method work in every case. If a method does not appear on a graph it means that it did not succeed, seemingly for reasons of nonlinear instability (which perhaps might have been tackled by dealiasing in our spectral discretizations). A key feature to look for in the first plot in each figure is the relative positioning of the different methods. Schemes that are further to the right for a given accuracy take fewer steps to achieve that accuracy. It is possible that each step is more costly, however, so just because a scheme achieves a good accuracy in few steps does not mean that it is the most efficient. 
The second plot in the figures gives insight into these computation times. We can make a few comments on each of the methods investigated. _Exponential time-differencing_ is very good in every case. It works equally well for diagonal and nondiagonal problems, it is fast and accurate, and it can take large time steps. The _implicit-explicit_ scheme used in this study does not perform well. This is striking, as it is probably the most widely used of all the schemes. It fares particularly poorly for the dispersive equations, KdV and Burgers, and for the KdV we could not get the IMEX scheme to work at all at the spatial resolution that we used. The _split step_ method also performed poorly. It was unstable for all of the experiments that we performed with 512 points. The main problem with this scheme, even if it is stable, is the long computation time caused by the large number of Runge-Kutta evaluations at each time step for the nonlinear term. This becomes a particular problem with schemes of higher than second order. For second-order calculations, split step schemes are certainly competitive. The _slider_ method does well in all of the diagonal cases. It is fast, accurate, and very stable. This is remarkable when we compare its performance with that of the IMEX schemes from which it was derived. The only problem with this scheme is the difficulty in generalizing it to nondiagonal cases. Finally, the _integrating factor_ scheme performs well for the Burgers equation. It doesn't do well for the KS equation, though, coming off worst of all, and we couldn't get it to work for the KdV equation or the Allen-Cahn equation with the spatial discretization that we used. This is also a little surprising, considering how widely used this scheme is.

Figure 1: Accuracy versus time step and computer time for three schemes for the KdV equation.

Figure 2: Accuracy versus time step and computer time for four schemes for the Burgers equation.
Figure 3: Results for the KS equation. Our Matlab code is listed in section 4.

Figure 4: Results for the Allen–Cahn equation. This problem is nondiagonal and more challenging than the others. Our Matlab code is listed in section 5.

One important consideration is how easily these schemes generalize to several space dimensions. With the exception of the slider method, they all generalize in a straightforward manner. Formally, almost nothing changes other than the fact that we must work with tensor product matrices. There are a few problems that arise from this fact, though. The ETD, IF, and SS schemes all need to calculate and store a matrix exponential. Even if the original matrix is sparse, this is not an insignificant amount of work, and the matrix exponential will not itself be sparse, which can mean a significant amount of storage. It is true that these can be calculated in advance, and for one-dimensional problems the calculation is not very expensive. As the dimension of the problem increases, however, the cost goes up. Our difficulty with the IF method for the nondiagonal problem was that numerical instability meant that we were unable to calculate the appropriate exponential at all. The nature of these matrices depends crucially on whether the problem is periodic. Each periodic dimension corresponds to a diagonalization; if all dimensions are periodic, we have only scalars to deal with, and if only one dimension is nonperiodic, we have a collection of small blocks. Another generalization that one might consider is how easily these schemes could be adapted to take advantage of variable time-stepping. The IMEX and slider methods should be relatively easy to adapt, while those methods that use a matrix exponential would present some difficulties. Overall then, it appears that the ETD scheme requires the fewest steps to achieve a given accuracy.
It is also the fastest method in computation time, has excellent stability properties, and is the most general. It might be noted that the numerical experiments we have presented, since they resolve the spatial part of the problem to very high accuracy, are somewhat stiffer than if the spatial errors had been permitted on the same scale as the temporal errors, raising the question of whether ETD would also be the best for less fully resolved problems. Experiments with coarser resolutions indicate that yes, the advantages of ETD are compelling there, too. We conclude this section with an illustration of how crucial our complex contour integrals, or other stabilization devices, are to the success of ETD schemes. Figure 5 shows what happens if instead of a contour of radius 1 we shrink the radius to \(10^{-3}\), \(10^{-8}\), or 0. The accuracy is quickly lost. The case of radius 0 corresponds to an ETD scheme implemented directly from the formula without any stabilization device beyond the use of l'Hopital's rule to eliminate the removable singularity at \(z=0\). ## 4 A diagonal example: Kuramoto-Sivashinsky We now give a little more detail about the KS equation, which dates to the mid-1970s and has been used in the study of a variety of reaction-diffusion systems [39]. Our one-dimensional problem can be written as \[u_{t}=-uu_{x}-u_{xx}-u_{xxxx},\quad x\in[0,32\pi]. \tag{11}\] As it contains both second- and fourth-order derivatives, the KS equation produces complex behavior. The second-order term acts as an energy source and has a destabilizing effect, and the nonlinear term transfers energy from low to high wavenumbers where the fourth-order term has a stabilizing effect. The KS equation is also very interesting from a dynamical systems point of view, as it is a PDE that can exhibit chaotic solutions [28, 47]. We use the initial condition \[u(x,0)=\cos(x/16)(1+\sin(x/16)). 
\tag{14}\] As the equation is periodic, we discretize the spatial part using a Fourier spectral method. Transforming to Fourier space gives \[\widehat{u}_{t}=-\frac{ik}{2}\widehat{u^{2}}+(k^{2}-k^{4})\widehat{u}, \tag{15}\] or, in the standard form of (2), \[(\mathbf{L}\hat{u})(k)=(k^{2}-k^{4})\hat{u}(k),\qquad\mathbf{N}(\hat{u},t)=\mathbf{N}(\hat{u})=-\frac{ik}{2}F((F^{-1}(\hat{u}))^{2}), \tag{16}\] where \(F\) denotes the discrete Fourier transform. We solve the problem entirely in Fourier space and use ETDRK4 time-stepping to solve to \(t=150\). Figure 6 shows the result, which took less than 1 second of computer time on our workstation. Despite the extraordinary sensitivity of the solution at later times to perturbations in the initial data (such perturbations are amplified by as much as \(10^{8}\) up to \(t=150\)), we are confident that this image is correct to plotting accuracy. It would not have been practical to achieve this with a time-stepping scheme of lower order. We produced Figure 6 with the Matlab code listed in Figure 7.

Figure 5: Loss of stability in the ETD scheme applied to the Burgers equation as the radius of the contour shrinks to zero. The contour of zero radius corresponds to an ETD calculation directly from the defining formula.

## 5 A nondiagonal example: Allen-Cahn

The Allen-Cahn equation is another well-known equation from the area of reaction-diffusion systems: \[u_{t}=\epsilon u_{xx}+u-u^{3},\quad x\in[-1,1], \tag{17}\] and, following p34.m of [60], we used the initial and boundary conditions \[u(x,0)=.53x+.47\sin(-1.5\pi x),\quad u(-1,t)=-1,\quad u(1,t)=1. \tag{18}\] It has stable equilibria at \(u=\pm 1\) and an unstable equilibrium at \(u=0\). One of the interesting features of this equation is the phenomenon of _metastability_. Regions of the solution that are near \(\pm 1\) will be flat, and the interface between such areas can remain unchanged over a very long time scale before changing suddenly.
We can write a discretization of this equation in our standard form (2), with \[{\bf L}=\epsilon{\bf D}^{2},\qquad{\bf N}(u,t)=u-u^{3}, \tag{19}\] where \({\bf D}\) is the Chebyshev differentiation matrix [60]. \({\bf L}\) is now a full matrix. Again we use ETDRK4 for the time-stepping and we solve up to \(t=70\) with \(\epsilon=0.01\). Figure 8 shows the result produced by the Matlab code listed in Figure 9, which also runs in less than a second on our workstation. This code calls the function cheb.m from [60], available at http://www.comlab.ox.ac.uk/work/nick.trefethen.

% kursiv.m - solution of Kuramoto-Sivashinsky equation by ETDRK4 scheme
%
%   u_t = -u*u_x - u_xx - u_xxxx, periodic BCs on [0,32*pi]
%   computation is based on v = fft(u), so linear term is diagonal
%   compare p27.m in Trefethen, "Spectral Methods in MATLAB", SIAM 2000
%   AK Kassam and LN Trefethen, July 2002

% Spatial grid and initial condition:
N = 128;
x = 32*pi*(1:N)'/N;
u = cos(x/16).*(1+sin(x/16));
v = fft(u);

% Precompute various ETDRK4 scalar quantities:
h = 1/4;                          % time step
k = [0:N/2-1 0 -N/2+1:-1]'/16;    % wave numbers
L = k.^2 - k.^4;                  % Fourier multipliers
E = exp(h*L); E2 = exp(h*L/2);
M = 16;                           % no. of points for complex means
r = exp(1i*pi*((1:M)-.5)/M);      % roots of unity
LR = h*L(:,ones(M,1)) + r(ones(N,1),:);
Q  = h*real(mean( (exp(LR/2)-1)./LR ,2));
f1 = h*real(mean( (-4-LR+exp(LR).*(4-3*LR+LR.^2))./LR.^3 ,2));
f2 = h*real(mean( (2+LR+exp(LR).*(-2+LR))./LR.^3 ,2));
f3 = h*real(mean( (-4-3*LR-LR.^2+exp(LR).*(4-LR))./LR.^3 ,2));

% Main time-stepping loop:
uu = u; tt = 0;
tmax = 150; nmax = round(tmax/h); nplt = floor((tmax/100)/h);
g = -0.5i*k;
for n = 1:nmax
    t = n*h;
    Nv = g.*fft(real(ifft(v)).^2);
    a = E2.*v + Q.*Nv;
    Na = g.*fft(real(ifft(a)).^2);
    b = E2.*v + Q.*Na;
    Nb = g.*fft(real(ifft(b)).^2);
    c = E2.*a + Q.*(2*Nb-Nv);
    Nc = g.*fft(real(ifft(c)).^2);
    v = E.*v + Nv.*f1 + 2*(Na+Nb).*f2 + Nc.*f3;
    if mod(n,nplt)==0
        u = real(ifft(v));
        uu = [uu,u]; tt = [tt,t];
    end
end

% Plot results:
surf(tt,x,uu), shading interp, lighting phong, axis tight
view([-90 90]), colormap(autumn); set(gca,'zlim',[-5 50])
light('color',[1 1 0],'position',[-1,2,2])
material([0.30 0.60 0.60 40.00 1.00]);

Figure 7: Matlab code to solve the KS equation and produce Figure 6. Despite the extraordinary sensitivity of this equation to perturbations, this code computes correct results in less than 1 second on an 800 MHz Pentium machine.

Figure 8: Time evolution for the Allen–Cahn equation. The x-axis runs from \(x=-1\) to \(x=1\), and the t-axis runs from \(t=0\) to \(t=70\). The initial hump is metastable and disappears near \(t=45\).

## References

* [1]A. Aceves, H. Adachihara, C. Jones, J. C. Lerman, D. W. McLaughlin, J. V. Moloney, and A. C. Newell, _Chaos and coherent structures in partial differential equations_, Phys. D, 18 (1986), pp. 85-112. * [2]U. M. Ascher, S. J. Ruuth, and B. T. R.
Wetton, _Implicit-explicit methods for time-dependent partial differential equations_, SIAM J. Numer. Anal., 32 (1995), pp. 797-823. * [3]U. M. Ascher, S. J. Ruuth, and R. J. Spiteri, _Implicit-explicit Runge-Kutta methods for time-dependent partial differential equations_, Appl. Numer. Math., 25 (1997), pp. 151-167. * [4]K. A. Bagrinovskii and S. K. Godunov, _Difference schemes for multi-dimensional problems_, Dokl. Akad. Nauk, 115 (1957), pp. 431-433. * [5]G. Beylkin, J. M. Keiser, and L. Vozovoi, _A new class of time discretization schemes for the solution of nonlinear PDEs_, J. Comput. Phys., 147 (1998), pp. 362-387. * [6]A. Bourlioux, A. T. Layton, and M. L. Minion, _High-order multi-implicit spectral deferred correction methods for problems of reactive flows_, J. Comput. Phys., 189 (2003), pp. 651-675. * [7]J. P. Boyd, _Chebyshev and Fourier Spectral Methods_, Dover, Mineola, NY, 2001; also available online at http://www-personal.engin.umich.edu/~jpboyd/. * [8]J. M. Burgers, _A mathematical model illustrating the theory of turbulence_, Adv. Appl. Mech., 1 (1948), pp. 171-199. * [9]G. D. Byrne and A. C. Hindmarsh, _Stiff ODE solvers: A review of current and coming attractions_, J. Comput. Phys., 70 (1987), pp. 1-62. * [10]M. P. Calvo, J. de Frutos, and J. Novo, _Linearly implicit Runge-Kutta methods for advection-diffusion equations_, Appl. Numer. Math., 37 (2001), pp. 535-549. * [11]C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. A. Zang, _Spectral Methods in Fluid Dynamics_, Springer-Verlag, Berlin, 1988. * [12]J. Certaine, _The solution of ordinary differential equations with large time constants_, in Mathematical Methods for Digital Computers, A. Ralston and H. S. Wilf, eds., Wiley, New York, 1960, pp. 128-132. * [13]T. F. Chan and T. Kerkhoven, _Fourier methods with extended stability intervals for the Korteweg-de Vries equation_, SIAM J. Numer. Anal., 22 (1985), pp. 441-454. * [14]S. M. Cox and P. C.
Matthews, _Exponential time differencing for stiff systems_, J. Comput. Phys., 176 (2002), pp. 430-455. * [15]P. J. Davis, _On the numerical integration of periodic analytic functions_, in On Numerical Approximation, E. R. Langer, ed., University of Wisconsin Press, Madison, WI, 1959, pp. 45-49. * [16]P. J. Davis and P. Rabinowitz, _Methods of Numerical Integration_, 2nd ed., Academic Press, New York, 1984. * [17]T. A. Driscoll, _A composite Runge-Kutta method for the spectral solution of semilinear PDEs_, J. Comput. Phys., 182 (2002), pp. 357-367. * [18]L. N. G. Filon, _On a quadrature formula for trigonometric integrals_, Proc. Roy. Soc. Edinburgh Sect. A, 49 (1928-1929), pp. 38-47. * [19]B. Fornberg, _A Practical Guide to Pseudospectral Methods_, Cambridge University Press, Cambridge, UK, 1996. * [20]B. Fornberg and T. A. Driscoll, _A fast spectral algorithm for nonlinear wave equations with linear dispersion_, J. Comput. Phys., 155 (1999), pp. 456-467. * [21]R. A. Friesner, L. S. Tuckerman, B. C. Dornblaser, and T. V. Russo, _A method for exponential propagation of large stiff nonlinear differential equations_, J. Sci. Comput., 4 (1989), pp. 327-354. * [22]E. Hairer, S. P. Norsett, and G. Wanner, _Solving Ordinary Differential Equations_ I, Springer-Verlag, Berlin, 1991. * [23]E. Hairer and G. Wanner, _Solving Ordinary Differential Equations_ II, Springer-Verlag, Berlin, 1996. * [24]P. Henrici, _Applied and Computational Complex Analysis_, Vol. 3, Wiley, New York, 1986. * [25]N. J. Higham, _Accuracy and Stability of Numerical Algorithms_, SIAM, Philadelphia, 1996. * [26]M. Hochbruck, C. Lubich, and H. Selhofer, _Exponential integrators for large systems of differential equations_, SIAM J. Sci. Comput., 19 (1998), pp. 1552-1574. * [27]R. A. Horn and C. R. Johnson, _Topics in Matrix Analysis_, Cambridge University Press, Cambridge, UK, 1991. * [28]J. M. Hyman and B. 
Nicolaenko, _The Kuramoto-Sivashinsky equation: A bridge between PDEs and dynamical systems_, Phys. D, 18 (1986), pp. 113-126. * [29]A. Iserles, _A First Course in the Numerical Analysis of Differential Equations_, Cambridge University Press, Cambridge, UK, 2000. * [30]A. Iserles, _On the numerical quadrature of highly oscillating integrals_ I: _Fourier transforms_, IMA J. Numer. Anal., 24 (2004), pp. 365-391. * [31]J. C. Jimenez, R. Biscay, C. Mora, and L. M. Rodriguez, _Dynamic properties of the local linearization method for initial-value problems_, Appl. Math. Comput., 126 (2002), pp. 63-81. * [32]G. E. Karniadakis, M. Israeli, and S. A. Orszag, _High order splitting methods for the incompressible Navier-Stokes equations_, J. Comput. Phys., 97 (1991), pp. 414-443. * [33]C. A. Kennedy and M. H. Carpenter, _Additive Runge-Kutta schemes for convection-diffusion-reaction equations_, Appl. Numer. Math., 44 (2003), pp. 139-181. * [34]J. Kim and P. Moin, _Applications of a fractional step method to incompressible Navier-Stokes equations_, J. Comput. Phys., 59 (1985), pp. 308-323. * [35]O. M. Knio, H. N. Najm, and P. S. Wyckoff, _A semi-implicit numerical scheme for reacting flow_, J. Comput. Phys., 154 (1999), pp. 428-467. * [36]D. Korteweg and G. de Vries, _On the change of form of long waves advancing in a rectangular canal, and on a new type of long stationary waves_, Philos. Mag. Ser. 5, 39 (1895), pp. 422-443. * [37]W. Kress and B. Gustafsson, _Deferred correction methods for initial value boundary problems_, J. Sci. Comput., 17 (2002), pp. 241-251. * [38]M. Krusemeyer, _Differential Equations_, Macmillan College Publishing, New York, 1994. * [39]Y. Kuramoto and T. Tsuzuki, _Persistent propagation of concentration waves in dissipative media far from thermal equilibrium_, Prog. Theoret. Phys., 55 (1976), pp. 356-369. * [40]Y. Maday, A. T. Patera, and E. M.
Ronquist, _An operator-integration-factor splitting method for time-dependent problems: Application to incompressible fluid flow_, J. Sci. Comput., 5 (1990), pp. 263-292. * [41]R. I. McLachlan and P. Atela, _The accuracy of symplectic integrators_, Nonlinearity, 5 (1992), pp. 541-562. * [42]R. McLachlan, _Symplectic integration of Hamiltonian wave equations_, Numer. Math., 66 (1994), pp. 465-492. * [43]W. J. Merryfield and B. Shizgal, _Properties of collocation third-derivative operators_, J. Comput. Phys., 105 (1993), pp. 182-185. * [44]P. A. Milewski and E. G. Tabak, _A pseudospectral procedure for the solution of nonlinear wave equations with examples from free-surface flows_, SIAM J. Sci. Comput., 21 (1999), pp. 1102-1114. * [45]M. L. Minion, _Semi-implicit spectral deferred correction methods for ordinary differential equations_, Commun. Math. Sci., 1 (2003), pp. 471-500. * [46]D. R. Mott, E. S. Oran, and B. van Leer, _A quasi-steady state solver for the stiff ordinary differential equations of reaction kinetics_, J. Comput. Phys., 164 (2000), pp. 407-428. * [47]B. Nicolaenko, B. Scheurer, and R. Temam, _Some global properties of the Kuramoto-Sivashinsky equation: Nonlinear stability and attractors_, Phys. D, 16 (1985), pp. 155-183. * [48]S. P. Norsett, _An \(A\)-stable modification of the Adams-Bashforth methods_, in Conference on Numerical Solution of Differential Equations (Dundee, 1969), Lecture Notes in Math. 109, Springer-Verlag, Berlin, 1969, pp. 214-219. * [49]E. Ott, _Chaos in Dynamical Systems_, Cambridge University Press, Cambridge, UK, 1993. * [50]R. D. Ruth, _A canonical integration technique_, IEEE Trans. Nuclear Science, NS-30 (1983), pp. 2669-2671. * [51]S. J. Ruuth, _Implicit-explicit methods for reaction-diffusion problems in pattern formation_, J. Math. Biol., 34 (2) (1995), pp. 148-176. * [52]Y. Saad, _Analysis of some Krylov subspace approximations to the matrix exponential operator_, SIAM J. Numer. Anal., 29 (1992), pp. 209-228. * [53]J.
M. Sanz-Serna and M. P. Calvo, _Numerical Hamiltonian Problems_, Chapman and Hall, London, 1994. * [54]M. Schatzman, _Toward non-commutative numerical analysis: High order integration in time_, J. Sci. Comput., 17 (2002), pp. 99-116. * [55]L. M. Smith and F. Waleffe, _Transfer of energy to two-dimensional large scales in forced, rotating three-dimensional turbulence_, Phys. Fluids, 11 (1999), pp. 1608-1622. * [56]L. M. Smith and F. Waleffe, _Generation of slow large scales in forced rotating stratified turbulence_, J. Fluid Mech., 451 (2002), pp. 145-169. * [57]G. Strang, _On the construction and comparison of difference schemes_, SIAM J. Numer. Anal., 5 (1968), pp. 506-517. * [58]E. Tadmor, _The exponential accuracy of Fourier and Chebyshev differencing methods_, SIAM J. Numer. Anal., 23 (1986), pp. 1-10. * [59]A. Taflove, _Computational Electrodynamics: The Finite-Difference Time-Domain Method_, Artech House, Boston, 1995. * [60]L. N. Trefethen, _Spectral Methods in MATLAB_, Software Environ. Tools 10, SIAM, Philadelphia, 2000. * [61]J. M. Varah, _Stability restrictions on second order, three level finite difference schemes for parabolic equations_, SIAM J. Numer. Anal., 17 (1980), pp. 300-309. * [62]J. G. Verwer, J. G. Blom, and W. Hundsdorfer, _An implicit-explicit approach for atmospheric transport-chemistry problems_, Appl. Numer. Math., 20 (1996), pp. 191-209. * [63]H. Yoshida, _Construction of higher order symplectic integrators_, Phys. Lett. A, 150 (1990), pp. 262-268.

## Chapter 11 Kuramoto-Sivashinsky Equation

The _Kuramoto-Sivashinsky equation_ ([6], p. 593) is

\[\frac{\partial u}{\partial t}=-u\frac{\partial u}{\partial x}-\alpha\frac{ \partial^{2}u}{\partial x^{2}}-\beta\frac{\partial^{3}u}{\partial x^{3}}- \gamma\frac{\partial^{4}u}{\partial x^{4}} \tag{11.1}\]

with the analytical solution [6] for the parameter values \(\beta=0,\alpha=\gamma=1,k=\pm\sqrt{\frac{11}{19}}\).
\[u(x,t)=\frac{15}{19}k\left(11H^{3}-9H+2\right);\;\;H=\tanh\left(\frac{1}{2}kx -\frac{15}{19}k^{2}t\right). \tag{11.2}\]

Two _Dirichlet_ BCs are available by applying eq. (11.2) at \(x=-10,20\). Two more BCs are available by differentiating eq. (11.2),

\[\frac{\partial u}{\partial x}=\frac{15}{19}k\left(33H^{2}\frac{\partial H}{ \partial x}-9\frac{\partial H}{\partial x}\right) \tag{11.3}\]

with

\[\frac{\partial H}{\partial x}=\frac{1}{2}k\left[1-\tanh^{2}\left(\frac{1}{2}kx -\frac{15}{19}k^{2}t\right)\right].\]

Equation (11.3) can then be applied as _Neumann_ BCs at \(x=-10,20\). The Matlab routines closely resemble those of Chapters 3-10. Here, we list a few details pertaining to eqns. (11.1)-(11.3). The ODE routine pde_1.m closely parallels the ODE routines of Chapters 3-10 and is not reproduced here. The IC from eq. (11.2) with \(t=0\) is programmed in inital_1.m (the IC is evaluated by calls to ua_1.m with \(t=0\)).

function u0=inital_1(t0)
%
% Function inital_1 sets the initial condition for the Kuramoto-
% Sivashinsky equation
%
% Parameters shared with other routines
global x1 xu x n ncall
%
% Spatial domain and initial condition
x1=-10; xu=20; n=201; dx=(xu-x1)/(n-1);
%
% IC from analytical solution
for i=1:n
    x(i)=x1+(i-1)*dx;
    u0(i)=ua_1(x(i),0.0);
end

Listing 11.2: Function inital_1.m for the IC from eq. (11.2) with \(t=0\).

We can note the following points about inital_1.m:

* The function and some global parameters are first defined.

function u0=inital_1(t0)
%
% Function inital_1 sets the initial condition for the Kuramoto-
% Sivashinsky equation
%
% Parameters shared with other routines
global x1 xu x n ncall

* The grid in \(x\) is then defined over the interval \(-10\leq x\leq 20\) for 201 points.

%
% Spatial domain and initial condition
x1=-10; xu=20; n=201; dx=(xu-x1)/(n-1);
%
% IC from analytical solution
for i=1:n
    x(i)=x1+(i-1)*dx;
    u0(i)=ua_1(x(i),0.0);
end

As the grid in \(x\) is defined in the for loop, function ua_1 (listed next) is called (for \(t=0\)) to define the IC from eq. (11.2) with \(t=0\). Function ua_1.m is a straightforward implementation of the analytical solution, eq. (11.2).

function uanal=ua_1(x,t)
%
% Function uanal computes the exact solution of the Kuramoto-
% Sivashinsky equation for comparison with the numerical solution
%
% Model parameters
global alpha beta gamma k
%
% Analytical solution
H=tanh(0.5*k*x-(15/19)*k^2*t);
uanal=(15/19)*k*(11*H^3-9*H+2);

Listing 11.3: Function ua_1.m for analytical solution (11.2).

As noted previously, eq. (11.1) is fourth order in \(x\) and therefore requires four BCs. Two Dirichlet BCs are available by applying eq. (11.2) at \(x=-10,20\). Two more (Neumann) BCs are available by differentiating eq. (11.2) with respect to \(x\) (to give eq. (11.3)). This derivative is programmed in uax_1.m.

function uax=uax_1(x,t)
%
% Function uax computes the derivative (in x) of the exact solution
% of the Kuramoto-Sivashinsky equation
%
% Model parameters
global alpha beta gamma k
%
% Analytical solution
arg=0.5*k*x-(15/19)*k^2*t;
H=tanh(arg);
Hx=(1-tanh(arg)^2)*0.5*k;
uax=(15/19)*k*(33*H^2*Hx-9*Hx);

Listing 11.4: Function uax_1.m for the analytical derivative (11.3).
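The pair ua_1/uax_1 can be cross-checked numerically: a centered finite difference of the analytical solution (11.2) should reproduce the analytical derivative (11.3) to \(O(h^{2})\). A quick check (our own Python transcription of the two routines, not code from this chapter):

```python
import math

k = math.sqrt(11.0 / 19.0)

def ua(x, t):
    # analytical solution, eq. (11.2)
    H = math.tanh(0.5 * k * x - (15.0 / 19.0) * k**2 * t)
    return (15.0 / 19.0) * k * (11.0 * H**3 - 9.0 * H + 2.0)

def uax(x, t):
    # analytical x-derivative, eq. (11.3)
    arg = 0.5 * k * x - (15.0 / 19.0) * k**2 * t
    H = math.tanh(arg)
    Hx = 0.5 * k * (1.0 - H**2)          # dH/dx
    return (15.0 / 19.0) * k * (33.0 * H**2 * Hx - 9.0 * Hx)

# a centered difference of ua should match uax to O(h^2)
x0, t0, h = 1.0, 2.0, 1e-5
fd = (ua(x0 + h, t0) - ua(x0 - h, t0)) / (2.0 * h)
print(abs(fd - uax(x0, t0)))   # a very small number, O(h^2)
```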
Function uax_1.m is a straightforward implementation of the analytical derivative, eq. (11.3). The main program, pde_1_main, is essentially the same as pde_1_main of Listing 2.1. A selected part of this routine follows.

% Model parameters
global alpha beta gamma k
alpha=1; beta=0; gamma=1; k=(11/19)^0.5;
%
% Independent variable for ODE integration
t0=0.0; tf=10; tout=[t0:2:tf]'; nout=6; ncall=0;
%
% Initial condition
u0=inital_1(t0);
  .
  .
%
% Display selected output
for it=1:nout
    fprintf('\n   t      x       u(it,i)   u_anal(it,i)    err(it,i)\n');
    for i=1:5:n
        fprintf('%6.2f%8.3f%15.6f%15.6f%15.6f\n',...
                t(it),x(i),u(it,i),u_anal(it,i),err(it,i));
    end
end
fprintf('  ncall = %4d\n\n',ncall);
%
% Plot numerical and analytical solutions
figure(2)

This code consists essentially of two parts.

* The problem parameters are defined numerically and the interval in \(t\) is defined as \(0\leq t\leq 10\) with output displayed at \(t=0,2,4,\ldots,10\).
* The solution is plotted in 2D (by a call to plot) and 3D (by a call to surf).

This main program produces the same three figures and tabulated output as Chapters 3-10, which are now reviewed. The Jacobian matrix routine jpattern_num_1.m is the same as jpattern_num_1.m in Chapters 3-10 and is therefore not reproduced here. Figure 11.1 indicates good agreement between the analytical and numerical solutions. Figure 11.2 is the 3D plot of the numerical solution. The map of the ODE Jacobian matrix, Fig. 11.3, reflects the banded structure of the ODEs produced by dss004. The tabular analytical and numerical solutions indicate good agreement as displayed in Table 11.1. The computational effort reflected in ncall = 564 is modest. We note in eq. (11.2) that \(x\) and \(t\) appear as the linear combination \(\frac{1}{2}kx-\frac{15}{19}k^{2}t\), so that the analytical solution represents a _traveling wave_ as reflected in Fig. 11.1.
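Since \(\frac{1}{2}kx-\frac{15}{19}k^{2}t=\frac{1}{2}k\left(x-\frac{30}{19}kt\right)\), the wave velocity implied by the analytical solution is \(v=\frac{30}{19}k\), which is easy to confirm numerically (an illustrative Python check; ua mirrors the Matlab routine ua_1.m):

```python
import math

k = math.sqrt(11.0 / 19.0)
v = (30.0 / 19.0) * k      # wave velocity read off from the phase variable

def ua(x, t):
    # analytical solution, eq. (11.2)
    H = math.tanh(0.5 * k * x - (15.0 / 19.0) * k**2 * t)
    return (15.0 / 19.0) * k * (11.0 * H**3 - 9.0 * H + 2.0)

# a traveling wave is constant along x = x0 + v*(t - t0)
x0, t0, dt = 2.0, 1.0, 5.0
print(abs(ua(x0 + v * dt, t0 + dt) - ua(x0, t0)))  # zero up to roundoff
```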
If we consider the _Lagrangian_ variable to be \(k(x-vt)\), where \(k\) and \(v\) are the _wavenumber_ and _wave velocity_, respectively, then \(k\) and \(v\) follow immediately from the linear combination \(\frac{1}{2}kx-\frac{15}{19}k^{2}t\). This point was discussed in earlier chapters, in which the wave velocity from the analytical solution was compared with the wave velocity from the numerical solution (see Chapter 10 for an example of this comparison). In summary, the solution of eq. (11.1) subject to the IC from eq. (11.2) (with \(t=0\)), two Dirichlet BCs from eq. (11.2) (with \(x=-10,20\)), and two Neumann BCs from eq. (11.3) (with \(x=-10,20\)) is straightforward. Also, eq. (11.1) is nonlinear, yet the programming in pde_1.m is straightforward. Consequently, variations in the PDE can easily be made for cases for which an analytical solution might not be available.

Figure 11.2: 3D plot of the numerical solution to eq. (11.1).

Figure 11.3: Jacobian matrix map of the MOL ODEs for \(n=201\).

## Appendix

The _Kuramoto-Sivashinsky_ equation describes one of the simplest nonlinear systems that exhibit _turbulence_. It has been used to study various reaction-diffusion problems and, in particular, it is used to model the _thermal mechanism of flame propagation_ or _combustion waves_. Both Gregory Sivashinsky, whilst studying _laminar flame fronts_ [7], and Yoshiki Kuramoto, whilst studying _diffusion-induced chaos_ [4], independently discovered the equation now known as the _Kuramoto-Sivashinsky_ equation, which is usually presented in a normalized form. This equation models the time evolution of flame-front velocity, which is determined by a balance between the quantity of heat released by the combustion reaction and the heat required to preheat the incoming reactants. Eq. (11.1) with constants \(\alpha=1\), \(\beta=0\), \(\gamma=1\) reduces to the form

\[u_{t}=-\frac{1}{2}\left(u^{2}\right)_{x}-u_{xx}-u_{xxxx},\]
which can be simulated using periodic boundary conditions, \(x\in[0,L]\), to give rich examples of chaotic behavior. The results at later times are extremely sensitive to small changes in initial conditions, and the transition from a smooth solution to chaos makes modeling in the time domain very difficult for extended simulated time periods. However, transforming the problem into the Fourier domain greatly reduces the stiffness of the problem and enables good results to be obtained with little computing effort. For example, see Fig. 11.4.

\begin{table}
\begin{tabular}{l c c c c}
t & x & u(it,i) & u\_anal(it,i) & err(it,i) \\
0.00 & \(-\)10.000 & 0.014276 & 0.014276 & 0.000000 \\
0.00 & \(-\)9.250 & 0.025224 & 0.025224 & 0.000000 \\
0.00 & \(-\)8.500 & 0.044520 & 0.044520 & 0.000000 \\
0.00 & \(-\)7.750 & 0.078424 & 0.078424 & 0.000000 \\
0.00 & \(-\)7.000 & 0.137674 & 0.137674 & 0.000000 \\
\(\cdot\) & \(\cdot\) & \(\cdot\) & \(\cdot\) & \(\cdot\) \\
0.00 & 17.000 & 2.402728 & 2.402728 & 0.000000 \\
0.00 & 17.750 & 2.402758 & 2.402758 & 0.000000 \\
0.00 & 18.500 & 2.402775 & 2.402775 & 0.000000 \\
0.00 & 19.250 & 2.402785 & 2.402785 & 0.000000 \\
0.00 & 20.000 & 2.402791 & 2.402791 & 0.000000 \\
\end{tabular}
\end{table}
Table 11.1: Selected tabular numerical and analytical solutions (shown here at \(t=0\)).

The Matlab code, kursiv.m, used to generate Fig. 11.4 is based on _spectral methods_ and uses the _exponential time differencing_ scheme ETDRK4 described in detail by Cox and Matthews [2]. A copy of this code is included with the downloads for this book. The solution of PDEs by spectral methods is currently an active area of research and can provide outstanding results for some problems. A good general introduction to these methods is given in Trefethen's monograph [9]. This topic will not be considered further here, and for additional information relating to applications and theory of the Kuramoto-Sivashinsky equation, readers are referred to [1, 3, 5, 8].
The Kuramoto-Sivashinsky equation also admits traveling wave solutions, and these can be found using any one of the _tanh_, _exp_, and _Riccati_ methods described in the main Appendix. We choose the tanh method, and the Maple code listed in Listing 11.6 finds 11 solutions, one of which corresponds to the original solution given in eq. (11.2).

Figure 11.4: Time evolution for the _Kuramoto–Sivashinsky_ equation with initial condition \(u(x,0)=\cos(x/16)\) \((1+\sin(x/16))\) and \(x\in[0,32\pi]\). This image was generated from Matlab code described in the paper by Kassam and Trefethen [3].

```
> # Kuramoto-Sivashinsky Equation
  # Attempt at Malfliet's tanh solution
  restart: with(PDEtools): with(PolynomialTools): with(plots):
  unprotect(gamma): alias(u=u(x,t)):
> pde1:=diff(u,t)+u*diff(u,x)+alpha*diff(u,x,x)
        +beta*diff(u,x,x,x)+gamma*diff(u,x,x,x,x)=0;
> # set parameter values
  beta:=0: alpha:=1: gamma:=1:
> read("tanhMethod.txt"):
> intFlg:=1:     # No integration of U(xi) needed!
  M:=3:          # Set order of approximation
  infoLevOut:=0:
  tanhMethod(M,pde1,intFlg,infoLevOut):
> # Animate solution
  zz:=rhs(sol[8]): x0:=0:
  animate(zz,x=-10..35,t=0..20,
    numpoints=100,frames=50,
    axes=framed,labels=["x","u"],
    thickness=3,title="Kuramoto-Sivashinsky Equation",
    labelfont=[TIMES,ROMAN,16],axesfont=[TIMES,ROMAN,16],
    titlefont=[TIMES,ROMAN,16]);
> # Generate a 3D surface plot of solution
  plot3d(zz,x=-10..35,t=0..20,
    axes=framed,grid=[100,100],thickness=0,
    labeldirections=[HORIZONTAL,HORIZONTAL,VERTICAL],
    style=patchnogrid,labels=["x","t","u(x,t)"],
    orientation=[-116,46],title="Kuramoto-Sivashinsky Equation",
    shading=Z,lightmodel=none,
    labelfont=[TIMES,ROMAN,16],axesfont=[TIMES,ROMAN,16],
    titlefont=[TIMES,ROMAN,16]);
```

Listing 11.6: Maple code to solve eq. (11.1) using the _tanh_ method.

It is equally straightforward to find traveling wave solutions by application of either of the Maple procedures expMethod() or riccatiMethod() described in the main Appendix.
The _exp_ method finds eight solutions, and the _Riccati_ method finds \(11\times 6\) solutions (recall that each solution of the Riccati equation yields six separate traveling wave solutions). They both find solutions that match the original solution of eq. (11.2). In order to save space, listings of the Maple code implementations of the exp and Riccati methods will not be included here, but they are available in the downloadable software for this book.

## References

* [1] C. Chandre, E.K. Diakonos, P. Schmelcher, Turbulence, in: _Chaos: Classical and Quantum_, Niels Bohr Institute, Copenhagen, 2009, <[http://chaosbook.org/version13/chapters/ChaosBook.pdf](http://chaosbook.org/version13/chapters/ChaosBook.pdf)>.
* [2] S.M. Cox, P.C. Matthews, Exponential time differencing for stiff systems, _J. Comp. Phys._ 176 (2002) 430-455.
* [3] A-K. Kassam, L.N. Trefethen, Fourth order time stepping for stiff PDEs, _SIAM J. Sci. Comp._ 26 (2005) 1214-1233.
* [4] Y. Kuramoto, Diffusion-induced chaos in reaction systems, _Progr. Theor. Phys. Suppl._ 64 (1978) 346-367.
* [5] - A cyclist's view_, PhD thesis (version 0.8), School of Physics, Georgia Institute of Technology, May 18, 2004.
* [6] A.D. Polyanin, V.F. Zaitsev, _Handbook of Nonlinear Partial Differential Equations_, Chapman & Hall/CRC, Boca Raton, FL, 2004.
* [7] G.I. Sivashinsky, Nonlinear analysis of hydrodynamic instability in laminar flames-I. Derivation of basic equations, _Acta Astronautica_ 4 (1977) 1177-1206.
* [8] G.I. Sivashinsky, Instabilities, pattern formation, and turbulence in flames, _Ann. Rev. Fluid Mech._ 15 (1983) 179-199.
* [9] L.N. Trefethen, _Spectral Methods in Matlab_, SIAM, Philadelphia, PA, 2000.

## Chapter 7 Lyapunov Exponents

Lyapunov exponents tell us the rate of divergence of nearby trajectories--a key component of chaotic dynamics. For one-dimensional maps the exponent is simply the average \(\langle\log|df/dx|\rangle\) over the dynamics (chapter 4).
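As a concrete instance of this average, the logistic map \(f(x)=4x(1-x)\) has the known exponent \(\lambda=\log 2\); a short estimate (our own Python illustration, with arbitrarily chosen seed point and iteration counts):

```python
import math

def lyapunov_logistic(x=0.2, n_transient=1000, n_avg=200_000):
    # lambda = <log|f'(x)|> along the orbit, with f(x) = 4x(1-x),
    # so f'(x) = 4 - 8x
    for _ in range(n_transient):         # discard the transient
        x = 4.0 * x * (1.0 - x)
    total = 0.0
    for _ in range(n_avg):
        total += math.log(abs(4.0 - 8.0 * x))
        x = 4.0 * x * (1.0 - x)
    return total / n_avg

lam = lyapunov_logistic()
print(lam)   # the estimate approaches log 2 = 0.6931...
```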
In this chapter the concept is generalized to higher-dimensional maps and flows. There is now a number of exponents equal to the dimension of the phase space, \(\lambda_{1},\lambda_{2},\dots\), which we choose to order in decreasing value. The exponents can be intuitively understood geometrically: line lengths separating trajectories grow as \(e^{\lambda_{1}t}\) (where \(t\) is the continuous time in flows and the iteration index for maps); areas grow as \(e^{(\lambda_{1}+\lambda_{2})t}\); volumes as \(e^{(\lambda_{1}+\lambda_{2}+\lambda_{3})t}\) etc. However, areas and volumes will become strongly distorted over long times, since the direction corresponding to \(\lambda_{1}\) grows more rapidly than that corresponding to \(\lambda_{2}\) etc., and so this is not immediately a practical way to calculate the exponents.

### 7.1 Maps

Consider the map

\[U_{n+1}=F(U_{n}) \tag{7.1}\]

with \(U\) the phase space vector. We want to know what happens to a small change in \(U_{0}\). This is governed by the iteration of the "tangent space", given by the Jacobian matrix

\[K_{ij}(U_{n})=\left.\frac{\partial F_{i}}{\partial U^{(j)}}\right|_{U=U_{n}}. \tag{7.2}\]

Then if the change in \(U_{n}\) is \(\varepsilon_{n}\),

\[\varepsilon_{n+1}={\bf K}(U_{n})\varepsilon_{n}, \tag{7.3}\]

or

\[\frac{\partial U_{n}^{(i)}}{\partial U_{0}^{(j)}}=M_{ij}^{n}=\left[{\bf K}(U_{ n-1}){\bf K}(U_{n-2})\ldots{\bf K}(U_{0})\right]_{ij}. \tag{7.4}\]

### 7.2 Flows

For continuous time systems

\[\frac{dU}{dt}=f(U) \tag{7.5}\]

a change \(\varepsilon(t)\) in \(U(t)\) evolves as

\[\frac{d\varepsilon}{dt}={\bf K}(U)\varepsilon\quad\mbox{with}\quad K_{ij}= \left.\frac{\partial f_{i}}{\partial U^{(j)}}\right|_{U=U(t)}. \tag{7.6}\]

Then

\[\frac{\partial U^{(i)}(t)}{\partial U^{(j)}(t_{0})}=M_{ij}(t,t_{0}) \tag{7.7}\]

with \({\bf M}\) satisfying

\[\frac{d{\bf M}}{dt}={\bf K}(U(t)){\bf M}.
\tag{7.8}\]

### 7.3 Oseledec's Multiplicative Ergodic Theorem

Roughly, the eigenvalues of \({\bf M}\) for large times are \(e^{\lambda_{i}n}\) or \(e^{\lambda_{i}(t-t_{0})}\) for maps and flows respectively. The existence of the appropriate limits is known as Oseledec's multiplicative ergodic theorem [1]. The result is stated here in the language of flows, but the version for maps should then be obvious. For almost any initial point \(U(t_{0})\) there exists an orthonormal set of vectors \(v_{i}(t_{0})\), \(1\leq i\leq n\), with \(n\) the dimension of the phase space, such that

\[\lambda_{i}=\lim_{t\to\infty}\frac{1}{t-t_{0}}\log\|\mathbf{M}(t,t_{0})v_{i}(t_{ 0})\| \tag{7.9}\]

exists. For ergodic systems the \(\{\lambda_{i}\}\) do not depend on the initial point, and so are global properties of the dynamical system. The \(\lambda_{i}\) may be calculated as the log of the eigenvalues of

\[\left[\mathbf{M}^{T}(t,t_{0})\mathbf{M}(t,t_{0})\right]^{\frac{1}{2(t-t_{0})}}, \tag{7.10}\]

with \(T\) the transpose. The \(v_{i}(t_{0})\) are the eigenvectors of \(\mathbf{M}^{T}(t,t_{0})\mathbf{M}(t,t_{0})\) and are independent of \(t\) for large \(t\). Some insight into this theorem can be obtained by considering the "singular value decomposition" (SVD) of \(M=M(t,t_{0})\) (figure 7.1a). Any real matrix can be decomposed as

\[\mathbf{M}=\mathbf{W}\mathbf{D}\mathbf{V}^{T} \tag{7.11}\]

where \(D\) is a diagonal matrix with diagonal values \(d_{i}\) the square roots of the eigenvalues of \(\mathbf{M}^{T}\mathbf{M}\), and \(V\), \(W\) are orthogonal matrices, with the columns \(v_{i}\) of \(V\) the orthonormal eigenvectors of \(M^{T}M\) and the columns \(w_{i}\) of \(W\) the orthonormal eigenvectors of \(MM^{T}\). Pictorially, this shows us that a unit circle of initial conditions is mapped by \(M\) into an ellipse: the principal axes of the ellipse are the \(w_{i}\) and the lengths of the semi-axes are \(d_{i}\). Furthermore, the preimages of the \(w_{i}\) are the \(v_{i}\), i.e.
the \(v_{i}\) are the particular choice of orthonormal axes for the unit circle that are mapped into the ellipse axes. The multiplicative ergodic theorem says that the vectors \(v_{i}\) are _independent_ of \(t\) for large \(t\), and the \(d_{i}\) yield the Lyapunov exponents in this limit. The vector \(v_{i}\) defines a direction such that an initial displacement in this direction is asymptotically amplified at a rate given by \(\lambda_{i}\). For a fixed _final point_ \(U(t)\) one would similarly expect the \(w_{i}\) to be independent of \(t_{0}\) for most \(t_{0}\) and large \(t-t_{0}\). Either the \(v_{i}\) or the \(w_{i}\) may be called Lyapunov eigenvectors.

### 7.4 Practical Calculation

The difficulty of the calculation is that for any initial displacement vector \(v\) (which may be an attempt to approximate one of the \(v_{i}\)) any component along \(v_{1}\) will be enormously amplified relative to the other components, so that the iterated displacement becomes almost parallel to the iteration of \(v_{1}\), with all the information about the other Lyapunov exponents contained in the tiny correction to this. Various numerical techniques have been implemented [2] to maintain control of the small correction, of which the most intuitive, although not necessarily the most accurate, is the method using Gram-Schmidt orthogonalization after a number of steps [3] (figure 7.1b).

Figure 7.1: Calculating Lyapunov exponents. (a) Oseledec's theorem (SVD picture): orthonormal vectors \(v_{1}\), \(v_{2}\) can be found at initial time \(t_{0}\) that \(M(t,t_{0})\) maps to orthonormal vectors \(w_{1}\), \(w_{2}\) along the axes of the ellipse. For large \(t-t_{0}\) the \(v_{i}\) are independent of \(t\) and the lengths of the ellipse axes grow according to the Lyapunov eigenvalues. (b) Gram-Schmidt procedure: arbitrary orthonormal vectors \(O_{1}\), \(O_{2}\) map to \(P_{1}\), \(P_{2}\), which are then orthogonalized by the Gram-Schmidt procedure, preserving the growing area of the parallelepiped.

Orthogonal unit displacement vectors \(O^{(1)}\), \(O^{(2)},\dots\) are iterated according to the Jacobian to give, after some number of iterations \(n_{1}\) (for a map) or some time \(\Delta t_{1}\) (for a flow), \(P^{(1)}=\mathbf{M}O^{(1)}\) and \(P^{(2)}=\mathbf{M}O^{(2)}\) etc. We will use \(O^{(1)}\) to calculate \(\lambda_{1}\) and \(O^{(2)}\) to calculate \(\lambda_{2}\) etc. The vectors \(P^{(i)}\) will all tend to align along a single direction. We keep track of the orthogonal components using Gram-Schmidt orthogonalization. Write \(P^{(1)}=N^{(1)}\hat{P}^{(1)}\) with \(N^{(1)}\) the magnitude and \(\hat{P}^{(1)}\) the unit vector giving the direction. Define \(P^{\prime(2)}\) as the component of \(P^{(2)}\) normal to \(P^{(1)}\),

\[P^{\prime(2)}=P^{(2)}-\left(P^{(2)}\cdot\hat{P}^{(1)}\right)\hat{P}^{(1)}, \tag{7.12}\]

and then write \(P^{\prime(2)}=N^{(2)}\hat{P}^{\prime(2)}\). Notice that the area \(P^{(1)}\times P^{(2)}=P^{(1)}\times P^{\prime(2)}\) is preserved by this transformation, and so we can use \(P^{\prime(2)}\) (in fact its norm \(N^{(2)}\)) to calculate \(\lambda_{2}\). For dimensions larger than 2 the further vectors \(P^{(i)}\) are successively orthogonalized to all previous vectors. This process is then repeated, and the exponents are given by (quoting the case of maps)

\[\begin{array}{l}e^{n\lambda_{1}}=N^{(1)}(n_{1})N^{(1)}(n_{2})\ldots\\ e^{n\lambda_{2}}=N^{(2)}(n_{1})N^{(2)}(n_{2})\ldots\end{array} \tag{7.13}\]

etc. with \(n=n_{1}+n_{2}+\ldots\). Comparing with the singular value decomposition, we can describe the Gram-Schmidt method as following the growth of the area of parallelepipeds, whereas the SVD description follows the growth of ellipses.

### Example 1: the Lorenz Model

The Lorenz equations (chapter 1) are

\[\begin{array}{lcl}\dot{X}&=&-\sigma\,(X-Y)\\ \dot{Y}&=&r\,X-Y-XZ\\ \dot{Z}&=&XY-bZ\end{array}.
\tag{7.14}\] A perturbation \(\varepsilon=(\delta X,\,\delta Y,\,\delta Z)\) evolves according to the "tangent space" equations given by linearizing (7.14) \[\begin{array}{lcl}\delta\dot{X}&=&-\sigma(\delta X-\delta Y)\\ \delta\dot{Y}&=&r\,\delta X-\delta Y-(\delta X\,Z+X\,\delta Z)\\ \delta\dot{Z}&=&\delta X\,Y+X\,\delta Y-b\,\delta Z\end{array} \tag{7.15}\] or \[\frac{d\varepsilon}{dt}=\left[\begin{array}{ccc}-\sigma&\sigma&0\\ r-Z&-1&-X\\ Y&X&-b\end{array}\right]\varepsilon \tag{7.16}\] defining the Jacobian matrix \(\mathbf{K}\). To calculate the Lyapunov exponents, start with three orthogonal unit vectors \(t^{(1)}=(1,\,0,\,0)\), \(t^{(2)}=(0,\,1,\,0)\) and \(t^{(3)}=(0,\,0,\,1)\) and evolve the components of each vector according to the tangent equations (7.16). (Since the Jacobian depends on \(X,\,Y,\,Z\), this means we evolve \((X,\,Y,\,Z)\) and the \(t^{(i)}\) as a twelve dimensional coupled system.) After a number of integration steps (chosen for numerical convenience) calculate the magnification of the vector \(t^{(1)}\) and renormalize to unit magnitude. Then project \(t^{(2)}\) normal to \(t^{(1)}\), calculate the magnification of the resulting vector, and renormalize to unit magnitude. Finally project \(t^{(3)}\) normal to the preceding _two_ orthogonal vectors and renormalize to unit magnitude. The product of the magnification factors over a large number of repetitions of this procedure, evolving the equations for a total time \(t\), leads to \(e^{\lambda_{i}t}\). Note that in the case of the Lorenz model (and some other simple examples) the trace of \(\mathbf{K}\) is independent of the position on the attractor [in this case \(-(1+\sigma+b)\)], so that we immediately have the result for the sum of the exponents \(\lambda_{1}+\lambda_{2}+\lambda_{3}\), a useful check of the algorithm. (The corresponding result for a map would be for a _constant determinant_ of the Jacobian: \(\sum\lambda_{i}=\ln|\det\mathbf{K}|\).)
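The twelve dimensional coupled evolution just described is easily sketched in Python (an illustrative implementation, not from the text; `np.linalg.qr` carries out the projection and renormalization steps, and the standard chaotic parameters \(\sigma=10\), \(r=28\), \(b=8/3\) are assumed):

```python
import numpy as np

SIGMA, R, B = 10.0, 28.0, 8.0 / 3.0  # assumed standard chaotic parameters

def rhs(w):
    """Evolve (X, Y, Z) plus three tangent vectors as one 12-dim system."""
    u, Q = w[:3], w[3:].reshape(3, 3)
    x, y, z = u
    f = np.array([-SIGMA * (x - y), R * x - y - x * z, x * y - B * z])
    K = np.array([[-SIGMA, SIGMA, 0.0],   # Jacobian matrix, eq. (7.16)
                  [R - z, -1.0, -x],
                  [y, x, -B]])
    return np.concatenate([f, (K @ Q).ravel()])

def rk4_step(w, dt):
    k1 = rhs(w); k2 = rhs(w + 0.5 * dt * k1)
    k3 = rhs(w + 0.5 * dt * k2); k4 = rhs(w + dt * k3)
    return w + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def lyapunov_exponents(T=60.0, dt=0.005, renorm=10):
    w = np.concatenate([[1.0, 1.0, 20.0], np.eye(3).ravel()])
    for _ in range(2000):                 # discard a transient so the
        w = rk4_step(w, dt)               # orbit settles onto the attractor
    w[3:] = np.eye(3).ravel()             # restart with orthonormal vectors
    logs = np.zeros(3)
    for i in range(int(T / dt)):
        w = rk4_step(w, dt)
        if (i + 1) % renorm == 0:
            # QR = Gram-Schmidt: orthonormalize, accumulate log-magnifications
            Q, Rm = np.linalg.qr(w[3:].reshape(3, 3))
            logs += np.log(np.abs(np.diag(Rm)))
            w[3:] = Q.ravel()
    return logs / T

lam = lyapunov_exponents()
print(lam, lam.sum())   # sum should be close to -(1 + sigma + b)
```

The printed sum of the three exponents provides exactly the check mentioned above: it should be close to \(-(1+\sigma+b)\approx-13.67\) for these parameters.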
#### Example 2: the Baker's Map

For the baker's map, the Lyapunov exponents can be calculated analytically. For the map in the form \[x_{n+1}=\left\{\begin{array}{lcl}\lambda_{a}x_{n}&\mbox{if}&y_{n}<\alpha\\ (1-\lambda_{b})+\lambda_{b}x_{n}&\mbox{if}&y_{n}>\alpha\end{array}\right.\qquad y_{n+1}=\left\{\begin{array}{lcl}y_{n}/\alpha&\mbox{if}&y_{n}<\alpha\\ (y_{n}-\alpha)/\beta&\mbox{if}&y_{n}>\alpha\end{array}\right. \tag{7.17}\] with \(\beta=1-\alpha\), the exponents are \[\begin{array}{lcl}\lambda_{1}&=&-\alpha\ln\alpha-\beta\ln\beta&>&0\\ \lambda_{2}&=&\alpha\ln\lambda_{a}+\beta\ln\lambda_{b}&<&0\end{array}. \tag{7.18}\] This follows easily since the stretching in the \(y\) direction is \(\alpha^{-1}\) or \(\beta^{-1}\) depending on whether \(y_{n}\) is less than or greater than \(\alpha\), and the measure is uniform in the \(y\) direction, so the probabilities of an iteration falling in these regions are just \(\alpha\) and \(\beta\) respectively. Similarly the contraction in the \(x\) direction is \(\lambda_{a}\) or \(\lambda_{b}\) for these two cases.

#### Numerical examples

Numerical examples on 2D maps are given in the demonstrations.

### 7.5 Other Methods

#### Householder transformation

The Gram-Schmidt orthogonalization is actually a method of implementing "QR decomposition". Any matrix \(\mathbf{M}\) can be written \[\mathbf{M}=\mathbf{Q}\mathbf{R} \tag{7.19}\] with \(\mathbf{Q}\) an orthogonal matrix \[\mathbf{Q}=\left[\begin{array}{cccc}\tilde{w}_{1}&\tilde{w}_{2}&\cdots&\tilde{w}_{n}\end{array}\right]\] and \(\mathbf{R}\) an upper triangular matrix \[\mathbf{R}=\left[\begin{array}{cccc}v_{1}&*&*&*\\ 0&v_{2}&*&*\\ \vdots&\ddots&\ddots&\vdots\\ 0&0&\cdots&v_{n}\end{array}\right], \tag{7.20}\] where \(*\) denotes a nonzero (in general) element. In particular, for the tangent iteration matrix \(\mathbf{M}\) we can write \[\mathbf{M}=\mathbf{M}_{N-1}\mathbf{M}_{N-2}\ldots\mathbf{M}_{0} \tag{7.21}\] for the successive steps \(\Delta t_{i}\) or \(n_{i}\) for flows or maps.
Then writing \[\mathbf{M}_{0}=\mathbf{Q}_{1}\mathbf{R}_{0},\quad\mathbf{M}_{1}\mathbf{Q}_{1}=\mathbf{Q}_{2}\mathbf{R}_{1},\ \text{etc.} \tag{7.22}\] we get \[\mathbf{M}=\mathbf{Q}_{N}\mathbf{R}_{N-1}\mathbf{R}_{N-2}\ldots\mathbf{R}_{0} \tag{7.23}\] so that \({\bf Q}={\bf Q}_{N}\) and \({\bf R}={\bf R}_{N-1}{\bf R}_{N-2}\dots{\bf R}_{0}\). Furthermore the exponents are \[\lambda_{i}=\lim_{t\to\infty}\frac{1}{t-t_{0}}\ln R_{ii}. \tag{7.24}\] The correspondence with the Gram-Schmidt orthogonalization is that the \({\bf Q}_{i}\) are the sets of unit vectors \(\hat{P}^{\prime(1)}\), \(\hat{P}^{\prime(2)},\ldots\) etc. and the \(v_{i}\) are the norms \(N^{(i)}\). However an alternative procedure, known as the Householder transformation, may give better numerical convergence [1, 4].

#### Evolution of the singular value decomposition

The trick of this method is to find a way to evolve the matrices \({\bf W},{\bf D}\) in the singular value decomposition (7.11) directly. This appears to be possible only for continuous time systems, and has been implemented by Kim and Greene [5].

### 7.6 Significance of Lyapunov Exponents

A positive Lyapunov exponent may be taken as the defining signature of chaos. For attractors of maps or flows, the Lyapunov exponents also sharply discriminate between the different dynamics: a fixed point will have all negative exponents; a limit cycle will have one zero exponent, with all the rest negative; and an \(m\)-frequency quasiperiodic orbit (motion on an \(m\)-torus) will have \(m\) zero exponents, with all the rest negative. (Note, of course, that a fixed point of a map that is a Poincare section of a flow corresponds to a periodic orbit of the flow.) For a flow there is in fact always one zero exponent, except for fixed point attractors.
This is shown by noting that the phase space velocity satisfies the tangent equations: \[\frac{d\dot{U}^{(i)}}{dt}=\frac{\partial F_{i}}{\partial U^{(j)}}\dot{U}^{(j)} \tag{7.25}\] so that for this direction \[\lambda=\lim_{t\to\infty}\frac{1}{t}\log\left|\dot{U}(t)\right| \tag{7.26}\] which tends to zero except for the approach to a fixed point.

### 7.7 Lyapunov Eigenvectors

This section is included because I became curious about the vectors defined in the Oseledec theorem, and found little discussion of them in the literature. It can well be skipped on a first reading (and probably subsequent ones, as well!). The vectors \(v_{i}\)--the directions of the initial vectors giving exponential growth--seem not immediately accessible from the numerical methods for the exponents (except the SVD method for continuous time systems [5]). However the \(w_{i}\) are naturally produced by the Gram-Schmidt orthogonalization. The relationship of these orthogonal vectors to the natural stretching and contraction directions seems quite subtle, however. The relationship can be illustrated in the case of a map with one stretching direction \(\vec{e}^{u}\) and one contracting direction \(\vec{e}^{s}\) in the tangent space. These are unit vectors at each point on the attractor, conveniently defined so that separations along \(\vec{e}^{s}\) asymptotically contract exponentially at the rate \(e^{\lambda_{-}}\) per iteration for _forward_ iteration, and separations along \(\vec{e}^{u}\) asymptotically contract exponentially at the rate \(e^{-\lambda_{+}}\) for _backward_ iteration. Here \(\lambda_{+}\), \(\lambda_{-}\) are the positive and negative Lyapunov exponents. The vectors \(\vec{e}^{s}\) and \(\vec{e}^{u}\) are tangent to the stable and unstable manifolds to be discussed in chapter 22, and have an easily interpreted physical significance. How are the orthogonal "Lyapunov eigenvectors" related to these directions?
Since \(\vec{e}^{s}\) and \(\vec{e}^{u}\) are not orthogonal, it is useful to define the adjoint unit vectors \(\vec{e}^{u+}\) and \(\vec{e}^{s+}\) as in Fig. 7.2, so that \[\vec{e}^{s}\cdot\vec{e}^{u+}=\vec{e}^{u}\cdot\vec{e}^{s+}=0. \tag{7.27}\] Then under some fixed large number of iterations \(N\) it is easy to convince oneself that orthogonal vectors \(\vec{e}_{1}^{(0)}\), \(\vec{e}_{2}^{(0)}\) asymptotically close to the orthogonal pair \(\vec{e}^{s}\), \(\vec{e}^{u+}\) at the point \(U_{0}\) on the attractor are mapped by the tangent map \({\bf M}^{N}\) to directions \(\vec{e}_{1}^{(N)}\), \(\vec{e}_{2}^{(N)}\) asymptotically close to the orthogonal pair \(\vec{e}^{u}\), \(\vec{e}^{s+}\) at the iterated point \(U_{N}=F^{N}(U_{0})\), with expansion factors given asymptotically by the Lyapunov exponents (see Fig. 7.2). For example, \(\vec{e}^{s}\) is mapped to \(e^{N\lambda_{-}}\vec{e}^{s}\). However a small deviation from \(\vec{e}^{s}\) will be amplified by the amount \(e^{N\lambda_{+}}\). This means that we can find an \(\vec{e}_{1}^{(0)}\) given by a carefully chosen deviation of order \(e^{-N(\lambda_{+}-\lambda_{-})}\) from \(\vec{e}^{s}\) that will be mapped to \(\vec{e}^{s+}\). Similarly, almost all initial directions will be mapped very close to \(\vec{e}^{u}\) because of the strong expansion in this direction; deviations in this direction will be of order \(e^{-N(\lambda_{+}-\lambda_{-})}\). In particular, an \(\vec{e}_{2}^{(0)}\) chosen orthogonal to \(\vec{e}_{1}^{(0)}\), i.e. very close to \(\vec{e}^{u+}\), will be mapped very close to \(\vec{e}^{u}\). Thus vectors very close to \(\vec{e}^{s}\), \(\vec{e}^{u+}\) at the point \(U_{0}\) satisfy the requirements for the \(v_{i}\) of Oseledec's theorem, and \(\vec{e}^{u}\), \(\vec{e}^{s+}\) at the iterated point \(F^{N}(U_{0})\) are the \(w_{i}\) of the SVD and the vectors of the Gram-Schmidt procedure.
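A minimal numerical illustration of this picture uses a hypothetical constant hyperbolic matrix \(\mathbf{M}\) in place of the tangent map along an orbit (my own toy example, not from the text), so that \(\vec{e}^{u}\) and \(\vec{e}^{s}\) are simply the non-orthogonal eigenvectors of \(\mathbf{M}\):

```python
import numpy as np

# Toy check: M has eigenvalues 2 and 0.5 with non-orthogonal eigenvectors
# e_u = (1, 0) and e_s ~ (1, -1.5).  The claim above is that, for large N,
# the most-expanded left singular vector of M^N lies along e_u, while the
# least-expanded right singular vector lies along e_s.
M = np.array([[2.0, 1.0], [0.0, 0.5]])
e_u = np.array([1.0, 0.0])
e_s = np.array([1.0, -1.5]) / np.linalg.norm([1.0, -1.5])

N = 40
MN = np.linalg.matrix_power(M, N)
U, d, Vt = np.linalg.svd(MN)   # MN = U @ diag(d) @ Vt
w1 = U[:, 0]                   # most-expanded output direction -> e_u
v2 = Vt[1, :]                  # least-expanded input direction  -> e_s
print(np.log(d) / N)           # approximately (ln 2, -ln 2)
```

The finite-\(N\) exponent estimates \(\ln d_{i}/N\) converge to \(\pm\ln 2\), and the singular-vector alignments reproduce the correspondence between the SVD vectors and \(\vec{e}^{u}\), \(\vec{e}^{s}\) described above.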
It should be noted that for \(2N\) iterations rather than \(N\) (for example) the vectors \(\vec{e}_{1}^{(0)}\), \(\vec{e}_{2}^{(0)}\), mapping to \(\vec{e}^{u}\), \(\vec{e}^{s+}\) at the iterated point \(U_{2N}\), must be chosen as a very slightly _different_ perturbation from \(\vec{e}^{s}\), \(\vec{e}^{u+}\)--equivalently, the vectors \(\vec{e}_{1}^{(N)}\), \(\vec{e}_{2}^{(N)}\) at \(U_{N}\) will _not_ be mapped under a further \(N\) iterations to \(\vec{e}^{u}\), \(\vec{e}^{s+}\) at the iterated point \(U_{2N}\). It is apparent that even for this very simple two dimensional case neither the \(v_{i}\) nor the \(w_{i}\) separately give us the directions of both \(\vec{e}^{u}\) and \(\vec{e}^{s}\). The significance of the orthogonal Lyapunov eigenvectors in higher dimensional systems remains unclear.

_January 26, 2000_

## Bibliography

* [1] J.-P. Eckmann and D. Ruelle, Rev. Mod. Phys. **57**, 617 (1985)
* [2] K. Geist, U. Parlitz, and W. Lauterborn, Prog. Theor. Phys. **83**, 875 (1990)
* [3] A. Wolf, J. B. Swift, H. L. Swinney, and J. A. Vastano, Physica **16D**, 285 (1985)
* [4] P. Manneville, in "Dissipative Structures and Weak Turbulence" (Academic, Boston, 1990), p. 280
* [5] J. M. Greene and J.-S. Kim, Physica **24D**, 213 (1987)

# The Nikolaevskiy equation with dispersion

Eman Simbawa pmxes3@nottingham.ac.uk Paul C. Matthews paul.matthews@nottingham.ac.uk Stephen M. Cox stephen.cox@nottingham.ac.uk School of Mathematical Sciences, University of Nottingham, Nottingham NG7 2RD, United Kingdom

November 6, 2021

###### Abstract

The Nikolaevskiy equation was originally proposed as a model for seismic waves and is also a model for a wide variety of systems incorporating a neutral "Goldstone" mode, including electroconvection and reaction-diffusion systems. It is known to exhibit chaotic dynamics at the onset of pattern formation, at least when the dispersive terms in the equation are suppressed, as is commonly the practice in previous analyses.
In this paper, the effects of reinstating the dispersive terms are examined. It is shown that such terms can stabilise some of the spatially periodic traveling waves; this allows us to study the loss of stability and transition to chaos of the waves. The secondary stability diagram ("Busse balloon") for the traveling waves can be remarkably complicated.

pacs: 47.54.-r, 82.40.Ck

## I Introduction

In 1989, Nikolaevskiy [1] derived a model for longitudinal seismic waves, in the form of a one-dimensional partial differential equation for a displacement velocity. Although Nikolaevskiy's equation included dispersive terms, most subsequent analysis has treated a simplified version of the PDE, in which these terms are omitted. This reduced form is now generally known as the _Nikolaevskiy equation_, which may be written in the form \[\frac{\partial u}{\partial t}=-\frac{\partial^{2}}{\partial x^{2}}\left[r-\left(1+\frac{\partial^{2}}{\partial x^{2}}\right)^{2}\right]u-u\frac{\partial u}{\partial x}, \tag{1}\] where \(r\) is a control parameter. The equation (1) has been proposed as a model for several other physical systems, including phase instabilities in reaction-diffusion equations, electroconvection and transverse instabilities of fronts. More generally, (1) can be regarded as a simple model of a pattern-forming system with an instability at finite wavenumber and a neutral "Goldstone" mode arising from symmetry. The uniform state \(u\equiv 0\) of (1) becomes unstable at \(r=0\) to spatially periodic "roll" solutions, with wavenumbers around \(k=1\). However, these, in turn, are themselves all unstable at onset in sufficiently large domains [3; 5]; this unusual instability arises from the neutral mode at wavenumber \(k=0\).
In fact, numerical simulations show that the Nikolaevskiy equation exhibits spatiotemporal chaos at onset. The scalings associated with this chaotic regime are unusual in pattern-forming systems, and this interesting feature of the equation has stimulated significant investigation. Although in some applications (such as the instability of fronts) the omission of dispersive terms is justified on symmetry grounds, this is not the case in the original context of a model for seismic waves [1]. Earlier work that _has_ considered the effects of dispersion includes the paper of Malomed [9], who reinstated one dispersive term in the Nikolaevskiy equation and analysed the secondary stability of traveling-wave solutions by means of coupled Ginzburg-Landau-type equations for the amplitude of the traveling waves and a large-scale mode. His results showed that dispersion could stabilize waves; however, his derivation was not entirely asymptotically self-consistent. Kudryashov and Migita [10] showed, on the basis of numerical simulations, that traveling waves can be stabilized by the presence of dispersive terms in the Nikolaevskiy equation. It is also known that in the related Kuramoto-Sivashinsky equation, the introduction of a dispersive term can stabilize periodic traveling waves [11]. Our aim in this paper is to provide a systematic examination of the effects of dispersion. By varying the parameters corresponding to dispersion, we can find when dispersion stabilizes traveling waves and investigate how the chaotic state in the non-dispersive equation arises as the dispersion is reduced. In the following section we give the form of the equation and the traveling waves under consideration. Computational results on the stability of these waves are given in Sec. III.
The stability analysis of the waves is complicated and depends on the magnitude of the dispersion terms; three different scalings are considered in Secs. IV, V and VI. Sec. VII illustrates some numerical simulations of the Nikolaevskiy equation with dispersion, and our conclusions are summarized in Sec. VIII.

## II The Nikolaevskiy equation with dispersion

We examine the Nikolaevskiy equation with dispersion in the form \[\frac{\partial u}{\partial t}=-\frac{\partial^{2}}{\partial x^{2}}\left[r-\left(1+\frac{\partial^{2}}{\partial x^{2}}\right)^{2}\right]u-u\frac{\partial u}{\partial x}+\alpha\frac{\partial^{3}u}{\partial x^{3}}+\beta\frac{\partial^{5}u}{\partial x^{5}}, \tag{2}\] where \(\alpha\) and \(\beta\) are the dispersion coefficients. This equation is thus the one originally proposed by Nikolaevskiy [1] (and later examined in [9; 10]), with all spatial derivatives up to the sixth appearing on the right-hand side. In the numerical simulations presented in Sec. VII, we shall impose the periodic boundary condition \[u(x+D,t)=u(x,t) \tag{3}\] for some domain length \(D\). Before proceeding, we note that (2) has the same Galilean symmetry (\(x\mapsto x+Vt\), \(u\mapsto u+V\)) as the nondispersive equation (1). Moreover, in view of the Galilean symmetry and the observation that, when the boundary condition (3) is imposed, \[\frac{\mathrm{d}}{\mathrm{d}t}\int_{0}^{D}u(x,t)\,\mathrm{d}x=0,\] the spatial average of \(u\) may be set to zero (by transforming to a moving frame of reference if necessary). The reflection symmetry (\(x\mapsto-x\), \(u\mapsto-u\)) of (1) is broken by the presence of the dispersive terms. However, there is a symmetry \(x\mapsto-x\), \(u\mapsto-u\), \(\alpha\mapsto-\alpha\), \(\beta\mapsto-\beta\); as a consequence of this symmetry we need consider only the case \(\beta\geq 0\).
Linearization around the steady state \(u\equiv 0\) yields the dispersion relation \[\lambda=k^{2}\left[r-(k^{2}-1)^{2}\right]+\mathrm{i}k^{3}(k^{2}\beta-\alpha)\] for Fourier modes proportional to \(\mathrm{e}^{\mathrm{i}kx+\lambda t}\). Thus in general these perturbations take the form of traveling waves, with phase speed \[c_{p}=-\frac{\lambda_{i}}{k}=k^{2}(\alpha-k^{2}\beta) \tag{4}\] and group velocity \[c_{g}=-\frac{\partial\lambda_{i}}{\partial k}=k^{2}(3\alpha-5\beta k^{2}). \tag{5}\] The real part of the growth rate, \(\lambda_{r}\), is plotted in Fig. 1, for \(r\) just above the threshold value \(r_{c}=0\) for the onset of instability. This figure shows that there exists a band around the critical wavenumber \(k_{c}=1\) of linearly growing modes, and a neutral mode at \(k=0\) (the so-called "Goldstone mode"), which significantly affects the nonlinear dynamics of (2).

Figure 1: Plot of \(\lambda_{r}\) for the case \(r=0.1\): note the linearly growing modes with wavenumber around \(k_{c}=1\), and the weakly damped large-scale modes close to \(k=0\).

Just beyond the onset of instability of the zero solution, it is straightforward to carry out a weakly nonlinear analysis of (2), with \[r=\epsilon^{2}r_{2}. \tag{6}\] This analysis reveals that there are traveling-wave solutions of the form \[u\sim\epsilon a_{0}\mathrm{e}^{\mathrm{i}k(x-st)}+\mathrm{c.c.}, \tag{7}\] where the wavenumber \(k=1+\epsilon q\). The amplitude turns out to be given by \[a_{0}=6(r_{2}-4q^{2})^{1/2}(1+\tfrac{1}{36}(\alpha-5\beta)^{2})^{1/2} \tag{8}\] and the speed of the wave is \[s=c_{p}-\tfrac{1}{6}\epsilon^{2}(r_{2}-4q^{2})(\alpha-5\beta)+o(\epsilon^{2}), \tag{9}\] where \(c_{p}\) is given by (4) and the second contribution to \(s\) reflects (weakly) nonlinear effects. So, regardless of the values of the dispersion parameters \(\alpha\) and \(\beta\), such spatially periodic solutions exist for \(r_{2}>4q^{2}\).
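The expressions (4) and (5) follow directly from the imaginary part of the dispersion relation; a quick numerical check, using arbitrary test values of \(r\), \(\alpha\) and \(\beta\) (assumed here purely for illustration), is:

```python
import numpy as np

# growth rate lambda(k) from the dispersion relation quoted above
def lam(k, r=0.1, alpha=2.0, beta=0.7):
    return k**2 * (r - (k**2 - 1)**2) + 1j * k**3 * (k**2 * beta - alpha)

k, h = 1.1, 1e-5
c_p = -lam(k).imag / k                                    # phase speed, eq. (4)
c_g = -(lam(k + h).imag - lam(k - h).imag) / (2 * h)      # group velocity, eq. (5)
assert np.isclose(c_p, k**2 * (2.0 - k**2 * 0.7))
assert np.isclose(c_g, k**2 * (3 * 2.0 - 5 * 0.7 * k**2), atol=1e-6)
```

The central difference for \(\partial\lambda_{i}/\partial k\) agrees with the closed form \(k^{2}(3\alpha-5\beta k^{2})\) to well within the finite-difference error.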
We now turn to the question of the secondary stability of these solutions. ## III Secondary stability of traveling waves: numerical results In this section we first outline a numerical method for the calculation of the nonlinear traveling waves and their secondary stability, and then give the results of these computations, showing the stability boundaries of traveling waves. ### Numerical method for calculating secondary stability To calculate the secondary stability of a traveling wave solution for given values of the parameters, we first find the traveling wave solution \(\bar{u}(x,t)=f(z)\), where \(z=x-ct\). Here, \(c\) is the nonlinear wave speed, which in general is not exactly equal to the linear wave speed \(c_{p}\) (4). We approximate the solution numerically using the truncated Fourier series \[f(z)=\sum_{-N/2+1}^{N/2}\bar{u}_{n}\mathrm{e}^{\mathrm{i}nkz}.\] Substitution in (2) (and calculation of the nonlinear term pseudospectrally) yields a system of nonlinear equations (solved in Matlab) for the Fourier coefficients of \(f(z)\), together with \(c\), which is determined from \[c\int_{0}^{D}(f^{\prime})^{2}\,\mathrm{d}z=\alpha\int_{0}^{D}(f^{\prime\prime}) ^{2}\,\mathrm{d}z-\beta\int_{0}^{D}(f^{\prime\prime\prime})^{2}\,\mathrm{d}z+ \int_{0}^{D}f(f^{\prime})^{2}\,\mathrm{d}z, \tag{10}\]where \(D=2\pi/k\) is the length of the domain and \(k\) is the wavenumber of the solution under consideration. The expression (10) follows from multiplying (2) by \(f^{\prime}(z)\) and integrating over the domain, using integration by parts multiple times. To compensate for the additional unknown \(c\), we have an additional equation from the fact that we may choose the phase of the wave, for example by specifying that \(\bar{u}_{1}\) is real. After calculating the solution, we construct the eigenvalue problem for perturbations. 
If we suppose that \(u(x,t)=f(z)+\tilde{u}(x,t)\), then substitution in (2) yields the linearized perturbation equation \[\frac{\partial\tilde{u}}{\partial t}=-\frac{\partial^{2}}{\partial x^{2}}\left[ r-\left(1+\frac{\partial^{2}}{\partial x^{2}}\right)^{2}\right]\tilde{u}+\alpha \frac{\partial^{3}\tilde{u}}{\partial x^{3}}+\beta\frac{\partial^{5}\tilde{u}} {\partial x^{5}}-f(z)\frac{\partial\tilde{u}}{\partial x}-\tilde{u}f^{\prime} (z). \tag{11}\] We take \[\tilde{u}=\mathrm{e}^{\sigma t+\mathrm{i}pz}\sum_{-N/2+1}^{N/2}v_{n}\mathrm{e }^{\mathrm{i}nkz},\] where all possible eigenfunctions may be captured by limiting consideration to \(-k/2\leq p\leq k/2\). The resulting eigenvalue equations to determine the growth rate \(\sigma\) are then \[(\sigma-\mathrm{i}cK_{n})v_{n}=\mathcal{L}v_{n}-\sum_{-N/2+1}^{N/2}\mathrm{i}K _{m}v_{m}\bar{u}_{n-m}-\sum_{-N/2+1}^{N/2}\mathrm{i}mkv_{n-m}\bar{u}_{m},\] where \(\mathcal{L}=K_{n}^{2}(r-(1-K_{n}^{2})^{2})-\mathrm{i}\alpha K_{n}^{3}+\mathrm{ i}\beta K_{n}^{5}\) and \(K_{n}=p+nk\). The eigenvalues of this system are computed numerically. By examining the largest real part of all eigenvalues \(\sigma\) for a large sample of values of \(p\) in the relevant interval, we determine whether the original traveling waves are stable or unstable. In the following section we provide some stability diagrams based on the above method. In determining our results, we have been careful to check that: adequate samples in \(p\) are taken (too few, particularly for small values of \(p\), can lead one to miss certain small regions of instability); adequate Fourier modes are taken in determining both the original solution and the perturbations; adequate samples are taken in parameter space to determine all regions of stable rolls. Typically, 300 values of \(p\) are used, with \(N=16\). ### Results Now we present the secondary stability diagrams. The first case considered here is setting \(\beta=0\) and varying \(\alpha\) -- see Fig. 2. 
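The eigenvalue problem above can be assembled as a dense matrix. The following Python sketch (illustrative function names of my own, not the authors' Matlab code) does this, using the observation that the two convolution sums combine, since \(K_{m}+(n-m)k=K_{n}\), into a single term \(-\mathrm{i}K_{n}\bar{u}_{n-m}\); as a sanity check, for the zero solution the growth rates must reduce to the linear dispersion relation of Sec. II:

```python
import numpy as np

def stability_matrix(ubar, k, p, r, alpha, beta, c):
    """Matrix A such that sigma v = A v for the Bloch perturbation modes.

    ubar: dict mapping Fourier index n to the coefficient of e^{i n k z}.
    """
    ns = sorted(ubar)
    A = np.zeros((len(ns), len(ns)), dtype=complex)
    for i, n in enumerate(ns):
        K = p + n * k
        # diagonal: i c K_n plus the linear operator L(K_n)
        A[i, i] = (1j * c * K + K**2 * (r - (1 - K**2)**2)
                   - 1j * alpha * K**3 + 1j * beta * K**5)
        for j, m in enumerate(ns):
            # the two convolution sums combine to -i K_n ubar_{n-m}
            A[i, j] += -1j * K * ubar.get(n - m, 0.0)
    return A

# check: ubar = 0, c = 0 gives exactly the linear growth rates lambda(K_n)
r, alpha, beta, k, p = 0.1, 2.0, 0.5, 1.0, 0.2
ubar = {n: 0.0 for n in range(-4, 5)}
sig = np.linalg.eigvals(stability_matrix(ubar, k, p, r, alpha, beta, 0.0))
lam = np.array([(p + n*k)**2 * (r - ((p + n*k)**2 - 1)**2)
                + 1j * (p + n*k)**3 * ((p + n*k)**2 * beta - alpha)
                for n in range(-4, 5)])
assert np.allclose(np.sort(sig.real), np.sort(lam.real))
```

In practice one would populate `ubar` with the Fourier coefficients of the computed wave \(f(z)\) and scan \(p\) over \(-k/2\leq p\leq k/2\), as described above.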
When \(\alpha\) is small (\(\alpha=1/2\)), there is a very small region of stable waves in the \((k,r)\) plane. The stable region is a thin strip, confined to small values of \(r\); in this case, for \(r>0.0078\) all rolls are unstable. For larger \(\alpha\) this strip of stable waves is longer and wider; for example at \(\alpha=2\) there are some stable rolls up to \(r\approx 0.22\), and at \(\alpha=5\) the stability region extends at least as far as \(r=0.9\). Furthermore, it is apparent for \(\alpha=5\) that a symmetrical Eckhaus-like stability region is present for very small values of \(r\) (from the numerical results themselves, it seems to be present in all three cases, but is visible only in the last plot of Fig. 2). The shrinkage of the region of stable traveling waves for small \(\alpha\) is consistent with there being no stable rolls at all in the nondispersive case. While an exhaustive examination of the secondary stability diagrams across \((\alpha,\beta)\) parameter space is infeasible, it is worthy of note that these diagrams may be extremely complicated. A good example arises if we set \(\alpha=40\) and vary \(\beta\) -- see Fig. 3.

Figure 2: The secondary stability regions of traveling waves of (2), calculated numerically for (a) \(\alpha=1/2\), (b) \(\alpha=2\) and (c) \(\alpha=5\), all for \(\beta=0\). Shown are the marginal curve \(r=(1-k^{2})^{2}\) (solid line) and the secondary stability boundary of the traveling waves (dashed line), with stability between the dashed lines.

Figure 3: The secondary stability regions of traveling waves of (2), calculated numerically for (a) \(\beta=5\), (b) \(\beta=5.5\), both with \(\alpha=40\). Shown are the marginal curve \(r=(1-k^{2})^{2}\) (solid line) and the secondary stability boundaries of the traveling waves (dashed line), with stability between the dashed lines. To clarify the regions of stability/instability, the “s” indicates one of the stable regions.
For \(\beta=5\) there is a small Eckhaus-like stability region for \(r<0.001\); for larger values of \(r\), there remains a single stability region. For larger \(\beta\), however, the stability region splits into several parts; for example, at \(\beta=5.5\) there may be up to five separate intervals of stable traveling waves for a given value of \(r\). Above we have presented our secondary stability diagrams in the \((k,r)\) plane, for fixed values of \(\alpha\) and \(\beta\). If our interest is in the effects of dispersion on the stability of traveling waves then it is more instructive instead to fix \(r\) and present results in either the \((k,\alpha)\) or the \((k,\beta)\) plane. Our first example is for \(r=0.01\) and \(\beta=0\) -- see Fig. 4(a). Given this value of \(r\), the traveling waves exist for \(0.9487<k<1.0488\). We expect that if \(\alpha\) is small enough then all roll solutions are unstable; this is indeed the case. For larger values of \(\alpha\), a region of stable rolls appears. In Fig. 4(b), we show a second case, where we fix \(\alpha=40\) and \(r=0.1\), to emphasize that the structure of the stability region may be rather complicated, exhibiting a sensitive parameter dependence.

Figure 4: The secondary stability of traveling waves of (2) calculated numerically for (a) fixed \(\beta=0\) and \(r=0.01\) in \((k,\alpha)\) parameter space, (b) fixed \(\alpha=40\) and \(r=0.1\) in \((k,\beta)\) parameter space. The marginal curve is represented by the solid lines; traveling waves are stable inside the dashed lines.

## IV Secondary stability of traveling waves: \(\alpha,\beta=O(1)\)

In this and the following two sections, we analyse the secondary stability of traveling waves (7). The most straightforward case arises when the dispersion parameters \(\alpha\) and \(\beta\) are each \(O(1)\). To contrast with later sections, we shall characterize this case as _strong dispersion_. Whereas the nondispersive Nikolaevskiy equation has no _stable_ spatially periodic states, Kudryashov and Migita [10] found stable periodic waves in their numerical simulations of the dispersive PDE (2) in this regime. We begin by introducing the weakly nonlinear expansion \[u=\epsilon u_{1}+\epsilon^{2}u_{2}+\epsilon^{3}u_{3}+\cdots, \tag{12}\] with \(r\) given by (6). Then substitution in (2) and consideration of successive orders in \(\epsilon\) leads to the following. At \(O(\epsilon)\), we find that \[u_{1}=A\mathrm{e}^{\mathrm{i}(x-c_{0}t)}+\mathrm{c.c.},\] where \(c_{0}=\alpha-\beta\), and where the amplitude \(A\) varies slowly in space and in time, in principle depending on the slow variables \[X=\epsilon x,\qquad\tau=\epsilon t,\qquad T=\epsilon^{2}t.\] A consideration of the terms proportional to \(\mathrm{e}^{\mathrm{i}(x-c_{0}t)}\) at \(O(\epsilon^{2})\) then shows that in fact \(A=A(\xi,T)\), where \[\xi=X-(3\alpha-5\beta)\tau\equiv X-v\tau\] is a coordinate moving at the group velocity of the waves. Then solving the problem at this order in \(\epsilon\) yields \[u_{2}=-\frac{\mathrm{i}A^{2}}{36(1+\mathrm{i}(\alpha-5\beta)/6)}\mathrm{e}^{2\mathrm{i}(x-c_{0}t)}+\mathrm{c.c.}+f.\] Here \(f\) is a slowly varying function of \(X\), \(\tau\) and \(T\), chosen to appear at this order to balance forcing terms appearing at the next order in \(\epsilon\). At \(O(\epsilon^{3})\), we find, from consideration of the terms in (2) proportional to \(\mathrm{e}^{\mathrm{i}(x-c_{0}t)}\) and of the terms independent of the fast phase, respectively, the amplitude equations \[\frac{\partial A}{\partial T} = \left(r_{2}-\frac{1-\mathrm{i}(\alpha-5\beta)/6}{36+(\alpha-5\beta)^{2}}|A|^{2}\right)A+(4+\mathrm{i}(3\alpha-10\beta))\frac{\partial^{2}A}{\partial\xi^{2}}-\mathrm{i}fA, \tag{13}\] \[\frac{\partial f}{\partial\tau} = -\frac{\partial|A|^{2}}{\partial\xi}.
\tag{14}\] Since \(A=A(\xi,T)\), the second amplitude equation suggests taking \(f=f(\xi,T)\), in which case (14) becomes \[-v\frac{\partial f}{\partial\xi}=-\frac{\partial|A|^{2}}{\partial\xi},\] and hence \(vf=|A|^{2}+K(T)\), for some \(K(T)\). However, the constraint that the spatial average of \(u\) should be zero gives \(K(T)=-\langle|A|^{2}\rangle\), where the angle brackets denote the average in \(\xi\). Thus \[f=\frac{|A|^{2}-\langle|A|^{2}\rangle}{v} \tag{15}\] and the amplitude equation (13) becomes the nonlocal Ginzburg-Landau equation \[\frac{\partial A}{\partial T}=\left(r_{2}-\frac{1-\mathrm{i}(\alpha-5\beta)/6}{36+(\alpha-5\beta)^{2}}|A|^{2}+\mathrm{i}\frac{\langle|A|^{2}\rangle-|A|^{2}}{v}\right)A+(4+\mathrm{i}(3\alpha-10\beta))\frac{\partial^{2}A}{\partial\xi^{2}}. \tag{16}\] It is worth mentioning that in view of (15) the present scaling breaks down when \(v\) is small; in particular, this is the case when \(\alpha\) and \(\beta\) are both small, and this case will be considered in later sections. It is helpful in analysing (16) to put it in canonical form by rescaling all the variables, to give \[\frac{\partial A}{\partial T}=A+\mathrm{i}d(\langle|A|^{2}\rangle-|A|^{2})A+(1+\mathrm{i}a)\frac{\partial^{2}A}{\partial\xi^{2}}-(1+\mathrm{i}b)|A|^{2}A, \tag{17}\] where \[a=\frac{3\alpha-10\beta}{4},\qquad b=\frac{5\beta-\alpha}{6},\qquad d=\frac{36+(5\beta-\alpha)^{2}}{v}.\] Equations similar to (17), including a nonlocal nonlinear term, have been derived and studied in the context of convection in a rotating annulus [12] and in electrical and magnetic systems [13; 14].

Figure 5: Diagram showing the sign of \(1+a(b+d)\) in \((\alpha,\beta)\) parameter space. Regions with “s” are where \(1+a(b+d)>0\), so that a limited band of plane waves is stable, as in (19); a “u” indicates where \(1+a(b+d)<0\), and all plane waves are unstable.
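The rescaling above is easy to automate. The following sketch (function names are my own, for illustration) maps \((\alpha,\beta)\) to \((a,b,d)\) and evaluates the indicator \(1+a(b+d)\) whose sign labels the “s”/“u” regions of Fig. 5; the test values \((\alpha,\beta)=(10,2.6)\) and \((8.4,2.6)\) are those used in the Fig. 6 simulations:

```python
# Map the dispersion coefficients of (2) to the canonical coefficients of (17).
# Assumes v = 3*alpha - 5*beta != 0 (the scaling breaks down as v -> 0).
def canonical_coefficients(alpha, beta):
    v = 3.0 * alpha - 5.0 * beta                 # group-velocity coefficient
    a = (3.0 * alpha - 10.0 * beta) / 4.0
    b = (5.0 * beta - alpha) / 6.0
    d = (36.0 + (5.0 * beta - alpha) ** 2) / v
    return a, b, d

def indicator(alpha, beta):
    """Sign > 0: a limited band of plane waves is stable (an "s" region of
    Fig. 5); sign < 0: all plane waves are unstable (a "u" region)."""
    a, b, d = canonical_coefficients(alpha, beta)
    return 1.0 + a * (b + d)

# (alpha, beta) = (10, 2.6) lies in an "s" region, (8.4, 2.6) in a "u" region
print(indicator(10.0, 2.6), indicator(8.4, 2.6))
```

The two printed values have opposite signs, consistent with the stable and persistently time-dependent outcomes seen in the Fig. 6 simulations.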
Equipped with (17), we are now in a position to explore the secondary stability of weakly nonlinear spatially periodic solutions of the dispersive Nikolaevskiy equation. Such solutions correspond to plane-wave solutions of (17), which exist in the form \(A=P\mathrm{e}^{{\rm i}(\omega T+q\xi)}\), with \(P=(1-q^{2})^{1/2}\) and \(\omega=q^{2}(b-a)-b\). To study the stability of the plane-wave solution, we write \(A=(1+p(\xi,T))P\mathrm{e}^{{\rm i}(\omega T+q\xi)}\), which, after substitution in (17) and linearization in the perturbation \(p\), yields \[\frac{\partial p}{\partial T}=(1+{\rm i}a)\left(\frac{\partial^{2}p}{\partial\xi^{2}}+2{\rm i}q\frac{\partial p}{\partial\xi}\right)-(1+{\rm i}b)P^{2}(p^{*}+p)+{\rm i}dP^{2}\left(\langle p+p^{*}\rangle-(p+p^{*})\right).\] Then upon setting \(p(\xi,T)=R(T){\rm e}^{{\rm i}L\xi}+S^{*}(T){\rm e}^{-{\rm i}L\xi}\) and equating the coefficients of \({\rm e}^{{\rm i}L\xi}\) and \({\rm e}^{-{\rm i}L\xi}\), we have \[\frac{{\rm d}R}{{\rm d}T} = -(1+{\rm i}a)L(L+2q)R-(1+{\rm i}b)P^{2}(R+S)-{\rm i}dP^{2}(R+S),\] \[\frac{{\rm d}S}{{\rm d}T} = -(1-{\rm i}a)L(L-2q)S-(1-{\rm i}b)P^{2}(R+S)+{\rm i}dP^{2}(R+S).\] Finally, with \(R(T)\) and \(S(T)\) proportional to \({\rm e}^{\mu T}\), and expanding the growth rate in powers of the perturbation wavenumber \(L\), we have the dispersion relation \[\mu=-2{\rm i}q(a-b-d)L+L^{2}P^{-2}\left(-1-a(b+d)+q^{2}\left[3+2(b+d)^{2}+a(b+d)\right]\right)+O(L^{3}). \tag{18}\] If we suppose (as is generally the case) that \(a\neq b+d\) then it is apparent from (18) that the solution has a long-wavelength oscillatory instability whenever \[q^{2}\left(3+2(b+d)^{2}+a(b+d)\right)>1+a(b+d).\] Since \(1+a(b+d)<3+a(b+d)+2(b+d)^{2}\), we see that stability is determined by the following. If \(1+a(b+d)>0\) then \(3+a(b+d)+2(b+d)^{2}>0\), and the plane-wave solutions are stable provided \[0\leq q^{2}<q_{c}^{2}\equiv\frac{1+a(b+d)}{3+2(b+d)^{2}+a(b+d)}<1. \tag{19}\] If instead \(1+a(b+d)<0\), then all plane waves are unstable.
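As a consistency check on the expansion (18), one can compare it with the slow eigenvalue of the two-by-two system for \((R,S)\) computed directly; a sketch with arbitrary test parameters (chosen so that \(a\neq b+d\)) is:

```python
import numpy as np

# arbitrary test values with a != b + d, and a small perturbation wavenumber L
a, b, d, q = 0.5, 0.3, 0.4, 0.2
B = b + d
P2 = 1.0 - q**2          # P^2 for the plane wave
L = 1e-3

# the 2x2 system for (R, S) quoted above, written as dX/dT = A X
A = np.array([
    [-(1 + 1j*a)*L*(L + 2*q) - (1 + 1j*B)*P2, -(1 + 1j*B)*P2],
    [-(1 - 1j*B)*P2, -(1 - 1j*a)*L*(L - 2*q) - (1 - 1j*B)*P2],
])
# the slow (phase-like) branch is the eigenvalue with the larger real part;
# the other branch is the strongly damped amplitude mode near -2 P^2
mu_slow = max(np.linalg.eigvals(A), key=lambda z: z.real)

# the expansion (18), accurate to O(L^3)
mu_expansion = (-2j*q*(a - B)*L
                + L**2 / P2 * (-1 - a*B + q**2 * (3 + 2*B**2 + a*B)))
assert abs(mu_slow - mu_expansion) < 1e-6
```

For \(L=10^{-3}\) the exact eigenvalue and the two-term expansion agree to the expected \(O(L^{3})\) accuracy.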
We note that setting \(a=b=d=0\) reduces (17) to a real Ginzburg-Landau equation for \(A\), and our results reduce to the usual Eckhaus instability (with stability for \(q^{2}<1/3\)) [15; 16]. To apply this result to (2) it is necessary to indicate the regions in \(\alpha\), \(\beta\) parameter space in which the quantity \(1+a(b+d)\) is positive or negative. In Fig. 5, regions denoted by "s" indicate where \(1+a(b+d)>0\), so that there are some stable plane waves, as in (19); those regions denoted by "u" show where \(1+a(b+d)<0\), and hence all plane waves are unstable. As discussed in Sec. II, only the region \(\beta\geq 0\) need be presented. The existence of a stable region in Fig. 5 is consistent with the numerical results of Sec. III; for example Fig. 2 shows a stable region when \(\alpha=O(1)\) and \(\beta=0\). In Fig. 6, we illustrate the considerations above with some numerical simulations of the modified complex Ginzburg-Landau equation (17). Our numerical code is pseudospectral, and uses exponential time differencing [17]. In each case the initial condition is a plane wave plus small-amplitude random noise. For the simulations illustrated in Fig. 6(a) and (b), \(1+a(b+d)>0\). The two plots show the fate of initial conditions in the unstable and stable regions of Fig. 5, respectively. In each case, a stable plane wave is obtained at large \(T\). Figure 6(c) shows the development of instability in the case \(1+a(b+d)<0\), where all plane waves are unstable. Here the solution is persistently time-dependent. The analysis above tells us about the secondary stability of traveling-wave solutions of the dispersive Nikolaevskiy equation when \(\alpha,\beta=O(1)\), and the results are summarized in Fig. 5. We may think of this analysis as holding for any fixed \(\alpha\) and \(\beta\) (not both zero) in the limit as \(r\to 0\); thus we expect the lowest part of the secondary stability diagram in \((r,k)\) parameter space to reflect Fig. 5. 
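For readers wishing to reproduce such runs, here is a minimal first-order exponential-time-differencing (ETD1) pseudospectral integrator for (17). It is a simplified sketch rather than the code used above (which uses the higher-order scheme of [17]), and the function and parameter names are illustrative. Started from an exact plane wave with no added noise, it should simply track that solution:

```python
import numpy as np

def etd1_nonlocal_cgl(a, b, d, q, h=0.01, n_steps=1000, N=256, box=64*np.pi):
    """ETD1 pseudospectral time stepping for equation (17), starting from
    the plane wave A = P exp(iq xi); q must fit the periodic box."""
    xi = np.linspace(0.0, box, N, endpoint=False)
    k = 2*np.pi*np.fft.fftfreq(N, d=box/N)
    A = np.sqrt(1.0 - q**2)*np.exp(1j*q*xi)
    Lk = 1.0 - (1.0 + 1j*a)*k**2        # linear operator in Fourier space
    E = np.exp(h*Lk)
    # phi = (e^{hL} - 1)/L, with the L -> 0 limit handled explicitly
    small = np.abs(Lk) < 1e-12
    phi = np.where(small, h, (E - 1.0)/np.where(small, 1.0, Lk))
    for _ in range(n_steps):
        nonlin = (1j*d*(np.mean(np.abs(A)**2) - np.abs(A)**2)*A
                  - (1.0 + 1j*b)*np.abs(A)**2*A)
        A = np.fft.ifft(E*np.fft.fft(A) + phi*np.fft.fft(nonlin))
    return A
```

With \(q=0.3125\) (ten wavelengths in a box of length \(64\pi\)) the modulus of \(A\) remains close to \(P=(1-q^{2})^{1/2}\), since the plane wave is an exact solution and the discrete time stepping perturbs it only at \(O(h)\).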
Figure 6: Numerical simulations of the amplitude equation (17): in each case the real part of \(A\) is plotted as a function of \(\xi\) and \(T\). In (a) and (b) \(\alpha=10\), whereas in (c) \(\alpha=8.4\); in each case \(\beta=2.6\). The initial condition in each case is a plane wave, with \(n\) wavelengths in the computational box \(-32\pi<\xi<32\pi\), plus small-amplitude random noise: (a) \(n=28\) (hence \(q=0.875\)); (b) \(n=10\) (\(q=0.3125\)); (c) \(n=20\) (\(q=0.625\)).

However, as indicated earlier, when \(\alpha\) and \(\beta\) are both small, the analysis above does not hold, and requires reconsideration. We should expect such analysis to break down in this limit, because Fig. 5 is inconsistent with the known behavior of the nondispersive Nikolaevskiy equation (\(\alpha=\beta=0\)), for which all rolls are unstable at onset [3; 5]. Thus in the next section we consider smaller values of \(\alpha,\beta\).

## V Secondary stability of traveling waves: \(\alpha,\beta=O(\epsilon^{3/4})\)

It turns out, after some experimentation, that small \(\alpha\) and \(\beta\) first lead to a new scaling if we adapt the scaling first used by Tribelsky and Velarde [3] for the nondispersive case, and extended by Cox and Matthews [4] to a damped version of the Nikolaevskiy equation. In this scaling the original traveling waves remain \(O(\epsilon)\), but the perturbation to the traveling-wave amplitude is \(O(\epsilon^{3/2})\) and the large-scale mode is \(O(\epsilon^{7/4})\); furthermore, slow spatial and temporal variations of perturbations take place on scales given by \(X=\epsilon^{3/4}x\), \(T=\epsilon^{3/2}t\), \(\tau=\epsilon^{3/4}t\). (Note that these slow variables are different from those of the previous section, but our notation for slow variables is consistent within sections.)
To allow the development of consistent amplitude equations for the perturbation we then take \[\alpha=\epsilon^{3/4}\hat{\alpha},\qquad\beta=\epsilon^{3/4}\hat{\beta}.\] Applying a weakly nonlinear analysis to (2) gives \[u=\epsilon(a_{0}+\epsilon^{1/2}a(X,T))\mathrm{e}^{\mathrm{i}M}+\mathrm{c.c.}+ \epsilon^{7/4}f(X,T)+\cdots, \tag{20}\] where \(a_{0}=6\sqrt{r_{2}-4q^{2}}\), \[M=(1+\epsilon q)x-\hat{c}\tau-\epsilon^{1/4}\hat{v}qT+\epsilon^{1/4}\psi(X,T),\] \(\hat{c}=\hat{\alpha}-\hat{\beta}\) and \(\hat{v}=3\hat{\alpha}-5\hat{\beta}\). Here \(a(X,T)\) represents disturbances to the amplitude of the pattern, \(\psi(X,T)\) represents corresponding disturbances to the phase of the pattern and \(f(X,T)\) is a large-scale mode. Substitution of \(u\), as given by (20), in (2) requires the consideration of the problem at successive orders in \(\epsilon^{1/4}\). After much consequent algebra, we find the (nonlinear) amplitude equations \[\frac{\partial\psi}{\partial T} = 4\frac{\partial^{2}\psi}{\partial X^{2}}-f-\hat{v}\frac{\partial \psi}{\partial X},\] \[\frac{\partial f}{\partial T} = \frac{\partial^{2}f}{\partial X^{2}}-2a_{0}\frac{\partial a}{ \partial X},\] \[\frac{\partial a}{\partial T} = 4\frac{\partial^{2}a}{\partial X^{2}}-4a_{0}\left(\frac{\partial \psi}{\partial X}\right)^{2}-8a_{0}q\frac{\partial\psi}{\partial X}-\hat{v} \frac{\partial a}{\partial X}.\]Note that dispersion is represented in these equations only through the terms \(\hat{v}\psi_{X}\) and \(\hat{v}a_{X}\), representing advection of the pattern envelope with the group velocity \(\hat{v}\). Note also that the group velocity of the large scale mode \(f\) is zero, and hence no corresponding term appears in the second of these equations. 
The three amplitude equations may be reduced to the single (nonlinear) phase equation \[\left(\frac{\partial}{\partial T}-4\frac{\partial^{2}}{\partial X^{2}}+\hat{v}\frac{\partial}{\partial X}\right)^{2}\left(\frac{\partial}{\partial T}-\frac{\partial^{2}}{\partial X^{2}}\right)\psi=-16a_{0}^{2}\left(\frac{\partial\psi}{\partial X}+q\right)\frac{\partial^{2}\psi}{\partial X^{2}}. \tag{21}\] Then linearizing this equation and setting \(\psi=\mathrm{e}^{\mathrm{i}LX+\sigma T}\) yields the dispersion relation \[\sigma^{3}+9\sigma^{2}L^{2}+24\sigma L^{4}-\hat{v}^{2}\sigma L^{2}+16L^{6}-\hat{v}^{2}L^{4}-16a_{0}^{2}qL^{2}+\mathrm{i}\hat{v}(2\sigma^{2}L+10\sigma L^{3}+8L^{5})=0. \tag{22}\] Before considering this dispersion relation for general \(L\), it is helpful to consider the two limiting cases, of small and large \(L\). First, if \(L\) is small, then \(\sigma^{3}\sim 16a_{0}^{2}qL^{2}\). Thus, to leading order in \(L\), \(\sigma=\sigma_{2/3}L^{2/3}\), where \(\sigma_{2/3}^{3}=16a_{0}^{2}q\); hence all traveling waves are unstable if \(L\) is small. On the other hand, if \(L\) is large, then we have \(\sigma^{3}+9\sigma^{2}L^{2}+24\sigma L^{4}+16L^{6}\approx 0\), and so \(\sigma\approx-L^{2}\) or \(-4L^{2}\) (twice); hence traveling waves are stable to large-\(L\) disturbances. In summary, all traveling waves are unstable at onset (provided \(a_{0}^{2}q\neq 0\); in fact we shall see later that when \(a_{0}^{2}q\) is suitably small, we shall need to reconsider this conclusion). The rest of the section provides more details of the instability, for general values of \(L\). In order to find the secondary stability boundary for the traveling waves, we set \(\sigma=\mathrm{i}\Omega\) in the dispersion relation (22), where \(\Omega\) is real.
From the real and the imaginary parts, we obtain \[\Omega^{2}-\frac{16}{9}L^{4}+\frac{16}{9}a_{0}^{2}q+\frac{\hat{v} ^{2}}{9}L^{2}+\frac{10}{9}\hat{v}L\Omega = 0,\] \[\Omega^{3}-24\Omega L^{4}+\hat{v}^{2}\Omega L^{2}+2\hat{v}L\Omega^ {2}-8\hat{v}L^{5} = 0,\] and then after eliminating \(\Omega\) between these two equations we find that this stability boundary is given by \[16a_{0}^{6}q^{3}-2500L^{12}+2100L^{8}a_{0}^{2}q+384L^{4}a_{0}^{4}q^{2}-200 \hat{v}^{2}L^{10}-4\hat{v}^{4}L^{8}-44\hat{v}^{2}L^{6}a_{0}^{2}q+\hat{v}^{2}L ^{2}a_{0}^{4}q^{2}=0. \tag{23}\] We note that in this equation \(L\) and \(\hat{v}\) appear only as even powers and thus we can restrict our attention to positive \(L\) and \(\hat{v}\) with no loss of generality. However, both even and odd powers of \(q\) occur, so no such economy is possible in considering \(q\) (indeed, in the light of [3], we should expect different behaviors for \(q>0\) and \(q<0\)). For the case of no dispersion, Tribelsky and Velarde [3] showed that, according to the present scaling, there is monotonic instability of the rolls with \(q>0\) (with unstable disturbances having \(0<L<(a_{0}^{2}q)^{1/4}\)). By contrast, oscillatory instability occurs for rolls with \(q<0\) (unstable modes having \(0<L<(-2a_{0}^{2}q/25)^{1/4}\)). It is convenient to present our results for the dispersive case in terms of the rescaled variables \(q^{\prime}=q/r_{2}^{1/2}\), \(L^{\prime}=L/r_{2}^{3/8}\) and \(v^{\prime}=\hat{v}/r_{2}^{3/8}\). Figure 7 illustrates the regions of stability and instability of the traveling waves, in the cases \(v^{\prime}=0\) and \(v^{\prime}=5\). Note that in the dispersive case all instabilities are oscillatory. We should view with caution the conclusion above that all traveling waves are unstable, because it relies crucially on the assumption that \(a_{0}^{2}q\) is not small. 
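The small- and large-\(L\) conclusions above are easily confirmed by solving the cubic (22) directly; in the following sketch (Python with NumPy; the helper name is ours) the coefficients are grouped in descending powers of \(\sigma\):

```python
import numpy as np

def phase_dispersion_roots(a0, q, v, L):
    """Roots sigma of the cubic dispersion relation (22), with the
    coefficients collected in descending powers of sigma."""
    coeffs = [1.0,
              9.0*L**2 + 2j*v*L,
              24.0*L**4 - v**2*L**2 + 10j*v*L**3,
              16.0*L**6 - v**2*L**4 - 16.0*a0**2*q*L**2 + 8j*v*L**5]
    return np.roots(coeffs)
```

For illustrative values such as \(a_{0}=1\), \(q=1/2\), \(\hat{v}=2\), one root has positive real part (of size \(O(L^{2/3})\)) when \(L\) is small, while all three roots have negative real part when \(L\) is large, in line with the limits discussed above.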
The stability analysis above breaks down if \(q\) or \(a_{0}^{2}\) is small; the true stability properties of the corresponding traveling waves will be investigated in the next section.

## VI Secondary stability of traveling waves: \(\alpha,\beta=O(\epsilon)\)

In this section, we investigate the cases of traveling waves with wavenumber close to \(k=1\) or close to the marginal stability boundary; in other words, those for which in the previous scaling \(a_{0}^{2}q\ll 1\).

### Traveling waves with close-to-critical wavenumber

In order to resolve the secondary stability problem for traveling waves with wavenumber close to \(k_{c}=1\), we set \(k=1+\epsilon^{2}q\), as was done for the nondispersive case by Tribelsky and Velarde [3]. Then a distinguished balance occurs for \(\alpha,\beta=O(\epsilon)\); so we write \[\alpha=\epsilon\hat{\alpha},\qquad\beta=\epsilon\hat{\beta}.\] Upon setting \(\hat{c}=\hat{\alpha}-\hat{\beta}\), \(r=\epsilon^{2}r_{2}\), \(\hat{v}=3\hat{\alpha}-5\hat{\beta}\), \(X=\epsilon x\), \(\tau=\epsilon t\) and \(T=\epsilon^{2}t\) (the scalings for \(X\) and \(T\) being as in [3]), we find from (2) that \[u=\epsilon(6\sqrt{r_{2}}+\epsilon^{2}a(X,T))\mathrm{e}^{\mathrm{i}M}+\mathrm{c.c.}+\epsilon^{3}f(X,T)+\cdots,\] where now \[M=(1+\epsilon^{2}q)x-\hat{c}\tau+\epsilon(-\hat{v}q+\tfrac{1}{6}r_{2}(\hat{\alpha}-5\hat{\beta}))T+\epsilon\psi(X,T).\] The terms in \(M\) involving \(\hat{\alpha}\) and \(\hat{\beta}\) correspond to nonlinear effects of the finite traveling-wave amplitude on the speed of the waves; see (9).

Figure 7: Predicted secondary stability boundaries of spatially periodic solutions of the Nikolaevskiy equation. (a) Nondispersive case (\(v^{\prime}=0\)); (b) dispersive case (\(v^{\prime}=5\)).
After much algebra, the relevant (nonlinear) amplitude equations are found to be, at \(O(\epsilon^{4})\) and \(O(\epsilon^{5})\), \[\frac{\partial\psi}{\partial T} = 4\frac{\partial^{2}\psi}{\partial X^{2}}-f-\hat{v}\frac{\partial \psi}{\partial X}, \tag{24}\] \[\frac{\partial f}{\partial T} = \frac{\partial^{2}f}{\partial X^{2}}-12r_{2}^{1/2}\frac{\partial a }{\partial X},\] (25) \[\frac{\partial a}{\partial T} = 4\frac{\partial^{2}a}{\partial X^{2}}-24r_{2}^{1/2}\left(\frac{ \partial\psi}{\partial X}\right)^{2}-\hat{v}\frac{\partial a}{\partial X}-6r_{ 2}^{1/2}\frac{\partial f}{\partial X}-2r_{2}a\] (26) \[{}+6r_{2}^{1/2}\left(-8q+\frac{22}{3}r_{2}+12\frac{\partial^{2}} {\partial X^{2}}+(10\hat{\beta}-3\hat{\alpha})\frac{\partial}{\partial X} \right)\frac{\partial\psi}{\partial X}.\] We note that in these equations the influence of dispersion arises not only through the terms involving the group velocity \(\hat{v}\), but also through the term \(10\hat{\beta}-3\hat{\alpha}\) in the equation for \(a_{T}\), in contrast to the previous case. To determine the stability of the traveling waves, these equations are linearized; for solutions proportional to \(\mathrm{e}^{\mathrm{i}LX+\sigma T}\), we find the dispersion relation \[\sigma^{3}+9\sigma^{2}L^{2}+24\sigma L^{4}+16L^{6}+528r_{2}^{2}L^ {2}\] \[-576r_{2}qL^{2}+82r_{2}\sigma L^{2}-568r_{2}L^{4}+2r_{2}\sigma^{2 }-\hat{v}^{2}\sigma L^{2}-L^{4}\hat{v}^{2}\] \[+\mathrm{i}(2r_{2}\hat{v}\sigma L+360r_{2}\hat{\beta}L^{3}+8\hat{ v}L^{5}+10\hat{v}\sigma L^{3}+2\hat{v}\sigma^{2}L+2r_{2}\hat{v}L^{3})=0. \tag{27}\] As in the previous section, in the limit of large \(L\), all eigenvalues have negative real part. 
By contrast, in the limit of small \(L\), if we expand \(\sigma=\sigma_{1}L+\sigma_{2}L^{2}+\cdots\), then from (27) we find that \(\sigma_{1}\) satisfies \[r_{2}\sigma_{1}^{2}-288r_{2}q+264r_{2}^{2}+ir_{2}\hat{v}\sigma_{1}=0,\]whereas \(\sigma_{2}\) is determined from \[\sigma_{1}^{3}+82r_{2}\sigma_{1}+4r_{2}\sigma_{1}\sigma_{2}-\hat{v}^{2}\sigma_{1}+ 2\mathrm{i}(r_{2}\hat{v}\sigma_{2}+\hat{v}\sigma_{1}^{2}+r_{2}\hat{v}+180r_{2} \hat{\beta})=0.\] The first of these gives \[\sigma_{1}=\frac{1}{2}\left(-\mathrm{i}\hat{v}\pm\sqrt{-\hat{v}^{2}+1152q-1056r _{2}}\right), \tag{28}\] and so traveling waves are certainly unstable if their wavenumber satisfies \(q>11r_{2}/12+\hat{v}^{2}/1152\). The term \(\hat{v}^{2}/1152\) indicates that these waves become more stable with respect to this instability in the presence of dispersion. If instead \(q<11r_{2}/12+\hat{v}^{2}/1152\), then \(\sigma_{1}\) is purely imaginary, and stability is determined by \[\sigma_{2}=\pm\frac{-72\hat{v}q/r_{2}+171\hat{v}/2-180\hat{\beta}}{\sqrt{\hat{ v}^{2}-1152q+1056r_{2}}}+\frac{91}{2}-\frac{72}{r_{2}}q, \tag{29}\] a consideration of which shows that these waves are made more unstable to the long-wavelength oscillatory instability in the presence of dispersion. Analysis of the stability boundaries to disturbances of general \(L\) is rather involved, and we do not present the details here. Furthermore, the parameter space is large enough to preclude our making general statements; instead we consider some illustrative special cases. To present the conclusions most generally, it is helpful to introduce \(q^{\prime}=q/r_{2}\), \(L^{\prime}=L/r_{2}^{1/2}\), \(\alpha^{\prime}=\hat{\alpha}/r_{2}^{1/2}\), \(\beta^{\prime}=\hat{\beta}/r_{2}^{1/2}\) and \(v^{\prime}=\hat{v}/r_{2}^{1/2}\). Let us begin by considering the special case \(\beta^{\prime}=0\). 
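Before specializing, the leading-order formulas are easily checked; the sketch below (Python; function names are ours) verifies that (28) satisfies its quadratic, reproduces the threshold \(q=11r_{2}/12+\hat{v}^{2}/1152\), and evaluates both branches of \(\sigma_{2}\) in (29) (here with \(\hat{\beta}=0\)):

```python
import cmath
import math

def sigma1(q, r2, v):
    """The '+' branch of the leading-order growth rate (28)."""
    return 0.5*(-1j*v + cmath.sqrt(complex(-v**2 + 1152.0*q - 1056.0*r2)))

def sigma1_residual(s1, q, r2, v):
    """Residual of the quadratic r2*s1**2 - 288*r2*q + 264*r2**2 + i*r2*v*s1."""
    return r2*s1**2 - 288.0*r2*q + 264.0*r2**2 + 1j*r2*v*s1

def sigma2_branches(q, r2, v, beta_hat=0.0):
    """Both branches of sigma_2 in (29); valid when sigma_1 is purely
    imaginary, i.e. for q < 11*r2/12 + v**2/1152."""
    root = math.sqrt(v**2 - 1152.0*q + 1056.0*r2)
    frac = (-72.0*v*q/r2 + 171.0*v/2.0 - 180.0*beta_hat)/root
    base = 91.0/2.0 - 72.0*q/r2
    return base + frac, base - frac
```

For \(q\) above the threshold, `sigma1` has positive real part (instability); below it, `sigma1` is purely imaginary and the sign of the larger branch of `sigma2_branches` decides stability.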
Figure 8 shows where traveling waves with different values of \(q^{\prime}\) are stable and unstable to perturbations with wavenumbers \(L^{\prime}\); each panel in the figure corresponds to a different choice of \(\alpha^{\prime}\). In understanding the sequence of transitions in the topology of the various panels, it is helpful to first consider the behavior of the stability boundaries for small \(\alpha^{\prime}\) (and hence small \(v^{\prime}\)), in particular in the \(L^{\prime}=0\) limit. We have seen above that the right-hand stability curve (labeled \(R\)) intersects the \(q^{\prime}\) axis at \(q^{\prime}_{+}=11/12+v^{\prime 2}/1152\). For small \(\hat{v}\), it follows from (29) that the left-hand stability curve (labeled \(T\)) intersects the \(q^{\prime}\) axis at \(q^{\prime}_{-}\sim 91/144+5|v^{\prime}|/26568^{1/2}\). Thus as \(\alpha^{\prime}\) is increased from zero, \(q^{\prime}_{-}\) moves to the right more rapidly than does \(q^{\prime}_{+}\). Eventually, at some sufficiently large value of \(\alpha^{\prime}\), \(q^{\prime}_{-}=q^{\prime}_{+}\), and all traveling waves are unstable in the limit \(L^{\prime}=0\). On the other hand, when \(\alpha^{\prime}\) is large, \(q^{\prime}_{-}\) halts at \(q^{\prime}_{-}=131/144\). However, \(q^{\prime}_{+}\) continues to increase, and this results in the appearance of a small-\(L^{\prime}\) stability region. In fact, for sufficiently large \(\alpha^{\prime}\), some rolls are stable to disturbances for all \(L^{\prime}\). For \(0\leq\alpha^{\prime}<\alpha^{\prime}_{c}\), where \(\alpha^{\prime}_{c}\approx 5.7\), all traveling waves are unstable. For \(\alpha^{\prime}>\alpha^{\prime}_{c}\), a stable region appears (see Fig. 8(e)). Subsequently, for any value of \(\alpha^{\prime}>\alpha^{\prime}_{c}\) the stable region becomes more apparent. This result can be compared with the numerical stability results shown in Fig. 2(a), where \(\alpha=1/2\). 
The stability condition \(\alpha^{\prime}>\alpha^{\prime}_{c}\approx 5.7\) (where \(\alpha^{\prime}=\alpha/\sqrt{r}\)) corresponds to \(r<(\alpha/5.7)^{2}=0.0077\), showing remarkably good agreement with the upper limit of the stable region in Fig. 2(a). If instead we consider the special case \(\alpha^{\prime}=0\), with \(\beta^{\prime}>0\), we find a broadly similar picture, in that all traveling waves are unstable when \(\beta^{\prime}\) is small, but some eventually stabilize, once \(\beta^{\prime}\) is sufficiently large. From Fig. 9 it is apparent that the two stability boundaries \(R\) and \(T\) intersect, coalesce, then lift off from the \(q^{\prime}\) axis as \(\beta^{\prime}\) is increased. Ultimately they re-attach to the \(q^{\prime}\) axis, when \(\beta^{\prime}=\beta^{\prime}_{c}\), where \(\beta^{\prime}_{c}\approx 5.06\) as shown in Fig. 9(g). For \(\beta^{\prime}>\beta^{\prime}_{c}\), there is a region of stable traveling waves. Let us now express the results above in a form more illuminating for comparison with our earlier numerical secondary stability calculations (Sec. III). As an example, we set \(\hat{\alpha}=1\) and \(\hat{\beta}=0\) and consider the limit of small \(L\), looking for regions of stable waves as \(r_{2}\) is varied. From (28), rolls are unstable as long as \(q>11r_{2}/12+\hat{v}^{2}/1152\). If \(q<11r_{2}/12+\hat{v}^{2}/1152\), then \(\sigma_{1}\) is purely imaginary and hence \(\sigma_{2}\) must be considered. From (29) we have definite instability if \(r_{2}>144q/91\). In addition to these rather blunt conditions, the sign of \(\sigma_{2}\) must also be considered in order to determine the stable region. Figure 10 shows the curves \(q=11r_{2}/12+\hat{v}^{2}/1152\) (solid line), \(q=91r_{2}/144\) (dashed line) and \(\sigma_{2}=0\) (dotted lines). Any region of stability must lie between the solid and dashed lines. 
After checking carefully the signs of the eigenvalues, we find that the stable region (indicated by the asterisks in the figure) lies between the two dotted lines in the upper and lower parts of the graph, and between the dotted and solid lines for a small range of intermediate values of \(r_{2}\) (see Fig. 10). Although they appear almost parallel in Fig. 10(a), for large \(r_{2}\), as in Fig. 10(b), the two sides of the secondary stability region are no longer approximately parallel. Note the qualitative similarity between the shapes of the stable regions in Fig. 10 and Fig. 2. The question remains of whether or not this stable region extends to indefinitely large values of \(r_{2}\). To investigate the large-\(r_{2}\) behavior of the stability region, we consider large \(r_{2}\) with \(q=O(r_{2})\), motivated by the observation, from Fig. 10(b), that stable rolls lie in some region between straight lines in \((q,r_{2})\) parameter space. In this limit the stability condition from (28) simplifies to \(q<11r_{2}/12\), while \(\sigma_{2}=91/2-72q/r_{2}+O(r_{2}^{-1/2})\). Hence we can conclude that the region of stable waves for small \(L\) and large \(r_{2}\) is \[91r_{2}/144<q<11r_{2}/12. \tag{30}\] In summary, the results of this section show that when \(\alpha\) and \(\beta\) are \(O(\epsilon)\), there can be a narrow region of stable traveling waves near \(k=1\), and that there is no upper limit on the size of \(r_{2}\) allowing stable rolls. For even smaller values of \(\alpha\) and \(\beta\), of order \(\epsilon^{2}\), we have checked that \(\alpha\) and \(\beta\) do not appear in the leading order amplitude equations, so in that case all traveling waves are unstable, as in the non-dispersive case. Figure 10: The region of secondary stability of traveling waves is marked with asterisks; for details refer to the text. The solid line shows where \(\sigma_{1}\) is purely imaginary; to the right of this line, the traveling waves are certainly unstable. 
The dashed line shows where \(q=91r_{2}/144\); to the left of this line, traveling waves are also certainly unstable. The dotted lines show where \(\sigma_{2}=0\).

### Traveling waves close to the marginal curve

We now turn to the second case in which \(a_{0}^{2}q\) may be small: the region close to the marginal stability curve. Following an analysis similar to that for the dissipative Nikolaevskiy equation [4], we find that, in contrast to the dissipative case (in which a narrow region of stable rolls exists close to the marginal curve [4]), here all traveling waves are unstable near the marginal curve.

## VII Numerical simulations of the dispersive Nikolaevskiy equation

To illustrate some of the consequences of the results of the preceding sections, we have carried out numerical simulations of the dispersive Nikolaevskiy equation, using a pseudospectral method with exponential time stepping [17]; a small sample of these is presented here. The initial condition is taken to be a traveling wave with a given wavenumber \(k\) (approximated as a cosine with the amplitude given by (8)), plus small random noise, and the domain size is \(D=100\pi/k\). Figure 11 illustrates, in order, strong (a), intermediate (b) and weak (c)-(e) dispersion. The values of \(\alpha\), \(\beta\) and \(k\) are chosen in each case to correspond to traveling waves that are predicted to be unstable by the asymptotic analysis. Figure 11 (a) shows the case of strong dispersion, with \(\alpha=2\), \(\beta=1\), wavenumber \(k=1\) and \(r=0.01\). The stability analysis of (17) predicts that the rolls are unstable, since \(\alpha=2\) and \(\beta=1\) lie in the unstable region in Fig. 5; the numerical simulation agrees with this asymptotic result. In Fig. 11 (b) an example of intermediate dispersion is simulated, where \(\alpha=2\epsilon^{3/4}\) and \(\beta=\epsilon^{3/4}\), \(r=\epsilon^{2}\) and \(k=1+\epsilon q\), for \(q=0.2\) and \(\epsilon=0.1\).
It is known from the asymptotic results of Sec. V that \(\alpha\) and \(\beta\) being \(O(\epsilon^{3/4})\) with wavenumber \(k=1+\epsilon q\) will result in unstable traveling-wave solutions, which agrees with the simulation shown in Fig. 11 (b). To show the effects of weak dispersion with wavenumber \(k=1+\epsilon^{2}q\) we take \(r=0.01\) and \(\epsilon=0.1\). Figure 11 (c) shows the case \(\alpha=2\epsilon\), \(\beta=0\), \(q=0.87\), while in Fig. 11 (d) the parameter values are \(\alpha=0\), \(\beta=5\epsilon\), \(q=2.5\). Rolls should in each case be unstable, according to the analysis of Sec. VI.1, and this is confirmed by the numerical simulations. Figure 11 (e) represents weak dispersion, with \(\alpha=\epsilon\) and \(\beta=0\). The wavenumber is \(k=1+\epsilon^{2}q\) and \(r=\epsilon^{2}r_{2}\), for \(\epsilon=0.25\), \(q=0.02\) with \(r_{2}=0.04\). These values of \(r_{2}\) and \(q\) lie in the unstable region given in Fig. 10, and the simulations support this prediction of instability.

## VIII Conclusions

We have examined the stability of spatially periodic solutions to the dispersive Nikolaevskiy equation, which is the original model introduced by Nikolaevskiy [1] for seismic waves. The reincorporation of dispersive effects stands in contrast to most studies subsequent to Nikolaevskiy's paper. We have shown how the instability of _all_ spatially periodic solutions at the onset of pattern formation in the more-often treated, nondispersive version is modified by the presence of dispersive terms. Our results have been achieved through both a numerical calculation of the secondary stability boundary for the traveling-wave solutions and an asymptotic treatment of three particular scalings in \(\epsilon\) for the dispersive terms. The secondary stability diagrams ("Busse balloons") can be rather complicated, and can depend sensitively on the size of the dispersive terms.
Our consideration of the case \(\alpha,\beta=O(1)\) can be interpreted as giving information about the bottom of the secondary stability diagram obtained in \((k,r)\) parameter space for fixed \(\alpha\) and \(\beta\). Two cases were found: either all traveling waves are unstable at the bottom of the diagram, or there is a symmetrical, Eckhaus-like region of stable traveling waves, right down to onset at \(r=0\) (although the width of the region of stable rolls does not stand in the usual Eckhaus ratio to the width of the existence region of rolls). The separate analysis for smaller values of \(\alpha,\beta\) can be interpreted as shedding light on the upper parts of the fixed-\(\alpha\),\(\beta\) stability diagram in \(k,r\) parameter space. We have shown that for small \(\alpha,\beta\), a narrow region of stable waves may exist near \(k=1\). However, beyond the range of validity of the asymptotic analysis, the numerical stability results show the complicated nature of the secondary stability boundaries, so we are unable to draw any significant general conclusions about the form of the secondary stability diagram, limiting ourselves to some specific examples. Things are further complicated by the fact that rolls predicted to be stable by the asymptotics may in fact turn out to be unstable when the full numerical calculation is performed, since the asymptotics concerns only long-wavelength instabilities, and other, short-wavelength instabilities may turn out to be present. In this paper, we have said little about the behavior of time-dependent solutions of the dispersive Nikolaevskiy equation. However, it appears from our numerical simulations that when all waves are unstable, chaotic states are found that have a similar behavior to that found in the non-dispersive Nikolaevskiy equation [5; 6; 7]. ## References * (1) V. N. Nikolaevskiy, in _Recent Advances in Engineering Science_, edited by S. L. Koh and C. G. Speziale (Springer-Verlag, Berlin, 1989), no. 
39 in Lecture Notes in Engineering, pp. 210-221.
* (2) H. Fujisaka and T. Yamada, Prog. Theor. Phys. **106**, 315 (2001).
* (3) M. I. Tribelsky and M. G. Velarde, Phys. Rev. E **54**, 4973 (1996).
* (4) S. M. Cox and P. C. Matthews, Phys. Rev. E **76**, 056202 (2007).
* (5) P. C. Matthews and S. M. Cox, Phys. Rev. E **62**, R1473 (2000).
* (6) M. I. Tribelsky and K. Tsuboi, Phys. Rev. Lett. **76**, 1631 (1996).
* (7) H. Sakaguchi and D. Tanaka, Phys. Rev. E **76**, 025201 (2007).
* (8) R. W. Wittenberg and K.-F. Poon, Phys. Rev. E **79**, 056225 (2009).
* (9) B. A. Malomed, Phys. Rev. A **45**, 1009 (1992).
* (10) N. A. Kudryashov and A. V. Migita, Fluid Dynamics **42**, 463 (2007).
* (11) T. Kawahara, Phys. Rev. Lett. **51**, 381 (1983).
* (12) E. Plaut and F. H. Busse, J. Fluid Mech. **464**, 345 (2002).
* (13) J. Duan, H. V. Ly, and E. S. Titi, ZAMP **47**, 432 (1996).
* (14) F. J. Elmer, Physica D **30**, 321 (1988).
* (15) W. Eckhaus, _Studies in nonlinear stability theory_ (Springer-Verlag, Berlin, 1965).
* (16) R. Hoyle, _Pattern formation: An introduction to methods_ (Cambridge University Press, Cambridge, 2006).
* (17) S. M. Cox and P. C. Matthews, J. Comput. Phys. **176**, 430 (2002).

**Numerical Methods for Stiff Systems**

**A thesis submitted to the University of Nottingham for the Degree of Doctor of Philosophy**

**by**

**Hala Ashi**

**Supervisors**

**Dr. Paul Matthews**

**Dr. Linda Cummings**

**June 2008**

**Abstract**

Some real-world applications involve situations where different physical phenomena acting on very different time scales occur simultaneously. The partial differential equations (PDEs) governing such situations are categorized as "stiff" PDEs. Stiffness is a challenging property of differential equations (DEs) that prevents conventional explicit numerical integrators from handling a problem efficiently. For such cases, stability (rather than accuracy) requirements dictate the choice of time step size to be very small.
Considerable effort in coping with stiffness has gone into developing time-discretization methods to overcome many of the constraints of the conventional methods. Recently, there has been a renewed interest in exponential integrators that have emerged as a viable alternative for dealing effectively with stiffness of DEs. Our attention has been focused on the explicit **Exponential Time Differencing** (ETD) integrators that are designed to solve stiff semi-linear problems. Semi-linear PDEs can be split into a linear part, which contains the stiffest part of the dynamics of the problem, and a nonlinear part, which varies more slowly than the linear part. The ETD methods solve the linear part exactly, and then explicitly approximate the remaining part by polynomial approximations. The first aspect of this project involves an analytical examination of the methods' stability properties in order to present the advantage of these methods in overcoming the stability constraints. Furthermore, we discuss the numerical difficulties in approximating the ETD coefficients, which are functions of the linear term of the PDE. We address ourselves to describing various algorithms for approximating the coefficients, analyze their performance and their computational cost, and weigh their advantages for an efficient implementation of the ETD methods. The second aspect is to perform a variety of numerical experiments to evaluate the usefulness of the ETD methods, compared to other competing stiff integrators, for integrating real application problems. The problems considered include the **Kuramoto-Sivashinsky** equation, the nonlinear **Schrodinger** equation and the nonlinear **Thin Film** equation, all in one space dimension. The main properties tested are accuracy, start-up overhead cost and overall computation cost, since these parameters play key roles in the overall efficiency of the methods. 
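The splitting described above can be illustrated by the simplest member of the family, the first-order scheme ETD1, which advances \(u^{\prime}=cu+F(u)\) via \(u_{n+1}=\mathrm{e}^{ch}u_{n}+F(u_{n})(\mathrm{e}^{ch}-1)/c\). A minimal sketch (Python; the names are illustrative, not code from the thesis) is:

```python
import math

def etd1(u0, c, F, h, n_steps):
    """ETD1 for the semi-linear problem u' = c*u + F(u): the stiff linear
    term is integrated exactly, while F(u) is frozen over each step h."""
    E = math.exp(c*h)
    phi = (E - 1.0)/c if c != 0.0 else h   # the coefficient (e^{ch}-1)/c
    u = u0
    for _ in range(n_steps):
        u = E*u + phi*F(u)
    return u
```

Because the linear part is handled exactly, the scheme remains stable for strongly decaying linear terms at step sizes far beyond the explicit-Euler limit \(h<2/|c|\); with \(F\equiv 0\) it reproduces \(\mathrm{e}^{ct}u_{0}\) exactly, and with constant \(F\) it preserves the exact steady state \(u=-F/c\).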
_To the ever living memory of my father and my father-in-law "May \(ALLAH\) bestow His mercy upon them and award them paradise"_ ## Acknowledgments First and foremost, I would like to thank ALLAH for giving me the strength and making the completion of this thesis possible. Early in this project, when I took the preliminary steps towards my thesis, I did not foresee how many individuals would contribute to its completion whom I owe my thanks and appreciation to. I am particularly grateful to my supervisors, Dr. Paul Matthews and Dr. Linda Cummings, for their unlimited support and guidance, thoughtful suggestions, objective comments, practical advice, and insightful direction. Without their knowledge and expertise this thesis would never have been completed. I would like to express my sincere appreciation to Helen Cunliffe, the Research Secretary, for her continuous support and assistance. My gratitude also goes to my beloved husband, Mamdouh Tayeb, whose boundless love and firm belief in me have been a source of inspiration for me during my lengthy project. I am extremely motivated to express my greatest appreciation to my mother and mother-in-law for their continuous support and prayers for me. Finally, special mention must be made of my children Taher, Mohammed, and Tala, I love you all. 
###### Contents

* 1 Introduction
* 1.1 Introduction
* 1.2 Layout Of Thesis
* 2 Spatial Discretization Methods
* 2.1 Introduction
* 2.2 Finite Difference Formulas
* 2.2.1 Finite Difference Approximation
* 2.2.2 An Example
* 2.2.3 Matrix Form
* 2.3 Spectral Methods
* 2.3.1 Fourier Spectral Methods
* 2.3.2 Numerical Derivatives
* 2.3.3 An Example
* 3 Exponential Time Differencing (ETD) Methods
* 3.1 Introduction
* 3.2 Algorithm Derivation
* 3.2.1 Integrating Factor Methods
* 3.2.2 Exponential Time Differencing Methods
* 3.2.3 Exponential Time Differencing Runge-Kutta Methods
* 3.3 Stability Analysis
* 3.3.1 Stability of Exponential Time Differencing Methods
* 3.3.2 Stability of RK Exponential Time Differencing Methods
* 3.4 Conclusion
* 4 Various Algorithms for Evaluating the ETD Coefficients
* 4.1 Introduction
* 4.2 The Scalar Case
* 4.2.1 Taylor Series
* 4.2.2 The Cauchy Integral Formula
* 4.2.3 Scaling and Squaring Algorithm: Type **I**
* 4.2.4 Scaling and Squaring Algorithm: Type **II**
* 4.2.5 Composite Matrix Algorithm
* 4.3 Non-Diagonal Matrix Case
* 4.3.1 Taylor Series
* 4.3.2 The Cauchy Integral Formula
* 4.3.3 Varying the Radius of the Circular Contour
* 4.3.4 Scaling and Squaring Algorithm: Type **I**
* 4.3.5 Pade Approximation and the Taylor Series
* 4.3.6 Composite Matrix Algorithm
* 4.3.7 Matrix Decomposition Algorithm
* 4.4 Chebyshev Spectral Differentiation Matrices
* 4.5 Matrices With Imaginary Eigenvalues
* 4.6 Computation Time
* 4.7 Conclusion
* 5 Numerical Experiments
* 5.1 Introduction
* 5.2 Numerical Experiments
* 5.3 Kuramoto-Sivashinsky (K-S) Equation
* 5.3.1 Computational Results
* 5.3.2 Conclusion
* 5.4 Non-Linear Schrodinger (NLS) Equation
* 5.4.1 Computational Results
* 5.4.2 Error Analysis of the ETD and the IF Methods
* 5.4.3 Conclusion
* 5.5 Thin Film Equation
* 5.5.1 Computational Results
* 5.5.2 Conclusion
* 6 Overall Conclusions
* 6.1 Overall Conclusions

## Appendix A The Numerical Solution of the Kuramoto-Sivashinsky Equation * Derivation
of the Local Truncation Errors

## Chapter 1 Introduction

### 1.1 Introduction

The numerical solution of ordinary differential equations (ODEs) is an old topic. Various techniques have been devised over the years to solve such equations, and remarkably, old well-established methods such as the Runge-Kutta methods [78] still form the foundation of the most effective and widely-used codes. Nevertheless, there are several kinds of problems which classical methods do not handle effectively, problems that are said to be "**stiff**".

The earliest detection of stiffness in differential equations in the digital computer era, by the two chemists **Curtiss** and **Hirschfelder** (1952) [20], was apparently far in advance of its time. They named the phenomenon and identified the nature of stiffness (the stability requirement forces the choice of a very small step size). To resolve the problem they recommended methods such as the **Backward Differentiation Formula**[78] for numerical integration. In 1963, **Dahlquist**[21] defined the problem and demonstrated the difficulties that standard differential equation solvers have with stiff differential equations. At about this time several authors pursued independent research on handling and avoiding the problems posed by stiff differential equations; **Gear**[31], for example, became in 1968 one of the most important names in this field. More articles on integrating stiff ODEs are listed in [44, 68]. Considerable effort has gone into developing numerical integration methods for stiff problems [72], and hence the problem of stiffness was brought to the attention of the mathematical and computer science community; see [33] for further details on the topic of stiffness, and [30] for a comprehensive review of this phenomenon.

Stiff differential equations are characterized as those whose solutions (or different components of a single solution) evolve on very different time scales occurring simultaneously, i.e.
the rates of change of the various components of the solutions differ markedly. Suppose, for example, that one component of the solution has a term of the form \(e^{-ct}\), where \(c\) is a large positive constant. This component, called the transient solution, decays to zero much more rapidly, as \(t\) increases, than the other, slower components of the solution. Alternatively, consider a case where a component of the solution oscillates rapidly on a time scale much shorter than that associated with the other solution components. For a numerical method which makes use of derivative values, the fast component continues to influence the solution, and as a consequence, the selection of the step size in the numerical solution is problematic. This is because the required step size is governed not only by the behavior of the solution as a whole, but also by that of the rapidly varying transient, which does not persist in the solution that we are monitoring.

In practice, the numerical values occurring in nature are frequently such as to cause stiffness, so a realistic representation of a natural system by a differential equation is likely to encounter this phenomenon. An example is the field of chemical kinetics [20], where ordinary differential equations describe reactions of various chemical species to form other species. The stiffness in such systems is a consequence of the fact that different reactions take place on vastly different time scales. Another important class of stiff ODEs originates from the application of the general approach of _'the method of lines'_[84] to stiff time-dependent PDEs. In this method we first discretize the spatial derivatives of a PDE with a spatial derivative approximation method, which results in a stiff coupled system of ordinary differential equations (ODEs) in time only. Then, we apply a well-established numerical method to achieve an accurate approximate solution to the problem.
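The step-size restriction caused by a fast transient can be seen on the simplest stiff model problem. The following sketch (an illustrative Python example, not code from the thesis; the value \(c=1000\) is an assumption) applies the explicit Euler method to \(u'=-cu\): the scheme is stable only for \(\Delta t\leq 2/c\), even though the transient \(e^{-ct}\) is negligible almost immediately.

```python
# Sketch (not thesis code): explicit Euler on the stiff scalar test problem
# u' = -c*u, whose exact solution e^{-c t} is a rapidly decaying transient.
# Stability requires |1 - c*dt| <= 1, i.e. dt <= 2/c, even long after the
# transient has died away.

def euler_final_value(c, dt, t_end):
    """Advance u' = -c*u from u(0) = 1 with explicit Euler; return u(t_end)."""
    u, t = 1.0, 0.0
    while t < t_end - 1e-12:
        u += dt * (-c * u)
        t += dt
    return u

c = 1000.0
stable = euler_final_value(c, 0.001, 1.0)    # dt < 2/c: decays towards 0
unstable = euler_final_value(c, 0.01, 1.0)   # dt > 2/c: blows up

assert abs(stable) < 1e-6
assert abs(unstable) > 1e6
```

The step size that keeps the computation stable is dictated entirely by \(c\), not by the smooth behavior of the solution being monitored, which is precisely the difficulty described above.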
Two broadly applicable techniques are **Finite Difference Formulas**[58, 83], which are local methods, and **Spectral Methods**[25, 83, 84], which are global methods; see SS2 for further details. The idea of using spectral representations for numerical solutions of ODEs goes back at least to **Lanczos**[51] in 1938. Spectral methods are a class of techniques used in applied mathematics and scientific computing to numerically solve certain partial differential equations, often involving the use of the **Fast Fourier Transform (FFT)** algorithm of **Cooley** and **Tukey**[18]. A short historical summary of the FFT can be found in [12], while a comprehensive survey and its mathematical applications can be found in **Henrici**'s article [34]. Spectral methods have been widely used for spatial discretization in the context of solving time-dependent PDEs since the early 1970's; see for example the article published in 1972 by **Kreiss** and **Oliger**[48]. The books by **Trefethen**[84] and **Fornberg**[25] are intended for researchers interested in exploring this field of study.

If the differentiation matrix applied to discretize the spatial derivatives has eigenvalues of very diverse magnitudes, i.e. the ratio of largest to smallest (in magnitude) eigenvalue is very large, or if a PDE has spatial derivatives of higher than second order, then the problem is more likely to be stiff. The degree of stiffness depends on the grid spacing of the spatial discretization: as we decrease the grid spacing, i.e. increase the number of points with which we discretize the operator, the eigenvalues come to vary greatly in magnitude. Given that stiffness arises in many physical situations of practical importance, the demand for special techniques that permit the use of a step size governed only by the rate of change of the overall solution is very great.
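The growth of the eigenvalue ratio with grid refinement can be checked directly. In this sketch (an illustrative Python/NumPy example, not thesis code; the Dirichlet second-difference matrix and grid sizes are assumptions) the eigenvalues of the standard centered second-difference matrix are \(-(4/h^{2})\sin^{2}(k\pi h/2)\), so the ratio of largest to smallest magnitude grows roughly like \(1/h^{2}\).

```python
# Sketch (assumed matrix, not from the thesis): the stiffness ratio
# |lambda_max|/|lambda_min| of the second-order centered difference
# matrix for d^2/dx^2 grows like 1/h^2 as the grid is refined.
import numpy as np

def stiffness_ratio(q, L=1.0):
    """Ratio of largest to smallest |eigenvalue| of the (q-1)x(q-1)
    tridiagonal matrix (1/h^2) * tridiag(1, -2, 1)."""
    h = L / q
    M2 = np.diag(-2.0 * np.ones(q - 1))
    M2 += np.diag(np.ones(q - 2), 1) + np.diag(np.ones(q - 2), -1)
    lam = np.abs(np.linalg.eigvalsh(M2 / h**2))
    return lam.max() / lam.min()

r_coarse = stiffness_ratio(16)
r_fine = stiffness_ratio(128)
assert r_fine > 50 * r_coarse   # ratio grows roughly like q^2
```

Refining the grid by a factor of 8 here increases the stiffness ratio by roughly a factor of 64, which is the mechanism by which semi-discretization produces stiff ODE systems.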
Although the numerical integration of stiff systems with constant coefficients has been considered in detail, a stiff differential equation does not lend itself readily to numerical solution by classical methods [33]. In principle, the stability region of the integration method must include the eigenvalues of the discrete linear operator of a stiff PDE in order for the method to be stable. Explicit schemes carry the penalty of requiring an extremely small step size in order to be stable, causing an unacceptable increase in the number of integration steps and in the integration times, and even excessive error accumulation. Implicit schemes, on the other hand, have the advantage of freedom in the choice of time step and good stability properties. However, discretization of a nonlinear PDE leads to a large nonlinear system of equations that has to be solved at each time step, which renders implicit schemes costly to implement. Thus, the goal of developing more efficient time integrators for stiff systems is to provide alternative, more sophisticated schemes that have better stability properties than standard explicit schemes and require fewer arithmetic operations per time step than standard implicit schemes.

Various methods have been proposed to avoid the difficulties that appear when trying to solve nonlinear equations with an implicit method. A popular strategy is to pair an explicit multi-step formula, which advances the nonlinear part of the problem, with an implicit method, which advances the linear part. This strategy forms the basis of the so-called **Implicit-Explicit (IMEX)** schemes. These schemes were proposed to solve stiff PDEs as far back as the late 1970's [87]. The direct derivation of the linear multi-step IMEX schemes and their stability properties is fully documented in a paper by **Ascher**[4], and further stability analysis can be found in [27].
Other more elaborate forms of the IMEX schemes, such as the **Runge-Kutta** IMEX schemes, are reported in [3, 16]. The most popular second-order linear multi-step IMEX scheme (**AB2AM2**) [4] utilizes the second-order **Adams-Moulton**[14] and **Adams-Bashforth**[14] schemes to advance the linear and the nonlinear parts of the problem in time, respectively. Unfortunately, it is not always easy to construct an IMEX scheme by coupling two multi-step methods: from an accuracy point of view, the orders of accuracy of the coupled implicit and explicit methods must combine to give the correct order of accuracy of the overall method.

IMEX schemes can be useful, especially in conjunction with spectral methods, for approximating spatially discretized reaction-diffusion problems [66] arising in chemistry and mathematical biology. For these problems the nonlinear reaction term can be treated explicitly while the diffusion term is treated implicitly. Examples of reaction-diffusion systems from a biological standpoint can be found in [62]. IMEX methods are restricted from having order higher than two if A-stability1 is required (this is the second Dahlquist stability barrier [78]). Therefore, despite their simplicity and frequent usage, they are not extendable to higher order. A subset of these schemes are the backward differencing schemes. These schemes are frequently used for stiff problems because, although they may not be A-stable for order greater than two, they do correctly damp non-oscillatory decaying perturbations (but not, in general, those which are oscillatory) [43].

Footnote 1: _A-stability_ is the property that physically decaying solutions are numerically damped for any choice of time step. This feature is highly desirable for stiff problems, as fast decaying perturbations are then damped even with time steps much longer than their lifetime.
Nonlinear methods, or methods with non-constant coefficients, are not restricted by the Dahlquist barrier and may be generalized to arbitrary order. Such schemes have been explored by several authors [37, 82] to solve stiff DEs. In 1960, **Certaine**[17] observed that the negligible, rapidly varying transient solution in a system of first-order coupled differential equations prevents any conventional scheme (an example is the **Trapezium Rule**[14]) from giving an accurate solution to the system efficiently. He resolved this by devising a new class of nonlinear schemes based on the **Adams-Moulton** methods of second and third order. A distinctive feature of these schemes is the exact evaluation of the linear part of the differential equation (and so the schemes are automatically A-stable). That is, if the nonlinear part is zero, then the scheme reduces to the evaluation of the exponential function of the operator (or matrix) that represents the linear part.

Historically, various constructions of **Certaine**-type schemes under various names have been derived since the 1960's. In 1969, **Norsett**[63] modified the **Adams-Bashforth** formulas to obtain A-stable methods of arbitrary order, suitable for the numerical integration of a stiff system of ODEs. In 1998, **Beylkin**_et al._[9] constructed implicit and explicit schemes of arbitrary order, which they called **Exact Linear Part (ELP)** methods. They analyzed, in detail, the stability of the methods when applied to solve nonlinear PDEs; however, the formulas for the methods' coefficients were not given explicitly. Later, in 2002, a clear derivation of the explicit ELP schemes of arbitrary order was given by **Cox** and **Matthews**[19], who referred to these methods as the '**Exponential Time Differencing (ETD)**' methods (the term arose originally in the field of computational electrodynamics [40, 65, 71]).
The authors also extended these schemes to more accurate ETD '**Runge-Kutta**' (ETD-RK) methods, and illustrated the superior performance of the ETD methods when applied to both _dissipative_ and _dispersive_ PDEs. Furthermore, a class of exponential propagation techniques known as **Exponential Propagation Iterative (EPI)** schemes was introduced by **Tokman** in [81]. These schemes were constructed by reformulating the integral form of a solution to a nonlinear autonomous system of ODEs as an expansion in terms of products of matrix functions and vectors. To trace the history of the discovery and development of the ETD schemes see [89].

The ETD schemes are based on integrating the linear part of the differential equation exactly, and approximating the nonlinear terms by a polynomial, which is then integrated exactly. A similar approach is used in the **Integrating Factor (IF)** schemes, first introduced in 1967 by **Lawson**[52]. In the approach of the IF schemes [7, 11, 19, 44, 45, 49, 84], we multiply both sides of an ODE by an appropriate integrating factor and change variables so that the linear part can be solved exactly. Afterwards, we apply any numerical scheme (multi-step or Runge-Kutta) to integrate the transformed nonlinear equation, and finally transform the result back to obtain an approximate solution in the original variable; see [8, 57] for a comprehensive review.

Methods like ETD and IF, based on the exact treatment of the linear terms, require the computation of matrix exponentials (or matrix functions) of the linear operators. However, as pointed out by **Cox** and **Matthews**[19] in their implementation of the ETD methods, a computational problem arises when evaluating the methods' coefficients.
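The idea of treating the linear part exactly is easy to see in the lowest-order case. The following sketch (an illustrative Python example, not thesis code; the scalar test problems are assumptions) implements the first-order ETD scheme for \(u'=cu+N(u)\), namely \(u_{n+1}=e^{ch}u_{n}+\frac{e^{ch}-1}{c}N(u_{n})\), and verifies its defining property: when \(N\equiv 0\) the scheme is exact for any step size.

```python
# Sketch (assumption, not the thesis' derivation): first-order ETD for the
# scalar semilinear ODE u' = c*u + N(u). The linear part is integrated
# exactly; the nonlinear part is held constant over each step.
import math

def etd1(c, N, u0, h, steps):
    """First-order exponential time differencing for u' = c*u + N(u)."""
    u = u0
    ech = math.exp(c * h)
    phi1 = (ech - 1.0) / c        # the ETD coefficient (e^{ch} - 1)/c
    for _ in range(steps):
        u = ech * u + phi1 * N(u)
    return u

# Linear-only problem: ETD1 is exact regardless of the step size.
u_lin = etd1(-50.0, lambda u: 0.0, 1.0, 0.5, 2)
assert abs(u_lin - math.exp(-50.0)) < 1e-12

# Semilinear problem u' = -u + u^2, u(0) = 1/2, with exact solution
# u(t) = 1/(1 + e^t): the error shrinks as the step is refined.
exact = 1.0 / (1.0 + math.e)
e1 = abs(etd1(-1.0, lambda u: u * u, 0.5, 0.1, 10) - exact)
e2 = abs(etd1(-1.0, lambda u: u * u, 0.5, 0.05, 20) - exact)
assert e2 < e1
```

The coefficient `phi1` is the scalar prototype of the ETD coefficients whose accurate evaluation, for eigenvalues near zero, is the computational issue discussed next.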
When we discretize a stiff semi-linear PDE in space, the linear operator of the resulting system of coupled ODEs, which is represented by a diagonal matrix (when discretizing with Fourier spectral methods) or a non-diagonal matrix (when discretizing with finite difference formulas or Chebyshev polynomials [11, 25, 83, 84]), might have zero eigenvalues as well as eigenvalues of large and small magnitude. For eigenvalues approaching zero, the explicit formulas for the ETD coefficients give imprecise results because of the large amount of cancellation they involve, and this problem gets worse for the higher-order ETD methods. For eigenvalues equal to zero, the explicit formulas divide by a zero denominator, which renders them useless in this case. To deal with the problem in the case where the linear operator is represented by a diagonal matrix, the authors of [19] used the Taylor series to evaluate the coefficients for small eigenvalues and used the explicit formulas of the ETD coefficients for large eigenvalues. However, this process cannot be applied when the linear operator is a non-diagonal matrix, because the eigenvalues of small and large magnitude cannot then be treated separately. It is therefore important to have an accurate numerical algorithm for evaluating the ETD coefficients.

We note that the problem of computing the exponential of large matrices has been of interest in numerical analysis for a long time [59]. Recently, **Moler** and **Van Loan**[60] updated their publication "Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later", in which they analyzed the efficiency of various algorithms and gave further developments in computing a matrix exponential. One example is the algorithm based on scaling and squaring, which has proved its efficiency in approximating a matrix exponential accurately [9].
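The cancellation just described is easy to exhibit for the lowest-order coefficient \(\varphi_{1}(z)=(e^{z}-1)/z\). In this sketch (an illustrative Python example with an assumed test value \(z=10^{-9}\), not thesis data), the direct formula loses about half its significant digits while a truncated Taylor series remains accurate, mirroring the diagonal-case remedy of [19].

```python
# Sketch illustrating the cancellation in the scalar ETD coefficient
# phi1(z) = (e^z - 1)/z for small z (assumed value of z, not thesis data).
import math

def phi1_direct(z):
    return (math.exp(z) - 1.0) / z

def phi1_taylor(z, terms=10):
    # (e^z - 1)/z = 1 + z/2! + z^2/3! + ...
    return sum(z**k / math.factorial(k + 1) for k in range(terms))

z = 1e-9
ref = 1.0 + z / 2.0            # correct to double precision at this z
err_direct = abs(phi1_direct(z) - ref)
err_taylor = abs(phi1_taylor(z) - ref)

assert err_taylor < 1e-15      # Taylor sum retains full accuracy
assert err_direct > 1e-8       # direct formula has lost about 8 digits
```

For \(z=0\) the direct formula fails outright with a division by zero, whereas the Taylor sum returns the correct limit \(\varphi_{1}(0)=1\).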
Additionally, algorithms for computing non-diagonal matrix functions efficiently have been developed by many authors [2, 8, 35, 37, 47, 53, 54, 56, 57, 67, 70, 76, 80, 81]. Recently, **Kassam** and **Trefethen**[44, 45] proposed a modification of the ETD **Runge-Kutta** schemes of **Cox** and **Matthews**[19], to ameliorate the numerical difficulties associated with these schemes. The key idea is to evaluate the ETD coefficients by means of contour integrals in the complex plane using the well-known **Cauchy Integral Formula**[55]. Further discussion of this issue is detailed in SS4.

Exponential Time Differencing methods have extensive applications in solving stiff systems [15, 39]. For example, in the field of chemical kinetics, the author of [81] conducted numerical comparisons from which he deduced that explicit exponential integrators are highly competitive with standard integrators. The authors of [22, 23] indicated that higher-order ETD-based schemes can be several orders of magnitude faster than low-order semi-implicit methods in some simulations of micro-structure evolution (a core component of phase field modeling) in two and three dimensions. Moreover, **Kassam** and **Trefethen**[44, 45] compared various fourth-order methods, including the ETD methods of [19], and their results revealed that the best for solving various one-dimensional diffusion-type problems is the ETD4RK method of [19]. However, more recently **Krogstad**[49] presented a fourth-order ETDRK4-B method with better accuracy than the ETD4RK method of [19] and illustrated its efficiency in solving several semi-discretized PDEs, such as the Kuramoto-Sivashinsky (K-S) equation [41]. A recent report [57] also showed that the ETD type of exponential integrators surpasses integrators of Lawson type [52] in solving parabolic semi-linear problems, such as the K-S and the nonlinear Schrodinger (NLS) equations [77].
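The contour-integral idea can be sketched for the scalar coefficient \(\varphi_{1}(z)=(e^{z}-1)/z\). By the Cauchy Integral Formula, averaging \(\varphi_{1}\) over equally spaced points on a circle centred at \(z\) recovers \(\varphi_{1}(z)\), with the trapezoidal rule converging exponentially for analytic integrands; the formula is then only ever evaluated at points well away from the origin. The radius and number of quadrature points below are illustrative assumptions, not the parameter choices of [44, 45].

```python
# Sketch of the contour-integral evaluation of phi1(z) = (e^z - 1)/z
# (assumed contour parameters, not those of Kassam & Trefethen).
import numpy as np

def phi1_contour(z, M=32, r=1.0):
    """phi1(z) via the Cauchy integral formula, approximated by the
    trapezoidal rule on M points of the circle |w - z| = r."""
    theta = (np.arange(M) + 0.5) * 2 * np.pi / M   # keep w away from 0
    w = z + r * np.exp(1j * theta)
    return np.mean((np.exp(w) - 1.0) / w).real

# Accurate both at z = 0 (where the direct formula divides by zero) ...
assert abs(phi1_contour(0.0) - 1.0) < 1e-12
# ... and at moderate z, where the direct formula is fine anyway.
assert abs(phi1_contour(-2.0) - (np.exp(-2.0) - 1.0) / (-2.0)) < 1e-12
```

Because the average treats every eigenvalue through the same well-conditioned quadrature, the idea carries over to matrix arguments, where small and large eigenvalues cannot be separated.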
Under certain circumstances, **Berland** and **Skafhestad**[7], solving the NLS equation, found that the performance of a fourth-order Lawson integrating factor method was demonstrably poorer than that of the fourth-order ETD4RK method of [19]. Related work on numerical simulations of stiff problems has made extensive use of the ETD methods [8, 37, 46]. The explicit ETD methods have proved their efficiency in numerous applications, for example in computational electrodynamics [71], in reaction kinetics [61] and in solving the incompressible magnetohydrodynamics equations [53]. This further motivates our theoretical and numerical investigation of the various properties of the ETD schemes, carried out alongside computational studies of real application problems.

### 1.2 Layout Of Thesis

The main objective of this thesis is to present the **Exponential Time Differencing** (ETD) schemes as a viable alternative to classical integrator methods for solving stiff semi-linear PDEs. In semi-linear PDEs, the linear part contains higher-order spatial derivatives than those in the nonlinear part. We place emphasis on the stability, accuracy, efficiency and reliability of these numerical integrators.

The purpose of chapter 2 is to present the "**Method of Lines**" procedure for solving initial boundary value problems. This procedure starts by discretizing the spatial derivatives in the PDE with algebraic approximations. We include two spatial derivative approximation techniques: **Finite Difference Formulas** and **Spectral Methods**. We show through examples how to formulate the resulting semi-discrete problem, which is a stiff system of coupled ODEs with time as the only independent variable. The next step in the procedure is to select an accurate and fast numerical method and apply it to these initial value ODEs to compute an approximate numerical solution to the PDE. Hence, we consider in chapter 3 the ETD schemes of arbitrary order as time-discretization methods.
We give in detail the derivation of these methods following the approach in [9, 17, 19, 63], and present the ETD-RK type constructed in [19]. In addition, we examine analytically the methods' stability properties, to demonstrate the advantage of these methods in overcoming the stability constraints that are imposed on any conventional explicit method applied to a stiff system of ODEs. The approach is to compute the boundaries of the stability regions in two dimensions for a general test problem, where the stiffness parameter is negative and purely real.

In chapter 4, we go through the difficulties occurring in the computation of the ETD coefficients (as mentioned in the introduction, the evaluation of the coefficients for eigenvalues approaching zero suffers from numerical rounding errors due to the large amount of cancellation in the explicit formulas). We conduct comparison experiments on various algorithms and analyze their performance and computational cost, seeking an accurate evaluation of the coefficients and an efficient implementation of the ETD methods. The algorithms included are the Taylor series, the Cauchy integral formula, the Scaling and Squaring algorithm, the Composite Matrix algorithm and, for non-diagonal matrix cases, the Matrix Decomposition algorithm. The matrices considered are the second-order centered difference differentiation matrices for the first and second derivatives and the Chebyshev differentiation matrix for the second derivative, chosen to show that the algorithms' efficiency is by no means restricted to any special structure of certain matrices. Some of the results developed in this chapter have been published in the article [5] (in press).

In chapter 5, we demonstrate the effectiveness of the ETD methods for integrating real application problems. For the simulation tests, we consider three one-space-dimensional problems with periodic boundary conditions.
We apply Fourier spectral approximation for the spatial discretization, and employ first-, second- and fourth-order ETD methods, the first-order Implicit-Explicit (IMEX) method and first-, second- and fourth-order Integrating Factor (IF) methods for the time discretization. The first two problems considered are the time-dependent scalar **Kuramoto-Sivashinsky (K-S)** equation and the nonlinear **Schrodinger (NLS)** equation. In these two equations, the linear terms are primarily responsible for stiffness. The stiffness in the K-S equation is due to the strong dissipation of high wave number modes on a time scale much shorter than that typical of the nonlinear term, whereas the stiffness in the NLS equation is due to the rapid oscillations of high wave number modes. The third model considered is the nonlinear **Thin Film** equation. Solving this equation is a more challenging task since the nonlinear terms are the ones responsible for stiffness. To facilitate numerical studies of the thin film equation, we consider a perturbation to the uniform solution of the equation and obtain, after a few algebraic manipulations, a splitting into linear and nonlinear parts. The stiffness in the problem is again due to the strong dissipation.

The main factors used to differentiate between the methods are their stability, accuracy, start-up overhead cost and the CPU time they consume. To address the question of stability and accuracy we perform a series of runs with different choices of final time, computed, for all methods, with various time step sizes. The time step values are selected to ensure that all methods achieve stable, accurate results. We measure the accuracy in terms of the relative error evaluated in the integrated error norm. Then, we turn our attention to the accuracy of the methods as a function of CPU time. All the calculations presented in this chapter are performed using Matlab codes.
We evaluate the coefficients of the ETD methods once, at the beginning of the integration, for each value of the time step size, using the 'Cauchy integral' approach proposed by **Kassam** and **Trefethen**[44, 45]. Finally, in chapter 6, we conclude with a brief discussion of the work carried out and the main results drawn from this research, reiterate the main conclusions, and outline a number of possible extensions of this work for future studies.

## Chapter 2 Spatial Discretization Methods

### Outline of Chapter

Our physical world is most generally described in scientific terms with respect to three space dimensions and time. Time-dependent partial differential equations (PDEs) provide a mathematical description for a large range of physical space-time problems, and are very widely used in applied mathematics. A general numerical procedure for solving initial boundary value problems is the "**Method of Lines**". This procedure starts by discretizing the spatial derivatives in the PDE with algebraic approximations. The resulting semi-discrete problem, which is a system of coupled ordinary differential equations (ODEs) with time as the only independent variable, must then be integrated. The method of lines is an efficient tool that allows standard (accurate) general methods developed for the numerical integration of ODEs to be used. The purpose of this chapter is to present two spatial derivative approximation techniques, **Finite Difference Formulas**[58, 83] and **Spectral Methods**[25, 83, 84], and to show, through examples, how to formulate the system of ODEs that approximates the original PDE.

### 2.1 Introduction

The field of partial differential equations (PDEs) is broad and varied, as is inevitable because of the great diversity of physical phenomena that these equations model.
Much of the variety is introduced by the fact that practical problems involve different geometric classifications (hyperbolic, elliptic, parabolic), multiple space dimensions, systems of PDEs, different types of boundary conditions, varying smoothness of the initial conditions, variable coefficients and, frequently, nonlinearity. A well-known numerical approach to solving a time-dependent PDE, whose solutions vary both in time and in space, is the **Method of Lines**[84]. In this method we first construct a semi-discrete approximation to the problem by setting up a regular grid in space1, i.e. the spatial independent variables that have boundary constraints are discretized. Thereby, we generate a coupled system of ordinary differential equations (ODEs) in the independent time variable \(t\), subject to the initial values. Secondly, we numerically approximate solutions to the original PDE by marching forward in time on this grid. We can then apply any existing, and generally well established, numerical method (such as the Exponential Time Differencing methods, see SS3 for more details) to these initial value ODEs to compute an approximate numerical solution to the PDE.

Footnote 1: When solving a one-dimensional time-dependent PDE, we assume (throughout the thesis) a fixed space step \(h>0\) and a fixed time step \(\Delta t>0\) for discretizing the spatial part \(x\) and temporal part \(t\) respectively. Thereby, we are defining the points \((x_{n},t_{j})\), for any integers \(n\) and \(j\), in a two-dimensional regular grid; formally, the subset \(h\mathbb{Z}\times\Delta t\mathbb{Z}\) of \(\mathbb{R}^{2}\).

The idea of semi-discretization focuses attention on spatial difference operators as approximations of spatial differential operators. Two broadly applicable spatial derivative approximation techniques are **Finite Difference Formulas**[58, 83] and **Spectral Methods**[25, 83, 84].
The key factors in selecting among these techniques are the nature of the grid, the complexity of the domain and the required level of accuracy of differentiation. These techniques are discussed in SS2.2 and SS2.3 respectively.

### 2.2 Finite Difference Formulas

The purpose of discretizing time-dependent PDEs is to obtain a problem that can be solved by a finite procedure. The simplest such finite procedure is the Finite Difference Formula (FDF). A FDF is a fixed formula that approximates a continuous function by a function of a finite number of grid values; thereby, we obtain a finite system of equations to be solved. In this section we describe FDFs as discrete approximations to the spatial derivatives of a PDE. Given a function on a set of uniform grid points, Finite Difference Approximations (FDAs) approximate the derivative of the function by the derivative of local interpolators on the grid [84].

#### Finite Difference Approximation

Suppose that a function \(f(x)\), defined on an interval \(0<x<L\) that is divided into \(q\) subintervals, has known values at a finite number of points on a uniform mesh of size \(h=L/q\). FDAs approximate the first and second numerical derivatives \(df(x)/dx\) and \(d^{2}f(x)/dx^{2}\) of \(f(x)\), for example, as follows:
\[\frac{df(x)}{dx}\bigg{|}_{x=x_{n}}=\frac{f(x_{n+1})-f(x_{n})}{h}+O(h),\]
and
\[\frac{d^{2}f(x)}{dx^{2}}\bigg{|}_{x=x_{n}}=\frac{f(x_{n})-2f(x_{n+1})+f(x_{n+2})}{h^{2}}+O(h).\]
The two approximations above are derived using the Taylor series and are called **Forward Differentiation**. Note that the first derivative \(df(x)/dx\) is obtained using the values of \(f(x)\) at the points \(x_{n}\) and \(x_{n+1}\), and the second derivative uses the points \(x_{n}\), \(x_{n+1}\) and \(x_{n+2}\). These approximations have a truncation error (obtained from truncating the Taylor series) of \(O(h)\), that is local to the interval enclosing the sampling points.
For sufficiently small \(h\), the errors are then proportional to \(h\), and the approximations are **first-order** in \(h\). When using finite differences, it is important to keep in mind that there are several sources of error: truncation error (introduced by truncating the Taylor series approximation), roundoff error and condition error. Roundoff error comes from roundoff in the arithmetic computations required. Condition error comes from magnification of any errors in the function values; it arises typically from the division by a power of the step size, and so grows with decreasing step size. This means that in practice, even though the truncation error approaches zero as \(h\) does, the actual error starts growing beyond some point.

To obtain higher-order approximations to the derivative, one invokes further function values away from the point of interest. Thus, the **second-order forward finite difference approximation** for \(df(x)/dx\) is
\[\left.\frac{df(x)}{dx}\right|_{x=x_{n}}=\frac{-3f(x_{n})+4f(x_{n+1})-f(x_{n+2})}{2h}+O(h^{2}),\]
and for \(d^{2}f(x)/dx^{2}\) is
\[\left.\frac{d^{2}f(x)}{dx^{2}}\right|_{x=x_{n}}=\frac{2f(x_{n})-5f(x_{n+1})+4f(x_{n+2})-f(x_{n+3})}{h^{2}}+O(h^{2}).\]
The simplest finite difference approximations are centered and symmetrical, i.e. they use values of the function at points equally spaced on either side, and they always have even order of accuracy.
Using the Taylor expansions for \(f(x_{n+1})\) and \(f(x_{n-1})\), the **second-order centered finite difference approximation** for \(df(x)/dx\) is
\[\left.\frac{df(x)}{dx}\right|_{x=x_{n}}=\frac{f(x_{n+1})-f(x_{n-1})}{2h}+O(h^{2}),\]
and for \(d^{2}f(x)/dx^{2}\) is
\[\left.\frac{d^{2}f(x)}{dx^{2}}\right|_{x=x_{n}}=\frac{f(x_{n+1})-2f(x_{n})+f(x_{n-1})}{h^{2}}+O(h^{2}).\]
Note that centered finite difference approximations cannot, or should not, be used in some cases:

* If we are near the boundary, required values of \(f(x)\) may not be available.
* For certain problems, stability is improved with one-sided forward finite differences, and hence one-sided forward finite difference approximations are sometimes used instead.

In general, formulas for any given derivative of any chosen order can be derived from Taylor expansions as long as a sufficient number of sample points is used. However, these derivations become cumbersome beyond the simple examples shown above. We refer the reader to the book by **Fornberg**[25], where centered and one-sided finite difference formulas for approximating derivatives up to the fourth order, for equi-spaced grids, with orders of accuracy up to the eighth, are readily available in tables.
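The stated orders of accuracy can be checked numerically. In this sketch (an illustrative Python example with an assumed test function \(f=\sin\), not from the thesis), halving \(h\) should cut the error of each second-order centered approximation by about a factor of 4.

```python
# Sketch (assumed test function): verify the O(h^2) convergence of the
# centered finite difference approximations above at x = 1 for f = sin.
import math

def centered_d1(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

def centered_d2(f, x, h):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

f, x = math.sin, 1.0               # f'(1) = cos(1), f''(1) = -sin(1)

e_h  = abs(centered_d1(f, x, 0.1)  - math.cos(x))
e_h2 = abs(centered_d1(f, x, 0.05) - math.cos(x))
assert 3.5 < e_h / e_h2 < 4.5      # error ratio ~4 confirms O(h^2)

e_h  = abs(centered_d2(f, x, 0.1)  + math.sin(x))
e_h2 = abs(centered_d2(f, x, 0.05) + math.sin(x))
assert 3.5 < e_h / e_h2 < 4.5
```

Shrinking \(h\) much further would eventually expose the condition error mentioned earlier, since the second-derivative formula divides roundoff-contaminated differences by \(h^{2}\).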
#### An Example As a basic illustrative example of a PDE, we consider the linear heat (diffusion) equation \[\frac{\partial u(x,t)}{\partial t}=\nu\frac{\partial^{2}u(x,t)}{\partial x^{2}},\hskip 14.226378ptt_{0}\leq t\leq T,\;x_{0}\leq x\leq x_{q}, \tag{2.1}\]where * \(u(x,t)\) is the dependent variable, * \(t\) and \(x\) are the independent variables representing time and one-dimensional space respectively, * \(\nu\) is a real positive constant, subject to an initial condition at \(t_{0}=0\) \[u(x,t=0)=u^{0}(x),\] and two boundary conditions, corresponding to boundaries of a physical system (there are other possibilities) \[u(x=x_{0},t)=f(t),\ u(x=x_{q},t)=g(t),\] where \(f(t)\) and \(g(t)\) are given boundary values of \(u\) for all \(t\). To illustrate the method of lines procedure to solve the heat equation (2.1), suppose that \(u(x,t)\) is discretized in space with \(q+1\) points, of which \(q-1\) are interior points, on a uniform grid with step size \(h\) as follows \[u(x_{n},t)\approx u_{n}(t),\ \ \ \ 0\leq n\leq q,\] where the index \(n\) designates a position along the grid in \(x\). To approximate the spatial derivative \(\partial^{2}u/\partial x^{2}\) in equation (2.1), we use for example, the second-order centered finite difference approximation \[\left.\frac{\partial^{2}u(x,t)}{\partial x^{2}}\right|_{x=x_{n}}\approx\frac{ u_{n+1}(t)-2u_{n}(t)+u_{n-1}(t)}{h^{2}}+O(h^{2}). \tag{2.2}\] Substituting equation (2.2) into (2.1), gives a system of \(q-1\) approximating ODEs \[\begin{split} u_{0}(t)&=f(t),\\ du_{1}(t)/dt&=\nu(u_{2}(t)-2u_{1}(t)+u_{0}(t))/h^{2 },\\ du_{2}(t)/dt&=\nu(u_{3}(t)-2u_{2}(t)+u_{1}(t))/h^{2 },\\ \vdots&\\ du_{q-1}(t)/dt&=\nu(u_{q}(t)-2u_{q-1}(t)+u_{q-2}(t))/h^ {2},\\ u_{q}(t)&=g(t),\end{split} \tag{2.3}\] subject to the initial conditions \[u_{n}(t=0)=u^{0}(x_{n}),\ \ \ \ 0\leq n\leq q. \tag{2.4}\]To complete the solution of the original PDE (2.1), we compute a solution to the approximation system of ODEs (2.3). 
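The whole procedure can be sketched in a few lines of code. The following illustrative Python example (not thesis code; the values \(\nu=1\), \(f=g=0\), \(u^{0}(x)=\sin(\pi x)\) and the explicit Euler time integrator are assumptions) assembles the right-hand sides of the system (2.3) and marches them forward, recovering the exact solution \(e^{-\nu\pi^{2}t}\sin(\pi x)\) to a few parts in a thousand.

```python
# Sketch (assumed data, not thesis code): method of lines for the heat
# equation u_t = nu*u_xx on (0,1) with u = 0 at both boundaries, using
# second-order centered differences in space and explicit Euler in time.
import numpy as np

def heat_mol(nu, q, dt, t_end):
    h = 1.0 / q
    x = np.linspace(0.0, 1.0, q + 1)[1:-1]   # the q-1 interior points
    u = np.sin(np.pi * x)                    # initial condition u^0(x)
    for _ in range(int(round(t_end / dt))):
        # du_n/dt = nu*(u_{n+1} - 2u_n + u_{n-1})/h^2, as in system (2.3)
        lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)
        lap[0] = u[1] - 2 * u[0]             # boundary value u_0(t) = 0
        lap[-1] = u[-2] - 2 * u[-1]          # boundary value u_q(t) = 0
        u = u + dt * nu * lap / h**2
    return x, u

nu, q = 1.0, 32
dt = 0.25 / (nu * q**2)                      # respect dt <= h^2/(2 nu)
x, u = heat_mol(nu, q, dt, 0.1)
exact = np.exp(-nu * np.pi**2 * 0.1) * np.sin(np.pi * x)
assert np.max(np.abs(u - exact)) < 1e-2
```

Note the step-size restriction `dt <= h**2/(2*nu)` imposed by the explicit integrator: this semi-discrete system is exactly the kind of stiff ODE system that motivates the ETD methods of chapter 3.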
The system (2.3) and the initial conditions (2.4) now constitute the complete method of lines approximation of equation (2.1). #### Matrix Form Since differentiation and finite difference approximation are linear operations, an alternative way of representing an approximation to the differential operator is with a matrix. This matrix is referred to as a differentiation matrix. Using second-order centered finite difference approximations, for example, to approximate the spatial derivatives on a uniform grid of \(q+1\) points reduces the problem to \(q-1\) coupled ODEs. Hence, for any given non-periodic boundary conditions, the differentiation matrix representing the second derivative, for example, is a \((q-1)\times(q-1)\)**tridiagonal** matrix of the form \[M_{2}=\frac{1}{h^{2}}\left(\begin{array}{cccccccc}-2&1&0&0&0&\ldots&0&0\\ 1&-2&1&0&0&\ldots&0&0\\ 0&1&-2&1&0&\ldots&0&0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ldots&\vdots&\vdots\\ 0&0&0&0&0&\ldots&1&-2\end{array}\right).\] For periodic boundary conditions, \(M_{2}\) is of the same form but has a \(1\) in the top right and bottom left corners. Similarly, for the fourth-order centered approximation, the resulting differentiation matrix representing the second derivative is **pentadiagonal**. As a result, the order of the approximation determines the bandwidth of the matrix. ### 2.3 Spectral Methods In spectral methods, instead of representing a function by its values at grid points, the function is written as an expansion in terms of smooth basis functions \(\phi_{k}(x)\) as \[f(x)\approx\sum_{k=1}^{N}a_{k}\phi_{k}(x), \tag{2.5}\] for some integer \(N\). Spectral methods are global approximations, since the values of the spectral coefficients \(a_{k}\) influence the function and its derivatives for all \(x\), whereas finite difference methods are local approximations, since the value \(f(x_{n})\) of \(f(x)\) at a grid point \(x_{n}\) only has influence near that point.
The chief advantage of spectral methods lies in their remarkable accuracy. For an analytic function, spectral approximations reproduce the function and its derivatives with exponential accuracy. It was shown [79] that when differentiating analytic functions on regular grids using spectral methods, the errors decay to zero at an exponential rate as the grid is refined, rather than at the (much slower) polynomial rates obtained with finite difference formulas. This behavior is essentially due to the corresponding exponential decay of the spectral coefficients as the number of grid points is increased. The basis functions \(\phi_{k}(x),\;k=1,\ldots,N\) used in the expansion (2.5) must satisfy three criteria [25]: 1. The coefficients \(a_{k}\) must decrease rapidly with \(k\), which ensures the rapid convergence of the approximation (2.5) of \(f(x)\). 2. It should be easy to write the first derivative \(df(x)/dx\) as an expansion in the same basis functions with coefficients \(b_{k}\) such that \[\frac{d}{dx}\left(\sum_{k=1}^{N}a_{k}\phi_{k}(x)\right)=\sum_{k=1}^{N}b_{k}\phi_{k}(x),\] for given coefficients \(a_{k}\). 3. One must be able to convert between the coefficients \(a_{k}\) in 'spectral' space and the function \(f(x)\) in 'physical' space. Each spectral method is usually named after the function class chosen for the basis functions. For non-periodic problems, the preferred choice is the orthogonal2 Chebyshev polynomials. Footnote 2: The set of functions \(\{\phi_{0},\phi_{1},\ldots\phi_{n}\}\), defined on an interval \([a,b]\), is said to be an **orthogonal set**, with respect to the weight function \(w\), if \[\int_{a}^{b}w(x)\phi_{j}(x)\phi_{k}(x)\;dx=\left\{\begin{array}{ll}0&\mbox{ for }j\neq k,\\ \alpha_{k}>0&\mbox{ for }j=k.\end{array}\right.\] for some constant \(\alpha_{k}\). In addition, if \(\alpha_{k}=1\) for each \(k=0,1,\ldots,n\), the set is said to be **orthonormal**.
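The orthogonality condition in the footnote can be checked numerically for the trigonometric basis used in the following section (a sketch with the assumed values \(w(x)=1\) and \(L=1\)):

```python
import numpy as np

# Numerical check of the footnote's orthogonality condition for the
# trigonometric basis on [0, 2L], with weight w(x) = 1 and L = 1 (assumed).
L = 1.0
x = np.linspace(0.0, 2.0 * L, 20001)

def inner(u, v):
    # Trapezoid-rule approximation of the integral of u(x) v(x) over [0, 2L].
    p = u * v
    return 0.5 * (p[:-1] + p[1:]).sum() * (x[1] - x[0])

c2 = np.cos(2.0 * np.pi * x / L)
c3 = np.cos(3.0 * np.pi * x / L)
s2 = np.sin(2.0 * np.pi * x / L)
# Distinct basis functions are orthogonal; each cos(n pi x / L) has norm L.
off_diag = inner(c2, c3)
diag = inner(c2, c2)
mixed = inner(c2, s2)
```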
For periodic problems, the natural choice of basis functions is the _trigonometric functions_, and the function is ideally represented by its **Fourier series**. Here the method is referred to as the **Fourier Spectral** method [83, 84]. #### Fourier Spectral Methods Fourier analysis occurs in the modeling of time-dependent phenomena that are exactly or approximately periodic. Examples include the digital processing of information such as speech; the analysis of natural phenomena such as earthquakes; the study of vibrations of spherical, circular or rectangular structures; and the processing of pictures. In a typical case, Fourier spectral methods write the solution to the PDE as its Fourier series. A Fourier series decomposes a periodic real-valued function of a real argument into a sum of simple oscillating trigonometric functions (_sines_ and _cosines_) that can be recombined to obtain the original function. Substituting this series into the PDE gives a system of ODEs for the time-dependent coefficients of the trigonometric terms in the series (this series is usually written in complex exponential form); then we choose a time-stepping method to solve those ODEs. 1. **Fourier Series**. The Fourier series of a smooth and periodic real-valued function \(f(x)\in C[0,2L]\) with period \(2L\) is \[f(x)=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}(a_{n}\cos{(n\pi x/L)}+b_{n}\sin{(n\pi x/L)}).\] Since the basis functions \(\cos(n\pi x/L)\) and \(\sin(n\pi x/L)\) are orthogonal, i.e.
\[\int_{0}^{2L}\cos(n\pi x/L)\sin(m\pi x/L)\ dx = 0,\] \[\int_{0}^{2L}\cos(n\pi x/L)\cos(m\pi x/L)\ dx = L\delta_{mn},\] \[\int_{0}^{2L}\sin(n\pi x/L)\sin(m\pi x/L)\ dx = L\delta_{mn},\] where \[\delta_{mn}=\left\{\begin{array}{ll}0&\quad\text{for }m\neq n,\\ 1&\quad\text{for }m=n,\end{array}\right.\] (for \(m=n=0\) the cosine integral equals \(2L\), which is why \(a_{0}\) appears with the factor \(1/2\)), the coefficients are given by \[a_{n}=\frac{1}{L}\int_{0}^{2L}f(x)\cos{(n\pi x/L)}\ dx,\quad\text{ for each }n=0,1,\ldots,\]and \[b_{n}=\frac{1}{L}\int_{0}^{2L}f(x)\sin\left(n\pi x/L\right)\,dx,\quad\mbox{ for each }n=1,2,\ldots.\] If \(f(x)\) is odd (\(f(-x)=-f(x)\)) then \(a_{n}=0\). Similarly, if \(f(x)\) is even (\(f(-x)=f(x)\)) then \(b_{n}=0\). 2. **Complex Form**. Fourier series can be expressed neatly in complex form as follows \[f(x)=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}\left[\frac{a_{n}}{2}(e^{in\pi x/L}+e^{-in\pi x/L})+\frac{b_{n}}{2i}(e^{in\pi x/L}-e^{-in\pi x/L})\right].\] If we define \[c_{0}=\frac{a_{0}}{2},\ c_{n}=\frac{a_{n}-ib_{n}}{2},\ c_{-n}=\frac{a_{n}+ib_{n}}{2},\] where \(c_{-n}\) is the complex conjugate of \(c_{n}\), i.e. \(c_{-n}=c_{n}^{*}\), then \(f(x)\) can be written as \[f(x)=\sum_{n=-\infty}^{\infty}c_{n}e^{in\pi x/L},\] (2.6) where the coefficients \(c_{n}\) can be determined from the formulas of \(a_{n}\) and \(b_{n}\) as \[c_{n}=\frac{1}{2L}\int_{0}^{2L}f(x)e^{-in\pi x/L}\ dx.\] (2.7) In many applications, particularly in the analysis of real data, the function \(f(x)\) to be approximated is known only on a discrete set of "sampling points" of \(x\). Hence, the integral (2.7) cannot be evaluated in a closed form and Fourier analysis cannot be applied directly. It then becomes necessary to replace continuous Fourier analysis by a discrete version of it. 3. **Discrete Fourier Transform**.
The linear _discrete Fourier transform_[84] of a periodic (discrete) sequence of complex values \(u_{0},\ldots,u_{N_{\mathcal{F}}-1}\) with period \(N_{\mathcal{F}}\), is a sequence of periodic complex values \(\hat{u}_{0},\ldots,\hat{u}_{N_{\mathcal{F}}-1}\) defined by \[\hat{u}_{k}=\frac{1}{N_{\mathcal{F}}}\sum_{j=0}^{N_{\mathcal{F}}-1}u_{j}e^{-2 \pi ijk/N_{\mathcal{F}}},\quad\ k=0,1,\ldots,N_{\mathcal{F}}-1.\] (2.8) The linear inverse transformation is \[u_{j}=\sum_{k=0}^{N_{\mathcal{F}}-1}\hat{u}_{k}e^{2\pi ijk/N_{\mathcal{F}}}, \quad\ j=0,1,\ldots,N_{\mathcal{F}}-1.\] (2.9) The most obvious application of discrete Fourier analysis consists in the numerical calculation of Fourier coefficients. Suppose we want to approximate a real-valued periodic function \(f(x)\), defined on the interval \([0,2L]\) that is sampled with an even number \(N_{\mathcal{F}}\) of grid points \[x_{j}=jh,\ h=2L/N_{\mathcal{F}},\ j=0,1,\ldots,N_{\mathcal{F}}-1,\] by its Fourier series (2.6). First we compute approximate values of the Fourier coefficients \(c_{n}\) (2.7) by the discrete Fourier transform (2.8) \[\hat{c}_{k}\approx\frac{1}{N_{\mathcal{F}}}\sum_{j=0}^{N_{\mathcal{F}}-1}f(x_{j })e^{-2\pi ijk/N_{\mathcal{F}}},\] (2.10) and then truncate the Fourier series (2.6) formed with these approximate coefficients. Because the discrete Fourier transform and its inverse exhibit periodicity with period \(N_{\mathcal{F}}\), i.e. \(\hat{u}_{k+N_{\mathcal{F}}}=\hat{u}_{k}\) (this property results from the periodic nature of \(e^{2\pi ijk/N_{\mathcal{F}}}\)), it makes no sense to use more than \(N_{\mathcal{F}}\) terms in the series, and it suffices to calculate one full period. 
Thus, the range of Fourier modes distinguishable on the grid, discretized with an even number \(N_{\mathcal{F}}\) of grid points, is \(k=-N_{\mathcal{F}}/2+1,\ldots,N_{\mathcal{F}}/2\) and the Fourier series (2.6) formed with the approximate coefficients (2.10) is \[\hat{f}(x)\approx\sum_{k=-N_{\mathcal{F}}/2+1}^{N_{\mathcal{F}}/2}\hat{c}_{k}e ^{ik\pi x/L}.\] (2.11) The function \(\hat{f}(x)\) not only approximates, but actually interpolates \(f(x)\) at the sampling grid points \(x_{j}\). The approximation of large amounts of equally spaced data by trigonometric polynomials can produce very accurate results. Fourier's theorem states that at a point where the function \(f(x)\) is continuous, the Fourier series converges to the value of \(f(x)\), with a speed related to the smoothness of the function. The smoother the function \(f(x)\) is, the more rapidly the series converges. The subsequent rapid decay of the coefficients implies that the Fourier series can be truncated after a few terms. For discontinuous functions with bounded variation, at the point of discontinuity the Fourier series converges to the average of the values on either side of the discontinuity, and the rate of the coefficients' decay is \(O(1/n)\); and if \(f(x)\) is continuous but the first derivative \(df(x)/dx\) is discontinuous, then the rate is \(O(1/n^{2})\) and so on. 4. **Matrix Form**. 
In matrix form, the discrete Fourier transform (2.8) can be written as \[\hat{u}_{k}=\frac{1}{N_{\mathcal{F}}}M_{kj}u_{j},\hskip 14.226378ptk,j=0,1,\ldots,N_{\mathcal{F}}-1,\] (2.12) where \(M_{kj}=\omega^{kj}\) and \(\omega=e^{-2\pi i/N_{\mathcal{F}}}\) is an \(N_{\mathcal{F}}\)th root of unity, so \[M=\left(\begin{array}{cccccc}1&1&1&1&\ldots&1\\ 1&\omega&\omega^{2}&\omega^{3}&\ldots&\omega^{N_{\mathcal{F}}-1}\\ 1&\omega^{2}&\omega^{4}&\omega^{6}&\ldots&\omega^{2(N_{\mathcal{F}}-1)}\\ \vdots&\vdots&\vdots&\vdots&\ldots&\vdots\\ 1&\omega^{N_{\mathcal{F}}-1}&\omega^{2(N_{\mathcal{F}}-1)}&\omega^{3(N_{\mathcal{F}}-1)}&\ldots&\omega^{(N_{\mathcal{F}}-1)(N_{\mathcal{F}}-1)}\\ \end{array}\right).\] Similarly, the inverse discrete Fourier transform (2.9) has the form \[u_{j}=M^{*}_{kj}\hat{u}_{k},\hskip 14.226378ptk,j=0,1,\ldots,N_{\mathcal{F}}-1,\] (2.13) where \(M^{*}_{kj}=(\omega^{*})^{kj}\) and \(\omega^{*}\) is the complex conjugate of \(\omega\). In the early years, the impact of discrete Fourier analysis was limited by the very large number of arithmetic operations required by the theory in its naive form (the number of multiplications required is \(O(N_{\mathcal{F}}^{2})\)). This changed in 1965 with the invention of the algorithm of **Cooley** and **Tukey**[18], which has become known as the "Fast Fourier Transform" (FFT). The FFT algorithm reduces the computational work required to carry out a discrete Fourier transform such as (2.12) or (2.13): the computational time is reduced from \(O(N_{\mathcal{F}}^{2})\) to \(O(N_{\mathcal{F}}\log N_{\mathcal{F}})\). This algorithm is most useful in situations where the number of grid points can be chosen to be a highly composite number. Since 1965, use of the FFT has expanded and led to a revolution in the use of trigonometric polynomial approximations.
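A sketch of the matrix form (2.12)-(2.13), checked against numpy's FFT. Note that `np.fft.fft` leaves the forward transform unnormalized, so we divide by \(N_{\mathcal{F}}\) to match the \(1/N_{\mathcal{F}}\) convention of (2.8):

```python
import numpy as np

# The DFT (2.8) as the matrix-vector product (2.12), checked against the FFT.
N = 8
idx = np.arange(N)
omega = np.exp(-2.0j * np.pi / N)          # an N-th root of unity
M = omega ** np.outer(idx, idx)            # M_{kj} = omega^{k j}
u = np.cos(2.0 * np.pi * idx / N) + 0.5 * idx   # arbitrary sample values
u_hat = (M @ u) / N                        # forward transform (2.12)
u_back = np.conj(M) @ u_hat                # inverse transform (2.13)
u_hat_fft = np.fft.fft(u) / N              # same result via the FFT
```

The inverse recovers `u` because \(M^{*}M=N_{\mathcal{F}}I\), which is the orthogonality of the exponential basis in discrete form.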
#### Numerical Derivatives To apply spectral methods to a partial differential equation we need to evaluate derivatives of functions. Suppose that we have a periodic real-valued function \(f(x)\) with period \(2L\), defined on the interval \([0,2L]\) that is discretized with an even number \(N_{\mathcal{F}}\) of grid points, so that the grid size \(h=2L/N_{\mathcal{F}}\). The complex form of the Fourier series representation of \(f(x)\) is \[\hat{f}(x)\approx\sum_{k=-N_{\mathcal{F}}/2+1}^{N_{\mathcal{F}}/2}\hat{c}_{k}e^{ik\pi x/L}. \tag{2.14}\]At \(k=N_{\mathcal{F}}/2\), the above series (2.14) gives a term \(\hat{c}_{N_{\mathcal{F}}/2}e^{iN_{\mathcal{F}}\pi x/(2L)}\), which alternates between \(\pm\hat{c}_{N_{\mathcal{F}}/2}\) at the grid points \(x_{j}=jh,\ j=0,1,\ldots,N_{\mathcal{F}}-1\); since this sawtooth mode has no well-defined slope on the grid, its derivative is set to zero at the grid points. The numerical derivatives of the function \(f(x)\) can be expressed as a matrix multiplication. For the first derivative, we multiply the Fourier coefficients (2.10) by the corresponding differentiation matrix \[D_{1}=\mathrm{Diag}\left(0,1,2,3,\ldots,\frac{N_{\mathcal{F}}}{2}-1,0,-\left(\frac{N_{\mathcal{F}}}{2}-1\right),\ldots,-3,-2,-1\right)\frac{i\pi}{L},\] for an even number \(N_{\mathcal{F}}\) of grid points. This matrix has non-zero elements only on the diagonal. For an odd number \(N_{\mathcal{F}}\) of grid points, the differentiation matrix corresponding to the first derivative is diagonal with elements \[(0,1,2,\ldots,(N_{\mathcal{F}}-1)/2,-(N_{\mathcal{F}}-1)/2,\ldots,-1)i\pi/L.\] Then, we compute an inverse discrete Fourier transform using (2.11) to return to the physical space and deduce the first derivative of \(f(x)\) on the grid.
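The first-derivative procedure just described can be sketched via the FFT, using the diagonal entries of \(D_{1}\) as the wavenumber vector. The setup (period \(2L=2\pi\), \(f(x)=\sin x\), so \(f'(x)=\cos x\) exactly) is an illustrative assumption:

```python
import numpy as np

# Spectral first derivative via the FFT: multiply the coefficients by the
# diagonal of D1 (even N, with the Nyquist mode k = N/2 zeroed), then
# transform back to physical space.
L = np.pi
N = 16
x = 2.0 * L * np.arange(N) / N
f = np.sin(x)
# Diagonal of D1: (0, 1, ..., N/2 - 1, 0, -(N/2 - 1), ..., -1) * i*pi/L,
# ordered to match numpy's FFT mode ordering.
k = np.concatenate((np.arange(N // 2), [0], np.arange(-N // 2 + 1, 0)))
fp = np.real(np.fft.ifft((1j * np.pi / L) * k * np.fft.fft(f)))
err = np.max(np.abs(fp - np.cos(x)))
```

Since \(\sin x\) contains only the modes \(k=\pm 1\), the spectral derivative is exact to rounding error here.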
Similarly, taking the second derivative corresponds to the multiplication of the Fourier coefficients (2.10) by the corresponding differentiation matrix \(D_{2}\) which is diagonal with elements \[-\left(0,1,4,\ldots,\left(\frac{N_{\mathcal{F}}}{2}-1\right)^{2},\left(\frac{N _{\mathcal{F}}}{2}\right)^{2},\left(\frac{N_{\mathcal{F}}}{2}-1\right)^{2}, \ldots,4,1\right)\frac{\pi^{2}}{L^{2}},\] for an even number \(N_{\mathcal{F}}\) of grid points. In general, in case of an even number \(N_{\mathcal{F}}\) of grid points, approximating the \(m\)th numerical derivatives of a grid function \(f(x)\) corresponds to the multiplication of the Fourier coefficients (2.10) by the corresponding differentiation matrix which is diagonal with elements (\((ik\pi/L)^{m}\)) for \[k=0,1,2,3,\ldots,\frac{N_{\mathcal{F}}}{2}-1,\frac{N_{\mathcal{F}}}{2},-\left( \frac{N_{\mathcal{F}}}{2}-1\right),\ldots,-3,-2,-1,\] with the exception that for odd derivatives we set the derivative of the highest mode \(k=N_{\mathcal{F}}/2\) to be zero. #### An Example Discrete Fourier transforms (2.10) are often used to solve partial differential equations. The exponential basis functions are eigenfunctions of differentiation, which means that this representation transforms any linear differential equation with constant coefficients into ordinary differential equations. One then uses the inverse DFT (2.11) to transform the result back into the ordinary spatial representation. Such an approach is called a spectral method. 
Consider again the linear diffusion equation \[\frac{\partial u(x,t)}{\partial t}=\nu\frac{\partial^{2}u(x,t)}{\partial x^{2}},\hskip 14.226378ptt_{0}\leq t\leq T,\ 0\leq x\leq 2L, \tag{2.15}\] subject to periodic boundary conditions and an initial condition at \(t_{0}=0\) \[u(x,t=0)=u_{0}(x).\] Suppose that the space interval \([0,2L]\) is discretized with an even number \(N_{\mathcal{F}}\) of grid points and the solution \(u(x,t)\) is represented by its Fourier series as follows \[u(x,t)=\frac{a_{0}(t)}{2}+\sum_{n=1}^{N_{\mathcal{F}}/2}a_{n}(t)\cos{(n\pi x/L )}+\sum_{n=1}^{N_{\mathcal{F}}/2-1}b_{n}(t)\sin{(n\pi x/L)}. \tag{2.16}\] Note that the Fourier series (2.16) satisfies the boundary conditions of the problem, i.e. \(u(0,t)=u(2L,t),\ u(x,t)=u(x+2L,t)\). Differentiating (2.16) with respect to \(t\) once, and with respect to \(x\) twice yields \[\frac{\partial u(x,t)}{\partial t}=\frac{1}{2}\frac{da_{0}(t)}{dt}+\sum_{n=1}^ {N_{\mathcal{F}}/2}\frac{da_{n}(t)}{dt}\cos{(n\pi x/L)}+\sum_{n=1}^{N_{ \mathcal{F}}/2-1}\frac{db_{n}(t)}{dt}\sin{(n\pi x/L)}, \tag{2.17}\] and \[\frac{\partial^{2}u(x,t)}{\partial x^{2}}=-\sum_{n=1}^{N_{\mathcal{F}}/2}a_{n} (t)(n\pi/L)^{2}\cos{(n\pi x/L)}-\sum_{n=1}^{N_{\mathcal{F}}/2-1}b_{n}(t)(n\pi /L)^{2}\sin{(n\pi x/L)}, \tag{2.18}\] respectively. Substituting (2.17) and (2.18) into the diffusion equation (2.15), and equating for the Fourier coefficients reduces the PDE to an uncoupled system of ODEs \[\frac{da_{0}(t)}{dt} = 0,\] \[\frac{da_{n}(t)}{dt} = -\nu(n\pi/L)^{2}a_{n}(t),\] \[\frac{db_{n}(t)}{dt} = -\nu(n\pi/L)^{2}b_{n}(t).\] This system can be solved analytically, i.e. \(a_{n}(t)=a_{n}(0)\exp(-\nu(n\pi/L)^{2}t)\), etc., where \(a_{n}(0)\) are the Fourier coefficients of the Fourier series representation of the initial condition \(u_{0}(x)\), and so no numerical solution is needed. 
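The analytic mode-by-mode decay \(a_{n}(t)=a_{n}(0)\exp(-\nu(n\pi/L)^{2}t)\) can be applied directly in Fourier space via the FFT. A sketch with the assumed data \(\nu=1\), \(L=\pi\), and an initial condition containing only the modes \(n=1\) and \(n=2\):

```python
import numpy as np

# Exact coefficient decay for the diffusion equation, applied in Fourier space.
nu, L, N = 1.0, np.pi, 32
x = 2.0 * L * np.arange(N) / N
u0 = np.sin(x) + 0.3 * np.cos(2.0 * x)     # assumed initial condition
k = np.fft.fftfreq(N, d=1.0 / N)           # integer wavenumber of each mode
t = 0.05
u_hat = np.fft.fft(u0) * np.exp(-nu * (k * np.pi / L) ** 2 * t)
u = np.real(np.fft.ifft(u_hat))
# Closed-form solution: each mode decays with its own rate.
exact = np.exp(-nu * t) * np.sin(x) + 0.3 * np.exp(-4.0 * nu * t) * np.cos(2.0 * x)
err = np.max(np.abs(u - exact))
```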
For nonlinear PDEs, the nonlinear terms are evaluated by transforming from spectral space to physical space to find the values of these terms at the grid points. Then one transforms back to Fourier space to work out derivatives. The resulting system of ODEs is coupled through the nonlinear terms, while the linear part is represented by a diagonal matrix in the Fourier basis. In this case, the system is not trivial to solve analytically and a numerical method is needed. ## Chapter 3 Exponential Time Differencing Schemes #### Outline of Chapter The basic idea of the method of lines is to replace the spatial derivatives in a partial differential equation (PDE) with algebraic approximations. This leaves a coupled system of ODEs with only time remaining as an independent variable, to which any well-established numerical method can be applied to compute an approximate numerical solution to the PDE. **Exponential Time Differencing** (ETD) schemes are time integration methods that can be efficiently combined with spatial approximations to provide accurate smooth solutions for stiff or highly oscillatory semi-linear PDEs. The work reported in this chapter gives the derivation of the explicit ETD schemes of arbitrary order, following the approach in [9, 17, 19, 63], and presents the explicit Runge-Kutta (ETD-RK) versions of these schemes constructed by **Cox** and **Matthews**[19]. In addition, the work contains an analytical examination of the methods' stability properties, which determines the range of time steps for which the methods are numerically stable. The approach computes the boundaries of the stability regions for a general test problem for the explicit ETD methods of multi-step or RK type up to fourth order. The stability regions are illustrated in two-dimensional plots for various negative, purely real stiffness parameters of the test problem. The plots demonstrate the ability of these methods to use large time-step sizes.
This gives them an advantage over ordinary explicit time-discretization methods, which, for reasons of stability, place severe restrictions on the time step size when solving stiff problems. ### 3.1 Introduction Many physical phenomena can be represented by partial differential equations (PDEs). When discretizing the spatial part of these equations (see SS2), one commonly obtains a stiff system of coupled ordinary differential equations (ODEs) in time \(t\). Stiff systems are routinely encountered in scientific applications and are characterized by having a large range of time scales. Often the large-scale solution sought varies much more slowly in time than small scales that decay or disperse rapidly, or have both features of rapid decay and rapid oscillation. In other words, stiff problems arise in areas where vastly different time scales all play a role in the overall solution of the PDE. Stiffness can also be inherent in the problem due to the wide range of eigenvalues (i.e., eigenvalues differing greatly in magnitude) of the differentiation matrix applied to discretize the spatial derivatives in a PDE (see SS2.2.3 and SS2.3.2). These eigenvalues spread out and become even larger as we increase the number of points with which we discretize the spatial operator. The stiffness problem is also exacerbated when a PDE has spatial derivatives of order higher than two. For such problems, numerical integrators require particular care to achieve an accurate solution. As mentioned in SS1, applying conventional explicit time stepping schemes to a stiff system requires the fewest computations per time step, but the stability requirement restricts the time step to be very small in order to resolve the transient (rapidly-varying) part of the solution.
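The time-step restriction just described can be seen on the standard scalar test problem \(du/dt=cu\) (an illustration, not from the text): forward Euler has amplification factor \(1+c\Delta t\), so for real negative \(c\) it is stable only when \(\Delta t\leq 2/|c|\):

```python
# Forward Euler on du/dt = c u with the illustrative stiff value c = -100:
# the method is stable only for dt <= 2/|c| = 0.02.
c = -100.0

def euler_amplitude(dt, steps=50):
    u = 1.0
    for _ in range(steps):
        u = u + dt * c * u        # forward Euler: u -> (1 + c*dt) u
    return abs(u)

decaying = euler_amplitude(0.015)    # dt below the limit: solution decays
blowing_up = euler_amplitude(0.025)  # dt above the limit: solution explodes
```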
Implicit time stepping schemes have much better stability properties than conventional explicit integrators, and allow significantly larger time steps without introducing instabilities. However, the number of computations required to solve a large nonlinear system of ODEs at each time step increases significantly. Numerous time discretization methods designed to handle stiff systems have been developed. One example is the family of **Exponential Time Differencing** (ETD) schemes. This class of schemes is especially suited to semi-linear problems, which can be split into a linear part, containing the stiffest part of the dynamics of the problem, and a nonlinear part, which varies more slowly than the linear part. These schemes have been rediscovered several times in various forms and under various names [15, 17, 37, 49, 53, 61, 63, 82]. An example is the **Exact Linear Part (ELP)** schemes that were derived in [9] for arbitrary order. However, the authors of [9] did not give explicit formulas for the methods' coefficients. In a subsequent paper, **Cox** and **Matthews**[19] gave an explicit derivation of the explicit ELP methods, for arbitrary order \(s\), with explicit formulas for the methods' coefficients, and referred to these methods as the Exponential Time Differencing (ETD) schemes (the term originally arose in the field of computational electrodynamics [40, 65, 71]). In addition, the authors of [19] constructed new explicit **Runge-Kutta** (ETD-RK) versions of these schemes up to fourth order. In SS3.2, we follow the approach in [9, 17, 19, 63] and present the algorithm derivation for the explicit ETD schemes.
In the first step, the ETD schemes recover the exact solution to the linear part, which (numerically) is the difficult part (stiff or oscillatory in nature) of the differential equation, in a similar way to the approach of the **Integrating Factor (IF)** schemes [7, 8, 11, 19, 44, 45, 52, 57, 84] (see SS3.2.1 for further details concerning the approach of the IF methods). The next step in the derivation is to integrate exactly an approximation of the nonlinear terms. We may approximate the nonlinear parts by some polynomial in time \(t\) that may be calculated using previous steps of the integration process, thus producing multi-step ETD methods (see SS3.2.2) or by RK-like stages, resulting in ETD schemes of Runge-Kutta type (see SS3.2.3). The coefficients of the ETD methods are the exponential and related functions of the linear operators. These coefficients can be evaluated once before the integration begins if a constant time step is used throughout the integration, see SS4 for further details. The convergence analysis for the explicit \(s\)-step exponential schemes was carried out in [7, 15, 57] for solving semi-linear equations. The analysis showed that the schemes achieve order of accuracy \(s\), for appropriate starting values at the \(n\)th and previous time steps. In addition, the authors of [37, 39] analyzed the convergence behavior of the explicit exponential Runge-Kutta methods for integrating semi-linear parabolic problems. They gave a new derivation of the classical order conditions and showed convergence for these methods up to order four. In SS3.3, we illustrate some key features of the explicit ETD schemes such as their stability properties. We follow the approach developed in [9, 19] for constructing stability regions of the ETD (in SS3.3.1) and the ETD-RK (in SS3.3.2) schemes of orders up to four. 
The stability regions are plotted in two dimensions, considering the case where the stiffness parameter in a general test problem is negative and purely real. We analyze these plots in order to show that these methods are capable of avoiding the severe ceiling imposed on the time step size when solving stiff problems with conventional explicit time discretization methods. The overall results are summarized in SS3.4. ### 3.2 Algorithm Derivation We begin by briefly giving the main idea behind the Lawson **Integrating Factor** (IF) methods [52]; we then give, in detail, the algorithm derivation for the explicit ETD schemes [9, 17, 19, 63]. Consider stiff semi-linear PDEs that can be written in the form \[\frac{\partial u(x,t)}{\partial t}=\mathcal{L}u(x,t)+\mathcal{F}(u(x,t),t), \tag{3.1}\] where the linear operator \(\mathcal{L}\) contains higher-order spatial derivatives than the nonlinear operator \(\mathcal{F}\), and is mainly the term responsible for stiffness. For problems with spatially periodic boundary conditions, we use Fourier spectral methods [25, 83, 84] to discretize the spatial derivatives of (3.1) (see SS2 for more details), and hence obtain a stiff system of coupled ODEs in time \(t\) \[\frac{du(t)}{dt}=\mathbb{L}u(t)+F(u(t),t). \tag{3.2}\] The linear part \(\mathbb{L}\) of the system is represented by a diagonal matrix, and \(F\) represents the action of the nonlinear operator on \(u\) on the grid. For problems where the boundary conditions are not periodic, we use finite difference formulas [58, 83] or Chebyshev polynomials [11, 25, 83, 84], and in this case, the linearized system is represented by a non-diagonal matrix. For dissipative PDEs, the eigenvalues of the matrix \(\mathbb{L}\) are negative and real, whereas for dispersive PDEs they are purely imaginary. Dissipation in a dynamical system refers to mechanical modes, such as waves or oscillations, losing energy over time.
Such systems are called dissipative systems. On the other hand, a dispersive PDE represents a system in which waves of different frequencies propagate at different phase velocities (the phase velocity is the rate at which the phase of the wave propagates in space). For the stiff system of ODEs (3.2), the eigenvalues of the matrix \(\mathbb{L}\) vary widely in magnitude, and the stiffness is caused by the eigenvalues of large magnitude. A competitive time stepping method should be able to integrate the system (3.2) accurately without requiring very small time steps for the largest magnitude eigenvalue. Simultaneously it should be able to handle small eigenvalues. The nonlinear term requires an explicit treatment since fully implicit methods are too costly for a large system of ODEs. To derive the time discretization methods (IF and ETD methods), we consider for simplicity a single model of a stiff ODE \[\frac{du(t)}{dt}=cu(t)+F(u(t),t), \tag{3.3}\] where the stiffness parameter \(c\) is either large, negative and real, or large and imaginary, or complex with large, negative real part and \(F(u(t),t)\) is the nonlinear forcing term. #### Integrating Factor Methods The main idea behind the IF schemes is to use a change of variables \[w(t)=u(t)e^{-ct},\] so that when differentiating both sides of this equation we obtain \[\frac{dw(t)}{dt}=\Big{(}\frac{du(t)}{dt}-cu(t)\Big{)}e^{-ct},\] and then substituting from equation (3.3) we get \[\frac{dw(t)}{dt} = F(u(t),t)e^{-ct}, \tag{3.4}\] \[= F(w(t)e^{ct},t)e^{-ct}.\] The aim now is to use any numerical integrator (IF schemes can be generalized to arbitrary order by applying any multi-step or Runge-Kutta methods) on the transformed nonlinear differential equation (3.4). The approximated solution is then transformed back to provide an approximate solution for the original \(u\) variable. 
For example, we can choose to apply the Euler method [14] to the transformed differential equation (3.4) as follows \[w_{n+1}=w_{n}+\Delta tF(w_{n}e^{ct_{n}},t_{n})e^{-ct_{n}},\] where \(\Delta t\) is the time step size and \(w_{n}\) denotes the numerical approximation to \(w(t_{n})\), and then transform back to the original variable to obtain the solution approximation. This yields the first-order **Integrating Factor Euler (IFEULER)** method [11, 84] \[u_{n+1}=(u_{n}+\Delta tF_{n})e^{c\Delta t}, \tag{3.5}\]where \(u_{n}\) and \(F_{n}\) denote the numerical approximation to \(u(t_{n})\) and \(F(u(t_{n}),t_{n})\) respectively. The purpose of transforming the differential equation (3.3) to equation (3.4), is to remove the explicit dependence in the differential equation on the operator \(c\), except inside the exponential. Now the problem is no longer stiff since the linear "stiff" term of the differential equation (3.3), that constrains the stability, is gone. Therefore, it can be solved exactly with the possibility of larger time steps. However, for PDEs with slowly varying nonlinear terms, the introduction of the fast decay time scale into the nonlinear term introduces large errors [7, 11, 19, 49] into the system. #### Exponential Time Differencing Methods To derive the \(s\)-step ETD schemes (the derivation is taken from [9, 17, 19, 63]), we follow an approach similar to that of deriving the IF schemes, i.e. we multiply (3.3) through by the integrating factor \(e^{-ct}\), and then integrate the equation over a single time step from \(t=t_{n}\) to \(t=t_{n+1}=t_{n}+\Delta t\) to get \[u(t_{n+1})=u(t_{n})e^{c\Delta t}+e^{c\Delta t}\int_{0}^{\Delta t}e^{-c\tau}F(u (t_{n}+\tau),t_{n}+\tau)d\tau. \tag{3.6}\] This formula is _exact_, and the next step is to derive approximations to the integral in equation (3.6). This procedure does not introduce an unwanted fast time scale into the solution and the schemes can be generalized to arbitrary order. 
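The IF-Euler update (3.5) can be sketched on the scalar model (3.3). The choices \(c=-2\) and constant forcing \(F=1\) are illustrative assumptions; with constant \(F\) the exact solution is \(u(t)=(u_{0}+F/c)e^{ct}-F/c\):

```python
import numpy as np

# IF-Euler (3.5) on du/dt = c u + F with assumed data c = -2, F = 1, u0 = 1.
c, F, u0 = -2.0, 1.0, 1.0
dt, T = 1e-3, 1.0
u = u0
for _ in range(int(round(T / dt))):
    u = (u + dt * F) * np.exp(c * dt)     # IF-Euler step (3.5)
exact = (u0 + F / c) * np.exp(c * T) - F / c
err = abs(u - exact)
```

The linear part is treated exactly by the exponential factor; only the forcing term carries the first-order error of the underlying Euler step.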
If we apply the Newton Backward Difference Formula [14], using information about \(F(u(t),t)\) at the \(n\)th and previous time steps, we can write a polynomial approximation to \(F(u(t_{n}+\tau),t_{n}+\tau)\) in the form \[F(u(t_{n}+\tau),t_{n}+\tau)\approx G_{n}(t_{n}+\tau)=\sum_{m=0}^{s-1}(-1)^{m} \binom{-\tau/\Delta t}{m}\nabla^{m}G_{n}(t_{n}), \tag{3.7}\] where \(\nabla\) is the backward difference operator defined as follows \[\nabla^{m}G_{n}(t_{n}) = \sum_{k=0}^{m}(-1)^{k}\binom{m}{k}G_{n-k}(t_{n-k}), \tag{3.8}\] \[\approx \sum_{k=0}^{m}(-1)^{k}\binom{m}{k}F(u(t_{n-k}),t_{n-k}),\] and \[m!\binom{-\Lambda}{m}=(-\Lambda)(-\Lambda-1)\cdots(-\Lambda-m+1),\ m=1,\ldots, s-1.\](note that \(0!{-\Lambda\choose 0}=1\)). If we substitute the approximation (3.7) in the integrand (3.6), we get \[u(t_{n+1})-u(t_{n})e^{c\Delta t}\approx\Delta t\sum_{m=0}^{s-1}(-1)^{m}\int_{0}^{ 1}e^{c\Delta t(1-\Lambda)}{-\Lambda\choose m}d\Lambda\nabla^{m}G_{n}(t_{n}), \tag{3.9}\] where \(\Lambda=\tau/\Delta t\). We will indicate the integral in (3.9) by \[\mathrm{g}_{m}=(-1)^{m}\int_{0}^{1}e^{c\Delta t(1-\Lambda)}{-\Lambda\choose m} d\Lambda, \tag{3.10}\] and then calculate the \(\mathrm{g}_{m}\) by bringing in the generating function. 
For \(z\in\mathbb{R},\ |z|<1\), we define the generating function \[\Gamma(z) = \sum_{m=0}^{\infty}\mathrm{g}_{m}z^{m}, \tag{3.11}\] \[= \int_{0}^{1}e^{c\Delta t(1-\Lambda)}\sum_{m=0}^{\infty}{-\Lambda \choose m}(-z)^{m}d\Lambda,\] \[= \int_{0}^{1}e^{c\Delta t(1-\Lambda)}(1-z)^{-\Lambda}d\Lambda,\] \[= \frac{e^{c\Delta t}(1-z-e^{-c\Delta t})}{(1-z)(c\Delta t+\log(1- z))}.\] Rearranging (3.11) to the form \[(c\Delta t+\log(1-z))\Gamma(z)=e^{c\Delta t}-(1-z)^{-1},\] and expanding as a power series in \(z\) \[\left(c\Delta t-z-\frac{z^{2}}{2}-\frac{z^{3}}{3}-\cdots\right)(\mathrm{g}_{0} +\mathrm{g}_{1}z+\mathrm{g}_{2}z^{2}+\cdots)=e^{c\Delta t}-1-z-z^{2}-z^{3}-\cdots,\] we can find a recurrence relation for the \(\mathrm{g}_{m}\) for \(m\geq 0\) by equating like powers of \(z\) \[c\Delta t\mathrm{g}_{0} =e^{c\Delta t}-1, \tag{3.12}\] \[c\Delta t\mathrm{g}_{m+1}+1 =\mathrm{g}_{m}+\tfrac{1}{2}\mathrm{g}_{m-1}+\tfrac{1}{3}\mathrm{ g}_{m-2}+\cdots+\tfrac{1}{m+1}\mathrm{g}_{0}=\sum_{k=0}^{m}\tfrac{1}{m+1-k} \ \mathrm{g}_{k}.\] Having determined the \(\mathrm{g}_{m}\), the ETD schemes (3.9) then can be given in explicit forms. Substituting (3.8) and (3.10) in (3.9), we deduce the general generating formula of the ETD schemes of order \(s\)[19] \[u_{n+1}=u_{n}e^{c\Delta t}+\Delta t\sum_{m=0}^{s-1}\mathrm{g}_{m}\sum_{k=0}^{ m}(-1)^{k}{m\choose k}F_{n-k}, \tag{3.13}\] where \(u_{n}\) and \(F_{n}\) denote the numerical approximation to \(u(t_{n})\) and \(F(u(t_{n}),t_{n})\) respectively, and the \(\mathrm{g}_{m}\) are given by (3.12). #### ETD Schemes The first-order **ETD1** scheme [9, 15, 19, 61] \[u_{n+1}=u_{n}e^{c\Delta t}+(e^{c\Delta t}-1)F_{n}/c, \tag{3.14}\] is obtained by setting \(s=1\) in the explicit generating formula (3.13). Setting \(s=2\) in (3.13) gives us the second-order **ETD2** scheme [19] \[u_{n+1}=u_{n}e^{c\Delta t}+\{((c\Delta t+1)e^{c\Delta t}-2c\Delta t-1)F_{n}+(-e ^{c\Delta t}+c\Delta t+1)F_{n-1}\}/(c^{2}\Delta t). 
\tag{3.15}\] If \(s=3\) in (3.13), we obtain the third-order **ETD3** scheme \[\begin{split} u_{n+1}&=u_{n}e^{c\Delta t}+\{((2c^ {2}\Delta t^{2}+3c\Delta t+2)e^{c\Delta t}-6c^{2}\Delta t^{2}-5c\Delta t-2)F_{ n}\\ &+(-(4c\Delta t+4)e^{c\Delta t}+6c^{2}\Delta t^{2}+8c\Delta t+4)F_ {n-1}\\ &+((c\Delta t+2)e^{c\Delta t}-2c^{2}\Delta t^{2}-3c\Delta t-2)F_{ n-2}\}/(2c^{3}\Delta t^{2}).\end{split} \tag{3.16}\] Set \(s=4\) in (3.13) to achieve the fourth-order **ETD4** scheme \[u_{n+1}=u_{n}e^{c\Delta t}+(\Phi_{1}F_{n}-\Phi_{2}F_{n-1}+\Phi_{3}F_{n-2}-\Phi _{4}F_{n-3})/(6c^{4}\Delta t^{3}), \tag{3.17}\] where \[\begin{split}\Phi_{1}&=(6c^{3}\Delta t^{3}+11c^{2} \Delta t^{2}+12c\Delta t+6)e^{c\Delta t}-24c^{3}\Delta t^{3}-26c^{2}\Delta t^{ 2}-18c\Delta t-6,\\ \Phi_{2}&=(18c^{2}\Delta t^{2}+30c\Delta t+18)e^{c \Delta t}-36c^{3}\Delta t^{3}-57c^{2}\Delta t^{2}-48c\Delta t-18,\\ \Phi_{3}&=(6c^{2}\Delta t^{2}+24c\Delta t+18)e^{c \Delta t}-24c^{3}\Delta t^{3}-42c^{2}\Delta t^{2}-42c\Delta t-18,\\ \Phi_{4}&=(2c^{2}\Delta t^{2}+6c\Delta t+6)e^{c \Delta t}-6c^{3}\Delta t^{3}-11c^{2}\Delta t^{2}-12c\Delta t-6.\end{split}\] Note that as \(c\to 0\) in the coefficients of the \(s\)-order ETD methods, the methods reduce to the corresponding order of the **Adams-Bashforth** schemes [43, 84]. For example, if we expand the exponential function, using Taylor series, in the first-order ETD1 method (3.14) as follows \[u_{n+1}=u_{n}\left(1+c\Delta t+\frac{(c\Delta t)^{2}}{2}+\frac{(c\Delta t)^{3} }{3!}+\cdots\right)+F_{n}\left(\Delta t+\frac{c\Delta t^{2}}{2}+\frac{c^{2} \Delta t^{3}}{3!}+\cdots\right),\] and then take the limit as \(c\to 0\), while keeping terms of \(O(\Delta t)\), we obtain \[u_{n+1}=u_{n}+\Delta t(cu_{n}+F_{n})=u_{n}+\Delta tdu(t)/dt,\] which corresponds to the forward Euler method. 
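For scalar \(c\), the recurrence (3.12) and the generating formula (3.13) are straightforward to check numerically. The following Python sketch is an illustration added here (not part of the derivation); it computes the \(\mathrm{g}_{m}\) from (3.12) and confirms that for \(s=2\) the generating formula reproduces the closed-form ETD2 coefficients appearing in (3.15).

```python
import math

def etd_g(h, s):
    """Coefficients g_0, ..., g_{s-1} of the s-step ETD scheme, computed
    from the recurrence (3.12) for a scalar h = c*dt != 0."""
    g = [(math.exp(h) - 1.0) / h]                 # h*g_0 = e^h - 1
    for m in range(s - 1):
        # h*g_{m+1} + 1 = sum_{k=0}^{m} g_k / (m+1-k)
        rhs = sum(g[k] / (m + 1 - k) for k in range(m + 1))
        g.append((rhs - 1.0) / h)
    return g

h = 0.7
g0, g1 = etd_g(h, 2)
# In (3.13) with s=2 the weight of F_n is g_0+g_1 and that of F_{n-1} is -g_1;
# compare with the ETD2 coefficients of (3.15), divided by dt.
assert abs((g0 + g1) - ((h + 1.0) * math.exp(h) - 2.0 * h - 1.0) / h**2) < 1e-12
assert abs(-g1 - (-math.exp(h) + h + 1.0) / h**2) < 1e-12
```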
In fact, in the case \(c=0\), the explicit formulas for the coefficients involve division by zero, and for very small values of \(|c|\) the coefficients suffer from rounding errors due to the large amount of cancellation in the formulas. To tackle this problem we can use the Taylor series instead of the explicit formulas for the coefficients; see §4 for a detailed discussion of this issue. #### Exponential Time Differencing Runge-Kutta Methods Generally, for one-step time-discretization methods, such as the Runge-Kutta (RK) methods, all the information required to start the integration is available. However, for multi-step time-discretization methods this is not true. These methods require the evaluation of a certain number of starting values of the nonlinear term \(F(u(t),t)\) at the \(n\)th and previous time steps to build the history required for the calculations. Therefore, it is desirable to construct ETD methods that are based on RK methods. **ETD Runge-Kutta Schemes** **Cox** and **Matthews** [19] and **Friedli** [28] constructed a second-order ETD Runge-Kutta method, analogous to the "improved Euler" method given in [78], as follows. Putting \(s=1\) in (3.13) gives the first step \[a_{n}=u_{n}e^{c\Delta t}+(e^{c\Delta t}-1)F_{n}/c. \tag{3.18}\] The term \(a_{n}\) approximates the value of \(u\) at \(t_{n}+\Delta t\). The next step is to approximate \(F\) in the interval \(t_{n}\leq t\leq t_{n+1}\) by \[F=F_{n}+(t-t_{n})(F(a_{n},t_{n}+\Delta t)-F_{n})/\Delta t+O(\Delta t^{2}),\] and substitute into (3.6) to give the **ETD2RK1** scheme \[u_{n+1}=a_{n}+(e^{c\Delta t}-c\Delta t-1)(F(a_{n},t_{n}+\Delta t)-F_{n})/(c^{2}\Delta t). \tag{3.19}\] In a similar way, we can also form an **ETD2RK2** scheme analogous to the "modified Euler" method [78].
The first step \[a_{n}=u_{n}e^{c\Delta t/2}+(e^{c\Delta t/2}-1)F_{n}/c,\] is formed by taking half a step of (3.18); then use the approximation \[F=F_{n}+\frac{(t-t_{n})}{\Delta t/2}(F(a_{n},t_{n}+\Delta t/2)-F_{n})+O(\Delta t^{2}),\] in the interval \([t_{n},t_{n}+\Delta t]\) in (3.6) to deduce the **ETD2RK2** scheme \[u_{n+1}=u_{n}e^{c\Delta t}+\{((c\Delta t-2)e^{c\Delta t}+c\Delta t+2)F_{n}+2(e^{c\Delta t}-c\Delta t-1)F(a_{n},t_{n}+\Delta t/2)\}/(c^{2}\Delta t). \tag{3.20}\] In fact there is a one-parameter family of such **ETD2RK\(_{\jmath}\)** schemes. For \(\jmath\in\mathbb{R}^{+}\), one can start with any fraction \(1/\jmath\) of \(\Delta t\) for the first step (3.18), which gives \[a_{n}=u_{n}e^{c\Delta t/\jmath}+(e^{c\Delta t/\jmath}-1)F_{n}/c.\] The term \(a_{n}\) approximates the value of \(u\) at \(t_{n}+\Delta t/\jmath\). Next use the approximation \[F=F_{n}+\frac{(t-t_{n})}{\Delta t/\jmath}(F(a_{n},t_{n}+\Delta t/\jmath)-F_{n})+O(\Delta t^{2}),\] in the interval \([t_{n},t_{n}+\Delta t]\) in (3.6) to deduce the general **ETD2RK\(_{\jmath}\)** scheme \[u_{n+1}=u_{n}e^{c\Delta t}+\{((c\Delta t-\jmath)e^{c\Delta t}+(\jmath-1)c\Delta t+\jmath)F_{n}+\jmath(e^{c\Delta t}-c\Delta t-1)F(a_{n},t_{n}+\Delta t/\jmath)\}/(c^{2}\Delta t).\] In a similar way, for different values of the fraction \(1/\jmath\) there are infinitely many third-order and fourth-order ETD-RK schemes.
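The one-parameter family is simple to implement for a scalar equation. The sketch below is illustrative only: the test problem \(du/dt=cu+\cos t\), \(u(0)=1\), is our own choice (its exact solution follows from variation of constants), and the observed convergence order is checked to be two for \(\jmath=1\) (ETD2RK1) and \(\jmath=2\) (ETD2RK2).

```python
import math

def etd2rk_step(u, t, dt, c, F, j=1):
    """One step of the ETD2RK_j family for du/dt = c*u + F(u,t), scalar c.
    j=1 gives ETD2RK1 (3.19); j=2 gives ETD2RK2 (3.20)."""
    h = c * dt
    a = u * math.exp(h / j) + (math.exp(h / j) - 1.0) * F(u, t) / c
    Fa = F(a, t + dt / j)
    return (u * math.exp(h)
            + (((h - j) * math.exp(h) + (j - 1.0) * h + j) * F(u, t)
               + j * (math.exp(h) - h - 1.0) * Fa) / (c * c * dt))

# Illustrative test problem (our choice, not from the text): u' = c*u + cos t.
c = -1.0
F = lambda u, t: math.cos(t)
exact = lambda t: ((1.0 + c / (1.0 + c * c)) * math.exp(c * t)
                   + (-c * math.cos(t) + math.sin(t)) / (1.0 + c * c))

def err(N, j):
    dt, u = 1.0 / N, 1.0
    for n in range(N):
        u = etd2rk_step(u, n * dt, dt, c, F, j)
    return abs(u - exact(1.0))

for j in (1, 2):
    p = math.log2(err(50, j) / err(100, j))   # observed order, should be ~2
    assert 1.7 < p < 2.3
```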
For example, the third-order **ETD3RK** scheme [19] which is analogous to the classical third-order RK method [14] is given by \[a_{n}=u_{n}e^{c\Delta t/{2}}+(e^{c\Delta t/{2}}-1)F_{n}/c,\] \[b_{n}=u_{n}e^{c\Delta t}+(e^{c\Delta t}-1)(2F(a_{n},t_{n}+ \Delta t/{2})-F_{n})/c,\] \[u_{n+1}=u_{n}e^{c\Delta t}+\{((c^{2}\Delta t^{2}-3c\Delta t+4)e^ {c\Delta t}-c\Delta t-4)F_{n}\] \[+4((c\Delta t-2)e^{c\Delta t}+c\Delta t+2)F(a_{n},t_{n}+\Delta t/ 2) \tag{3.21}\] \[+((-c\Delta t+4)e^{c\Delta t}-c^{2}\Delta t^{2}-3c\Delta t-4)F(b_ {n},t_{n}+\Delta t)\}/(c^{3}\Delta t^{2}).\] The terms \(a_{n}\) and \(b_{n}\) approximate the values of \(u\) at \(t_{n}+\Delta t/{2}\) and \(t_{n}+\Delta t\) respectively. The formula (3.21) is the quadrature formula for (3.6) derived from quadratic interpolation through the points \(t_{n}\), \(t_{n}+\Delta t/{2}\) and \(t_{n}+\Delta t\). Introducing a further parameter, a fourth-order scheme **ETD4RK** (taken from [19]) is obtained as follows: \[a_{n}=u_{n}e^{c\Delta t/{2}}+(e^{c\Delta t/{2}}-1)F_{n}/c,\] \[b_{n}=u_{n}e^{c\Delta t/{2}}+(e^{c\Delta t/{2}}-1)F(a_{n},t_{n}+ \Delta t/{2})/c,\] \[c_{n}=a_{n}e^{c\Delta t/{2}}+(e^{c\Delta t/{2}}-1)(2F(b_{n},t_{n }+\Delta t/{2})-F_{n})/c,\] \[u_{n+1}=u_{n}e^{c\Delta t}+\{((c^{2}\Delta t^{2}-3c\Delta t+4)e^ {c\Delta t}-c\Delta t-4)F_{n}\] \[+2((c\Delta t-2)e^{c\Delta t}+c\Delta t+2)(F(a_{n},t_{n}+\Delta t /{2})+F(b_{n},t_{n}+\Delta t/{2})) \tag{3.22}\] \[+((-c\Delta t+4)e^{c\Delta t}-c^{2}\Delta t^{2}-3c\Delta t-4)F(c_ {n},t_{n}+\Delta t)\}/(c^{3}\Delta t^{2}).\] The terms \(a_{n}\) and \(b_{n}\) approximate the values of \(u\) at \(t_{n}+\Delta t/{2}\) and the term \(c_{n}\) approximates the value of \(u\) at \(t_{n}+\Delta t\). The formula (3.22) is the quadrature formula for (3.6) derived from quadratic interpolation through the points \(t_{n}\), \(t_{n}+\Delta t/2\) and \(t_{n}+\Delta t\), using average values of \(F\) at \(a_{n}\) and \(b_{n}\). 
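The scheme (3.22) can be transcribed directly for a scalar problem. In the sketch below, the test problem \(du/dt=cu+\cos t\), \(u(0)=1\), is again our own illustrative choice; the check confirms fourth-order convergence in the nonstiff regime.

```python
import math

def etd4rk_step(u, t, dt, c, F):
    """One step of the ETD4RK scheme (3.22) of Cox and Matthews for
    du/dt = c*u + F(u,t), scalar c."""
    h = c * dt
    e2, e1 = math.exp(h / 2.0), math.exp(h)
    Fu = F(u, t)
    a = u * e2 + (e2 - 1.0) * Fu / c
    Fa = F(a, t + dt / 2.0)
    b = u * e2 + (e2 - 1.0) * Fa / c
    Fb = F(b, t + dt / 2.0)
    cn = a * e2 + (e2 - 1.0) * (2.0 * Fb - Fu) / c
    Fc = F(cn, t + dt)
    return (u * e1
            + (((h * h - 3.0 * h + 4.0) * e1 - h - 4.0) * Fu
               + 2.0 * ((h - 2.0) * e1 + h + 2.0) * (Fa + Fb)
               + ((-h + 4.0) * e1 - h * h - 3.0 * h - 4.0) * Fc)
              / (c**3 * dt * dt))

# Illustrative test problem (our choice): u' = c*u + cos t, u(0) = 1.
c = -1.0
F = lambda u, t: math.cos(t)
exact = lambda t: ((1.0 + c / (1.0 + c * c)) * math.exp(c * t)
                   + (-c * math.cos(t) + math.sin(t)) / (1.0 + c * c))

def err(N):
    dt, u = 1.0 / N, 1.0
    for n in range(N):
        u = etd4rk_step(u, n * dt, dt, c, F)
    return abs(u - exact(1.0))

p = math.log2(err(20) / err(40))   # observed order, should be ~4
assert 3.5 < p < 4.5
```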
In general, the ETD4RK method (3.22) has classical order four, but Hochbruck and Ostermann [38] showed that this method can suffer from an order reduction because it does not satisfy some of the stiff order conditions. These conditions were derived in [38] for explicit exponential Runge-Kutta methods applied to stiff semi-linear parabolic problems with homogeneous Dirichlet boundary conditions and under appropriate temporal smoothness of the exact solution. They also presented numerical experiments which show that the order reduction predicted by their theory may in fact arise in practical examples. In the worst case, this leads to a reduction to order three for the Cox and Matthews method (3.22) [19], while Krogstad's method (ETDRK4-B) [49] retains order four. However, for certain problems, such as the numerical experiments conducted by Kassam and Trefethen [44, 45] for solving various one-dimensional diffusion-type problems, and the numerical results obtained in §5 for solving some dissipative and dispersive PDEs, the fourth-order convergence of the ETD4RK method [19] is confirmed numerically. Finally, we note that as \(c\to 0\) in the coefficients of the \(s\)-order ETD-RK methods, the methods reduce to the corresponding order **Runge-Kutta** schemes. ### 3.3 Stability Analysis The stability of a given method for solving a system of ODEs is a theoretical measure of the extent to which the method produces satisfactory approximations. Stability is related to the accuracy of the methods and refers to errors not growing in subsequent steps; methods with this property are called numerically stable. The stability analysis determines the range of time steps for which the method is numerically stable.
The stability region is the subset of the complex plane consisting of those \(\Delta t\lambda\in\mathbb{C}\) for which, with time step \(\Delta t\), the numerical approximation produces bounded solutions when applied to the scalar linear model problem \(du(t)/dt=\lambda u(t)\). In general, the linear stability analysis of time discretization methods is valid for a linear autonomous system of ODEs, linearized about a fixed point. This analysis only gives an indication of how stable the numerical methods are. It cannot be directly applied to solutions of nonlinear time-dependent PDEs with large amplitude since convergence and stability are solution-dependent issues. **Beylkin** et al. [9] studied the stability of a family of explicit and implicit ELP schemes, and showed that these schemes have significantly better stability properties when compared with known implicit-explicit schemes. In addition, **Krogstad** [49] analyzed the stability regions of various time integrating methods, including the fourth-order ETDRK4-B method and multi-step generalizations of the IF methods, both of which he proposed, as well as the ETD4RK method of Cox and Matthews [19]. He deduced that the ETDRK4-B method has the largest stability region. Cox and Matthews [19] also studied the stability properties of the second-order ETD type schemes, while in [23] the study covered the ETD-RK schemes of [19] up to and including fourth order. All authors concluded that ETD type schemes maintain good stability properties and can be widely applied to dissipative PDEs and nonlinear wave equations. The approach developed in [9, 19] for the stability analysis of composite schemes, i.e. schemes that use different methods for the linear and nonlinear parts of the equation, computes the boundaries of the stability regions for a general test problem.
That is, to analyze the stability of the ETD schemes, we linearize the autonomous ODE \[\frac{dv(t)}{dt}=cv(t)+F(v(t)), \tag{3.23}\] about a fixed point \(u_{0}\) (so that \(cu_{0}+F(u_{0})=0\)) to obtain \[\frac{du(t)}{dt}=cu(t)+\lambda u(t), \tag{3.24}\] where \(u(t)\) is the perturbation to \(u_{0}\) and \[\lambda=\left.\frac{dF(u(t))}{du}\right|_{u(t)=u_{0}}.\] For the fixed point \(u_{0}\) to be stable, we require \(Re(c+\lambda)<0\). (Note that the fixed points of the ETD methods are the same as those of the ODE (3.23), in contrast to the IF methods, which do not preserve the fixed points of the ODE that they discretize [19]. It is desirable for a numerical method to have this property, so as to capture as much of the dynamics of the system as possible.) If both \(c\) and \(\lambda\) are complex, the stability region is four-dimensional. But if both \(c\) and \(\lambda\) are purely imaginary [26] or purely real [19], or if \(\lambda\) is complex and \(c\) is fixed and real [9], then the stability region is two-dimensional. Our study concentrates on two cases: first, \(\lambda\) complex and \(c\) fixed, negative and purely real; and second, both \(c\) and \(\lambda\) purely real with \(c\) negative. The stability regions are constructed for the ETD and the ETD-RK methods in §3.3.1 and §3.3.2 respectively. #### Stability of Exponential Time Differencing Methods When applying the first-order ETD1 method (3.14) to the linearized problem (3.24), we obtain \[u_{n+1}=u_{n}e^{c\Delta t}+\lambda u_{n}(e^{c\Delta t}-1)/c.\] Defining \(r=u_{n+1}/u_{n},\ x=\lambda\Delta t\) and \(y=c\Delta t\) leads to \[r=e^{y}+\frac{x}{y}(e^{y}-1).
\tag{3.25}\] If the second-order ETD2 method (3.15) is applied to (3.24), the linearization of the nonlinear term in the numerical method yields a recurrence relation involving \(u_{n+1},u_{n}\) and \(u_{n-1}\), and the equation for the factor \(r\)[19] by which the solution is multiplied after each step is \[y^{2}r^{2}-(y^{2}e^{y}+[(y+1)e^{y}-2y-1]x)r+(e^{y}-y-1)x=0. \tag{3.26}\] In a similar way, when applying the ETD3 (3.16) and the ETD4 (3.17) schemes to (3.24), a recurrence relation is obtained from the linearization, and the equation for \(r\) is a third and fourth-order polynomial for the ETD3 and the ETD4 schemes respectively. The formulas of the factor \(r\) for the ETD3 and the ETD4 methods are not given explicitly here since they are very cumbersome. We commence our analysis by choosing real negative values of the constant \(c\), i.e. varying \(y=c\Delta t\), and looking for a region of stability in the complex \(x\) plane where the solution \(u_{n}\) remains bounded as \(n\rightarrow\infty\). The solution for \(r=u_{n+1}/u_{n}\) can be sought in the form \(r=r_{1}e^{i\theta}\). Evidently, the solution decays if \(r_{1}<1\). Hence, the boundary of the stability region is determined by writing \(r=e^{i\theta},\ \theta\in[0,2\pi]\) in each equation for the factor \(r\) for the ETD1 (3.14), the ETD2 (3.15), the ETD3 (3.16) and the ETD4 (3.17) methods and then by solving for \(x=\lambda\Delta t\). The corresponding family of stability regions are plotted in the complex \(x\) plane and displayed in figures 3.1, 3.2 and 3.3. Note that, in these figures, the horizontal and the vertical axes represent real \(x\) (Re\((x)\)) and imaginary \(x\) (Im\((x)\)) respectively. 
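For ETD1, equation (3.25) can be solved for \(x\) in closed form, so tracing the boundary is a one-line computation. The sketch below is illustrative; it checks that the \(\theta=0\) point of the boundary is \(x=-y\), and that \(\theta=\pi\) (i.e. \(r=-1\)) reproduces the curve \(x=-y(e^{y}+1)/(e^{y}-1)\) quoted for the real \((x,y)\) plane.

```python
import cmath, math

def etd1_boundary(y, ntheta=8):
    """Stability-boundary points of ETD1 in the complex x plane:
    set r = e^{i*theta} in (3.25) and solve for x."""
    pts = []
    for k in range(ntheta):
        theta = 2.0 * math.pi * k / ntheta
        r = cmath.exp(1j * theta)
        x = y * (r - math.exp(y)) / (math.exp(y) - 1.0)
        pts.append(x)
    return pts

y = -5.0
pts = etd1_boundary(y)
# theta = 0 (r = 1): the boundary passes through x = -y ...
assert abs(pts[0] - (-y)) < 1e-12
# ... and theta = pi (r = -1) gives x = -y(e^y + 1)/(e^y - 1)
xpi = etd1_boundary(y, 2)[1]
assert abs(xpi - (-y * (math.exp(y) + 1.0) / (math.exp(y) - 1.0))) < 1e-12
```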
In figure 3.1, notice first that, for each fixed value of \(y=-1,-5,-10,-15\), the stability region of the ETD1 and the ETD2 schemes is the interior of the curves, which are simple and closed; while for the ETD3 and the ETD4 schemes, it is only the interior of those portions of the curves that contain the origin. Next, observe that the ETD1 scheme has the largest stability region while the ETD4 scheme has the smallest. In fact, as shown in figure 3.1, the stability region, for each fixed value of \(y\), shrinks as the method's order increases. Figure 3.1: Stability regions in the complex \(x\) plane. The four methods are: ETD1 (blue-solid), ETD2 (blue-dashed), ETD3 (red-solid), ETD4 (green-solid). Figures 3.2 and 3.3 illustrate the complex plot of the same four methods with different values of \(y\). The outer curves for the ETD1, the ETD2 and the ETD3 schemes correspond to \(y=-6\), and to \(y=-3\) for the ETD4 scheme. The inner blue curves for all four schemes correspond to \(y=-1\). The inner red curves correspond to the case \(y=0\), where the stability regions coincide with those of the corresponding order **Adams-Bashforth** schemes [43, 84]. This is expected since in the limit \(y\to 0\), ETD schemes turn into the corresponding order explicit **Adams-Bashforth** schemes [9]. In the limit \(y\to-\infty\), we find that the stability region of each of the four methods preserves its shape and grows larger, which allows the methods to use a large time-step size (\(\Delta t=O(1)\) as \(|c|\to\infty\)) when solving stiff problems. On the contrary, the stability regions of conventional explicit numerical methods preserve their size whatever the value of the stiffness parameter (for example, the stability region of the Euler method is a fixed circle of radius 1), which forces the methods to use very small time-step sizes (\(\Delta t=O(1/|c|)\)) when integrating stiff problems; see §5 for illustrative examples.
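The contrast with conventional explicit methods can be seen on the linear part alone (\(F\equiv 0\)): ETD1 then multiplies \(u_{n}\) by \(e^{c\Delta t}\), which is bounded for any \(\Delta t>0\) when \(c<0\), whereas forward Euler multiplies by \(1+c\Delta t\) and requires \(\Delta t<2/|c|\). A minimal sketch, with an illustrative stiffness value:

```python
import math

c, dt = -1000.0, 0.01          # stiff linear part, moderate step (illustrative)
etd1_factor = math.exp(c * dt)  # ETD1 amplification factor when F = 0
euler_factor = 1.0 + c * dt     # forward Euler amplification factor

assert abs(etd1_factor) < 1.0   # stable for any dt > 0 since c < 0
assert abs(euler_factor) > 1.0  # unstable: Euler needs dt < 2/|c| = 0.002
```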
As shown in figure 3.2, the boundary of the stability region for the ETD1 and the ETD2 schemes passes through the point \(x=-y\) (this is true for any fixed value of \(y\)), which agrees with the result found in [19] for the ETD2 schemes. This feature is consistent with the true stability boundary of the differential equation (3.24) of the test problem, namely, the solution decays for \(Re(c+\lambda)<0\) and grows otherwise. Figure 3.2: Stability regions in the complex \(x\) plane. The curves for the ETD1 and the ETD2 schemes correspond to \(y=-6,-5,-4,-3,-2,-1\), from the outer curve to the inner curve respectively. The inner red curves correspond to \(y=0\). For the ETD3 method, we find that for values of \(y\in[-6,0)\) the curves of its region do not cross, and hence the stability region has a simple structure and passes through the point \(x=-y\), see figure 3.3. As \(y\) decreases, the curves of the region cross over and the region develops into a more complicated structure, separating into several portions. The stability regions for the ETD3 method for values of \(y\in[-10,-6)\) (not shown) are the interior of those portions of the curves that contain the origin, together with the interior of those portions of the curves where the boundary passes through the point \(x=-y\), whereas for values of \(y=-10,\ldots,-\infty\) they are only the interior of those portions of the curves that contain the origin, see figure 3.1 for \(y=-10,-15\). The ETD4 method behaves similarly, with the simple structure confined to values of \(y\) closer to zero, where the boundary passes through the point \(x=-y\); finally, for values of \(y=-3,\ldots,-\infty\) the stability regions are only the interior of those portions of the curves that contain the origin, see figure 3.1 for \(y=-10,-15\). In the real \((x,y)\) plane, the right-hand boundary for both the ETD1 (3.14) and the ETD2 (3.15) schemes, corresponding to substituting \(r=1\) in equations (3.25) and (3.26) respectively, is the line \(x+y=0\).
The left-hand boundaries for the ETD1 and the ETD2 schemes, corresponding to substituting \(r=-1\) in equations (3.25) and (3.26) respectively, are the curve \[x=\frac{-y(e^{y}+1)}{e^{y}-1},\] and the curve [19] \[x=\frac{-y^{2}(e^{y}+1)}{(y+2)e^{y}-3y-2},\] respectively. Similarly, the right-hand boundary for both the third and fourth-order ETD methods, corresponding to \(r=1\), is the line \(x+y=0\), but only for the specified values of \(y\) in the ranges mentioned previously (for values outside the ranges, there is no simple formula for the right-hand boundaries due to the complicated structure of the stability regions, and so the plots of the stability regions of the ETD3 and the ETD4 methods in the real \((x,y)\) plane are produced using Maple). For \(r=-1\), the left-hand boundaries for the ETD3 (3.16) and the ETD4 (3.17) methods are the curves \[x=\frac{-y^{3}(e^{y}+1)}{(y^{2}+4y+4)e^{y}-7y^{2}-8y-4},\] and \[x=\frac{-3y^{4}(e^{y}+1)}{(3y^{3}+20y^{2}+36y+24)e^{y}-45y^{3}-68y^{2}-60y-24},\] respectively. Figure 3.4: Stability regions (shaded) in the real \((x,y)\) plane for the ETD1 (3.14) and the ETD2 (3.15) methods. Figure 3.5: Stability regions (shaded) in the real \((x,y)\) plane for the ETD3 (3.16) and the ETD4 (3.17) methods. The stability regions for the ETD1 and the ETD2 schemes are illustrated in figure 3.4, whereas those for the ETD3 and the ETD4 methods are illustrated in figure 3.5. Note that the horizontal and the vertical axes in these figures represent \(\mathrm{Re}(x)\) and \(\mathrm{Re}(y)\) respectively. In line with the previous case of the complex \(x\) plane, we find that the stability region of the ETD1 method is broader than those of the other three higher-order methods, whereas it is very narrow for the ETD4 method. Additionally, we find that the region of stability for all schemes includes the negative \(y\)-axis [19] and grows larger as \(y\) decreases.
#### Stability of RK Exponential Time Differencing Methods The basic question of stability is again addressed by applying each of the ETD-RK schemes to the linearized problem (3.24), and determining the boundary separating growing and decaying solutions \(u_{n}\). When applying the ETD2RK1 method (3.19) to (3.24), we obtain \[u_{n+1} = e^{c\Delta t}u_{n}+\{((c\Delta t-1)e^{c\Delta t}+1)\lambda u_{n}\] \[+ (e^{c\Delta t}-c\Delta t-1)(\lambda e^{c\Delta t}+\lambda^{2}(e^{ c\Delta t}-1)/c)u_{n}\}/(c^{2}\Delta t).\]Defining \(r=u_{n+1}/u_{n},\ x=\lambda\Delta t\) and \(y=c\Delta t\), leads to [19] \[r=e^{y}+\left(\frac{e^{y}-1}{y}\right)^{2}x+\left(\frac{(e^{y}-1)(e^{y}-y-1)}{y ^{3}}\right)x^{2}. \tag{3.27}\] Similarly, for the ETD2RK2 scheme (3.20), \(r\) satisfies \[r=e^{y}+\left(\frac{2(e^{y}-y-1)e^{y/2}+(y-2)e^{y}+y+2}{y^{2}}\right)x+\left( \frac{2(e^{y}-y-1)(e^{y/2}-1)}{y^{3}}\right)x^{2}. \tag{3.28}\] In a similar way, when applying the third-order ETD3RK (3.21) and the fourth-order ETD4RK (3.22) schemes to (3.24), the equation is linear for the factor \(r\) for each scheme. The formulas for the factor \(r\) for the ETD3RK and the ETD4RK methods are not neat and simple, hence, they are not given explicitly here. We note that the equations for the factor \(r\) are different for the different formulas of an \(s\)-order ETD-RK scheme. This is in contrast to the fact that the different formulas of an explicit \(s\)-order (\(s=1,2,3,4\)) RK scheme have the same equation for the factor \(r\)[84], that is \(r=|1+y+y^{2}/2+\cdots+y^{s}/s!|\). This curve for \(r\) is obtained by applying an explicit \(s\)-order RK scheme to the linearized problem \(du(t)/dt=cu(t)\) where \(r=u_{n+1}/u_{n}\) and \(y=c\Delta t\). 
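Since (3.27) gives \(r\) explicitly, such statements are easy to verify numerically. The sketch below is illustrative; it checks that for ETD2RK1 the two real boundary curves corresponding to \(r=1\), namely the right-hand line \(x=-y\) and the left-hand curve \(x=-y^{2}/(e^{y}-y-1)\), both return \(r=1\).

```python
import math

def r_etd2rk1(x, y):
    """Amplification factor (3.27) of ETD2RK1 applied to the linearized
    problem (3.24), with x = lambda*dt and y = c*dt."""
    e = math.exp(y)
    return (e + ((e - 1.0) / y) ** 2 * x
            + (e - 1.0) * (e - y - 1.0) / y**3 * x * x)

y = -4.0
# Both real stability boundaries of ETD2RK1 correspond to r = 1:
assert abs(r_etd2rk1(-y, y) - 1.0) < 1e-12                 # right: x = -y
xleft = -y * y / (math.exp(y) - y - 1.0)
assert abs(r_etd2rk1(xleft, y) - 1.0) < 1e-12              # left-hand curve
```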
In the limit \(y\to-\infty\), equations (3.27) and (3.28) become \[\frac{x^{2}}{y^{2}}\approx r, \tag{3.29}\] for the ETD2RK1 scheme, and \[\frac{2x^{2}}{y^{2}}+\frac{x}{y}\approx r, \tag{3.30}\] for the ETD2RK2 scheme, respectively, and the equations for the factor \(r\) for the ETD3RK (3.21) and the ETD4RK (3.22) schemes become \[-\frac{2x^{3}}{y^{3}}-\frac{x^{2}}{y^{2}}\approx r, \tag{3.31}\] and \[\frac{2x^{4}}{y^{4}}-\frac{x^{2}}{y^{2}}\approx r, \tag{3.32}\] respectively. Our analysis below depends on choosing real negative values of the constant \(c\) and looking for a region of stability in the complex \(x\) plane where the solution \(u_{n}\) remains bounded as \(n\to\infty\). The boundary of the stability region is determined by writing \(r=e^{i\theta},\ \theta\in[0,2\pi]\) in each equation for the factor \(r=u_{n+1}/u_{n}\) in the ETD2RK1 (3.19), the ETD2RK2 (3.20), the ETD3RK (3.21) and the ETD4RK (3.22) methods and then by solving for \(x=\lambda\Delta t\). The corresponding families of stability regions are plotted in the complex \(x\) plane and displayed in figures 3.6, 3.7 and 3.8; in these figures, the horizontal and the vertical axes represent \(\mathrm{Re}(x)\) and \(\mathrm{Im}(x)\) respectively. Figure 3.6 exhibits the complex plot of the stability regions of the four schemes with different values of \(y\). The outer curves correspond to \(y=-6\) and the inner blue curves correspond to \(y=-1\). Figure 3.7: Stability regions in the complex \(x\) plane for different values of \(y\). The four methods are: ETD1 (red-solid), ETD2RK1 (circle), ETD3RK (cross), ETD4RK (point). Generally, as shown in figure 3.6, the region of stability for all ETD-RK schemes grows larger as \(y\) decreases. The red curves correspond to the case \(y=0\), where the stability regions of the ETD-RK schemes coincide with those of the corresponding order RK schemes [84].
This is expected since in the limit \(y\to 0\), ETD-RK schemes reduce to the corresponding order explicit RK schemes (this result was also found in [23] for up to fourth-order ETD-RK schemes and by **Krogstad** [49] for the ETD4RK and the ETDRK4-B methods). Note that the stability regions of the RK schemes increase as the order of the methods increases. Note also that the different formulas of an explicit \(s\)-order RK scheme have the same stability regions [84]. However, we find, as shown in figure 3.6, that the stability region of the ETD2RK2 scheme is smaller than that of the ETD2RK1 scheme for any given value of \(y\), and thus, in general, the different formulas of an \(s\)-order ETD-RK scheme do not have the same stability region. Figure 3.7 illustrates the plot of the ETD1, the ETD2RK1, the ETD3RK and the ETD4RK methods all in one diagram for different values of \(y\). For \(y=-1\), we find that the stability region increases as the order of the ETD-RK schemes increases, that is, the ETD2RK1 scheme (3.19) has the smallest stability region while the ETD4RK scheme (3.22) has the largest. As \(y\) decreases, see figure 3.7, we find that the stability regions of the ETD1 and the ETD2RK1 schemes become slightly larger than those of the other schemes. As \(y\to-\infty\), the stability regions of the explicit ETD1 and ETD2RK1 schemes coincide and become the largest, simplifying to the disc \(|x|<|y|\) [19] (this corresponds to substituting \(|r|=1\) in equation (3.29)), while the stability region of the ETD3RK scheme becomes the smallest. Figure 3.8 illustrates the plot of the ETD1, the ETD2RK2, the ETD3RK and the ETD4RK methods all in one diagram for different values of \(y\). We notice that as the order of these schemes increases the stability regions increase in size for any given value of \(y\).
Furthermore, as \(y\to-\infty\) the stability region of the ETD4RK scheme contains those of the ETD3RK and the ETD2RK2 schemes, of which the latter is the smallest; however, the stability region of the ETD4RK scheme is itself contained in those of the ETD2RK1 and the ETD1 schemes [23], see figures 3.7 and 3.8. In the real \((x,y)\) plane, the left-hand boundaries for the ETD2RK1 (3.19) and ETD2RK2 (3.20) schemes, corresponding to substituting \(r=1\) in equations (3.27) and (3.28) respectively1, are the curve [19] \[x=\frac{-y^{2}}{e^{y}-y-1},\] and the curve \[x=\frac{-y^{2}(e^{y/2}+1)}{2(e^{y}-y-1)},\] respectively, while in the same situation, the right-hand boundary for both schemes is the line \(x+y=0\). Footnote 1: No stability boundaries corresponding to \(r=-1\) are obtained for the ETD2RK1, ETD2RK2 and ETD4RK schemes. Similarly, the equation for the factor \(r\) is a third and fourth order polynomial in \(x\) for the ETD3RK (3.21) and the ETD4RK (3.22) schemes respectively, and for both schemes, the right-hand boundaries (corresponding to \(r=1\)) are the line \(x+y=0\) (note that the right-hand boundary is the same for all four schemes [23]). The formulas of the left-hand boundary for the ETD3RK and the ETD4RK methods, corresponding to \(r=-1\) and \(r=1\) respectively, are complicated and hence not given explicitly. Figure 3.9: Stability regions (shaded) in the real \((x,y)\) plane for four methods. The real stability regions for these four methods are shown in figure 3.9, where the horizontal and the vertical axes represent \(\mathrm{Re}(x)\) and \(\mathrm{Re}(y)\) respectively. In figure 3.9, we notice that the region of stability for all schemes includes the negative \(y\) axis [19] and grows larger as \(y\) decreases. In addition, the stability region of the ETD2RK1 scheme is broader than that of the ETD2RK2 scheme, which agrees with the previous case of the complex \(x\) plane.
In the limit \(y\to-\infty\), the right-hand boundaries for the ETD2RK1 (3.19) [19], the ETD2RK2 (3.20) and the ETD4RK (3.22) schemes correspond to \[x\approx-y,\] which is the same for all schemes. The left-hand boundaries for the ETD2RK1, the ETD2RK2 and the ETD4RK schemes, corresponding to substituting \(r=1\) in equations (3.29), (3.30) and (3.32) respectively, are \[x\approx y,\] and \[\Big{(}\frac{2x}{y}-1\Big{)}\Big{(}\frac{x}{y}+1\Big{)}\approx 0\Longrightarrow x \approx\frac{y}{2},\] and \[(2x^{2}+y^{2})(x^{2}-y^{2})\approx 0\Longrightarrow x\approx y,\] respectively, see figure 3.9. In the same limit, the right-hand and the left-hand boundaries for the ETD3RK (3.21) scheme, corresponding to substituting \(r=1\) and \(r=-1\) respectively in equation (3.31), are \[(2x^{2}-xy+y^{2})(x+y)\approx 0\Longrightarrow x\approx-y,\]and \[x\approx 0.657y,\] respectively. ### 3.4 Conclusion In our study of the ETD methods we have found that these methods possess the following features: * If the nonlinear part \(F(u(t),t)\) of the differential equation (3.3) is zero, the integrator produces the exact solution to the ODE and so is automatically A-stable. * If the linear part is zero (\(c=0\) in (3.3)), the ETD and the ETD-RK integrators reduce to linear multi-step or classical explicit Runge-Kutta methods respectively. We have also discussed the stability properties of the ETD and ETD-RK schemes up to fourth-order. We have found that the various formulas of the explicit second-order ETD-RK schemes do not have the same stability region, in contrast to the fact that all RK schemes of a given order have the same stability region. In addition, we have found that as the order of the ETD-RK methods increases the stability regions increase in size, i.e. the stability region of the ETD4RK scheme contains those of the ETD3RK and the ETD2RK2 methods. 
However, as the stiffness parameter \(c\to-\infty\) in (3.23), the stability region of the ETD2RK1 method contains those of the ETD4RK, the ETD3RK and the ETD2RK2 methods. This is in contrast to the behavior of the stability regions of the multi-step ETD methods: we have found that the stability region of the fourth-order ETD scheme is contained in those of the lower order ones, i.e. the stability regions of the ETD methods shrink as the order of the methods increases. In general, we have found that the stability regions of the ETD-RK methods are larger than those of the multi-step ETD methods. To conclude, the stability characteristics of the ETD and the ETD-RK methods (the stability regions grow larger as the stiffness parameter \(c\to-\infty\)) reveal that when solving stiff problems the selection of the time step size for these methods is limited only by accuracy and not by stability. This indicates the possibility of using a large time step, and consequently these methods provide computational savings over conventional explicit methods. ## Chapter 4 Various Algorithms for Evaluating the ETD Coefficients ## Outline of Chapter The coefficients of the Exponential Time Differencing (ETD) methods are the exponential and related functions of the linear operators of a semi-discretized partial differential equation (PDE). When applying the ETD methods, the computation of the coefficients needs to be carried out only once at the start of the integration if a constant time step is used throughout. The computation of these functions depends significantly on the structure and the range of the eigenvalues of the linear operator and the dimensionality of the semi-discretized PDE. The linear part should not be explicitly time dependent and, if possible, should be represented as a diagonal matrix in order for the exponential integrators to be computationally competitive.
On the other hand, the linear part might have eigenvalues equal to or approaching zero, which leads to complications in the computation of the coefficients. In this chapter, we discuss methods for the accurate computation of the ETD coefficients and the efficiency of their implementation. We first explain why the ETD methods need further development, and then address ourselves to describing the various algorithms. We analyze their performance and their computational cost, and weigh their advantages for overcoming the numerical difficulties in approximating the ETD coefficients. This gives us the chance to distinguish between the algorithms and choose the one that is best for the success of the methods. ### 4.1 Introduction When a stiff partial differential equation (PDE) with periodic boundary conditions is discretized in space using Fourier spectral methods [25, 83, 84] (see §2.3), a system of coupled ordinary differential equations (ODEs) in time \(t\), for the Fourier coefficients, is obtained. The linear part of this system is represented by a diagonal matrix in the Fourier basis, which might have eigenvalues of both large and small magnitude. A complication [19] arises in using the time discretization ETD methods (see §3) for problems which have eigenvalues equal to or close to zero in the diagonal linear operator. These difficulties are twofold: firstly, when some of the eigenvalues are equal to zero, the explicit formulas (3.12) for the coefficients \(\mathrm{g}_{m}\) cannot be used directly since they involve division by zero, i.e. \(c=0\). Instead, the limiting form of the coefficients as \(c\to 0\) must be used. Secondly, these methods suffer from rounding errors occurring due to the large amount of cancellation in the ETD coefficients \(\mathrm{g}_{m}\) (3.12) for eigenvalues approaching zero.
To identify the problem, consider evaluating numerically the expression \[f_{1}(z)=\frac{e^{z}-1}{z}, \tag{4.1}\] that appears in the ETD1 scheme (3.14), for \(z\) a scalar. \(f_{1}(z)\) is an analytic function and has a removable singularity at \(z=0\). The limiting value of the expression \(f_{1}(z)\) as \(z\to 0^{\pm}\) is 1. Undesirably, as \(z\) gets close to zero, the expression does not approach 1 when evaluated numerically. The terms in the expression do not cancel precisely, and the small errors of cancellation become substantial as we are dividing the result by a number approaching zero. This problem gets worse in higher order methods. The expressions \[f_{k}(z)=\frac{e^{z}-G_{k}(z)}{z^{k}},\ \ \ \ k=1,2,\ldots,s, \tag{4.2}\] where \[G_{k}(z)=\sum_{j=0}^{k-1}\frac{z^{j}}{j!}, \tag{4.3}\] consists of the first \(k\) terms of the Taylor series for the exponential function \(f_{0}(z)=e^{z}\), are at the core of the ETD and ETD-RK methods (3.13) of order \(s\). In fact, the coefficients of these methods are combinations of the expressions \(f_{k}(z)\) (4.2). As with \(f_{1}(z)\) (4.1), the expressions \(f_{k}(z)\) for \(k>1\) suffer from numerical evaluation errors as \(z\to 0^{\pm}\). In fact, in the limiting form of the expressions as \(z\to 0^{\pm}\), the numerator and denominator are both of \(O(z^{k})\). Hence, in order to implement the ETD and ETD-RK methods accurately, we need an accurate algorithm to evaluate the \(f_{k}\). Figure 4.1 shows a plot of the exponential function \(f_{0}(z)\), the formulas \(f_{1}(z)\) (4.1), and \[f_{2}(z)=\frac{e^{z}-1-z}{z^{2}}, \tag{4.4}\] and \[f_{3}(z)=\frac{(e^{z}-1-z-z^{2}/2)}{z^{3}}, \tag{4.5}\] for values of \(z\) over the range \([-2,2]\). Analytically, as \(z\rightarrow\infty\), \(f_{1}(z)\approx e^{z}/z\), \(f_{2}(z)\approx e^{z}/z^{2}\) and \(f_{3}(z)\approx e^{z}/z^{3}\) and generally, as \(z\rightarrow\infty\) \[f_{k}(z)\approx\frac{e^{z}}{z^{k}},\ \ \ \ k=1,2,3,\ldots,s.
\tag{4.6}\] Also, for values of \(z\to 0^{\pm}\), \(f_{1}(z)\approx 1\), \(f_{2}(z)\approx 1/2\) and \(f_{3}(z)\approx 1/6\) and in general, as \(z\to 0^{\pm}\) \[f_{k}(z)\approx\frac{1}{k!},\ \ \ \ k=1,2,3,\ldots,s. \tag{4.7}\] As \(z\rightarrow-\infty\), \(f_{1}(z)\approx-1/z\), \(f_{2}(z)\approx-1/z\) and \(f_{3}(z)\approx-1/(2z)\) and in general, as \(z\rightarrow-\infty\) \[f_{k}(z)\approx\frac{-1}{(k-1)!z},\ \ \ \ k=1,2,3,\ldots,s. \tag{4.8}\] Numerically, however, as \(z\) gets close to zero, these formulas \(f_{k}(z)\) (4.2) suffer from serious cancellation errors1, as we are dividing the result by a number approaching zero and raised to a power; see for example the plot of the function \(f_{3}(z)\) (4.5) in figure 4.2 over a range of values of \(z\) of very small magnitude. Footnote 1: These errors depend on the computer precision; in single precision they are worse. In order to make the ETD and ETD-RK methods (3.13) practical in this case (where the linear part of a discretized PDE is represented by a diagonal matrix in the Fourier basis) only the scalar form of \(f_{k}(z)\) (4.2) is required, since the exponential of a diagonal matrix can be obtained by exponentiating every entry on the main diagonal independently. A simple approach here is to use a Taylor series expansion [19] to approximate such expressions for values of \(|z|\) less than some chosen threshold value \(z_{th}\), as follows \[f_{k}(z)=\sum_{j=k}^{\infty}\frac{z^{j-k}}{j!},\ \ |z|\leq z_{th},\ k=1,2,\ldots,s, \tag{4.9}\] and to use the explicit formulas of the ETD coefficients \(\mathrm{g}_{m}\) (3.12) for values of \(|z|\) larger than \(z_{th}\). On the other hand, if we discretize a PDE in space using finite difference formulas [58, 83] (see §2.2) or Chebyshev polynomials [11, 25, 83, 84], for instance, a system of coupled ODEs is obtained.
Thus, the linear operator is represented by a non-diagonal matrix that might have eigenvalues of both large and small magnitude for stiff problems. Applying the ETD methods here requires the computation of a non-diagonal matrix exponential, which in itself is not a straightforward task [60]. Furthermore, having eigenvalues equal to or close to zero in the non-diagonal matrix again leads to inaccuracies in evaluating expressions of the form (4.2). In this case we cannot simply distinguish between eigenvalues of small and large magnitude and switch between the Taylor series expansion and the explicit formulas of the ETD coefficients (3.12), respectively. It is therefore important to have an accurate numerical algorithm for evaluating the function \(f_{k}(z)\) (4.2) in both scalar and non-diagonal matrix cases. One would like a single algorithm that is simultaneously accurate for all values of \(z\) in the scalar case, and that also performs well in the non-diagonal matrix cases.

Figure 4.1: The values of the exponential function \(f_{0}(z)\) and the function \(f_{k}(z)\) (4.2) of orders \(k=1,2,3\) versus the values of \(z\).

In §4.2, we describe some of the algorithms that appear to be practical for approximating the expression \(f_{k}(z)\) (4.2) of orders \(k=1,2,3\), since these expressions are the most frequently used in the ETD methods (3.13), for the scalar \(z\) with values of large and small magnitude. Then in §4.3 we set up some tests on the second-order centered difference differentiation matrix (see §2.2) for the second derivative, to represent the non-diagonal matrix case. We also conduct similar tests on the Chebyshev differentiation matrix for the second derivative and the second-order centered difference differentiation matrix for the first derivative, in §4.4 and §4.5 respectively.
The aim is to show that the algorithms also work well for these non-diagonal matrices, and that their efficiency is not restricted to matrices of special structure. The algorithms considered are Taylor series, an algorithm based on the Cauchy Integral Formula [44, 45], different forms of the Scaling and Squaring algorithms [8, 9, 35, 37, 47, 53, 60, 76], the Composite Matrix algorithm [2, 54, 67], and the Matrix Decomposition algorithm [60] for non-diagonal matrix cases.

Figure 4.2: The values of the function \(f_{3}(z)\) (4.5) versus a range of values of \(z\) of very small magnitude, evaluated numerically with 16-digit precision.

We assess the effectiveness of these algorithms by considering their stability, accuracy, efficiency, ease of use and simplicity. The accuracy of an algorithm refers primarily to the error introduced by the algorithm. Efficiency is measured by the amount of computer time required to approximate such expressions, and this is the primary focus of §4.6. We also outline the issues that limit the algorithms when they fail to produce sufficiently accurate results. The overall conclusions of the comparison tests are given in §4.7.

### 4.2 The Scalar Case

To demonstrate the effectiveness of the algorithms, we test them against each other for the scalar \(z\) with values of large and small magnitude to approximate the expression \(f_{k}(z)\) (4.2) of orders \(k=1,2,3\). We compute the relative error of each algorithm, given by \[relative\ error=\frac{|exact\ value-approximate\ value|}{|exact\ value|}, \tag{4.10}\] where the exact values of the expressions were approximated using 50-digit arithmetic in Matlab.
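As an aside, such error measurements are easy to reproduce. The sketch below (a Python illustration rather than the Matlab used in the text, with the standard decimal module supplying the 50-digit reference; the test point \(z=10^{-6}\) and the 30-term series are illustrative choices) computes the relative error (4.10) for the explicit formula (4.5) and for a series evaluation of \(f_{3}\):

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 50  # 50-digit reference arithmetic, analogous to the text

def f3_reference(z):
    # f3(z) = (e^z - 1 - z - z^2/2) / z^3 evaluated in 50-digit arithmetic
    d = Decimal(z)
    return (d.exp() - 1 - d - d * d / 2) / d**3

def f3_explicit(z):
    # the explicit formula (4.5) in ordinary double precision
    return (math.exp(z) - 1.0 - z - z**2 / 2.0) / z**3

def f3_taylor(z, terms=30):
    # series form f3(z) = sum_{j>=3} z^(j-3)/j!, safe for small |z|
    return sum(z**(j - 3) / math.factorial(j) for j in range(3, 3 + terms))

def relative_error(approx, exact):
    # relative error as defined in (4.10)
    return abs(Decimal(approx) - exact) / abs(exact)

z = 1e-6
exact = f3_reference(z)
print(relative_error(f3_explicit(z), exact))  # large: cancellation near zero
print(relative_error(f3_taylor(z), exact))    # near machine precision
```

Near zero the explicit formula loses most of its significant digits, while the series form retains essentially full double precision, anticipating the comparisons in figures 4.3 and 4.4.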
Figures 4.3 and 4.4 show the relative errors of each algorithm to approximate the expressions \(f_{1}(z)\) (4.1) and \(f_{3}(z)\) (4.5) versus the values of \(z\) (we find that for \(f_{2}(z)\) (4.4), the algorithms behave in a qualitatively similar way to \(f_{3}(z)\)). The figures also show the errors for the use of the explicit formulas; this means simply evaluating the formulas \(f_{1}(z)\) and \(f_{3}(z)\) with standard double precision (16 digits) arithmetic.

#### Taylor Series

The formula \[e^{z}\approx 1+z+\frac{z^{2}}{2!}+\frac{z^{3}}{3!}+\cdots+\frac{z^{m}}{m!},\] for some integer \(m\) may be used to approximate the exponential in the expression \(f_{k}(z)\) (4.2) of orders \(k=1,2,3\), so that \(f_{k}(z)\) becomes \[f_{k}(z)\approx\sum_{j=k}^{m-1}\frac{z^{j-k}}{j!}.\]

Attention to where to truncate the series is important if efficiency is being considered. We can simply sum the series until adding another term does not alter the accuracy of the algorithm. In doing the test described in §4.2, we find that for \(|z|\ll 1\), the explicit formulas for \(f_{1}(z)\) (4.1) and \(f_{3}(z)\) (4.5) are imperfect due to the cancellation errors, but the Taylor expansion with 30 terms is remarkably good, see figures 4.3 and 4.4. For \(|z|\gg 1\), the explicit formulas \(f_{1}(z)\) and \(f_{3}(z)\) give acceptable results but the Taylor expansion is imprecise. For \(z\ll-1\), the errors in using the Taylor series are due to "catastrophic cancellation". This term refers to the extreme loss of accuracy when small numbers are computed additively from large numbers. The errors in using the Taylor expansion can actually be larger than the correct exponential, and the answer will not be correct no matter how many terms in the series are summed (it should be emphasized here that the difficulty is not the truncation of the series, but the truncation of the arithmetic).
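The "truncation of the arithmetic" is easy to observe. In the following sketch (an illustration; the value \(z=-30\) and the 200-term cut-off are arbitrary choices) the partial sums of the Taylor series for \(e^{z}\) pass through intermediate terms of magnitude around \(10^{11}\), whose rounding errors dwarf the true value \(e^{-30}\approx 9.4\times 10^{-14}\):

```python
import math

def taylor_exp(z, terms=200):
    # sum the Taylor series of e^z term by term
    s, term = 0.0, 1.0
    for j in range(1, terms + 1):
        s += term
        term *= z / j          # next term z^j / j!
    return s

# for z = -30 the alternating terms grow to roughly 8e11 before decaying, so
# the accumulated rounding error greatly exceeds the true value e^-30
print(taylor_exp(-30.0), math.exp(-30.0))
```

Summing more terms does not help: the damage is done by the rounding of the large intermediate partial sums, not by truncating the series.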
In the limit \(z\to\infty\), see figure 4.3, the numerical relative errors for the Taylor approximation approach 1, since the exponential values in the explicit formulas \(f_{1}(z)\) and \(f_{3}(z)\) dominate over the Taylor expansion. The primary advantage of this algorithm is its simplicity and ease of implementation. Note that figures 4.3 and 4.4 show that, in the scalar case, it is possible to use the Taylor series for \(|z|<1\) and the explicit formulas for \(f_{1}(z)\) (4.1) and \(f_{3}(z)\) (4.5) for \(|z|\geq 1\), without significant loss of accuracy.

#### The Cauchy Integral Formula

To overcome the numerical difficulties in the ETD and ETD-RK methods (3.13) of order \(s\), a different tactic for evaluating the function \(f_{k}(z)\) (4.2) of orders \(k=1,2,\ldots,s\) was proposed by **Kassam** and **Trefethen** in [44, 45]. The key idea is to approximate the functions (for matrices or scalars) by means of contour integrals in the complex plane. The well-known **Cauchy Integral Formula** [55] \[f(z)=\frac{1}{2\pi i}\int_{\Gamma}\frac{f(T)}{T-z}dT, \tag{4.11}\] evaluates the analytic function \(f\) via an integral along a closed contour \(\Gamma\) that encloses \(z\). The Cauchy integral formula (4.11) says that the values of \(f\) on \(\Gamma\) completely determine the values of \(f\) inside \(\Gamma\).
The simplest choice of the contour \(\Gamma\) is a circle with radius \(R\) centered at some point \(z_{0}\), \[\Gamma=\{T(\theta)=z_{0}+Re^{i\theta}:\ 0\leq\theta\leq 2\pi\}.\] Then by the definition of the contour integral for any function \(H\), \[\int_{\Gamma}H(T)\,dT=\int_{0}^{2\pi}H(T(\theta))\,\frac{dT(\theta)}{d\theta}\,d\theta, \tag{4.12}\] the Cauchy integral (4.11) of a function of a scalar \(z\) along the circular contour \(\Gamma\) becomes \[f(z)=\frac{1}{2\pi i}\int_{0}^{2\pi}\frac{f(z_{0}+Re^{i\theta})}{T(\theta)-z}Rie^{i\theta}d\theta=\frac{1}{2\pi}\int_{0}^{2\pi}\frac{f(z_{0}+Re^{i\theta})}{T(\theta)-z}(T(\theta)-z_{0})d\theta, \tag{4.13}\] which is a periodic integral of our function evaluated at points on the circumference of the circular contour. If we employ the periodic **Trapezium Rule** defined by \[\int_{0}^{2\pi}P(\theta)d\theta\approx\frac{2\pi}{N}\sum_{j=1}^{N}P(\theta_{j}),\ \theta_{j}=\frac{2\pi j}{N}, \tag{4.14}\] to approximate the integral on the right-hand side of (4.13), we obtain the formula proposed by **Kassam** and **Trefethen** [44, 45] for a circular contour, \[f(z)\approx\frac{1}{N}\sum_{j=1}^{N}\frac{f(T(\theta_{j}))}{T(\theta_{j})-z}(T(\theta_{j})-z_{0}). \tag{4.15}\] Referring to [44, 45, 84], the authors stated that the periodic Trapezium Rule is simply the Fourier spectral method for integrating a periodic function. The convergence of spectral methods in general, and Fourier methods in particular, depends on the smoothness of the function that is being interpolated. For analytic functions, the Fourier coefficients decay exponentially [79] and we have correspondingly exponential convergence of spectral methods, including the periodic Trapezium Rule. This algorithm has turned out to be very powerful, as figures 4.3 and 4.4 show.
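As a concrete illustration, the contour-averaged evaluation can be coded in a few lines (a Python sketch, not the authors' Matlab; the choices of a radius-1 circle centred at \(z\) and \(N=32\) sample points follow Kassam and Trefethen's recipe as described in the text):

```python
import cmath, math

def f1_contour(z, N=32, R=1.0):
    # trapezium-rule average, cf. (4.15)/(4.16): the mean of f1 over N equally
    # spaced points on a circle of radius R centred at z, so that the explicit
    # formula is never evaluated near its removable singularity at the origin
    total = 0.0
    for m in range(1, N + 1):
        t = z + R * cmath.exp(2.0 * math.pi * 1j * m / N)  # contour point
        total += (cmath.exp(t) - 1.0) / t                  # explicit f1(t)
    return (total / N).real  # imaginary parts cancel for real z

print(f1_contour(0.0))   # approximately 1, with no cancellation trouble
```

If a contour point lands on or near the origin (for instance real \(z=-R\)), the explicit formula inside the average is again evaluated near its singularity, so the contour must be chosen to avoid the origin.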
Testing the algorithm as described in §4.2 shows that the algorithm performs very well when approximating the expression \(f_{k}(z)\) (4.2) of orders \(k=1,3\) for the scalar \(z\) with values of large and small magnitude (qualitatively similar results are found for \(k=2\)). For each value of \(z\), the chosen contour is a circle of radius \(R=1\), centered at \(z_{0}=z\), and sampled at \(N=32\) equally spaced points \(\{\theta_{j}\}\), and \(f_{k}(z)\) is approximated by (4.15) as follows \[f_{k}(z)\approx\frac{1}{N}\sum_{j=1}^{N}f_{k}(T(\theta_{j})), \tag{4.16}\] which is an average of the function values at the \(N\) points \(T(\theta_{j})=z+Re^{i\theta_{j}}\) around the discretized circumference of the circle (it is important to ensure that none of the points on the contour are close to or at the origin, otherwise the original problem of rounding errors reappears). In the case where the linear part of a discretized PDE is represented by a diagonal matrix in the Fourier basis which may have eigenvalues that are zero or of small magnitude, we can compute the expression \(f_{k}(z)\) (4.2) of orders \(k=1,2,\ldots,s\) for each element on the diagonal independently by again using the above formula (4.16) [44, 45], for circles centered at each element on the matrix diagonal.

#### Scaling and Squaring Algorithm: Type I

One of the most widely used of the algorithms that have been proposed for approximating expressions such as \(f_{k}(z)\) (4.2), \(k=1,2,\ldots,s\) that appear in the ETD and ETD-RK methods (3.13) of order \(s\), is the Scaling and Squaring algorithm [8, 9, 35, 37, 47, 53, 60, 76]. This section considers the algorithm in the form in which we scale up from small values of \(|z|\); the alternative approach of scaling down from large values of \(|z|\) is discussed in §4.2.4 (for ease of presentation we outline the theory for the scalar case, but the algorithm is equally applicable to matrices, to be described in §4.3.4).
Consider first the accurate evaluation of the exponential function, \(f_{0}(z)=e^{z}\). It is possible to use the Taylor series or Padé approximation (4.94) [35, 37, 47, 76] for \(|z|\leq 1\), but to avoid loss of accuracy for \(|z|>1\), we use the Scaling and Squaring algorithm [60], based on the identity \[f_{0}(2z)=(f_{0}(z))^{2}=(e^{z})^{2}. \tag{4.17}\] First we compute \(f_{0}(2^{-l}z)\) for some \(l\) chosen to be the smallest integer such that \[l\geq\frac{\log(|z|/\delta)}{\log 2}, \tag{4.18}\] so that for some threshold value \(\delta\) we have \(|2^{-l}z|\leq\delta\). This computation is efficiently and accurately performed using the Taylor expansion or Padé approximation. Using (4.17), the resulting value is then squared \(l\) times to obtain the final answer \[f_{0}(z)=[f_{0}(2^{-l}z)]^{2^{l}}. \tag{4.19}\] A similar approach can be used for computing the expression \(f_{k}(z)\) (4.2) of orders \(k=1,2,3\). The algorithm uses either the identities (taken from [9]) \[f_{1}(2z) = \frac{1}{2}\left(f_{0}(z)f_{1}(z)+f_{1}(z)\right), \tag{4.20}\] \[f_{2}(2z) = \frac{1}{4}\left(f_{1}(z)f_{1}(z)+2f_{2}(z)\right), \tag{4.21}\] \[f_{3}(2z) = \frac{1}{8}\left(f_{1}(z)f_{2}(z)+f_{2}(z)+2f_{3}(z)\right), \tag{4.22}\] or the identities (taken from [54]) \[f_{1}(2z) = \frac{1}{2}\left(f_{0}(z)f_{1}(z)+f_{1}(z)\right), \tag{4.23}\] \[f_{2}(2z) = \frac{1}{4}\left(f_{0}(z)f_{2}(z)+f_{1}(z)+f_{2}(z)\right), \tag{4.24}\] \[f_{3}(2z) = \frac{1}{8}\left(f_{0}(z)f_{3}(z)+\frac{1}{2}f_{1}(z)+f_{2}(z)+f_{3}(z)\right). \tag{4.25}\] A general form of the squaring relations (4.23) - (4.25), stated with proof in [76], is \[f_{k}(2z)=\frac{1}{2^{k}}\biggl{[}f_{0}(z)f_{k}(z)+\sum_{j=1}^{k}\frac{1}{(k-j)!}f_{j}(z)\biggr{]},\ k=1,2,\ldots,s.
\tag{4.26}\] The authors of [76] pointed out that the choice of the squaring laws is very important, as generally this is the main source of errors committed in the algorithm (as will be explained later in this section), and concluded from their experiments that their choice (4.26) results in the minimum accumulation of errors. In addition, the algorithm can also be based on the identity (taken from [37, 53]) \[f_{1}(2z)=\Bigl{(}\frac{1}{2}zf_{1}(z)+1\Bigr{)}f_{1}(z), \tag{4.27}\] and either the identities (4.21) - (4.22) or (4.24) - (4.25). Note that we refer to the algorithm based on the identities (4.20) - (4.22) or (4.23) - (4.25) or (4.27) as the Scaling and Squaring algorithm. Before we illustrate the algorithm, we verify the identities (4.20) - (4.22) and (4.27). For (4.20) \[f_{1}(2z)=\frac{(e^{z}-1)(e^{z}+1)}{2z}=\frac{1}{2}f_{1}(z)\left(f_{0}(z)+1\right).\] For (4.21) \[f_{2}(2z) = \frac{(e^{z}-1)(e^{z}+1)}{4z^{2}}-\frac{1}{2z},\] \[= \frac{1}{4}f_{1}(z)\left(\frac{e^{z}+1}{z}\right)-\frac{1}{2z},\] \[= \frac{1}{4}f_{1}(z)f_{1}(z)+\frac{1}{2z}\left(\frac{e^{z}-1-z}{z}\right),\] \[= \frac{1}{4}\left(f_{1}(z)f_{1}(z)+2f_{2}(z)\right).\] And for (4.22) \[f_{3}(2z) = \frac{(e^{z}-1)(e^{z}+1)}{8z^{3}}-\frac{1+z}{4z^{2}},\] \[= \frac{1}{8}f_{1}(z)f_{2}(z)+\frac{(e^{z}-1)(2+z)}{8z^{3}}-\frac{1+z}{4z^{2}},\] \[= \frac{1}{8}f_{1}(z)f_{2}(z)+\frac{e^{z}-1-z-z^{2}/2-z^{2}/2}{4z^{3}}+\frac{e^{z}-1}{8z^{2}},\] \[= \frac{1}{8}f_{1}(z)f_{2}(z)+\frac{1}{4}f_{3}(z)+\frac{e^{z}-1-z}{8z^{2}},\] \[= \frac{1}{8}\left(f_{1}(z)f_{2}(z)+f_{2}(z)+2f_{3}(z)\right).\] The identity (4.27) can be derived from the relation \[f_{1}(2z)=\frac{e^{z}+1}{2}f_{1}(z),\] and the relation \[f_{1}(z)=\frac{e^{z}-1}{z}\Rightarrow e^{z}=zf_{1}(z)+1,\] so that \[f_{1}(2z)=\frac{zf_{1}(z)+2}{2}f_{1}(z)=\Big{(}\frac{zf_{1}(z)}{2}+1\Big{)}f_{1}(z).\] In implementing the Scaling and Squaring algorithm, we use a 30-term Taylor series to compute the expression \(f_{k}(z)\) (4.2), \(k=1,2,3\) (as explained in §4.2.1) for values \(|z|\leq\delta\), for some
threshold value \(\delta\). But for values \(|z|>\delta\), the algorithm starts by the computation of \(f_{1}(2^{-l}z)\), \(f_{2}(2^{-l}z)\) and \(f_{3}(2^{-l}z)\) for some \(l\), again selected by the formula (4.18) so that the value of \(|2^{-l}z|\leq\delta\). For this evaluation we use a 30-term Taylor series2. We then proceed by applying the identities (4.20) - (4.22) or (4.23) - (4.25) (or (4.27) and either (4.21) - (4.22) or (4.24) - (4.25)) \(l\) times to compute the expression \(f_{k}(z)\) (4.2) of orders \(k=1,2,3\), for the required values of \(z\). Footnote 2: Reasons for favoring the Taylor series over the Padé approximation are explained in §4.3.5. To demonstrate the algorithm's validity, we compute the relative error (4.10) of using this algorithm based on the identities (4.20) - (4.22) to approximate the expression \(f_{k}(z)\) (4.2), \(k=1,3\) (qualitatively similar results hold for \(k=2\)) for values of \(z\) with small and large magnitude and with the choice of the threshold value \(\delta=1\). As displayed in figures 4.3 and 4.4, this algorithm is one of the most effective and powerful algorithms. It is stable for small positive values and for all negative values of \(z\)3. However, this algorithm is one of the most complex to implement, and its accuracy decreases as the value of \(z>1\) increases. This is due to the amplification of the truncation errors and the rounding errors (resulting from using the Taylor series) by the scaling and squaring process (these errors will be analyzed shortly in this section).

Figure 4.3: Relative errors in \(f_{1}(z)\) (4.1) and \(f_{3}(z)\) (4.5) versus the values \(z>0\) in the scalar case. The algorithms are: Explicit Formula (red circles), 30-term Taylor series (blue diamonds), the Cauchy Integral Formula (magenta stars), Scaling and Squaring Type **I** based on the identities (4.20) - (4.22) (black stars), Scaling and Squaring Type **II** based on the identities (4.48) - (4.50) (green circles) and Composite Matrix (cyan squares).

Figure 4.4: Relative errors in \(f_{1}(z)\) (4.1) and \(f_{3}(z)\) (4.5) versus the values \(z<0\) in the scalar case. The algorithms are: Explicit Formula (red circles), 30-term Taylor series (blue diamonds), the Cauchy Integral Formula (magenta stars), Scaling and Squaring Type **I** based on the identities (4.20) - (4.22) (black stars), Scaling and Squaring Type **II** based on the identities (4.48) - (4.50) (green circles) and Composite Matrix (cyan squares).

It has been noted [35] that for a better performance of the algorithm, we should increase the threshold value \(\delta\) as well as increasing the number of terms used in the Taylor series, so that the algorithm has fewer squarings to undo the effect of the scaling in approximating \(f_{0}(z)=e^{z}\) using (4.19). These computed squares can be contaminated by rounding errors that are doubled at each scaling. To examine the effects of the squaring phase, using the relation (4.19), on the rounding errors, assume that the function \(f_{0}(2^{-l}z)\) (for \(l\) selected by (4.18) so that \(|2^{-l}z|\leq\delta\)) is contaminated by some error \(\epsilon\) in its computation and that the relative error is \(\epsilon/|f_{0}(2^{-l}z)|\). Then, squaring \(f_{0}(2^{-l}z)\) using the identity (4.19) \(l\) times to approximate \(f_{0}(z)\) at \(|z|\gg 1\) has rounding errors with \[relative\ error\approx\frac{|(f_{0}(2^{-l}z)+\epsilon)^{2^{l}}-f_{0}(2^{-l}z)^{2^{l}}|}{|f_{0}(2^{-l}z)^{2^{l}}|}.
\tag{4.28}\] Applying the binomial series \[(x+y)^{n}=\sum_{j=0}^{n}{n\choose j}x^{n-j}y^{j},\ \ \ \ {n\choose j}=\frac{n!}{j!(n-j)!}, \tag{4.29}\] to the relative error (4.28) gives \[relative\ error \approx \frac{|2^{l}f_{0}(2^{-l}z)^{2^{l}-1}\epsilon+O(\epsilon^{2})|}{|f_{0}(2^{-l}z)^{2^{l}}|}, \tag{4.30}\] \[\approx \frac{2^{l}\epsilon}{|f_{0}(2^{-l}z)|}\approx\frac{|z|\epsilon/\delta}{|f_{0}(2^{-l}z)|}\propto|z|,\] which shows that the errors are doubled at each scaling and we expect the relative error to increase linearly, by a factor of \(2^{l}=|z|/\delta\) (see formula (4.18)), as \(|z|\) increases. Figure 4.5 confirms the above analysis and illustrates the linear increase of the computed relative errors (4.30) of using (4.19) to approximate \(f_{0}(z)=e^{z}\) for \(|z|\gg 1\) with threshold value \(\delta=1\). Hence, it seems desirable to minimize the number of squarings in the algorithm. For more analysis of this algorithm, the reader is referred to a paper by **Higham** [35], who gave a backward error analysis (in exact arithmetic) of the algorithm combined with Padé approximation (for computing the matrix exponential) that employs sharp bounds for the truncation errors in the approximation. He showed that the loss of accuracy in the computed results is related to the number of squaring steps used and that larger values of the threshold may be optimal for the algorithm's efficiency.

Figure 4.5: Relative errors of using the Scaling and Squaring Type **I** algorithm based on the identity (4.19), versus the values (a) \(z<0\) and (b) \(z>0\), for approximating the function \(f_{0}(z)\) in the scalar case.

By looking at the relations (4.20) - (4.22) and (4.23) - (4.25), we find that they involve \(f_{0}(z)=e^{z}\), which for its approximation depends on the scaling and squaring process using the identity (4.19).
Therefore, the errors (4.30) in the squaring process will directly affect the use of these relations for approximating \(f_{k}(z)\) (4.2), \(k=1,2,3\) at large positive values of \(z\). So if \(z\gg 1\), then \(e^{z}\gg 1\) and therefore according to (4.6) the identities (4.20) - (4.22) become \[f_{1}(2z) \approx \frac{1}{2}f_{0}(z)f_{1}(z), \tag{4.31}\] \[f_{2}(2z) \approx \frac{1}{4}f_{1}(z)f_{1}(z), \tag{4.32}\] \[f_{3}(2z) \approx \frac{1}{8}f_{1}(z)f_{2}(z), \tag{4.33}\] respectively, and (4.24) - (4.25) become \[f_{2}(2z) \approx \frac{1}{4}f_{0}(z)f_{2}(z), \tag{4.34}\] \[f_{3}(2z) \approx \frac{1}{8}f_{0}(z)f_{3}(z), \tag{4.35}\] respectively. This shows that applying the above identities to compute \(f_{k}(z)\) (4.2), \(k=1,2,3\) for the required large positive value of \(z\) will be affected by the error (4.30). But if \(z\ll-1\) then \(e^{z}\ll 1\) and therefore according to (4.8) the identities (4.20) - (4.22) become \[f_{1}(2z) \approx \frac{1}{2}f_{1}(z), \tag{4.36}\] \[f_{2}(2z) \approx \frac{1}{2}f_{2}(z), \tag{4.37}\] \[f_{3}(2z) \approx \frac{1}{8}f_{2}(z)+\frac{1}{4}f_{3}(z), \tag{4.38}\] respectively, and (4.24) - (4.25) become \[f_{2}(2z) \approx \frac{1}{4}(f_{1}(z)+f_{2}(z)), \tag{4.39}\] \[f_{3}(2z) \approx \frac{1}{8}(\frac{1}{2}f_{1}(z)+f_{2}(z)+f_{3}(z)), \tag{4.40}\] respectively. Hence, since the identities (4.36) - (4.40) do not involve \(f_{0}(z)=e^{z}\), the error (4.30) in applying the identity (4.19) has no effect on them when they are applied for all required values \(z<0\), and thus the algorithm becomes more stable and accurate. The above analysis explains the behavior of the algorithm based on the identities (4.20) - (4.22), displayed in figures 4.3 and 4.4, where the errors grow for \(z\gg 1\) but not for \(z\ll-1\).
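The linear error growth predicted by (4.30) can be reproduced with a short experiment (a Python sketch; the threshold \(\delta=1\), the 30-term series and the sample points 8, 64, 512 are illustrative choices, and the exact error values depend on the machine):

```python
import math

def exp_by_squaring(z, delta=1.0, terms=30):
    # Type I evaluation of f0(z) = e^z: Taylor series at 2^-l z, then l squarings (4.19)
    l = 0 if abs(z) <= delta else math.ceil(math.log2(abs(z) / delta))
    w = z / 2**l
    e = sum(w**j / math.factorial(j) for j in range(terms))
    for _ in range(l):
        e *= e                 # each squaring roughly doubles the relative error
    return e

for z in (8.0, 64.0, 512.0):
    rel = abs(exp_by_squaring(z) - math.exp(z)) / math.exp(z)
    print(z, rel)              # the error grows roughly in proportion to z
```

Even at \(z=512\) the relative error remains near machine precision scaled by \(2^{l}\), which is why the accuracy loss in figures 4.3 and 4.4 is gradual rather than catastrophic.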
To analyze the rounding errors resulting from applying the identity (4.27) \[f_{1}(2z)=\Big{(}\frac{1}{2}zf_{1}(z)+1\Big{)}f_{1}(z),\] to compute \(f_{1}(z)\) (4.1), assume that the exact value of \(f_{1}(z)\) is contaminated by some error \(\epsilon_{1}\) in its computation so that \[relative\ error=\frac{\epsilon_{1}}{|f_{1}(z)|}. \tag{4.41}\] Applying the identity (4.27) then has rounding errors with \[relative\ error \approx \frac{|f_{1_{approx}}(2z)-f_{1_{exact}}(2z)|}{|f_{1_{exact}}(2z)|}, \tag{4.42}\] \[\approx \frac{|f_{1}(z)z\epsilon_{1}/2+\epsilon_{1}zf_{1}(z)/2+\epsilon_{1}|}{|f_{1}(z)zf_{1}(z)/2+f_{1}(z)|}.\] As \(z\rightarrow\infty\), \(e^{z}\gg 1\), \(f_{1}(z)\approx e^{z}/z\), the relative error (4.41) becomes \[relative\ error\approx\frac{\epsilon_{1}z}{e^{z}}, \tag{4.43}\] and (4.42) becomes \[relative\ error \approx \frac{|e^{z}\epsilon_{1}/2+\epsilon_{1}e^{z}/2+\epsilon_{1}|}{|e^{2z}/2z+e^{z}/z|}, \tag{4.44}\] \[\approx \frac{2\epsilon_{1}z}{e^{z}},\] which shows that the errors (4.43) are doubled at each scaling and that the algorithm becomes less accurate as the value of positive \(z\) increases. In this case, the identities (4.32) and (4.33) will also be affected by the error (4.44) when they are applied to compute \(f_{2}(z)\) (4.4) and \(f_{3}(z)\) (4.5). On the other hand, for values of \(z\rightarrow-\infty\), \(e^{z}\ll 1\) and \(f_{1}(z)\approx-1/z\) and therefore the order \(\epsilon_{1}\) terms in (4.42) simplify to \[relative\ error\approx 0,\] which shows the algorithm's validity when applying the identity (4.27) (and when subsequently applying either the identities (4.21) - (4.22) or (4.24) - (4.25) for approximating \(f_{k}(z)\) (4.2), \(k=1,2,3\) for all values \(z<0\)). In further experiments on the Scaling and Squaring algorithm, we investigate what the best choice of the threshold value \(\delta\) is, among certain chosen values \(0.5,1,2,3\).
In figure 4.6, and for each chosen value of the threshold \(\delta\), we plot the computed relative errors (4.10) of using the algorithm, based on the identities (4.20) - (4.22), to approximate \(f_{3}(z)\) (4.5) (qualitatively similar results hold for the expressions \(f_{1}(z)\) (4.1) and \(f_{2}(z)\) (4.4)) for positive values of \(z\)4. The figure reveals that the choices of threshold values \(\delta=0.5,1,2\) are all good, giving better results than the value \(\delta=3\). At larger values \(\delta\geq 3\), we experimentally find that increasing the value of the threshold requires an increase in the number of terms used in the Taylor series combined with the algorithm for better accuracy. Footnote 4: Qualitatively similar results hold when the algorithm is based on the identities (4.23) - (4.25) or (4.27) and either (4.21) - (4.22) or (4.24) - (4.25) and for negative values of \(z\). In addition, in figure 4.7, we note that the computed relative errors (4.10) of using the Scaling and Squaring algorithm to approximate \(f_{3}(z)\) (qualitatively similar results hold for the expressions \(f_{1}(z)\) and \(f_{2}(z)\)) for positive values of \(z\) are, firstly, similar regardless of which relations (the identities (4.20) - (4.22) or (4.23) - (4.25) or (4.27) and either (4.21) - (4.22) or (4.24) - (4.25)) are used and whatever the chosen value of the threshold \(\delta\) is (in the figure \(\delta=1\)). Secondly, these errors increase linearly for \(z\gg 1\), which agrees with the above analysis. On the other hand, in doing the same experiment for negative values of \(z\) (figures are not shown), we find that the errors are smaller (of \(O(10^{-15})\)) and that they do not grow linearly as \(z\to-\infty\), in agreement with the above analysis.
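Gathering the pieces, the whole Type I procedure for \(f_{1}\), \(f_{2}\), \(f_{3}\) can be sketched as follows (a hedged Python illustration using the identities (4.20) - (4.22), a threshold \(\delta=1\) and a 30-term series; not the thesis implementation):

```python
import math

def fk_taylor(z, k, terms=30):
    # 30-term series f_k(z) = sum_{j>=k} z^(j-k)/j!, accurate for |z| <= 1
    return sum(z**(j - k) / math.factorial(j) for j in range(k, k + terms))

def f123_type1(z, delta=1.0):
    # Scaling and Squaring Type I for f1, f2, f3 via the identities (4.20)-(4.22)
    if abs(z) <= delta:
        return fk_taylor(z, 1), fk_taylor(z, 2), fk_taylor(z, 3)
    l = math.ceil(math.log2(abs(z) / delta))  # smallest l with |2^-l z| <= delta
    w = z / 2**l
    f0 = math.exp(w)
    f1, f2, f3 = fk_taylor(w, 1), fk_taylor(w, 2), fk_taylor(w, 3)
    for _ in range(l):
        # the right-hand sides all use the values at the current level
        f1, f2, f3 = (0.5 * (f0 * f1 + f1),
                      0.25 * (f1 * f1 + 2.0 * f2),
                      0.125 * (f1 * f2 + f2 + 2.0 * f3))
        f0 = f0 * f0                          # squaring step, identity (4.17)
    return f1, f2, f3
```

For negative arguments the doubling identities effectively reduce to (4.36) - (4.38) and the scheme stays accurate, in line with the analysis above; for large positive arguments its error grows with the number of squarings.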
#### Scaling and Squaring Algorithm: Type II

Recall that the numerical evaluation of the explicit formula \(f_{k}(z)\) (4.2), \(k=1,3\) is accurate for scalar values \(|z|>1\) but not for \(|z|<1\) (the same holds qualitatively for \(k=2\)), see figures 4.3 and 4.4. This suggests a second type of Scaling and Squaring algorithm, based on scaling down from \(|z|>1\). Consider again the evaluation of the exponential function. For values of \(|z|\geq\gamma\), for some threshold value \(\gamma\), we use the function \(f_{0}(z)=e^{z}\), but for values \(|z|<\gamma\), we use the Scaling and Squaring algorithm based on the identity (4.17), which we now write in the form \[f_{0}(z)=(f_{0}(2z))^{1/2}=(e^{2z})^{1/2}. \tag{4.45}\] First we compute \(f_{0}(2^{l_{1}}z)\) using the exponential function for some \(l_{1}\) chosen to be the smallest integer such that \[l_{1}\geq\frac{\log(\gamma/|z|)}{\log 2}, \tag{4.46}\] so that the value of \(|2^{l_{1}}z|\geq\gamma\). Using (4.45) the resulting value is then square-rooted \(l_{1}\) times to obtain the final answer \[f_{0}(z)=[f_{0}(2^{l_{1}}z)]^{1/2^{l_{1}}}. \tag{4.47}\] A similar approach can be used for computing the expression \(f_{k}(z)\) (4.2) of orders \(k=1,2,3\). For values of \(z\) with large or moderate magnitude we can simply use the formula \(f_{k}(z)\) (4.2), which gives accurate results, but for values of \(z\) with small magnitude we use either the identities \[f_{1}(z) = 2f_{1}(2z)/(f_{0}(z)+1), \tag{4.48}\] \[f_{2}(z) = 2f_{2}(2z)-\frac{1}{2}f_{1}(z)f_{1}(z), \tag{4.49}\] \[f_{3}(z) = 4f_{3}(2z)-\frac{1}{2}f_{1}(z)f_{2}(z)-\frac{1}{2}f_{2}(z), \tag{4.50}\]

Figure 4.6: Relative errors of using the Scaling and Squaring Type \(\mathbf{I}\) algorithm based on the identities (4.20) - (4.22), versus the values of \(z\), for approximating the expression \(f_{3}(z)\) (4.5), for different values of the threshold \(\delta\) (see formula (4.18)).
or \[f_{1}(z) = 2f_{1}(2z)/(f_{0}(z)+1), \tag{4.51}\] \[f_{2}(z) = (4f_{2}(2z)-f_{1}(z))/(f_{0}(z)+1), \tag{4.52}\] \[f_{3}(z) = (8f_{3}(2z)-\frac{1}{2}f_{1}(z)-f_{2}(z))/(f_{0}(z)+1). \tag{4.53}\] The identities (4.48) - (4.50) and (4.51) - (4.53) are formed by rearranging the identities (4.20) - (4.22) and (4.23) - (4.25) respectively. We start by computing \(f_{1}(2^{l_{1}}z)\), \(f_{2}(2^{l_{1}}z)\) and \(f_{3}(2^{l_{1}}z)\) using the formula \(f_{k}(z)\) (4.2) for \(k=1,2,3\) respectively, which will be accurate, for some \(l_{1}\) selected by the formula (4.46), so that the value of \(|2^{l_{1}}z|\geq\gamma\), which we choose here to be \(\gamma=1\). The identities (4.48) - (4.50) or (4.51) - (4.53) are then applied \(l_{1}\) times to compute the expression \(f_{k}(z)\) (4.2) of orders \(k=1,2,3\) for the required values of \(z\).

Figure 4.7: Relative errors of using the Scaling and Squaring Type **I** algorithm, versus the values of \(z\), for approximating the expression \(f_{3}(z)\) (4.5). The blue line (circles) uses the identities (4.20) - (4.22), the cyan line (stars) uses the identities (4.23) - (4.25), the green line (diamonds) uses the identities (4.27), (4.21) and (4.22) and the black line (squares) uses the identities (4.27), (4.24) and (4.25).

To examine the effects of using (4.47) to compute \(f_{0}(z)=e^{z}\) on rounding errors, assume that the function \(f_{0}(2^{l_{1}}z)\), for \(l_{1}\) selected by the formula (4.46), is contaminated by some error \(\epsilon\) in its computation so that the relative error is \(\epsilon/|f_{0}(2^{l_{1}}z)|\). Taking the square-root of \(f_{0}(2^{l_{1}}z)\) \(l_{1}\) times, it follows that using the identity (4.47) to approximate \(f_{0}(z)\) at \(|z|\ll 1\) has rounding errors with \[relative\ error\approx\frac{|(f_{0}(2^{l_{1}}z)+\epsilon)^{2^{-l_{1}}}-f_{0}(2^{l_{1}}z)^{2^{-l_{1}}}|}{|f_{0}(2^{l_{1}}z)^{2^{-l_{1}}}|}.
\tag{4.54}\] Applying the binomial series (4.29) to the relative error (4.54) gives \[relative\ error \approx \frac{|2^{-l_{1}}f_{0}(2^{l_{1}}z)^{2^{-l_{1}}-1}\epsilon+O(\epsilon^{2})|}{|f_{0}(2^{l_{1}}z)^{2^{-l_{1}}}|}, \tag{4.55}\] \[\approx \frac{2^{-l_{1}}\epsilon}{|f_{0}(2^{l_{1}}z)|}\approx\frac{|z|\epsilon/\gamma}{|f_{0}(2^{l_{1}}z)|}\propto|z|,\] which shows that the errors are halved at each scaling, and we therefore expect the relative error to decrease linearly with \(|z|\), i.e. by a factor of \(2^{l_{1}}\) as \(|z|\) is halved \(l_{1}\) times. We may carry out a similar analysis of the rounding errors resulting from applying the identity (4.48) \[f_{1}(z)=\frac{2f_{1}(2z)}{f_{0}(z)+1},\] to compute the function \(f_{1}(z)\) (4.1) for the required values of \(z\). To do this, we first assume that errors in approximating \(f_{0}(z)\) by applying (4.47) are negligible, due to the result (4.55), and that the exact value of the function \(f_{1}(2z)\) is contaminated by some error \(\epsilon_{1}\), with relative error \(\epsilon_{1}/|f_{1}(2z)|\). Thus, applying the identity (4.48) has rounding errors with \[relative\ error \approx \frac{|f_{1_{approx}}(z)-f_{1_{exact}}(z)|}{|f_{1_{exact}}(z)|}, \tag{4.56}\] \[\approx \Big{|}\frac{2f_{1}(2z)+2\epsilon_{1}-2f_{1}(2z)}{f_{0}(z)+1}\Big{|}\Big{/}\Big{|}\frac{2f_{1}(2z)}{f_{0}(z)+1}\Big{|},\] \[\approx \frac{\epsilon_{1}}{|f_{1}(2z)|}.\] This shows that there is no amplification of the errors at each scaling and that the algorithm's accuracy remains the same. For \(|z|\ll 1\) and according to (4.7), \(f_{1}(2z)\approx 1\) and the relative error (4.56) becomes \[relative\ error\approx\epsilon_{1}. \tag{4.57}\] When applying the above ideas to analyze the rounding errors resulting from applying the identities (4.49) and (4.48) to compute \(f_{2}(z)\) (4.4), we now assume that the errors in applying the identity (4.48) are negligible, because these errors are not amplified according to (4.57).
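To make the procedure concrete, here is a minimal Python sketch of the scale-up/root-down evaluation of \(f_{0}\) and \(f_{1}\) (our own code, not from the text; the function names are ours and the threshold is \(\gamma=1\)):

```python
import math

def exp_type2(z, gamma=1.0):
    """Type II evaluation of f0(z) = e^z: for |z| < gamma, evaluate
    e^(2^l1 z) in the accurate regime |2^l1 z| >= gamma, then take
    l1 successive square roots, following (4.45)-(4.47)."""
    if abs(z) >= gamma:
        return math.exp(z)
    l1 = math.ceil(math.log(gamma / abs(z)) / math.log(2))  # cf. (4.46)
    y = math.exp(2 ** l1 * z)
    for _ in range(l1):
        y = math.sqrt(y)          # cf. (4.47)
    return y

def f1_type2(z, gamma=1.0):
    """Type II evaluation of f1(z) = (e^z - 1)/z: start from the
    explicit formula at 2^l1 z, then apply identity (4.48) l1 times."""
    l1 = 0 if abs(z) >= gamma else math.ceil(math.log(gamma / abs(z)) / math.log(2))
    w = 2 ** l1 * z
    f1 = (math.exp(w) - 1.0) / w  # explicit formula, accurate for |w| >= gamma
    for _ in range(l1):
        w /= 2.0
        f1 = 2.0 * f1 / (math.exp(w) + 1.0)  # identity (4.48), f0 = exp
    return f1
```

Both remain accurate for small \(|z|\): the square-root recursion damps the initial error, while the \(f_{1}\) recursion leaves it essentially unchanged, as the analysis above shows.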
Then if the relative error in approximating \(f_{2}(2z)\) is \[relative\ error=\frac{\epsilon_{2}}{|f_{2}(2z)|}, \tag{4.58}\] for some error \(\epsilon_{2}\), the relative error in applying the identity (4.49) is \[relative\ error\approx\frac{2\epsilon_{2}}{|2f_{2}(2z)-\frac{1}{2}f_{1}(z)f_{1}(z)|}. \tag{4.59}\] As \(z\to 0^{\pm}\) and according to (4.7), \(f_{1}(z)\approx 1\), \(f_{2}(2z)\approx 1/2\), the relative error (4.58) becomes \[relative\ error\approx 2\epsilon_{2}, \tag{4.60}\] and (4.59) becomes \[relative\ error\approx 4\epsilon_{2},\] which shows that the errors (4.60) are doubled at each scaling, and we expect the relative error to increase linearly, by a factor of \(2^{l_{1}}\), as \(|z|\) is halved \(l_{1}\) times, i.e. the \(relative\ error\propto 1/|z|\). As above, we may analyze the rounding errors resulting from applying the identities (4.48) - (4.50), assuming again that the errors in applying (4.48) are negligible, because these errors are not amplified according to (4.57). If the relative error in approximating \(f_{3}(2z)\) is \[relative\ error=\frac{\epsilon_{3}}{|f_{3}(2z)|}, \tag{4.61}\] for some error \(\epsilon_{3}\), then the relative error in applying the identity (4.50) is \[relative\ error\approx\frac{|4\epsilon_{3}-\epsilon_{2}(f_{1}(z)+1)/2|}{|4f_{3}(2z)-\frac{1}{2}f_{1}(z)f_{2}(z)-\frac{1}{2}f_{2}(z)|}. \tag{4.62}\] Since the relative error in approximating \(f_{3}(z)\) is growing faster by a factor of 2 than that of approximating \(f_{2}(z)\) (due to (4.59) and (4.62)), we assume that the relative error of approximating \(f_{2}(z)\) is small compared to that of approximating \(f_{3}(z)\) and therefore can be ignored, and so (4.62) becomes \[relative\ error\approx\frac{4\epsilon_{3}}{|4f_{3}(2z)-\frac{1}{2}f_{1}(z)f_{2}(z)-\frac{1}{2}f_{2}(z)|}.
\tag{4.63}\] As \(z\to 0^{\pm}\) and according to (4.7), \(f_{1}(z)\approx 1\), \(f_{2}(z)\approx 1/2\), \(f_{3}(2z)\approx 1/6\), the relative error (4.61) becomes \[relative\ error\approx 6\epsilon_{3}, \tag{4.64}\] and (4.63) becomes \[relative\ error\approx 24\epsilon_{3},\] which shows that the errors (4.64) are amplified by a factor of 4 at each scaling, and we expect the relative error to increase by a factor of \((2^{l_{1}})^{2}\) as \(|z|\) is halved \(l_{1}\) times, i.e. the \(relative\ error\propto 1/|z|^{2}\). Regarding the test described in §4.2, and according to figures 4.3 and 4.4, the Scaling and Squaring algorithm based on the identity (4.48) performs well overall when evaluating the simplest expression \(f_{1}(z)\) (4.1). But when numerically computing the relative error (4.10) of applying the identities (4.48) - (4.50)5 to compute the function \(f_{3}(z)\) (4.5) for values of \(z\) with small magnitude, we find that the results are inaccurate6. The results of using the algorithm shown in figures 4.3 and 4.4 agree well with the above analysis, and thus, for values of \(z\to 0^{\pm}\), the Scaling and Squaring Type **II** algorithm is neither a stable nor a useful algorithm. Footnote 5: Qualitatively similar results are found when using the identities (4.51) - (4.53). Footnote 6: Qualitatively similar results are found when approximating the function \(f_{2}(z)\) (4.4). #### Composite Matrix Algorithm Although this algorithm is not explicitly given in earlier work, related algorithms appear in [2, 56, 67, 73]. The algorithm starts with the construction of an \((s+1)\times(s+1)\) matrix with the structure \[B1_{s}=\left(\begin{array}{ccccccc}z&1&0&0&0&\ldots&0\\ 0&0&1&0&0&\ldots&0\\ 0&0&0&1&0&\ldots&0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ldots&\vdots\\ 0&0&0&0&0&\ldots&1\\ 0&0&0&0&0&\ldots&0\end{array}\right).
\tag{4.65}\] If we exponentiate the matrix \(B1_{s}\), the resulting matrix is \[e^{B1_{s}}=\left(\begin{array}{ccccccc}e^{z}&f_{1}(z)&f_{2}(z)&f_{3}(z)&f_{4}(z)&\cdots&f_{s}(z)\\ 0&1&1&1/2&1/3!&\cdots&1/(s-1)!\\ 0&0&1&1&1/2&\cdots&1/(s-2)!\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&0&0&\cdots&1\end{array}\right), \tag{4.66}\] which can be verified directly using the Taylor series expansion of the exponential function. We note in particular that, due to the structure of \(B1_{s}\), any power of the matrix \(B1_{s}\) contains as an element the corresponding power of the value \(z\) in the same position where \(B1_{s}\) contains \(z\), and therefore, the exponential of \(z\) will be generated in the same position. To prove the result (4.66), we start by exponentiating the matrix \(B1_{s}\) (4.65) using the Taylor series expansion, which gives \[e^{B1_{s}} = \sum_{n=0}^{\infty}\frac{B1_{s}^{n}}{n!}, \tag{4.67}\] \[= I+B1_{s}+B1_{s}^{2}/2!+B1_{s}^{3}/3!+B1_{s}^{4}/4!+\ldots.\] Note that \(B1_{s}^{0}=I\) and, for \(n<s\), \[B1_{s}^{n}=\left(\begin{array}{cccccccc}z^{n}&z^{n-1}&\cdots&z&1&0&\cdots&0\\ 0&0&\cdots&0&0&1&\cdots&0\\ \vdots&\vdots&&\vdots&\vdots&&\ddots&\vdots\\ 0&0&\cdots&0&0&0&\cdots&1\\ 0&0&\cdots&0&0&0&\cdots&0\\ \vdots&\vdots&&\vdots&\vdots&&&\vdots\\ 0&0&\cdots&0&0&0&\cdots&0\end{array}\right), \tag{4.68}\] whose first row is \((z^{n},z^{n-1},\ldots,z,1,0,\ldots,0)\) with the remaining ones lying on the \(n\)th superdiagonal, while for \(n\geq s\) \[B1_{s}^{n}=\left(\begin{array}{ccccc}z^{n}&z^{n-1}&z^{n-2}&\cdots&z^{n-s}\\ 0&0&0&\cdots&0\\ \vdots&\vdots&\vdots&&\vdots\\ 0&0&0&\cdots&0\end{array}\right).\] Therefore, using (4.68) we can rewrite (4.67) as follows \[e^{B1_{s}}=\left(\begin{array}{cccccccc}e^{z}&f_{1}(z)&f_{2}(z)&f_{3}(z)&\cdots&f_{s-2}(z)&f_{s-1}(z)&f_{s}(z)\\
0&1&1/1!&1/2!&\cdots&1/(s-3)!&1/(s-2)!&1/(s-1)!\\ 0&0&1&1/1!&\cdots&1/(s-4)!&1/(s-3)!&1/(s-2)!\\ \vdots&\vdots&\vdots&\vdots&\cdots&\vdots&\vdots&\vdots\\ 0&0&0&0&\cdots&1&1/1!&1/2!\\ 0&0&0&0&\cdots&0&1&1/1!\\ 0&0&0&0&\cdots&0&0&1\end{array}\right).\] This algorithm evaluates the expression \(f_{k}(z)\) (4.2) of orders \(k=1,2,\ldots,s\), which are contained in the matrix (4.66) and can be extracted easily, assuming that we have a reliable function for computing the matrix exponential (such as the Matlab function _expm_, which uses a scaling and squaring method combined with Pade approximation (4.94) [35, 37, 47, 76]). This algorithm is very attractive, being very simple and easily programmed. The approximations of the expressions \(f_{1}(z)\) (4.1) and \(f_{3}(z)\) (4.5) for small positive values of \(z\), shown in figure 4.3, and for all values \(z<0\), shown in figure 4.4, are accurate to within machine precision (qualitatively similar results are found for the expression \(f_{2}(z)\) (4.4)). As the value of positive \(z\) increases, the performance of the algorithm deteriorates, see figure 4.3. This is due to the increase in the norm of the matrix \(B1_{s}\) (4.65), which leads to an increase in the number of scalings needed to approximate the matrix exponential \(e^{B1_{s}}\) (4.66). This scaling and squaring process amplifies the truncation errors and the rounding errors resulting from the matrix inversion and the repeated matrix multiplications when using the Pade approximation, see §4.3.5. In fact these errors are doubled at each scaling, as shown in (4.30), and we expect the relative error to increase linearly as the value of positive \(z\) increases (see §4.2.3 for the analysis of the rounding errors in using the Scaling and Squaring algorithm to approximate the exponential function).
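A sketch of the Composite Matrix algorithm in Python (our own implementation; we assume `scipy.linalg.expm`, which, like Matlab's _expm_, combines scaling and squaring with Pade approximation, as the reliable matrix exponential):

```python
import numpy as np
from scipy.linalg import expm  # scaling-and-squaring + Pade, like Matlab's expm

def f_coeffs_composite(z, s):
    """Composite Matrix algorithm sketch: build the (s+1)x(s+1) matrix
    B1_s of (4.65), exponentiate it, and read f_1(z), ..., f_s(z)
    off the first row of e^{B1_s}, cf. (4.66)."""
    B = np.diag(np.ones(s), k=1)  # ones on the superdiagonal
    B[0, 0] = z                   # z in the (1,1) position
    return expm(B)[0, 1:]         # first row: [e^z, f_1(z), ..., f_s(z)]

# accurate even where the explicit formulas suffer severe cancellation:
f1, f2, f3 = f_coeffs_composite(1e-8, 3)
```

For \(z=10^{-8}\) the returned values match \(f_{1}\approx 1\), \(f_{2}\approx 1/2\), \(f_{3}\approx 1/6\) to near machine precision, where the explicit formulas would lose most of their digits.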
### 4.3 Non-Diagonal Matrix Case Implementing the ETD methods [19] as a time-discretization method for a system of ODEs, where the linear operator is represented by a non-diagonal matrix, requires the computation of matrix functions that involve the matrix exponential. As discussed at the start of the chapter, in addition to the difficulties inherent in computing the matrix exponential itself, accurate evaluation of the matrix functions can be problematic when the matrix has small eigenvalues. This is a well-known problem in numerical analysis. Various algorithms have been proposed by many authors [2, 8, 35, 47, 54, 56, 57, 67, 80, 81], and have been investigated in terms of their practical efficiency. For example, **Schmelzer** and **Trefethen**[69, 70] discussed the efficient computation of matrix functions. They proposed two methods for the fast evaluation of these functions, building on previous work by **Trefethen** and **Gutknecht, Minchev**, and **Lu**. The first method is based on computing optimal rational approximations to the matrix functions on the negative real axis using the **Caratheodory-Fejer** procedure [85]. The second method is an application of the Trapezium rule on a Talbot-type contour encircling the eigenvalues of the matrix. Computing the matrix exponential alone has also attracted several authors' attention. For example, **Beylkin** et al. [9] used an algorithm based on scaling and squaring to approximate a matrix exponential. Also, following an original paper on this problem [59], **Moler** and **Van Loan**[60] recently revisited this problem in "Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later", in which they described recent developments in computing the exponential of a matrix, and provided some interesting analysis and applications of some of the algorithms mentioned previously in this chapter.
They cautioned that practical implementations are 'dubious' in the sense that implementation of a sole algorithm might not be entirely reliable for all classes of problems. To investigate the algorithms' performance in the non-diagonal matrix case, we set up a large number of computational experiments on various orders \(q\) of the second-order centered difference differentiation matrix (see §2.2) for the second derivative, \[M_{2}=\left(\begin{array}{cccccccc}-2&1&0&0&0&\ldots&0&0\\ 1&-2&1&0&0&\ldots&0&0\\ 0&1&-2&1&0&\ldots&0&0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ldots&\vdots&\vdots\\ 0&0&0&0&0&\ldots&1&-2\end{array}\right), \tag{4.69}\] (note that if the order of the matrix \(M_{2}\) (4.69) is \(q\), the scaling of \(M_{2}\) is such that it corresponds to the second derivative on an interval of length \(q+1\)). Tests on the Chebyshev differentiation matrix for the second derivative [11, 25, 83, 84] and the second-order centered difference differentiation matrix for the first derivative are described in §4.4 and §4.5 respectively. We use the Matlab function _expm_ to approximate the exponential function \(e^{\Delta tM}\) of a matrix \(M\), and the function _inv_ to find \((\Delta tM)^{-1}\), and 50 digit arithmetic to approximate the exact values of the expressions \[f_{1}(\Delta tM) = \frac{e^{\Delta tM}-I}{\Delta tM}, \tag{4.70}\] \[f_{2}(\Delta tM) = \frac{e^{\Delta tM}-I-\Delta tM}{(\Delta tM)^{2}}, \tag{4.71}\] and \[f_{3}(\Delta tM)=\frac{e^{\Delta tM}-I-\Delta tM-(\Delta tM)^{2}/2}{(\Delta tM)^{3}}, \tag{4.72}\] where \(I\) is the \(q\times q\) identity matrix and \(\Delta t\) is the time step; these expressions are required for the ETD1 (3.14) and ETD2 (3.15) methods in the matrix case.
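For concreteness, the matrix \(M_{2}\) and the explicit evaluation of (4.70) - (4.72) can be set up as follows (a Python sketch of the procedure; `scipy.linalg.expm` and `numpy.linalg.inv` stand in for Matlab's _expm_ and _inv_, and double precision replaces the 50 digit reference arithmetic):

```python
import numpy as np
from scipy.linalg import expm

q, dt = 40, 1.0
# second-order centered difference matrix for the second derivative, (4.69)
M2 = (-2.0 * np.eye(q)
      + np.diag(np.ones(q - 1), 1)
      + np.diag(np.ones(q - 1), -1))

A = dt * M2
I = np.eye(q)
Ainv = np.linalg.inv(A)                 # (dt M)^{-1}

E = expm(A)
f1 = Ainv @ (E - I)                     # (4.70)
f2 = Ainv @ Ainv @ (E - I - A)          # (4.71)
f3 = Ainv @ Ainv @ Ainv @ (E - I - A - A @ A / 2.0)  # (4.72)
```

Since \(M_{2}\) is symmetric, the results can be cross-checked against an eigendecomposition of \(\Delta tM_{2}\).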
For the ETD3 (3.16) and higher order methods (also the ETD-RK methods), the coefficients are really a combination of the expression \[f_{k}(\Delta tM)=\frac{e^{\Delta tM}-G_{k}(\Delta tM)}{(\Delta tM)^{k}},\ \ \ \ k=1,2,\ldots,s, \tag{4.73}\] where \[G_{k}(\Delta tM)=\sum_{j=0}^{k-1}\frac{(\Delta tM)^{j}}{j!}, \tag{4.74}\] consists of the first \(k\) terms of the Taylor series approximation to the exponential function \(f_{0}(\Delta tM)=e^{\Delta tM}\) and \((\Delta tM)^{0}=I\). These coefficients can be evaluated using the algorithms (to be explained later in this section), in a manner similar to evaluating the expression \(f_{k}(\Delta tM)\) (4.73), \(k=1,2,\ldots,s\) by those algorithms. The definition of the \(2-\)norm of a matrix [78], \[||\Delta tM||_{2}=\max_{x\neq 0}\frac{||\Delta tMx||_{2}}{||x||_{2}},\] where \(x\in\mathbb{R}_{*}^{q}=\mathbb{R}^{q}\backslash\{0\}\), is equivalent to the formula \[||\Delta tM||_{2}=\sqrt{\zeta_{max}((\Delta tM)^{T}(\Delta tM))}, \tag{4.75}\] (the square root of the maximum eigenvalue \(\zeta_{max}\) of the matrix multiplied by its transpose). Formula (4.75) is used in our experiments to find the numerical relative errors (4.10) of using each algorithm to approximate the expression \(f_{k}(\Delta tM)\) (4.73), \(k=1,2,3\) for large and small values of the time step \(\Delta t\). In figure 4.8, we present only the results of our experiments for the expressions \(f_{2}(\Delta tM_{2})\) (4.71) and \(f_{3}(\Delta tM_{2})\) (4.72) in the \(40\times 40\) matrix case, since those for the expression \(f_{1}(\Delta tM_{2})\) (4.70) are qualitatively similar. The size of the matrix used is limited not by the time used by the algorithms but by the much greater time needed to obtain the 'exact' 50-digit results. Results for smaller and larger matrices are qualitatively similar.
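The equivalence in (4.75) is easy to verify numerically (our own sanity check):

```python
import numpy as np

def two_norm(A):
    """2-norm via (4.75): square root of the largest eigenvalue of A^T A."""
    return np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
```

Here `two_norm(A)` agrees with `np.linalg.norm(A, 2)`, which computes the largest singular value of `A`.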
Note that figure 4.8 also shows the errors for the use of the explicit formulas; in the matrix case this means simply evaluating the formulas \(f_{2}(\Delta tM_{2})\) and \(f_{3}(\Delta tM_{2})\) using the Matlab commands _expm_ and _inv_ with standard double precision (16 digits) arithmetic (the function _expm_ uses a scaling and squaring method combined with Pade approximation (4.94) [35, 37, 47, 76], and therefore is not quite explicit). #### Taylor Series The approximation \[e^{\Delta tM}\approx I+\Delta tM+\frac{(\Delta tM)^{2}}{2!}+\frac{(\Delta tM)^{3}}{3!}+\cdots+\frac{(\Delta tM)^{m}}{m!},\] where \(I\) is the \(q\times q\) identity matrix, for some integer \(m\), may be used to approximate the exponential in the expression \(f_{k}(\Delta tM)\) (4.73), so that \[f_{k}(\Delta tM)\approx\sum_{j=k}^{m}\frac{(\Delta tM)^{j-k}}{j!},\ \ \ \ k=1,2,3, \tag{4.76}\] where \((\Delta tM)^{0}=I\). However, it is well known that although in principle this series is convergent, in practice the algorithm is very inaccurate when \(||\Delta tM||\) is large (see, for example, [60]). The 30-term Taylor series algorithm is one of the easiest algorithms to implement in the matrix case. However, as expected, it does not perform very well for large values of \(\Delta t\), as is indicated in figure 4.8. The problem in using the Taylor expansion directly is that it results in a loss of accuracy, because some of the eigenvalues of the \(q\times q\) matrix \(\Delta tM_{2}\) are negative and much less than \(-1\) for large values of \(\Delta t\). Therefore the problem of cancellation reappears (see §4.2.1).
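The truncated series (4.76) can be sketched as follows (our own Python, with \(m=30\) terms as in the experiments):

```python
import math
import numpy as np

def f_k_taylor(A, k, m=30):
    """Truncated Taylor series (4.76): f_k(A) ~ sum_{j=k}^{m} A^(j-k)/j!."""
    S = np.zeros_like(A, dtype=float)
    P = np.eye(A.shape[0])            # holds A^(j-k), starting at A^0
    for j in range(k, m + 1):
        S = S + P / math.factorial(j)
        P = P @ A
    return S
```

For \(||\Delta tM||\) of order one the truncation error is negligible, but for eigenvalues far below \(-1\) the alternating powers reproduce the scalar cancellation problem.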
The eigenvalues \(\lambda_{j}\) of the matrix \(\Delta tM_{2}\) (4.69) can be derived analytically (see [43]) in the form \[\lambda_{j}=\left(-2+2\cos\left(\frac{j\pi}{q+1}\right)\right)\Delta t,\ \ \ \ j=1,\cdots,q,\] so the eigenvalue of largest magnitude is \(\lambda_{q}\approx-4\Delta t\) and the smallest is \(\lambda_{1}\approx-\pi^{2}\Delta t/(q+1)^{2}\approx-0.0059\Delta t\) for \(q=40\). Figure 4.8: Relative errors in \(f_{2}(\Delta tM_{2})\) (4.71) and \(f_{3}(\Delta tM_{2})\) (4.72) versus the values of \(\Delta t\) in the \(40\times 40\) matrix case. The algorithms are: Explicit Formula (red stars), 30-term Taylor series (blue circles), the Cauchy Integral Formula (magenta circles), Scaling and Squaring Type **I** based on the identities (4.20) - (4.22) (black stars), Composite Matrix (cyan diamonds) and Matrix Decomposition (green squares). Figure 4.8 also shows that, for small values of \(\Delta t\), the explicit formulas \(f_{k}(\Delta tM_{2})\) (4.73), \(k=2,3\) are inaccurate (qualitatively similar results are found for formula \(f_{1}(\Delta tM_{2})\) (4.70)) due to the cancellation errors arising from the small eigenvalues that are close to zero. For large values of \(\Delta t\), the norm of the matrix \(\Delta tM_{2}\) (4.69) gets larger, and as already noted, the computation of the matrix exponential \(e^{\Delta tM_{2}}\), in the explicit formula, depends on the Matlab function _expm_, which is based on the Scaling and Squaring algorithm combined with Pade approximation (4.94). In this case, the algorithm also yields inaccurate results due to the increase in the number of scalings needed to approximate the matrix exponential \(e^{\Delta tM_{2}}\). Each scaling doubles the errors due to cancellation, truncation and rounding, resulting from the matrix inversion and the repeated matrix multiplications when using the Pade approximation. 
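The analytic eigenvalue formula can be confirmed directly (our own check, with \(q=40\) and \(\Delta t=1\)):

```python
import numpy as np

q, dt = 40, 1.0
M2 = (-2.0 * np.eye(q)
      + np.diag(np.ones(q - 1), 1)
      + np.diag(np.ones(q - 1), -1))
j = np.arange(1, q + 1)
# analytic spectrum of dt*M2: lambda_j = (-2 + 2 cos(j pi/(q+1))) dt
lam_analytic = (-2.0 + 2.0 * np.cos(j * np.pi / (q + 1))) * dt
lam_numeric = np.linalg.eigvalsh(dt * M2)
```

For small \(\Delta t\) all eigenvalues cluster near zero (the cancellation-prone regime of the explicit formulas), while for large \(\Delta t\) the norm, approximately \(4\Delta t\), grows (the regime where _expm_ needs many scalings).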
The analysis of the rounding errors in using the Scaling and Squaring algorithm for approximating the exponential function (see also formula (4.30)) is explained in §4.2.3. In the matrix case, there is a large range of values of \(\Delta t\) for which both the explicit formulas and the Taylor series algorithm are inaccurate, so we cannot simply switch between the two algorithms in this case as we proposed in the scalar case in §4.2.1. #### The Cauchy Integral Formula A less well-known fact is that the Cauchy integral formula has the matrix form \[f(\Delta tM)=\frac{1}{2\pi i}\int_{\Gamma}\frac{f(T)}{TI-\Delta tM}dT, \tag{4.77}\] where \(f\) is an analytic function of the matrix \(\Delta tM\), \(I\) is the \(q\times q\) identity matrix and the contour \(\Gamma\) is sufficiently large to enclose all the eigenvalues of the matrix \(\Delta tM\) (see [32, 44, 45]). Formula (4.77) is analogous to the formula (4.11) for the scalar case presented in §4.2.2. Suitable contours \(\Gamma\) may vary from one problem to another. For example, elliptical contours were investigated by **Kassam** and **Trefethen**[44, 45] and **Livermore**[53]. The ellipse is centered at some point \(z_{0}=x_{0}+iy_{0}\) in the complex plane, has a semi-major axis \(a\) and a semi-minor axis \(b\), and can be expressed parametrically as \[T(\theta)=z_{0}+a\cos\theta+ib\sin\theta,\ \ \ \ 0\leq\theta\leq 2\pi.\] Plugging this into the Cauchy integral formula (4.77) and employing the periodic **Trapezium Rule** (4.14) to approximate the integral, we obtain the formula for an elliptical contour, \[f(\Delta tM)\approx\frac{1}{N}\sum_{j=1}^{N}(b\cos\theta_{j}+ia\sin\theta_{j})(T(\theta_{j})I-\Delta tM)^{-1}f(T(\theta_{j})), \tag{4.78}\] where \(T(\theta_{j})=z_{0}+a\cos\theta_{j}+ib\sin\theta_{j},\ \theta_{j}=2\pi j/N\) are \(N\) points along the bounding ellipse. The simplest choice of the contour \(\Gamma\) is a circle with radius \(R\) centered at some point \(z_{0}\) on the real line.
By making the substitution \(dT(\theta)=T_{\theta}(\theta)d\theta\), where \(T(\theta)=z_{0}+Re^{i\theta}:0\leq\theta\leq 2\pi\), the Cauchy integral (4.77) becomes \[f(\Delta tM) = \frac{1}{2\pi i}\int_{0}^{2\pi}\frac{f(z_{0}+Re^{i\theta})}{T( \theta)I-\Delta tM}Rie^{i\theta}d\theta, \tag{4.79}\] \[= \frac{1}{2\pi}\int_{0}^{2\pi}(T(\theta)-z_{0})(T(\theta)I-\Delta t M )^{-1}f(T(\theta))d\theta.\] Employing the periodic **Trapezium Rule** (4.14) to approximate the integral on the right-hand side of (4.79), we obtain the corresponding formula proposed by **Kassam** and **Trefethen**[44, 45] for a circular contour \[f(\Delta tM)\approx\frac{1}{N}\sum_{j=1}^{N}(T(\theta_{j})-z_{0})(T(\theta_{j })I-\Delta tM)^{-1}f(T(\theta_{j})), \tag{4.80}\] where \(T(\theta_{j})=z_{0}+Re^{i\theta_{j}},\ \theta_{j}=\frac{2\pi j}{N}\) are the \(N\) points around the circumference of the circle centered at \(z_{0}\). To approximate the function \(f_{k}(\Delta tM),k=1,2,\ldots,s\) (required for the ETD methods of order \(s\)) with this algorithm, we simply evaluate the scalar function \(f_{k}(z)\) (4.2), \(k=1,2,\ldots,s\) respectively at a set of \(N\) points \(T(\theta_{j})=z_{0}+Re^{i\theta_{j}}\) in the complex plane, and then apply (4.80) \[f_{k}(\Delta tM)\approx\frac{1}{N}\sum_{j=1}^{N}(T(\theta_{j})-z_{0})(T(\theta _{j})I-\Delta tM)^{-1}f_{k}(T(\theta_{j})). \tag{4.81}\] Our experience shows that many different choices of the contour work well, so long as one is careful to ensure that none of the points on the contour are close to or at the origin (otherwise the original problem of rounding errors reappears), and that all the eigenvalues of the matrix \(\Delta tM\) are indeed enclosed by \(\Gamma\). However, formula (4.81) shows that, in order to do this, we need to work out \(N\) matrix inverses \((T(\theta_{j})I-\Delta tM)^{-1}\), and this consequently restricts the good performance of the algorithm to matrices of moderate norm. 
This is because approximating the integral (4.77) for matrices with large norm (where the spread of the eigenvalues increases) via the circular contour algorithm (4.81) requires us to enlarge the circle so that it encloses all the eigenvalues of the matrix. Consequently we must increase the number \(N\) of points around the circle required to give accurate results. We therefore also increase the amount of work required for computing the large number \(N\) of matrix inverses (one for each point on the discretized circle). This adds a disadvantage in terms of the high cost in computer time (see §4.6). In addition to the difficulties mentioned above, the eigenvalues (if not already known) must be computed beforehand - or at least, the eigenvalue of largest absolute value must be determined - in order to choose a suitable integration contour (for the matrices we consider, the eigenvalues are already known). Some of these difficulties were also noted by **Livermore**[53]. However, **Kassam** and **Trefethen**[44, 45] noted that, if the functions we want to calculate are real, we can halve the amount of work by exploiting the \(\pm i\) symmetry of the algorithm (4.81): evaluate at equally spaced points on the upper half of a circle centered on the real axis, then take the real part of the results. Also, **Schmelzer** and **Trefethen**[69, 70] offered a new perspective on contour integrals that alleviates some of these difficulties. The authors have shown that the function \(f_{k}(\Delta tM),k=1,2,\ldots,s\) can be evaluated efficiently using a **Hankel** contour and a different form of the integral (4.77). Rather than working with circles and ellipses as contours, they enclosed the eigenvalues by open contours winding around the negative real line.
The authors claimed that the use of **Hankel** contours in the Cauchy integral avoids the expensive computation of eigenvalues to estimate the shape of an enclosing contour, and overcomes the algebraic decay of the functions in the left half-plane, which makes this approach flexible and efficient. Unfortunately, we received this information too late to incorporate it in our experiments. In our experiments, we take the contour to be a circle centered at half the minimum eigenvalue (\(\lambda_{min}\)) of the matrix \(\Delta tM_{2}\) (4.69) (the eigenvalues of the matrix \(\Delta tM_{2}\) are on the negative real axis), sampled at 128 equally spaced points in (4.81). The radius \[R=-\frac{\lambda_{min}}{2}+5,\] varies with \(\Delta t\) to ensure that the circular contour encloses all the eigenvalues of the matrix \(\Delta tM_{2}\) and does not pass too close to any. The above choice of \(R\) was found to be suitable for values of \(\Delta t>0.6\), but less accurate for small values of \(\Delta t\). An interesting observation from our practical experiments is that the algorithm is sensitive to the choices of the center and the radius of the circular contour relative to the range of the eigenvalues of the matrix \(\Delta tM_{2}\). For values of \(\Delta t\leq 0.6\) the contour is a circle centered at the minimum eigenvalue (\(\lambda_{min}\)) of the matrix \(\Delta tM_{2}\), sampled at 128 equally spaced points. The radius in this case, \[R=-\lambda_{min}+1,\] also varies with \(\Delta t\) to ensure that the circular contour encloses all the eigenvalues of the matrix \(\Delta tM_{2}\) and that the algorithm yields the desired error levels.
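With these contour choices, the circular-contour algorithm can be sketched as follows (our own Python; the scalar \(f_{k}\) is evaluated by the explicit formula, which is safe because the contour stays away from the origin):

```python
import math
import numpy as np

def f_scalar(T, k):
    # explicit formula (4.2); accurate since |T| is not small on the contour
    G = sum(T ** m / math.factorial(m) for m in range(k))
    return (np.exp(T) - G) / T ** k

def f_k_cauchy(A, k, z0, R, N=128):
    """Circular-contour algorithm (4.81): trapezium-rule approximation of
    the Cauchy integral, with T_j = z0 + R e^{i theta_j} enclosing the
    eigenvalues of A; one matrix inverse per contour point."""
    I = np.eye(A.shape[0])
    F = np.zeros_like(A, dtype=complex)
    for j in range(1, N + 1):
        T = z0 + R * np.exp(1j * 2.0 * np.pi * j / N)
        F += (T - z0) * f_scalar(T, k) * np.linalg.inv(T * I - A)
    return F.real / N
```

For \(\Delta t=0.25\) and \(q=40\) (eigenvalues roughly in \([-1,-0.0015]\)), the choice \(z_{0}=\lambda_{min}\), \(R=-\lambda_{min}+1\) with \(N=128\) gives results close to machine precision.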
Regarding the test described in §4.3, we find that, when computing the numerical relative errors (4.10) of using this algorithm to approximate the expression \(f_{k}(\Delta tM_{2})\) (4.73), \(k=2,3\) for matrix size \(q=40\) and small values of \(\Delta t\), the algorithm (4.81) performs very well and the results are very satisfactory, see figure 4.8 (qualitatively similar results are found for formula \(f_{1}(\Delta tM_{2})\) (4.70)). However, this algorithm is slightly less accurate than the Scaling and Squaring algorithm type \(\mathbf{I}\) and the Composite Matrix algorithm (described in §4.3.4 and §4.3.6 respectively), and its loss of accuracy is particularly pronounced for large values of \(\Delta t\). As is apparent in figure 4.8, there is a sharp increase of the relative errors, due to enlarging the circular contour to enclose all the eigenvalues of the matrix \(\Delta tM_{2}\) without increasing the number \(N\) of points around the circle (more than 128 points are needed to give accurate results; in fact 512 points are needed for \(\Delta t=100\)). The form of this error is analyzed in the following section. #### Varying the Radius of the Circular Contour To investigate the effects of varying the circular contour radius, we set up two experiments, one for the scalar case and one for the matrix case. For the first, we use the Cauchy integral algorithm (4.16) to compute the scalar expression \(f_{k}(z)\) (4.2) of orders \(k=1,2,3\) for a fixed number of points \(N=32\) and a fixed value of \(z=10^{-1}\) (the circle center). We start with a radius \(R=1\) and work up to a radius \(R=20\). In figure 4.9 we plot the relative errors (4.10) for each value of the radius \(R\), where the 'exact' values of these expressions were calculated using 50 digit arithmetic. As the radius \(R\) increases for a fixed number of discretization points \(N\), we observe rapid growth of the errors.
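This first experiment takes only a few lines to reproduce (our own script; for a circle centred at \(z\), the Cauchy integral of \(f\) reduces to the average of \(f\) over the contour points, which is what the trapezium rule computes):

```python
import math
import numpy as np

z, N = 1e-1, 32
f1_true = math.expm1(z) / z              # accurate reference for f_1(z)
errs = []
for R in (1.0, 5.0, 10.0, 20.0):
    theta = 2.0 * np.pi * np.arange(1, N + 1) / N
    T = z + R * np.exp(1j * theta)       # contour points around z
    f1_approx = np.mean((np.exp(T) - 1.0) / T).real
    errs.append(abs(f1_approx - f1_true) / abs(f1_true))
```

The relative error stays near machine precision for \(R=1\) and \(R=5\), but reaches about \(10^{-5}\) at \(R=10\) and order \(10^{4}\) at \(R=20\), reproducing the growth seen in figure 4.9.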
For the second experiment, we use the matrix Cauchy integral formula (4.81) to compute the expression \(f_{k}(\Delta tM_{2})\) (4.73), \(k=1,2,3\) and \(q=40\), for a fixed number of points \(N=32\) and fixed value of \(\Delta t=0.25\), with the circle centered at zero. We again start with a radius \(R=1\) and work up to a radius \(R=20\). In figure 4.10 we plot the relative errors (4.10), measured in the matrix \(2-\)norm (4.75), for each value of the radius. As usual, the 'exact' values of these expressions were calculated using 50 digit arithmetic.

Figure 4.9: Relative errors of using the Cauchy integral formula (4.16), for \(z=10^{-1}\) and fixed number of points \(N=32\), versus the contour radius \(R\), for approximating in the scalar case, the expressions: \(f_{1}(z)\) (4.1) (blue diamonds), \(f_{2}(z)\) (4.4) (black circles) and \(f_{3}(z)\) (4.5) (red squares). The estimated error lines are \(E_{1}\) (4.86) (cyan), \(E_{2}\) (4.87) (green) and \(E_{3}\) (4.88) (magenta).

The experiment shows that changing the radius \(R\) of the contour for a fixed number of discretization points \(N\) has a dramatic effect on the errors. Firstly, if we decrease the radius \(R\) so that it is too small to enclose all the eigenvalues of the matrix, we see a huge growth of the errors. Secondly, when the radius \(R\) is just enough to enclose all the matrix eigenvalues, the errors are minimized and the accuracy is good. Thirdly, as the radius \(R\) increases far beyond the eigenvalue with maximum absolute value, we see the errors grow unboundedly again, in the same way as in the scalar case. We can explain this increase in the error of the algorithm with \(R\) by an examination of the leading error term in the periodic Trapezium rule (4.14) \[\frac{1}{2\pi}\int_{0}^{2\pi}P(\theta)d\theta\approx\frac{1}{N}\sum_{j=1}^{N}P(\theta_{j}),\ \ \ \ \theta_{j}=\frac{2\pi j}{N}.
\tag{4.82}\] \(P(\theta)\) is a periodic function of \(\theta\), so it can be written as a Fourier series \[P(\theta)=\sum_{n=0}^{\infty}a_{n}e^{in\theta}.\] Plugging this into (4.82) and interchanging the order of summation, we have \[\frac{1}{N}\sum_{j=1}^{N}\sum_{n=0}^{\infty}a_{n}e^{in\theta_{j}}=\frac{1}{N} \sum_{n=0}^{\infty}a_{n}\sum_{j=1}^{N}e^{2\pi jni/N}. \tag{4.83}\] The second summation in the last expression above is simply the sum of the \(N\) roots of unity. This is zero in general, unless the exponent \(n\) is an integer multiple \(K\) of \(N\), i.e. \(n=NK\). Therefore, the periodic Trapezium rule (4.83) gives us \[\frac{1}{N}(Na_{0}+Na_{N}+Na_{2N}+\cdots). \tag{4.84}\] Equivalently, in terms of aliasing errors [84], with \(N\) points we cannot distinguish between the constant function \(1\) and the functions \((e^{2\pi jni/N},\ n=NK)\), since these functions are \(1\) at all mesh points \(\theta_{j}\). In addition, because of the exponential decay [79] of the Fourier coefficients, we deduce that the coefficient \(a_{2N}\) is much less than \(a_{N}\). Therefore, since the true value in the periodic Trapezium rule (4.84) is \(a_{0}\), the leading error term is just \(a_{N}\) and the relative leading error term is \(|a_{N}/a_{0}|\). We use this theory to estimate the error when using the Cauchy integral formula to approximate the scalar expression \(f_{1}(z)\) (4.1) with a fixed number of points \(N\), while increasing the contour radius \(R\). We have \[f_{1}(z+Re^{i\theta})=\frac{e^{z}e^{Re^{i\theta}}-1}{z+Re^{i\theta}}, \tag{4.85}\] and if we assume that \(|z|\ll R\), we can neglect \(z\) and write the right-hand side of (4.85) as a Fourier series \[1+Re^{i\theta}/2!+R^{2}e^{2i\theta}/3!+\cdots+R^{N}e^{Ni\theta}/(N+1)!+\cdots.\] Hence the estimated leading relative error in the trapezium rule, \(|a_{N}/a_{0}|\), is the coefficient of \(e^{Ni\theta}\), which is \[E_{1}=R^{N}/(N+1)!. 
\tag{4.86}\]Similar calculations can be made for the expression \(f_{k}(z)\) (4.2) of orders \(k=2,3\), and the leading relative errors for these cases are found to be \[E_{2}=2R^{N}/(N+2)!, \tag{4.87}\] and \[E_{3}=6R^{N}/(N+3)!, \tag{4.88}\] respectively. Figure 4.9 shows that the theoretically estimated errors \(E_{1}\) (4.86), \(E_{2}\) (4.87) and \(E_{3}\) (4.88) agree very well with the numerical relative errors of using the Cauchy integral algorithm (4.16) for approximating the expression \(f_{k}(z)\) (4.2) of orders \(k=1,2,3\) respectively, for large radius \(R\) at fixed values of discretization points \(N\). Figure 4.10: Relative errors of using the Cauchy integral formula (4.81), for \(\Delta t=0.25,\;q=40\) and fixed number of points \(N=32\), versus the contour radius \(R\), for approximating in the matrix case, the expressions: \(f_{1}(\Delta tM_{2})\) (4.70) (blue diamonds), \(f_{2}(\Delta tM_{2})\) (4.71) (black circles) and \(f_{3}(\Delta tM_{2})\) (4.72) (red squares). The estimated error lines are \(E_{1}\) (4.86) (cyan), \(E_{2}\) (4.87) (green) and \(E_{3}\) (4.88) (magenta). On the other hand, applying the same theory to estimate the error when using the Cauchy integral formula to approximate the expression \(f_{k}(\Delta tM),\;k=1,2,\ldots,s\) (4.73) in the matrix case is cumbersome, though our numerical experiments show that the above analysis and results hold. Figure 4.10 shows that the theoretically estimated errors \(E_{1}\), \(E_{2}\) and \(E_{3}\) agree very well with the numerical relative errors of using the Cauchy integral algorithm (4.81) for approximating the expressions \(f_{1}(\Delta tM_{2})\) (4.70), \(f_{2}(\Delta tM_{2})\) (4.71) and \(f_{3}(\Delta tM_{2})(4.72)\) respectively. 
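The agreement can be spot-checked in the scalar case (our own script; here \(|z|\ll 1\) and \(|z|\ll R\), so the estimate (4.86) applies):

```python
import math
import numpy as np

z, N, R = 1e-3, 32, 10.0
theta = 2.0 * np.pi * np.arange(1, N + 1) / N
T = z + R * np.exp(1j * theta)
f1_approx = np.mean((np.exp(T) - 1.0) / T).real   # trapezium rule on the circle
rel_err = abs(f1_approx - math.expm1(z) / z) / abs(math.expm1(z) / z)
E1 = R ** N / math.factorial(N + 1)               # estimate (4.86)
```

Both `rel_err` and `E1` come out near \(1.15\times 10^{-5}\), and their ratio is close to one.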
In a third experiment, we found that two criteria need to be met for the error formulas \(E_{1}\) (4.86), \(E_{2}\) (4.87) and \(E_{3}\) (4.88) to agree accurately with the numerical relative errors of using the Cauchy integral algorithm (4.16) for approximating the expressions7:

1. in the scalar case, \(|z|\ll 1\) and \(|z|\ll R\);
2. in the non-diagonal matrix case, the center of the circular contour should be zero.

Footnote 7: These criteria are not required for the accuracy of the Cauchy integral algorithm itself when approximating the expressions.

If either criterion is violated, the theoretically estimated errors \(E_{1}\), \(E_{2}\) and \(E_{3}\) will not agree with the numerical relative errors of using the Cauchy integral algorithm for approximating the expression \(f_{k}(z)\), for large radius \(R\) at fixed values of discretization points \(N\). Figure 4.11 shows a case of testing the Cauchy integral formula (4.81) for computing \(f_{1}(\Delta tM_{2})\) (4.70), \(f_{2}(\Delta tM_{2})\) (4.71) and \(f_{3}(\Delta tM_{2})\) (4.72) with \(q=40\), for a fixed number of points \(N=128\) and a fixed value of \(\Delta t=10\). Here the circular contour is centered at half the minimum eigenvalue (\(\lambda_{min}\)) of the matrix \(\Delta tM_{2}\) (4.69). In the plot, we can see that the estimated error lines \(E_{1}\) (4.86), \(E_{2}\) (4.87) and \(E_{3}\) (4.88) do not agree with the numerical relative errors for each value of the radius, ranging from \(R=-\frac{\lambda_{min}}{2}+1\) up to \(R=-\frac{\lambda_{min}}{2}+60\). Our error formulas can be used to determine the value of the radius \(R\) at which the algorithm becomes inaccurate for a given value of \(N\). More usefully, for larger values of the radius \(R\), we can also estimate the number of points \(N\) required to achieve a relative error of some chosen tolerance \(\epsilon\), in terms of the radius \(R\) and \(\epsilon\).
For large integers \(N\), we use **Stirling's** formula [1] \[N!\approx\sqrt{2\pi N}\frac{N^{N}}{e^{N}},\] to approximate \((N+1)!\) in the formula \(E_{1}=R^{N}/(N+1)!\) (4.86), so that \[R^{N} \approx \epsilon\sqrt{2\pi}(N+1)^{N+\frac{3}{2}}e^{-(N+1)},\] \[N\log R \approx \log\epsilon+\log\sqrt{2\pi}+\Big{(}N+\frac{3}{2}\Big{)}\Big{[}\log N+\log\Big{(}1+\frac{1}{N}\Big{)}\Big{]}-(N+1). \tag{4.89}\] Applying the series expansion of the logarithmic function \[\log\Big{(}1+\frac{1}{N}\Big{)}=\sum_{j=1}^{\infty}\frac{(-1)^{j+1}}{j}\Big{(}\frac{1}{N}\Big{)}^{j}=\frac{1}{N}+O\Big{(}\frac{1}{N^{2}}\Big{)},\ \Big{|}\frac{1}{N}\Big{|}<1,\] substituting in (4.89), and ignoring the terms of \(O(1/N)\), which are negligible under our assumption that \(R\) and \(N\) are large, gives us \[N\log R\approxeq\log\epsilon+\log\sqrt{2\pi}+\Big{(}N+\frac{3}{2}\Big{)}\log N -N. \tag{4.90}\]

Figure 4.11: Relative errors of using the Cauchy integral formula (4.81), for \(\Delta t=10,\ q=40\) and fixed number of points \(N=128\), versus the contour radius \(R\), for approximating in the matrix case, the expressions: \(f_{1}(\Delta tM_{2})\) (4.70) (blue diamonds), \(f_{2}(\Delta tM_{2})\) (4.71) (black circles) and \(f_{3}(\Delta tM_{2})\) (4.72) (red squares). The estimated error lines are \(E_{1}\) (4.86) (cyan), \(E_{2}\) (4.87) (green) and \(E_{3}\) (4.88) (magenta).

Equating the largest terms in (4.90) leads to \[\log R\approx\log N\Rightarrow R\approx c_{0}N,\] for some constant \(c_{0}\). If we substitute this result in (4.90) we obtain \[N\log c_{0}\approxeq\log\epsilon+\log\sqrt{2\pi}+\frac{3}{2}\log N-N, \tag{4.91}\] and again equating large terms leads us to \[\log c_{0}\approx-1\Rightarrow N\approx eR.\] Now set \[N\approx eR+\varepsilon,\quad\varepsilon\ll eR, \tag{4.92}\] so that the added \(\varepsilon\) term provides a more accurate approximation.
If we again substitute in (4.90), we get \[(eR+\varepsilon)\log R \approxeq\log\epsilon+\log\sqrt{2\pi}+\Big{(}eR+\varepsilon+\frac{3}{2}\Big{)}\Big{[}\log eR+\log\left(1+\frac{\varepsilon}{eR}\right)\Big{]}-\ (eR+\varepsilon),\] and applying again the series expansion to the logarithmic function in the equation above gives \[(eR+\varepsilon)\log R \approxeq\log\epsilon+\log\sqrt{2\pi}+\Big{(}eR+\varepsilon+\frac{3}{2}\Big{)}\Big{[}1+\log R+\frac{\varepsilon}{eR}+O\Big{(}\frac{\varepsilon}{eR}\Big{)}^{2}\Big{]}-\ (eR+\varepsilon),\] \[0 \approxeq\log\epsilon+\log\sqrt{2\pi}+\varepsilon+\frac{3}{2}+\frac{3}{2}\log R,\] \[\varepsilon \approxeq-\log\epsilon-\log\sqrt{2\pi}-\frac{3}{2}-\frac{3}{2}\log R.\] Substituting the last result for \(\varepsilon\) in (4.92) leads to the approximate condition \[N\approxeq eR-\log\epsilon-\log\sqrt{2\pi}-\frac{3}{2}-\frac{3}{2}\log R,\] for the error \(E_{1}\) to be of order \(\epsilon\), assuming that the radius \(R\) is large.

#### Scaling and Squaring Algorithm: Type I

In the non-diagonal matrix case, we use a 30-term Taylor series, as explained in §4.3.1, to compute the expression \(f_{k}(\Delta tM_{2})\) (4.73), \(k=1,2,3\), if the largest absolute eigenvalue \(\lambda_{max}\) of the matrix \(\Delta tM_{2}\) (4.69) is less than some threshold value \(\delta_{1}\). If not, we use the following Scaling and Squaring algorithm. In a manner similar to the scalar case discussed in §4.2.3, we first use a 30-term Taylor series to compute \(f_{1}(2^{-l_{2}}\Delta tM_{2})\), \(f_{2}(2^{-l_{2}}\Delta tM_{2})\) and \(f_{3}(2^{-l_{2}}\Delta tM_{2})\), for \(l_{2}\) chosen to be the smallest integer such that \[l_{2}\geq\frac{\log(\lambda_{max}/\delta_{1})}{\log 2}, \tag{4.93}\] so that the largest absolute eigenvalue of the matrix \(2^{-l_{2}}\Delta tM_{2}\) is less than the threshold \(\delta_{1}\), which we choose to be \(\delta_{1}=1\) in our experiments.
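Returning to the contour error analysis, the approximate condition \(N\approxeq eR-\log\epsilon-\log\sqrt{2\pi}-\frac{3}{2}-\frac{3}{2}\log R\) derived above can be checked against a direct search for the smallest \(N\) with \(E_{1}<\epsilon\). The Python sketch below is our own illustration (standard library only; the thesis itself works in Matlab):

```python
import math

def smallest_N(R: float, eps: float) -> int:
    # Smallest N with E_1 = R^N/(N+1)! < eps, by direct search;
    # lgamma(N+2) = log((N+1)!) avoids factorial overflow.
    N = 1
    while N * math.log(R) - math.lgamma(N + 2) >= math.log(eps):
        N += 1
    return N

def estimated_N(R: float, eps: float) -> float:
    # The approximate condition derived in the text:
    # N ~ eR - log eps - log sqrt(2 pi) - 3/2 - (3/2) log R.
    return (math.e * R - math.log(eps)
            - math.log(math.sqrt(2.0 * math.pi))
            - 1.5 - 1.5 * math.log(R))

R, eps = 20.0, 1e-8
print(smallest_N(R, eps), estimated_N(R, eps))
```

For \(R=20\) and \(\epsilon=10^{-8}\) the direct search gives \(N=65\), and the asymptotic condition predicts roughly \(65.9\), i.e. agreement to within one point.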
We then proceed by applying the identities (4.20) - (4.22) or (4.23) - (4.25), or (4.27) together with either (4.21) - (4.22) or (4.24) - (4.25), \(l_{2}\) times, to obtain the final answer for the expression \(f_{k}(\Delta tM_{2})\) (4.73), \(k=1,2,3\). Note that, as for the Cauchy integral algorithm, the Scaling and Squaring algorithm requires knowledge of the eigenvalue of largest magnitude.

Figure 4.12: Relative errors of using the Scaling and Squaring Type **I** algorithm based on the identities (4.20) - (4.22), versus the values of \(\Delta t\), for approximating the expression \(f_{3}(\Delta tM_{2})\) (4.72) for \(q=40\), for different values of threshold \(\delta_{1}\) (see formula (4.93)).

Regarding the test described in §4.3, we find that the Scaling and Squaring algorithm based on the identities (4.20) - (4.22) is very good in the non-diagonal matrix case, being the most accurate for approximating the expression \(f_{k}(\Delta tM_{2})\) (4.73), \(k=2,3\), for matrix size \(q=40\) and small values of \(\Delta t\), as displayed in figure 4.8 (the same holds qualitatively for the expression \(f_{1}(\Delta tM_{2})\) (4.70)). The reasons for favoring this algorithm are that it is accurate and efficient for both diagonal and non-diagonal matrix problems (for small values of \(\Delta t\)), compared with the other algorithms. The accuracy depends on the norm of the matrix \(\Delta tM_{2}\) (4.69), however. As the value of \(\Delta t\) increases, the norm of the matrix increases. Therefore, more scaling operations are needed, leading to an amplification of the cancellation errors and of the rounding errors resulting from the repeated matrix multiplications in the Taylor expansion. In fact these errors are doubled at each scaling, and we expect the relative error to increase linearly as the value of \(\Delta t\) increases (the simple arguments for the scalar case in §4.2.3, about how errors do not grow for \(z\ll-1\), cannot be applied directly to the matrix case).

Figure 4.13: Relative errors of using the Scaling and Squaring Type \(\mathbf{I}\) algorithm, versus the values of \(\Delta t\), for approximating the expression \(f_{3}(\Delta tM_{2})\) (4.72) for \(q=40\). The blue line (circles) uses the identities (4.20) - (4.22), the cyan line (stars) uses the identities (4.23) - (4.25), the green line (diamonds) uses the identities (4.27), (4.21) and (4.22), and the black line (squares) uses the identities (4.27), (4.24) and (4.25).

In further tests, we compute the relative errors of using the Scaling and Squaring algorithm, based on the relations (4.20) - (4.22) or (4.23) - (4.25), or (4.27) together with either (4.21) - (4.22) or (4.24) - (4.25), to approximate the expression \(f_{k}(\Delta tM_{2})\) (4.73), \(k=1,2,3\), for different choices of the threshold values \(\delta_{1}=0.5,1,2\). We find, firstly, that any choice of the threshold value \(\delta_{1}<3\) is desirable. Figure 4.12 illustrates the relative errors of using the algorithm based on (4.20) - (4.22) for approximating \(f_{3}(\Delta tM_{2})\) (4.72) with \(q=40\), demonstrating that the accuracy of the algorithm for the threshold values \(\delta_{1}=0.5,1,2\) is more acceptable than that for the threshold \(\delta_{1}=3\). In addition, we find that there is a direct relation between larger values of the threshold and the number of terms needed in the Taylor series: as the threshold gets larger, the number of terms in the Taylor series must be increased to maintain the accuracy of the algorithm. Secondly, we find that a similar level of accuracy is achieved when computing the relative errors for both families of the Scaling and Squaring formulas (4.20) - (4.22) and (4.23) - (4.25) for approximating the expression \(f_{k}(\Delta tM_{2})\) (4.73), \(k=1,2,3\).
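The identities (4.20) - (4.27) are defined earlier in the chapter and are not reproduced in this excerpt; as a hedged illustration of the Type \(\mathbf{I}\) idea in the scalar case, the Python sketch below uses the standard doubling relation \(f_{1}(2w)=(e^{w}+1)f_{1}(w)/2\), which follows from \(e^{2w}-1=(e^{w}-1)(e^{w}+1)\), together with a threshold \(\delta_{1}=1\) as in (4.93):

```python
import math

def f1_taylor(z: float, terms: int = 30) -> float:
    # 30-term Taylor series f_1(z) = sum_{j>=0} z^j/(j+1)!,
    # used only once |z| has been scaled below the threshold delta_1.
    return sum(z**j / math.factorial(j + 1) for j in range(terms))

def f1_scaling_squaring(z: float, delta1: float = 1.0) -> float:
    # Scalar sketch of a Type I algorithm for f_1: choose the smallest l
    # with |z|/2^l < delta1 (cf. (4.93)), evaluate e^w and f_1(w) by Taylor
    # series at w = z/2^l, then undo the scaling l times using
    # e^{2w} = (e^w)^2 and the doubling relation f_1(2w) = (e^w + 1) f_1(w)/2.
    l = 0
    while abs(z) / 2**l >= delta1:
        l += 1
    w = z / 2**l
    ew = sum(w**j / math.factorial(j) for j in range(30))  # e^w by Taylor
    f1 = f1_taylor(w)
    for _ in range(l):
        f1 = (ew + 1.0) * f1 / 2.0   # update f_1 before squaring e^w
        ew = ew * ew
    return f1

z = -20.0
approx = f1_scaling_squaring(z)
exact = (math.exp(z) - 1.0) / z
print(approx, exact)
```

For \(z=-20\) the argument is scaled down by \(2^{5}\) before the Taylor series is applied, and the recovered value of \(f_{1}\) agrees with the explicit formula to near machine precision.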
However, these errors are found to be larger than those resulting from using the identities (4.27), (4.21) and (4.22). These last formulas have turned out to be the most accurate of all the formulas tested in this chapter, and they have the property that we never need to compute a matrix exponential (the analysis of the Scaling and Squaring algorithm type **I** in §4.2.3 shows that rounding errors (4.30) arise when the identity (4.19) is applied to approximate a matrix exponential; these errors have no effect on the variant based on (4.27), (4.21) and (4.22), since it does not involve computing a matrix exponential, which is why it shows the best accuracy). Figure 4.13 provides numerical evidence of the algorithms' validity when using the identities (4.27), (4.21) and (4.22) for approximating the expression \(f_{3}(\Delta tM_{2})\) (4.72), for matrix size \(q=40\) and a threshold value \(\delta_{1}=1\), and also illustrates that there are no significant differences between the errors for the two different forms of the scaling identities (4.20) - (4.22) and (4.23) - (4.25). However, the relative errors of using all the Scaling and Squaring formulas, except the formulas (4.27), (4.21) and (4.22), are seen to increase significantly as \(\Delta t\) increases8. This is due to the increase in the number of scalings needed (due to the increase in the norm of the matrix \(\Delta tM_{2}\) (4.69)) to approximate the expression \(f_{3}(\Delta tM_{2})\).
This process doubles, at each scaling (see formula (4.30)), the cancellation errors and the rounding errors resulting from the repeated matrix multiplications in the Taylor expansion, and we expect the relative error to increase linearly as the value of \(\Delta t\) increases (the simple arguments for the scalar case in §4.2.3, about how errors do not grow for \(z\ll-1\), cannot be applied directly to the matrix case). This leads us to the conclusion that the smaller the norm of the matrix, the fewer the number of required matrix squarings, and the smaller the errors.

Footnote 8: We also test formulas (4.27), (4.21) and (4.22), with different values of the threshold \(\delta_{1}\), in approximating the function \(f_{k}(\Delta tM)\) (4.73), \(k=1,2,3\), for different matrix sizes \(q\) of the Chebyshev differentiation matrix for the second derivative [11, 25, 83, 84] and of the second-order centered difference differentiation matrix \(\Delta tM_{1}\) (4.101) for the first derivative (results are not shown). Our tests show that these formulas are the most accurate ones out of all identities used in this chapter, and that errors are seen not to increase as \(\Delta t\) increases. This confirms our analysis in §4.2.3.

A similar conclusion was arrived at by **Higham** [35], who gave a new rounding error analysis showing that the computed Pade approximant of the scaled matrix, for computing the matrix exponential, is highly accurate owing to the fact that it requires fewer matrix squarings.

#### 4.3.5 Pade Approximation and the Taylor Series

It is more common in the literature (especially in the matrix case) to use a Pade approximation [35, 37, 47, 76] rather than a Taylor series.
The \((n,m)\) Pade approximation to the exponential function \(e^{\Delta tM}\) is defined by \[r_{nm}(\Delta tM)=U_{nm}(\Delta tM)/W_{nm}(\Delta tM), \tag{4.94}\] where \(U_{nm}(\Delta tM)\) and \(W_{nm}(\Delta tM)\) are polynomials of degrees at most \(n\) and \(m\) respectively, defined as follows: \[U_{nm}(\Delta tM)=\sum_{j=0}^{n}\frac{(n+m-j)!n!}{(n+m)!j!(n-j)!}(\Delta tM)^{j}, \tag{4.95}\] and \[W_{nm}(\Delta tM)=\sum_{j=0}^{m}\frac{(n+m-j)!m!}{(n+m)!j!(m-j)!}(-\Delta tM)^{j}. \tag{4.96}\] Nonsingular behavior of \(W_{nm}(\Delta tM)\) (4.96) is assured if the eigenvalues of the matrix \(\Delta tM\) are negative [60]. The order of the approximation is equal to the sum of the degrees of the numerator and the denominator: the approximation matches the Taylor series expansion up to order \(n+m\). The function \(f_{k}(\Delta tM)\) (4.73) can be approximated accurately using the Pade approximation (4.94) near the origin, i.e. when the norm of the matrix \(\Delta tM\) is not too large. Moreover, the diagonal Pade approximation, which uses equal degrees in the numerator and the denominator, is in general more accurate and computationally economical for a matrix argument than the off-diagonal approximation. However, we favor the Taylor series combined with the Scaling and Squaring algorithm type **I** over the Pade approximation, for three reasons. Firstly, we find that the Pade approximations lead to rounding errors roughly double those of the Taylor series, which is significant in view of the amplification of these errors caused by the scaling and squaring process, discussed in §4.2.3.

Figure 4.14: Relative errors of using the 16-term Taylor expansion (blue line) and the \((8,8)\) Pade approximation (green line) versus the values of \(\Delta t\), for approximating the function \(f_{0}(\Delta tM_{2})=e^{\Delta tM_{2}}\) for matrix size \(q=20\).

For large \(m\), \(W_{mm}(\Delta tM)\) (4.96)
approaches the series for \(e^{-\Delta tM/2}\), whereas \(U_{mm}(\Delta tM)\) (4.95) tends to the series for \(e^{\Delta tM/2}\). Hence, cancellation error can reduce the accuracy. This is illustrated in figure 4.14, where we plot the relative errors of using the 16-term Taylor expansion and the \((8,8)\) Pade approximation to the exponential function \(f_{0}(\Delta tM_{2})=e^{\Delta tM_{2}}\), of the matrix \(M_{2}\) (4.69) of order \(q=20\), versus the values of \(\Delta t\). The exact values of the exponential function \(e^{\Delta tM_{2}}\) are approximated using the Matlab code _expm_ and 50 digit arithmetic. Secondly, in addition to the cancellation problem, the Pade approximation requires a more expensive matrix inversion. The denominator matrix \(W_{nm}(\Delta tM)\) may be very poorly conditioned with respect to inversion, and this is particularly true when the matrix \(\Delta tM\) has widely spread eigenvalues [60]. Thirdly, it is possible to keep the number of matrix multiplications reasonably small because of the relation \(U_{nm}(\Delta tM)=W_{mn}(-\Delta tM)\), which reflects the property \(1/e^{\Delta tM}=e^{-\Delta tM}\), and by using the Paterson-Stockmeyer [64] algorithm (this algorithm minimizes the number of matrix multiplications in an efficient way, by grouping the terms together and using the partitioning within a matrix polynomial; see [86] for more detail). However, we find that, when the Paterson-Stockmeyer algorithm is used, the \((n,n)\) Pade approximation for a general function requires a number of matrix multiplications that scales as \(2\sqrt{2n}\), which is exactly the same as for the corresponding Taylor series of degree \(2n\). To sum up, for the reasons mentioned above (the Pade approximation is less accurate than the Taylor series and requires a matrix inversion), we favor the Taylor series combined with the Scaling and Squaring algorithm type \(\mathbf{I}\) in all of our experiments. 
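In the scalar case the definitions (4.94) - (4.96) can be coded directly. The following Python sketch (ours, standard library only, for illustration) evaluates the \((8,8)\) Pade approximation to \(e^{z}\) and confirms that it matches the exponential to order \(n+m=16\) near the origin:

```python
import math

def pade_coeffs(n: int, m: int):
    # Coefficients of U_nm and W_nm for the exponential, per (4.95)-(4.96).
    u = [math.factorial(n + m - j) * math.factorial(n)
         / (math.factorial(n + m) * math.factorial(j) * math.factorial(n - j))
         for j in range(n + 1)]
    w = [math.factorial(n + m - j) * math.factorial(m)
         / (math.factorial(n + m) * math.factorial(j) * math.factorial(m - j))
         for j in range(m + 1)]
    return u, w

def pade_exp(z: float, n: int = 8, m: int = 8) -> float:
    # r_nm(z) = U_nm(z)/W_nm(z); note that W carries the argument -z (4.96).
    u, w = pade_coeffs(n, m)
    num = sum(c * z**j for j, c in enumerate(u))
    den = sum(c * (-z)**j for j, c in enumerate(w))
    return num / den

print(pade_exp(0.5), math.exp(0.5))   # agree to order n + m = 16
```

The relation \(U_{nm}(z)=W_{mn}(-z)\) mentioned above is visible in the code: for \(n=m\) the two coefficient lists are identical, and only the sign of the argument differs between numerator and denominator.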
#### Composite Matrix Algorithm

Analogous to the scalar case (see §4.2.5), we now consider the \(((s+1)q)\times((s+1)q)\) composite matrix \[B_{s}=\left(\begin{array}{ccccccc}\Delta tM&I&\underline{0}&\underline{0}&\underline{0}&\ldots&\underline{0}\\ \underline{0}&\underline{0}&I&\underline{0}&\underline{0}&\ldots&\underline{0}\\ \underline{0}&\underline{0}&\underline{0}&I&\underline{0}&\ldots&\underline{0}\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ldots&\vdots\\ \underline{0}&\underline{0}&\underline{0}&\underline{0}&\underline{0}&\ldots&I\\ \underline{0}&\underline{0}&\underline{0}&\underline{0}&\underline{0}&\ldots&\underline{0}\end{array}\right), \tag{4.97}\] where \(q\) is the order of the matrix \(\Delta tM\), \(\underline{0}\) is the \(q\times q\) zero matrix and \(I\) is the \(q\times q\) identity matrix. If we exponentiate the matrix \(B_{s}\), the resulting matrix \[e^{B_{s}}=\left(\begin{array}{cccccccc}e^{\Delta tM}&f_{1}(\Delta tM)&f_{2}(\Delta tM)&f_{3}(\Delta tM)&f_{4}(\Delta tM)&\ldots&f_{s}(\Delta tM)\\ \underline{0}&I&I&I/2&I/3!&\cdots&I/(s-1)!\\ \underline{0}&\underline{0}&I&I&I/2&\cdots&I/(s-2)!\\ \vdots&\vdots&\vdots&\vdots&\vdots&\cdots&\vdots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\cdots&\vdots\\ \underline{0}&\underline{0}&\underline{0}&\underline{0}&\underline{0}&\cdots&I\\ \end{array}\right), \tag{4.98}\] contains the coefficients \(f_{k}(\Delta tM)\) (4.73), \(k=1,2,\ldots,s\), required by the ETD methods of order \(s\), which can again be extracted easily. The proof of (4.98) is essentially the same as in the scalar case (see §4.2.5), and uses the Taylor series expansion of the exponential function.
We note in particular that, due to the structure of the matrix \(B_{s}\), any power of the matrix \(B_{s}\) contains as a sub-matrix the corresponding power of the matrix \(\Delta tM\) in the same position where \(B_{s}\) contains \(\Delta tM\), and therefore the exponential of the matrix \(\Delta tM\) will be generated in the same position. This algorithm is implemented using the Matlab function _expm_ to approximate the exponential \(e^{B_{s}}\) (4.98), and has the advantage of being one of the simplest of all the algorithms to code. Figure 4.8 shows that, when computing the relative error (4.10) of using this algorithm for approximating the expression \(f_{k}(\Delta tM_{2})\) (4.73), \(k=2,3\), of matrix size \(q=40\) for small values of \(\Delta t\), the results are very satisfactory (the same holds qualitatively for the expression \(f_{1}(\Delta tM_{2})\) (4.70)). As the value of \(\Delta t\) increases, the norm of the matrix \(B_{s}\) (4.97) also increases. Therefore more scaling operations are needed9, leading to an amplification of the cancellation errors and of the rounding errors resulting from the matrix inversion and the repeated matrix multiplications when using the Pade approximation, see §4.3.5, and to an increase in the computational expense. In fact, referring to formula (4.30), these errors are doubled at each scaling and we expect the relative error to increase linearly as the value of \(\Delta t\) increases (see in §4.2.3 the analysis of the rounding errors in using the Scaling and Squaring algorithm for approximating the exponential function). In addition, the algorithm uses a much larger matrix than the other algorithms, because the order of the matrix \(B_{s}\) (4.97) is \((s+1)\) times that of the matrix \(\Delta tM_{2}\) (4.69). This also leads to a significant increase in the computational effort that slows the algorithm (see §4.6).
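The structure of (4.97) - (4.98) is easy to verify in the scalar case \(q=1\), where \(\Delta tM\) reduces to a number \(z\). The Python sketch below is our own illustration (a plain Taylor series stands in for Matlab's _expm_, which is adequate here because the entries of \(B_{s}\) are small); it exponentiates \(B_{s}\) for \(s=3\) and reads off \(f_{1}\), \(f_{2}\) and \(f_{3}\) from the first block row:

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms: int = 40):
    # Matrix exponential by Taylor series; adequate here because the
    # entries of B_s are small, so no scaling is needed.
    n = len(A)
    E = [[float(i == j) for j in range(n)] for i in range(n)]
    P = [row[:] for row in E]
    for k in range(1, terms):
        P = mat_mul(P, A)
        for i in range(n):
            for j in range(n):
                E[i][j] += P[i][j] / math.factorial(k)
    return E

# Scalar case q = 1, s = 3: B_s of (4.97) is 4x4, with z in the corner
# and 1 on the superdiagonal.
z, s = 0.3, 3
B = [[0.0] * (s + 1) for _ in range(s + 1)]
B[0][0] = z
for i in range(s):
    B[i][i + 1] = 1.0
eB = mat_exp(B)

def f_k(z: float, k: int) -> float:
    # Explicit formula f_k(z) = (e^z - sum_{j<k} z^j/j!)/z^k (cf. (4.2)).
    return (math.exp(z)
            - sum(z**j / math.factorial(j) for j in range(k))) / z**k

print([eB[0][k] for k in range(1, s + 1)])
print([f_k(z, k) for k in range(1, s + 1)])
```

The entries \((e^{B_{s}})_{1,k+1}\) reproduce \(f_{k}(z)\) to machine precision, exactly as (4.98) states for the matrix case.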
#### Matrix Decomposition Algorithm

One class of efficient algorithms for problems involving large matrices and evaluation of the exponential \(e^{\Delta tM}\) is based on factorizations or decompositions [60] of the matrix \(\Delta tM\). Such matrix decompositions are based on transformations of the form \[\Delta tM=VDV^{-1},\] and the power series definition of \(e^{\Delta tM}\) then implies \[e^{\Delta tM}=Ve^{D}V^{-1}.\] The idea is to find a matrix \(V\) for which \(e^{D}\) is easy to compute. This provides a useful algorithm in the case where matrices can be diagonalized. The simplest approach [60] is to take \(V\) to be the matrix whose columns are the eigenvectors of the matrix \(\Delta tM\), that is \[V=[v_{1},\ldots,v_{q}],\] where \[\Delta tMv_{j}=\zeta_{j}v_{j},\ \ \ \ j=1,\ldots,q,\] and \(\zeta_{j}\) are the eigenvalues of the matrix \(\Delta tM\) of order \(q\). These \(q\) equations can be written \[\Delta tMV=VD,\] where \(D=diag(\zeta_{1},\ldots,\zeta_{q})\). The exponential of the diagonal matrix \(D\) can be found easily, since it only requires computing the exponential of a scalar: \[e^{D}=diag(e^{\zeta_{1}},\ldots,e^{\zeta_{q}}).\] Using the above considerations, we can write the expression \(f_{k}(\Delta tM)\) (4.73), \(k=1,2,\ldots,s\), as follows: \[f_{k}(\Delta tM) = \Big{(}e^{\Delta tM}-\sum_{j=0}^{k-1}\frac{(\Delta tM)^{j}}{j!}\Big{)}/(\Delta tM)^{k}, \tag{4.99}\] \[= (VDV^{-1})^{-k}\Big{(}Ve^{D}V^{-1}-\sum_{j=0}^{k-1}\frac{(VDV^{-1})^{j}}{j!}\Big{)},\] \[= VD^{-k}e^{D}V^{-1}-\sum_{j=0}^{k-1}\frac{VD^{-k}D^{j}V^{-1}}{j!},\] \[= VD^{-k}\Big{(}e^{D}-\sum_{j=0}^{k-1}\frac{D^{j}}{j!}\Big{)}V^{-1},\] \[= Vf_{k}(D)V^{-1}.\] Here, we have reduced the evaluation of a function of a non-diagonal matrix to that of the diagonal matrix \(D\), whose elements are the eigenvalues \(\zeta_{j},\ j=1,\ldots,q\) of the matrix \(\Delta tM\).
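As a minimal illustration of (4.99), consider a \(2\times 2\) symmetric matrix whose eigendecomposition is known in closed form; the decomposition result \(Vf_{1}(D)V^{-1}\) can then be compared with a direct Taylor evaluation of \(f_{1}(M)\). The sketch below is our own Python code (the matrix \(M\) and all helper names are illustrative assumptions, not taken from the text):

```python
import math

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def f1(z: float) -> float:
    return (math.exp(z) - 1.0) / z

def f1_taylor_mat(M, terms: int = 40):
    # Direct evaluation of f_1(M) = sum_{j>=0} M^j/(j+1)! for reference.
    F = [[float(a == b) for b in range(2)] for a in range(2)]
    P = [row[:] for row in F]
    for j in range(1, terms):
        P = mm(P, M)
        for a in range(2):
            for b in range(2):
                F[a][b] += P[a][b] / math.factorial(j + 1)
    return F

# M has eigenvalues -1 and -3 with eigenvectors (1,1) and (1,-1),
# so V, D and V^{-1} are known in closed form.
M = [[-2.0, 1.0], [1.0, -2.0]]
V = [[1.0, 1.0], [1.0, -1.0]]
Vinv = [[0.5, 0.5], [0.5, -0.5]]
fD = [[f1(-1.0), 0.0], [0.0, f1(-3.0)]]   # f_1(D): scalar f_1 on the diagonal
via_decomp = mm(mm(V, fD), Vinv)          # V f_1(D) V^{-1}, as in (4.99)
direct = f1_taylor_mat(M)
print(via_decomp)
print(direct)
```

Both routes agree to machine precision, confirming that only scalar evaluations of \(f_{1}\) are needed once the eigendecomposition is available.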
In our numerical experiments, firstly, we use the command \([V,D]=\mathit{eig}\,(\Delta tM_{2})\) in Matlab, for matrix size \(q=40\), to produce a diagonal matrix \(D\) whose elements on the main diagonal are the eigenvalues \(\lambda_{j},\ j=1,2,\ldots,q\) of the matrix, and another matrix \(V\) whose columns are the corresponding \(q\) eigenvectors. Then, we use the Taylor expansion with 30 terms, as described in §4.2.1, to approximate the exponentials \(e^{\lambda_{j}},\ j=1,\ldots,q\) in \(f_{k}(D)\) (4.99), \(k=1,2,3\), for those eigenvalues satisfying \(|\lambda_{j}|<1\), and the explicit formula \(f_{k}(z)\) (4.2) of orders \(k=1,2,3\) respectively for those eigenvalues satisfying \(|\lambda_{j}|\geq 1\). Finally, we compute the inverse of the matrix \(V\), using the command _inv_ in Matlab, then apply (4.99) to approximate the expression \(f_{k}(\Delta tM_{2})\) (4.73), \(k=1,2,3\), and find the numerical relative errors (4.10) of using this algorithm. According to figure 4.8, this algorithm is remarkable when we compare its accuracy with that of the explicit formula \(f_{k}(\Delta tM_{2})\) (4.73), \(k=2,3\), overall, and with that of the Taylor series and the Cauchy integral formula for large values of \(\Delta t\). However, it is less accurate than the Cauchy integral formula, the Scaling and Squaring type \(\mathbf{I}\) algorithm, and the Composite Matrix algorithm for small values of \(\Delta t\) (qualitatively similar results are found for the formula \(f_{1}(\Delta tM_{2})\) (4.70)). The theoretical difficulty with this algorithm obviously occurs when a matrix does not have a complete set of linearly independent eigenvectors. In this case there is no invertible matrix of eigenvectors \(V\), and the algorithm in the conventional eigenvector approach breaks down (a more general Schur decomposition can be used in this case [56]).
### 4.4 Chebyshev Spectral Differentiation Matrices

In this section, we carry out some tests on Chebyshev spectral differentiation matrices [11, 25, 83, 84]. The formulas for the entries of the \((Q+1)\times(Q+1)\) Chebyshev differentiation matrix for the first derivative on the Chebyshev points \(x_{j}=\cos(j\pi/Q),\ j=0,1,\ldots,Q,\ x\in[-1,1]\) are given in [84]. To compute the Chebyshev differentiation matrix \(M_{c}\) for the second derivative with Dirichlet boundary conditions, we square the Chebyshev matrix for the first derivative and then strip the first and last rows and columns to obtain a matrix \(M_{c}\) of order \(q=Q-1\). These rows and columns have no effect, since the rows are multiplied by zero and the columns are ignored. Note that these matrices are dense and have widely-spread eigenvalues. In order to compare the results for the Chebyshev matrix \(M_{c}\) with those of our earlier experiments on the finite difference matrix \(M_{2}\) (4.69), we re-scale \(M_{c}\) so that it applies to an interval of length \(q+1=Q\) (this ensures that its eigenvalues of small magnitude are almost identical to those of \(M_{2}\)). Thus we work with the matrix \(M_{C}=4M_{c}/Q^{2}\), for the second derivative, of order \(q=Q-1=40\). We again use the Matlab function _expm_ to approximate the exponential function \(e^{\Delta tM_{C}}\), the function _inv_ to find \((\Delta tM_{C})^{-1}\), 50 digit arithmetic to approximate the exact values of the expression \[f_{3}(\Delta tM_{C})=\frac{e^{\Delta tM_{C}}-I-\Delta tM_{C}-(\Delta tM_{C})^{2}/2}{(\Delta tM_{C})^{3}}, \tag{4.100}\] and the \(2\)-norm of a matrix, given by (4.75), to find the numerical relative errors (4.10) of using each algorithm to approximate the expression for a range of values of \(\Delta t\).
In figure 4.15 we present the results for the expression \(f_{3}(\Delta tM_{C})\) (4.100) together with the errors for the use of the explicit formula; this means simply evaluating the formula \(f_{3}(\Delta tM_{C})\) using the Matlab commands _expm_ and _inv_ with standard double precision (16 digits) arithmetic (results for the expressions \(f_{1}(\Delta tM_{C})\) (4.70) and \(f_{2}(\Delta tM_{C})\) (4.71) are found to be qualitatively similar).

Figure 4.15: Relative errors for the expression \(f_{3}(\Delta tM_{C})\) (4.100) versus the values of \(\Delta t\) in the \(40\times 40\) matrix case. The algorithms are: Explicit Formula (red stars), 30-term Taylor series (blue circles), the Cauchy Integral Formula (magenta circles), Scaling and Squaring Type **I** based on the identities (4.20) - (4.22) (black stars), Composite Matrix (cyan diamonds) and Matrix Decomposition (green squares).

The test exhibits qualitatively similar results to the case of the finite difference matrix \(\Delta tM_{2}\) (4.69), except that the errors are typically larger, due to the larger eigenvalues of the Chebyshev matrix \(\Delta tM_{C}\). The eigenvalue of largest magnitude is approximately \(-319.5\Delta t\) and the smallest is approximately \(-0.0059\Delta t\) for \(q=40\). The values of \(\Delta t\) at which the explicit formula, the Taylor series and the Cauchy Integral Formula algorithms start to be inaccurate when numerically evaluating the function \(f_{3}(\Delta tM_{C})\) are smaller than when approximating \(f_{3}(\Delta tM_{2})\), and the errors themselves are larger; again this is due to the larger eigenvalues of the Chebyshev matrix \(\Delta tM_{C}\), see figure 4.15. For the Cauchy Integral Formula algorithm, we take the contour of integration in (4.81) to be a circle centered at half the minimum eigenvalue (\(\xi_{min}\)) of the matrix \(\Delta tM_{C}\) (the eigenvalues of the matrix are on the negative real axis), sampled at 128 equally spaced points.
The radius \[R=-\frac{\xi_{min}}{2}+5,\] varies with each value of \(\Delta t\geq 0.025\), to ensure that the circular contour encloses all eigenvalues of the matrix \(\Delta tM_{C}\) and does not pass too close to any. The above choice of \(R\) was found to be less accurate for small values of \(\Delta t\), so the radius \[R=-\frac{\xi_{min}}{2}+1,\] is chosen for each value of \(\Delta t<0.025\); this again varies to ensure that the circular contour encloses all the eigenvalues of the matrix and that the algorithm yields the desired error levels. For the Composite Matrix algorithm, we compute the exponential of the matrix \(B_{s}\) (4.97) (which contains the matrix \(\Delta tM_{C}\)), using the Matlab code _expm_, which is based on the Scaling and Squaring algorithm combined with Pade approximations (4.94). We find that for small values of \(\Delta t\), see figure 4.15, the _expm_ function leads to significantly greater rounding errors than the Scaling and Squaring algorithm type \(\mathbf{I}\) based on identities (4.20) - (4.22), combined with the Taylor series and a threshold value \(\delta_{1}=1\), for approximating the function \(f_{3}(\Delta tM_{C})\). This confirms our reasons, explained in §4.3.5, for combining the Scaling and Squaring algorithm with the Taylor series rather than with the Pade approximation. On the other hand, as the value of \(\Delta t\) increases, the norm of the matrices \(B_{s}\) and \(\Delta tM_{C}\) increases, as the eigenvalues of the matrix \(\Delta tM_{C}\) spread widely, and the performance of the Composite Matrix algorithm resembles that of the Scaling and Squaring type \(\mathbf{I}\) algorithm, both being the second least accurate algorithms. This is due to the amplification (in fact, doubling) of the rounding errors caused by the increase in the number of scaling and squaring operations needed to approximate the function \(f_{3}(\Delta tM_{C})\) and the matrix exponential \(e^{B_{s}}\) (4.98).
And so, we expect the relative error to increase linearly as the value of \(\Delta t\) increases (see in §4.2.3 the analysis of the rounding errors in using the Scaling and Squaring algorithm for approximating the exponential function, leading to formula (4.30)). Finally, according to figure 4.15, the performance of the Matrix Decomposition algorithm surpasses that of all other algorithms for large values of \(\Delta t\), though for small \(\Delta t\), the algorithm's performance resembles that of the Composite Matrix algorithm, both being less accurate than the Taylor series, the Cauchy integral formula, and the Scaling and Squaring type **I** algorithm.

### 4.5 Matrices With Imaginary Eigenvalues

To investigate further the efficiency of the algorithms described in §4.3 for approximating the function \(f_{k}(\Delta tM)\) (4.73), \(k=1,2,\ldots,s\), we conduct similar tests on the \(60\times 60\) second-order centered difference differentiation matrix (see §2.2) for the first derivative, \[M_{1}=\frac{1}{2}\left(\begin{array}{ccccccccc}0&1&0&0&0&\ldots&0&0\\ -1&0&1&0&0&\ldots&0&0\\ 0&-1&0&1&0&\ldots&0&0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ldots&\vdots&\vdots\\ 0&0&0&0&0&\ldots&-1&0\end{array}\right). \tag{4.101}\]

Figure 4.16: Relative errors in \(f_{3}(\Delta tM_{1})\) (4.102) versus the values of \(\Delta t\) in the \(60\times 60\) matrix case. The algorithms are: Explicit Formula (red stars), 30-term Taylor series (blue circles), the Cauchy Integral Formula (magenta circles), Scaling and Squaring Type **I** based on the identities (4.20) - (4.22) (black stars), Composite Matrix (cyan diamonds) and Matrix Decomposition (green squares).

Note that if the order of the matrix \(M_{1}\) (4.101) is \(q\), the scaling of the matrix \(M_{1}\) is such that it corresponds to the first derivative on an interval of length \(q+1\), and that the eigenvalues of the matrix are all pure imaginary.
The eigenvalues \(\eta_{j}\) of the matrix \(\Delta tM_{1}\) (4.101) can be derived analytically (see [43]) in the form \[\eta_{j}=i\cos\left(\frac{j\pi}{q+1}\right)\Delta t,\ \ \ \ j=1,\cdots,q,\] so the eigenvalue of largest magnitude is \(\eta_{max}\approx 0.998i\Delta t\) and the smallest is \(\eta_{min}\approx 0.0257i\Delta t\) for \(q=60\). As usual, we use the Matlab function _expm_ to approximate the exponential function \(e^{\Delta tM_{1}}\), the function _inv_ to find \((\Delta tM_{1})^{-1}\), 50 digit arithmetic to approximate the exact values of the expression \[f_{3}(\Delta tM_{1})=\frac{e^{\Delta tM_{1}}-I-\Delta tM_{1}-(\Delta tM_{1})^{2}/2}{(\Delta tM_{1})^{3}}, \tag{4.102}\] and the \(2\)-norm of a matrix, given by (4.75), to find the numerical relative errors (4.10) of using each algorithm to approximate the expression. In figure 4.16 we present a comparison of the results for the expression \(f_{3}(\Delta tM_{1})\) (4.102) using the same six algorithms as in the previous sections (results for the expressions \(f_{1}(\Delta tM_{1})\) (4.70) and \(f_{2}(\Delta tM_{1})\) (4.71) are found to be qualitatively similar). For the Cauchy Integral Formula algorithm, we take the contour of integration in (4.81) to be a circle centered at zero, sampled at 128 equally spaced points. The radius \(R=|\eta_{max}|+3\), where \(|\eta_{max}|\) is the largest absolute eigenvalue of the matrix \(\Delta tM_{1}\) (4.101), varies with each value of \(\Delta t\) to ensure that the circular contour encloses all eigenvalues of the matrix \(\Delta tM_{1}\) and does not pass too close to any, so that the algorithm yields the desired error levels.
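The analytic eigenvalue formula above is easy to verify for a small instance of \(M_{1}\) (4.101). In the Python sketch below (our own code; the eigenvector form \(v_{k}=i^{k}\sin(kj\pi/(q+1))\) is our derivation for the purpose of this check, not taken from [43]), we confirm that \(M_{1}v=\eta_{j}v\) to machine precision:

```python
import math

# Build the q x q matrix M_1 of (4.101) for a small q.
q = 5
M1 = [[0.0] * q for _ in range(q)]
for i in range(q - 1):
    M1[i][i + 1] = 0.5
    M1[i + 1][i] = -0.5

# Candidate eigenpair for eta_j = i cos(j pi/(q+1)) (here with Delta t = 1);
# the eigenvector form v_k = i^k sin(k j pi/(q+1)) is our own derivation.
j = 1
theta = j * math.pi / (q + 1)
eta = 1j * math.cos(theta)
v = [(1j)**k * math.sin(k * theta) for k in range(1, q + 1)]

Mv = [sum(M1[r][c] * v[c] for c in range(q)) for r in range(q)]
residual = max(abs(Mv[r] - eta * v[r]) for r in range(q))
print(residual)
```

The residual is at rounding level, confirming both that the spectrum is pure imaginary and that the extreme eigenvalues quoted above follow from \(\cos(j\pi/(q+1))\) at \(j=1\) and \(j\approx(q+1)/2\).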
The test exhibits qualitatively similar results to the case of the finite difference matrix \(M_{2}\) (4.69) and suggests that the algorithms are efficient for approximating the function \(f_{k}(\Delta tM)\) (4.73), \(k=1,2,\ldots,s\), for small values of \(\Delta t\), whatever the type and the magnitude of the eigenvalues of the matrix \(M\). ### 4.6 Computation Time The main computational challenge in the implementation of the ETD methods is the need for fast and accurate algorithms for approximating the ETD coefficients. Figure 4.17: CPU time of using each algorithm for approximating the expression \(f_{3}(\Delta tM_{2})\) (4.72) versus the order \(q\) of the matrix \(\Delta tM_{2}\) (4.69). The algorithms are: Taylor series (blue stars), the Cauchy Integral Formula (black circles), Scaling and Squaring Type **I** based on the identities (4.20) - (4.22) (green diamonds), Composite Matrix (magenta circles) and Matrix Decomposition (black pluses). The preceding subsections investigated the accuracy of the various algorithms; we now study the computational time of each. We calculate in Matlab the CPU time of using each algorithm to approximate the expression (4.72) \[f_{3}(\Delta tM_{2})=\frac{e^{\Delta tM_{2}}-I-\Delta tM_{2}-(\Delta tM_{2})^{2} /2}{(\Delta tM_{2})^{3}},\] of the matrix \(\Delta tM_{2}\) (4.69) versus the order of the matrix \(q\) for values ranging from \(q=40\) to \(q=200\). We perform our numerical experiments for two values of the time step \(\Delta t\). In the first experiment \(\Delta t=0.6\). For the Cauchy integral formula algorithm we use \(N=32\) points to discretize the circular contour, centered at the minimum eigenvalue (\(\lambda_{min}\)) of the matrix \(\Delta tM_{2}\) (the eigenvalues of the matrix \(M_{2}\) are on the negative real axis) with radius \(R=-\lambda_{min}+1\) varying to enclose all the eigenvalues of the matrix.
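The contour evaluation used above can be sketched in a few lines. The hypothetical helper below (not the thesis code; the function name, test matrix, contour center and radius are illustrative) approximates \(f_{3}\) of a matrix by applying the trapezium rule to the Cauchy integral on a circle enclosing the spectrum. Note that one resolvent inverse is needed per quadrature point, which is precisely why the cost grows with the number of contour points \(N\):

```python
import numpy as np

def f3(z):
    # f3(z) = (e^z - 1 - z - z^2/2)/z^3; entire, but cancellation-prone near z = 0,
    # so the contour should stay away from the origin
    return (np.exp(z) - 1.0 - z - 0.5 * z * z) / z**3

def f3_cauchy(A, center, radius, N=64):
    """Trapezium-rule approximation of the Cauchy integral (cf. (4.81)):
    f3(A) = (1/(2*pi*i)) * integral of f3(z) (zI - A)^{-1} dz over a circle,
    costing one matrix inverse per quadrature point."""
    q = A.shape[0]
    I = np.eye(q)
    acc = np.zeros((q, q), dtype=complex)
    for j in range(N):
        z = center + radius * np.exp(2j * np.pi * j / N)
        acc += f3(z) * (z - center) * np.linalg.inv(z * I - A)
    return acc / N

# Diagonal test matrix: the reference value is just f3 applied to the diagonal entries
d = np.array([-0.5, -1.0, -2.0])
A = np.diag(d)
ref = np.diag(f3(d))
approx = f3_cauchy(A, center=-1.0, radius=2.0, N=64).real
rel_err = np.linalg.norm(approx - ref, 2) / np.linalg.norm(ref, 2)
```

Because the integrand is analytic and periodic in the contour parameter, the trapezium rule converges geometrically, so modest \(N\) already gives errors near machine precision when the contour is well placed.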
In the second experiment \(\Delta t=10\), and so the eigenvalues of the matrix \(\Delta tM_{2}\) have a larger spread. Thus, for the Cauchy integral formula algorithm we use \(N=128\) points around the circular contour centered at half the minimum eigenvalue (\(\lambda_{min}\)) of the matrix \(\Delta tM_{2}\), with radius \(R=-\lambda_{min}/2+5\) again varying to enclose all eigenvalues. Figure 4.17 provides the timing results for the two different values of \(\Delta t\). The figure shows that most of the algorithms exhibit a CPU time proportional to \(q^{3}\); this is to be expected since both matrix multiplication and matrix inversion scale in this way. The Matrix Decomposition algorithm does extremely well overall, and is the cheapest algorithm in terms of CPU time for non-diagonal matrix problems. The majority of the time is consumed by finding the eigenvalues \(\lambda_{j}\) of the matrix \(\Delta tM_{2}\) (4.69) and applying the 30-term Taylor series to approximate the exponentials \(e^{\lambda_{j}},\ j=1,\ldots,q\) in \(f_{3}(D)\) (4.99) for those eigenvalues satisfying \(|\lambda_{j}|<1\). The CPU time for the Taylor series and the Scaling and Squaring type **I** algorithms follows the same pattern for the two values of \(\Delta t\). The 30-term Taylor series algorithm is the second most economical in time (having no need to compute a matrix exponential), and its CPU time does not depend on the value of \(\Delta t\). Most of the CPU time is spent on working out the matrix multiplications needed for the algorithm. The Scaling and Squaring type **I** algorithm, used with a 30-term Taylor series, has a start-up time overhead, as it requires us to carry out matrix multiplications and compute several values of the identities (4.20) - (4.22) to begin.
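The way the scaling-and-squaring cost depends on the matrix norm and on the threshold can be seen in a minimal sketch of the underlying mechanism, written here in Python for the plain exponential (the type **I** algorithm of the text applies the analogous identities (4.20) - (4.22) to the ETD coefficient functions; the helper name, threshold and test matrix below are illustrative):

```python
import numpy as np

def expm_ss(A, delta=1.0, terms=30):
    """Illustrative scaling-and-squaring for e^A (hypothetical helper, not the
    thesis code): scale A by 2^s until its 2-norm is below the threshold delta,
    sum a `terms`-term Taylor series, then square s times."""
    norm = np.linalg.norm(A, 2)
    s = int(np.ceil(np.log2(norm / delta))) if norm > delta else 0
    B = A / 2**s
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms + 1):
        term = term @ B / n      # accumulates B^n / n!
        E = E + term
    for _ in range(s):           # undo the scaling: e^A = (e^{A/2^s})^{2^s}
        E = E @ E
    return E

# Reference via the spectral decomposition of a symmetric test matrix
rng = np.random.default_rng(0)
S = rng.standard_normal((5, 5))
A = 2.0 * (S + S.T)
lam, V = np.linalg.eigh(A)
ref = V @ np.diag(np.exp(lam)) @ V.T
rel_err = np.linalg.norm(expm_ss(A) - ref, 2) / np.linalg.norm(ref, 2)
```

The number of scalings \(s\) grows with \(\log_{2}(\|A\|/\delta)\), which is why a larger norm (larger \(\Delta t\)) or a smaller threshold makes the algorithm more expensive, exactly as described in the text.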
The time consumption of this algorithm depends on the norm of the matrix, the value of the threshold (here the threshold \(\delta_{1}=1\)), and the number of terms used in the Taylor series: when the norm gets larger (as the value of \(\Delta t\) increases) or when the chosen value of the threshold is smaller, more scaling operations are needed. Also, when the chosen value of the threshold is larger, more terms of the Taylor series and therefore more work on matrix multiplications are needed. Hence in both cases the algorithm becomes expensive. The Cauchy integral formula algorithm is very expensive computationally as one has to compute \(N\) matrix inverses (128 for the largest value of \(\Delta t\)) and take the average (4.81) of the function values at the \(N\) points. As we enlarge the contour to enclose all the eigenvalues of the matrix, we must also increase the number of points \(N\) required to discretize the contour accurately, and therefore, the computation time of this algorithm increases. Lastly, the Composite Matrix algorithm requires the evaluation of the exponential function \(e^{B_{s}}\) (4.98), which is a non-diagonal matrix of size \(4q\times 4q\), so this is often the slowest algorithm. In Matlab this evaluation uses the code _expm_ which depends on the Scaling and Squaring algorithm. Hence, for larger matrix order \(q\) and for larger values of \(\Delta t\), i.e. larger matrix norm, more scaling operations are required, and thus the time cost increases. ### 4.7 Conclusion In our investigation of the accuracy and the efficiency of six algorithms for approximating the ETD coefficients we found the following: 1. **Taylor Series:** The primary advantage of this algorithm is the simplicity and ease of implementation for both scalar and matrix cases. In addition, it is the second least costly in time. However, it is not accurate when approximating the ETD coefficients for large values (in magnitude) of the argument (matrix norm in the matrix case). 2. 
**The Cauchy Integral Formula:** This algorithm exhibits significant variation in performance in different cases. It has turned out to be very accurate for diagonal matrix problems, but it can be inaccurate for non-diagonal matrices with large norm. The large errors that can arise in such cases are caused by the chosen method of implementation: matrices \(\Delta tM\) with large norm have a large spread of eigenvalues, and a circle of large radius is thus required to enclose all eigenvalues. This requires an a priori contour radius which in general is problem dependent, and not trivially available. In addition, the location of the eigenvalues must be known, which in general adds to the expense of the algorithm. If a fixed number of points \(N\) is used to discretize the contour then, as the radius of the required circle increases, the method's accuracy decreases. To avoid this, \(N\) must be chosen to increase as the matrix norm increases (i.e. as \(\Delta t\) increases). This results in a large increase in computational time, as a matrix inverse has to be calculated for each point on the contour - a disadvantage of the algorithm in non-diagonal matrix cases. However, improvements to this algorithm have recently been developed [69, 70]. 3. **Scaling and Squaring Algorithm Type I:** This algorithm is the most complex to implement, but it is one of the most effective and powerful algorithms for diagonal and non-diagonal matrix problems. In non-diagonal matrix problems, knowledge of the eigenvalue of largest magnitude is required. The algorithm based on the identities (4.20) - (4.22), which has been used in our main experiments whose results are illustrated in figures 4.8 and 4.15 - 4.17, has proved to be efficient in terms of computation time and accuracy for a good range of \(\Delta t\)-values, although the errors are seen to increase in proportion to \(\Delta t\).
However, testing the algorithm with the identities (4.27), (4.21) and (4.22) has shown that this combination is the most accurate of all the identities used in this chapter. Moreover, the errors do not grow over a large range of \(\Delta t\)-values, which gives these identities an additional advantage. 4. **Scaling and Squaring Algorithm Type II:** This algorithm performs well when approximating the coefficient \(f_{1}(z)\) that appears in the ETD1 method (3.14), but when computing the coefficients in higher order ETD methods for small values (in magnitude) of the argument the results are very inaccurate, because of the amplification of rounding errors at each scaling. Thus, the Scaling and Squaring type **II** algorithm is not a useful algorithm. 5. **Composite Matrix Algorithm:** From a practical point of view, the algorithm is successful for approximating the ETD coefficients accurately, and is also very easy to program. However, finding the exponential of a large matrix in non-diagonal matrix problems can lead to high computational cost, caused by the larger number of operations needed to approximate the matrix exponential. 6. **Matrix Decomposition Algorithm:** For non-diagonal problems, the matrix decomposition algorithm is the cheapest algorithm in time. Most of the CPU time is spent on determining the eigenvalues required for the algorithm. Furthermore, it is remarkably accurate when compared with the explicit formula for ETD coefficients (over all \(\Delta t\) values considered), and with the Taylor series and the Cauchy integral formula for large values of \(\Delta t\). For small values of \(\Delta t\), however, it is slightly less accurate than the Cauchy integral formula, the Scaling and Squaring type **I** and the Composite Matrix algorithms. We can sum up this set of comparisons by saying that the Scaling and Squaring type **I** algorithm is an efficient algorithm for computing the ETD coefficients in diagonal and non-diagonal matrix cases.
It exhibits some loss of accuracy as the matrix norm increases, but this is much less severe than for the Taylor series and the Cauchy integral formula when approximating the ETD coefficients for large values (in magnitude) of the scalar arguments and large norm matrices respectively. Also, it compares favorably with the high computational cost of the Cauchy integral formula and the Composite Matrix algorithm in non-diagonal matrix cases. The Matrix Decomposition algorithm, in the conventional eigenvector approach, also performs well, and is very efficient computationally, though it is slightly less accurate when the matrix norm is small, and is not applicable to all matrices.
## Chapter 5 Numerical Experiments ## Outline of Chapter In this chapter, we perform a variety of numerical experiments on real application problems. For the simulation tests, we choose periodic boundary conditions and apply Fourier spectral approximation for the spatial discretization. We employ first, second and fourth-order ETD methods and compare them with other competing stiff integrators, including the first-order Implicit-Explicit (IMEX) method and first, second and fourth-order Integrating Factor (IF) methods, for integrating in time three stiff partial differential equations (PDEs), all in one space dimension. The problems considered are: the time-dependent scalar **Kuramoto-Sivashinsky (K-S)** equation, the nonlinear **Schrodinger (NLS)** equation and the nonlinear **Thin Film** equation. In the K-S and the NLS equations, the linear terms are primarily responsible for stiffness, whereas in the third equation the nonlinear terms are the stiffest. The main testing parameters are the accuracy, the start-up overhead cost and the CPU time consumed by the methods, since these parameters play key roles in the overall efficiency of the methods. ### 5.1 Introduction Over the last decade there has been a renewed interest in applying **Exponential Time Differencing (ETD)** schemes [15, 39, 53, 61, 71] to the solution of stiff systems. A Matlab package recently designed by **Berland et al.**[8] aimed to facilitate easy testing and comparison of various exponential integrators, of Runge-Kutta, multi-step and general linear type, applied to semi-linear problems such as the **Kuramoto-Sivashinsky (K-S)**[41] and the nonlinear **Schrodinger (NLS)**[77] equations. One of the main reasons for this renewed interest is the improvement in the accurate computation of the coefficients that arise in ETD schemes [2, 5, 8, 35, 47, 54, 56, 57, 67, 70, 80, 81] (this includes the exponential and related functions; see also §4).
Following these efforts, the exponential integrators have emerged as viable alternatives to classical ones. The numerical comparisons presented in [81], for solving chemical kinetics problems, and the numerical experiments performed in [37], for solving large stiff systems of DEs, reveal examples where explicit exponential integrators outperform standard integrators. A similar conclusion was reached by **Du** and **Zhu**[22, 23] when they performed some simulations of micro-structure evolution (a core component of phase field modeling) in two and three dimensions. The authors found that the higher order ETD based schemes can be several orders of magnitude faster than low-order **Implicit-Explicit (IMEX)**[87] methods. The superior performance of the ETD methods, for solving some dissipative and dispersive PDEs, was also illustrated in [19] by **Cox** and **Matthews**. In addition, **Kassam** and **Trefethen**[44, 45] compared the ETD methods of [19] with various fourth-order methods for solving various one-dimensional diffusion-type problems. They concluded that exponential integrators are highly competitive and accurate, with the best, by a clear margin, being the ETD4RK method of [19]. However, more recently **Krogstad**[49] presented an alternative fourth-order ETD method (ETDRK4-B), and found that it is slightly more accurate than the ETD4RK method of [19] when solving several semi-discretized PDEs, such as the Kuramoto-Sivashinsky (K-S) equation. A recent report [57] on six different types of exponential integrators showed that, especially for parabolic semi-linear problems, such as the K-S and the nonlinear Schrodinger (NLS) equations, the ETD type of exponential integrators outperform integrators of Lawson type [52]. 
Again, **Berland** and **Skaftestad**[7] used the NLS equation as a numerical test problem, and found that under certain circumstances the performance of a fourth-order Lawson integrating factor method was demonstrably poorer than that of the fourth-order ETD4RK method of [19]. Further studies on solving the NLS equation numerically were presented in [46]. The author compared the performances of several fourth-order methods (mainly related to exponential integrators), and found that in specific cases these methods can be used to solve the test equations accurately and efficiently. The aim of this chapter is to make some observations regarding the efficiency of a variety of exponential integrators of different orders (including the ETD and the ETD-RK methods proposed by **Cox** and **Matthews**[19], see §3.2) when compared with other competing stiff integrators. These methods are listed in §5.2 with details of the implementations. We conduct numerical studies and comparison experiments on three model problems, all in one space dimension. In §5.3 and §5.4, we consider the numerical solution of the time dependent scalar **Kuramoto-Sivashinsky (K-S)** equation [41] and the nonlinear **Schrodinger (NLS)** equation [77] respectively. The third model considered is the nonlinear **Thin Film** equation [36] (to our knowledge, no previous work has applied exponential integrators to the thin film equation). In the K-S and the NLS equations, the linear terms of the equations are primarily responsible for stiffness, whereas in the thin film equation the nonlinear terms are the stiffest. However, we show in §5.5 that this equation can be treated within the same framework as the K-S and the NLS equations.
### 5.2 Numerical Experiments Our comparison experiments are based on the simulation of three model problems, all in one space dimension: the time-dependent scalar dissipative **Kuramoto-Sivashinsky (K-S)** equation [41], the nonlinear dispersive **Schrodinger (NLS)** equation [77] and the nonlinear **Thin Film** equation [36] which is characterized as a dissipative and a dispersive PDE. All the calculations presented in this chapter are performed using Matlab codes. For the simulation tests, we choose periodic boundary conditions. This leads conveniently to the application of the Fourier spectral approximation [11, 12, 25, 83, 84]. This approximation provides very high accuracy for smooth solutions of the test model problems. In the resulting system of ordinary differential equations (ODEs), the linear part of the model becomes diagonal, i.e. an uncoupled system of equations for each Fourier mode. The nonlinear term is transformed to physical space and evaluated at the uniform grid points and then transformed back to spectral space. Hereafter, we advance the system of ODEs in time by a numerical integration that can be used effectively in combination with the spectral approximation. For the time discretization, all comparable methods, listed below, are expressed with respect to the model problem \[\frac{du(t)}{dt}=cu(t)+F(u(t),t), \tag{5.1}\] where the constant \(c\) is either large, negative and real, or large and imaginary, or complex with large, negative real part, and \(F(u(t),t)\) is the nonlinear forcing term, see SS3.2. In our study of first-order accurate methods, we analyze the performance of the first-order **ETD1** method [9, 15, 19, 61] \[u_{n+1}=u_{n}e^{c\Delta t}+(e^{c\Delta t}-1)F_{n}/c, \tag{5.2}\] where \(\Delta t\) represents the time step and \(u_{n}\) and \(F_{n}\) denote the numerical approximation to \(u(t_{n})\) and \(F(u(t_{n}),t_{n})\) respectively, and compare its accuracy with the **Euler** method \[u_{n+1}=u_{n}+\Delta t(cu_{n}+F_{n}). 
\tag{5.3}\] The Euler method is used only to obtain the numerical solution for the K-S equation [41] and the nonlinear thin film equation [36]. The comparison also includes the first-order **Integrating Factor Euler (IFEULER)** method [11, 84] \[u_{n+1}=(u_{n}+\Delta tF_{n})e^{c\Delta t}, \tag{5.4}\] and the first-order **Implicit-Explicit (IMEX)** method [4] (see SS1) \[u_{n+1}=u_{n}+\Delta t(cu_{n+1}+F_{n}). \tag{5.5}\] For second-order accurate comparison, we compare the two-step **ETD2** method [19] \[u_{n+1}=u_{n}e^{c\Delta t}+\{((c\Delta t+1)e^{c\Delta t}-2c\Delta t-1)F_{n}+(-e ^{c\Delta t}+c\Delta t+1)F_{n-1}\}/(c^{2}\Delta t), \tag{5.6}\] the **ETD2RK1** method [19] \[\begin{split} a_{n}&=u_{n}e^{c\Delta t}+(e^{c\Delta t }-1)F_{n}/c,\\ u_{n+1}&=a_{n}+(e^{c\Delta t}-c\Delta t-1)(F(a_{n}, t_{n}+\Delta t)-F_{n})/(c^{2}\Delta t),\end{split} \tag{5.7}\]and the **ETD2RK2** method (derived in SS3.2) \[\begin{split} a_{n}&=u_{n}e^{c\Delta t/2}+(e^{c\Delta t/ 2}-1)F_{n}/c,\\ u_{n+1}&=u_{n}e^{c\Delta t}+\{((c\Delta t-2)e^{c \Delta t}+c\Delta t+2)F_{n}\\ &\quad+2(e^{c\Delta t}-c\Delta t-1)F(a_{n},t_{n}+\Delta t/2)\}/ (c^{2}\Delta t),\end{split} \tag{5.8}\] against the second-order **ETD2CP** method introduced by **Calvo & Palencia**[15] \[\begin{split} u_{n+1}&=u_{n-1}e^{2c\Delta t}+\{(e^ {2c\Delta t}-2c\Delta t-1)F_{n}+((c\Delta t-1)e^{2c\Delta t}+c\Delta t+1)F_{n -1}\}/(c^{2}\Delta t),\end{split} \tag{5.9}\] and the **ETDC2** method \[\begin{split} u_{n+1}&=u_{n}e^{c\Delta t}+(e^{c \Delta t}-1)(3F_{n}-F_{n-1})/2c,\end{split} \tag{5.10}\] presented by **Livermore**[53] in the solution of the incompressible magnetohydrodynamics equations. 
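As an aside on implementation, the four first-order updates (5.2) - (5.5) are easily compared on the scalar model problem (5.1). The Python sketch below is illustrative only (the thesis experiments use Matlab, and the test problem \(u^{\prime}=cu+\cos t\) with \(c=-60\), chosen because its exact solution is known, is an assumption of this sketch); it shows the stability gap between the explicit Euler update and the exponential-based and implicit updates:

```python
import numpy as np

c, dt, nsteps = -60.0, 0.05, 20          # |1 + c*dt| = 2 > 1: explicit Euler is unstable
F = lambda u, t: np.cos(t)               # forcing independent of u (illustrative)

# Exact solution of u' = c*u + cos(t): particular part a*cos(t) + b*sin(t)
a, b = -c / (1 + c**2), 1.0 / (1 + c**2)
u0 = a                                   # start on the slow manifold (no transient)
exact = lambda t: a * np.cos(t) + b * np.sin(t)

E = np.exp(c * dt)
steppers = {
    "euler":   lambda u, t: u + dt * (c * u + F(u, t)),         # (5.3)
    "etd1":    lambda u, t: u * E + (E - 1) * F(u, t) / c,      # (5.2)
    "ifeuler": lambda u, t: (u + dt * F(u, t)) * E,             # (5.4)
    "imex":    lambda u, t: (u + dt * F(u, t)) / (1 - c * dt),  # (5.5)
}

err = {}
for name, step in steppers.items():
    u, t = u0, 0.0
    for _ in range(nsteps):
        u, t = step(u, t), t + dt
    err[name] = abs(u - exact(t))
```

With this step size the Euler error is orders of magnitude larger than the ETD1 error, while the IFEULER and IMEX updates remain stable (though here less accurate than ETD1).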
In addition, we apply the second-order **Integrating Factor Runge-Kutta (IFRK2)** method [19] \[\begin{split} a_{n}&=\Delta tF_{n}e^{c\Delta t},\\ b_{n}&=\Delta tF((u_{n}+\Delta tF_{n})e^{c\Delta t},t_{n}+\Delta t),\\ u_{n+1}&=u_{n}e^{c\Delta t}+\frac{1}{2}(a_{n}+b_{ n}).\end{split} \tag{5.11}\] For higher order ETD methods, we consider particularly the comparison of the fourth-order **ETD4** method (derived in SS3.2) \[\begin{split} u_{n+1}&=u_{n}e^{c\Delta t}+(\Phi_{1} F_{n}-\Phi_{2}F_{n-1}+\Phi_{3}F_{n-2}-\Phi_{4}F_{n-3})/(6c^{4}\Delta t^{3}), \end{split} \tag{5.12}\] where \[\begin{split}\Phi_{1}&=(6c^{3}\Delta t^{3}+11c^{2} \Delta t^{2}+12c\Delta t+6)e^{c\Delta t}-24c^{3}\Delta t^{3}-26c^{2}\Delta t^{2 }-18c\Delta t-6,\\ \Phi_{2}&=(18c^{2}\Delta t^{2}+30c\Delta t+18)e^{c \Delta t}-36c^{3}\Delta t^{3}-57c^{2}\Delta t^{2}-48c\Delta t-18,\\ \Phi_{3}&=(6c^{2}\Delta t^{2}+24c\Delta t+18)e^{c \Delta t}-24c^{3}\Delta t^{3}-42c^{2}\Delta t^{2}-42c\Delta t-18,\\ \Phi_{4}&=(2c^{2}\Delta t^{2}+6c\Delta t+6)e^{c\Delta t }-6c^{3}\Delta t^{3}-11c^{2}\Delta t^{2}-12c\Delta t-6,\end{split}\] against the **ETD4RK** method [19] \[\begin{split} a_{n}&=u_{n}e^{c\Delta t/2}+(e^{c \Delta t/2}-1)F_{n}/c,\\ b_{n}&=u_{n}e^{c\Delta t/2}+(e^{c\Delta t/2}-1)F(a_ {n},t_{n}+\Delta t/2)/c,\\ c_{n}&=a_{n}e^{c\Delta t/2}+(e^{c\Delta t/2}-1)(2F (b_{n},t_{n}+\Delta t/2)-F_{n})/c,\\ u_{n+1}&=u_{n}e^{c\Delta t}+\{((c^{2}\Delta t^{2 }-3c\Delta t+4)e^{c\Delta t}-c\Delta t-4)F_{n}\\ &\quad+2((c\Delta t-2)e^{c\Delta t}+c\Delta t+2)(F(a_{n},t_{n}+ \Delta t/2)+F(b_{n},t_{n}+\Delta t/2))\\ &\quad+((-c\Delta t+4)e^{c\Delta t}-c^{2}\Delta t^{2}-3c\Delta t- 4)F(c_{n},t_{n}+\Delta t)\}/(c^{3}\Delta t^{2}),\end{split} \tag{5.13}\]and the **Integrating Factor Runge-Kutta (IFRK4)** method [7, 44, 45] \[a_{n} =\Delta tF_{n},\] \[b_{n} =\Delta tF((u_{n}+a_{n}/2)e^{c\Delta t/2},t_{n}+\Delta t/2),\] \[c_{n} =\Delta tF(u_{n}e^{c\Delta t/2}+b_{n}/2,t_{n}+\Delta t/2), \tag{5.14}\] \[d_{n} =\Delta tF(u_{n}e^{c\Delta 
t}+c_{n}e^{c\Delta t/2},t_{n}+\Delta t),\] \[u_{n+1} =u_{n}e^{c\Delta t}+\frac{1}{6}(a_{n}e^{c\Delta t}+2(b_{n}+c_{n}) e^{c\Delta t/2}+d_{n}).\] Our tests are broken into two parts, each designed to show a particular aspect of the methods' performance. The goal of the first set of tests is to address the question of stability and accuracy of the methods. Therefore, we perform a series of runs with different choices of final times \(t\) which are computed, for all methods, with various time-step sizes. The time-step values are selected to ensure that all methods achieve stable, accurate results. In the second set of tests, we turn our attention to the accuracy as a function of CPU time, to complete the picture of the differences between the methods for each model tested. The CPU time is one of the factors that affects the efficiency of the methods: a method could be stable and achieve good accuracy in few steps, but it could be more costly, due to a larger number of operations per time step, and consequently less efficient than others. We measure the accuracy in terms of the relative error evaluated in the maximum norm, the 2-norm and the integrated error norm, between the results of each time stepping method (for different time-step sizes) and an "exact" solution.
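The second-order update (5.7) is simple enough to verify directly. The Python sketch below (illustrative, not the thesis code; the scalar test problem \(u^{\prime}=cu+\cos t\) with \(c=-2\) is an assumption chosen for its known exact solution) implements ETD2RK1 and checks that halving the time step reduces the error by roughly a factor of four:

```python
import numpy as np

c = -2.0
F = lambda u, t: np.cos(t)                    # u-independent forcing (illustrative)
a, b = -c / (1 + c**2), 1.0 / (1 + c**2)      # exact: u = a*cos(t) + b*sin(t), u(0) = a
exact = lambda t: a * np.cos(t) + b * np.sin(t)

def etd2rk1_solve(dt, T):
    """Integrate u' = c*u + F(u,t) with the ETD2RK1 update (5.7)."""
    E = np.exp(c * dt)
    u, t = a, 0.0
    for _ in range(int(round(T / dt))):
        Fn = F(u, t)
        an = u * E + (E - 1) * Fn / c                               # predictor (ETD1)
        u = an + (E - c * dt - 1) * (F(an, t + dt) - Fn) / (c**2 * dt)
        t += dt
    return u

e1 = abs(etd2rk1_solve(0.01, 1.0) - exact(1.0))
e2 = abs(etd2rk1_solve(0.005, 1.0) - exact(1.0))
ratio = e1 / e2        # should approach 4 for a second-order method
```

The same halving experiment applied to the other schemes listed above reproduces the expected first-, second- and fourth-order error reductions.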
The relative error of the maximum norm is given by \[relative\ max\ error=\frac{\max|numerical\ solution|-\max|exact\ solution|}{\max|exact\ solution|}, \tag{5.15}\] the relative error of the 2-norm is given by \[relative\ norm\ error=\frac{\left(\sum|numerical\ solution|^{2}\right)^{1/2}- \left(\sum|exact\ solution|^{2}\right)^{1/2}}{\left(\sum|exact\ solution|^{2} \right)^{1/2}}, \tag{5.16}\] and the relative error of the integrated error norm is given by \[relative\ integrated\ error=\frac{\left(\sum|numerical\ solution-exact\ solution|^{2}\right)^{1/2}}{\left(\sum|exact\ solution|^{2}\right)^{1/2}}, \tag{5.17}\] where the sum is taken over the number of grid points in the spatial discretization. For the K-S and the nonlinear thin film equations, no explicit general analytic solutions exist, and the exact solution is approximated numerically using a fourth-order method with a very small time-step size. On the other hand, when considering the NLS equation, we focus primarily on the traveling solitons [10] as explicit exact solutions when evaluating the errors. Experimentally, we find that the relative errors (5.15) and (5.16) do not represent appropriate measurements of accuracy. For example, the relative error (5.15) focuses on an error occurring in the difference between a local maximum point of the numerical and the exact solution. This could be misleading as it could yield a zero error even when the numerical and exact solutions are different. Therefore, the desirable choice for a measure of accuracy is the relative error of the integrated error norm (5.17). This error is more meaningful and gives a representative measure of the error in the entire solution space. Also, it does not yield a zero error unless the numerical and the exact solution agree at all points. Hence, in our tests, we plot the numerical relative error of the integrated error norm (5.17) as a function of the time step and of the CPU time for each model tested for various initial conditions. 
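The distinction between the three measures can be made concrete with a small Python example (illustrative): for a numerical solution with the wrong sign everywhere but the right amplitude, both (5.15) and (5.16) vanish, while (5.17) reports the true discrepancy.

```python
import numpy as np

x = np.linspace(0.0, 2 * np.pi, 128, endpoint=False)
exact = np.sin(x)
numerical = -np.sin(x)               # wrong sign everywhere, same amplitude

# (5.15): difference of the maxima of the absolute values
rel_max = (np.max(np.abs(numerical)) - np.max(np.abs(exact))) / np.max(np.abs(exact))
# (5.16): difference of the 2-norms
rel_norm = (np.sqrt(np.sum(np.abs(numerical)**2)) - np.sqrt(np.sum(np.abs(exact)**2))) \
           / np.sqrt(np.sum(np.abs(exact)**2))
# (5.17): 2-norm of the pointwise difference
rel_int = np.sqrt(np.sum(np.abs(numerical - exact)**2)) / np.sqrt(np.sum(np.abs(exact)**2))
```

Here `rel_max` and `rel_norm` are exactly zero although the two solutions disagree at every grid point, while `rel_int` equals 2, which is why (5.17) is adopted as the error measure in the tests.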
Considering the implementation of the above time discretization methods, we find that the task is straightforward for the first-order methods. However, as the order of the methods increases, the complexity of the implementation grows. The higher order methods require more memory space, and need a relatively large computational effort. For example, the multi-step ETD and the ETD-RK methods require an accurate algorithm for evaluating the coefficients of \(F(u(t_{n}),t_{n})\) to avoid numerical difficulties (see SS4). We use the 'Cauchy integral' approach (fully detailed in SS4.2.2) proposed by **Kassam** and **Trefethen**[44, 45]. In this approach, we evaluate the coefficients (one coefficient for the ETDC2 method (5.10), three coefficients for the ETD2 (5.6), the ETD2CP (5.9), the ETD2RK1 (5.7) and the ETD2RK2 (5.8) methods, four coefficients for the ETD4RK (5.13) method and eight coefficients for the ETD4 method (5.12)), once at the beginning of the integration for each value of the time-step sizes, by means of contour integration in the complex plane approximated by the Trapezium rule (4.16). In addition, the multi-step ETD methods need to store the nonlinear terms at several previous time steps in order to advance the solution. So, preceding the integration loop and for each value of the time-step sizes, we obtain, for the two-step methods (ETD2, ETD2CP and ETDC2), one starting value of the nonlinear term using the ETD1 method (5.2), and three starting values of the nonlinear term for the ETD4 method using the ETD4RK method. Additionally, we store one value of the solution at the previous time step for the ETD2CP method. Moreover, the ETD2RK1, the ETD2RK2, the IFRK2 (5.11), the ETD4RK and the IFRK4 (5.14) methods carry out two (for the second-order methods) and four (for the fourth-order methods) function transforms per time step in the main loop of integration. 
For the IF schemes, we find that they require the evaluation of one or more matrix exponentials, for which acceptable algorithms are well known [9, 60]. However, the schemes have some disadvantages. For example, they do not preserve fixed points for the differential equations, and are also known for having rather larger error constants [7, 11, 19, 49, 57] (for PDEs with slowly varying nonlinear terms) than other methods of the same order. The investigation of the methods' performances and the results of the experiments are outlined in SS5.3, SS5.4 and SS5.5 for the numerical solution of the K-S equation, the NLS equation and the nonlinear thin film equation respectively. ### 5.3 Kuramoto-Sivashinsky (K-S) Equation The Kuramoto-Sivashinsky equation, which we will refer to as the K-S equation, is one of the simplest PDEs capable of describing complex (chaotic) behavior in both time and space. This equation has been of mathematical interest [29, 75] because of its rich dynamical properties. In physical terms, this equation describes reaction diffusion problems, and the dynamics of viscous-fluid films flowing along walls, and was introduced by **Sivashinsky**[74] as a model of laminar flame-front instabilities and by **Kuramoto**[50] as a model of phase turbulence in chemical oscillations. A fairly large number of numerical and theoretical studies have been devoted to the K-S equation; the reader is referred to the review paper of **Hyman \(\&\) Nicolaenko**[41]. 
The K-S equation in one space dimension can be written in "derivative" form \[\frac{\partial w(x,t)}{\partial t}=-w(x,t)\frac{\partial w(x,t)}{\partial x}- \frac{\partial^{2}w(x,t)}{\partial x^{2}}-\frac{\partial^{4}w(x,t)}{\partial x ^{4}}, \tag{5.18}\] or in "integral" form \[\frac{\partial u(x,t)}{\partial t}=-\frac{1}{2}\Big{(}\frac{\partial u(x,t)}{ \partial x}\Big{)}^{2}-\frac{\partial^{2}u(x,t)}{\partial x^{2}}-\frac{ \partial^{4}u(x,t)}{\partial x^{4}}, \tag{5.19}\] where \(w(x,t)=\partial u(x,t)/\partial x\). Equation (5.18) has strong dissipative dynamics, which arise from the fourth-order dissipation (\(\partial^{4}w/\partial x^{4}\)) term that provides damping at small scales. Also, it includes the mechanism of a linear negative diffusion (\(\partial^{2}w/\partial x^{2}\)) term, which is responsible for an instability of modes with large wavelength, i.e. small wave-numbers. The nonlinear advection/steepening (\(w\partial w/\partial x\)) term in the equation transfers energy between large and small scales. The zero solution of the K-S equation is linearly unstable (the growth rate \(\lambda(k)>0\), for perturbations of the form \(e^{\lambda t}e^{ikx}\)) to modes with wave-numbers \(|k|=|2\pi/\ell|<1\) for a wavelength \(\ell\), and is damped for modes with \(|k|>1\), see figure 5.1; these modes are coupled to each other through the non-linear term. We can write the K-S equation (5.18) with \(2L\) periodic boundary conditions in Fourier space as follows \[\frac{d\hat{w}_{k}(t)}{dt}=(k^{2}-k^{4})\hat{w}_{k}(t)-\frac{ik}{2}\textbf{fft} (w(t)^{2}), \tag{5.20}\] where **fft** is the Matlab command that represents the fast Fourier transform (FFT). The stiffness in the system (5.20) is due to the fact that the diagonal linear operator, with the elements \(k^{2}-k^{4}\), has some large negative real eigenvalues that represent decay, because of the strong dissipation, on a time scale much shorter than that typical of the nonlinear term.
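The dispersion relation behind figure 5.1 follows from substituting \(w\propto e^{\lambda t}e^{ikx}\) into the linearization of (5.18), which gives \(\lambda(k)=k^{2}-k^{4}\). A short Python check of the unstable band (illustrative; the grid of \(k\) values is arbitrary):

```python
import numpy as np

lam = lambda k: k**2 - k**4     # growth rate of Fourier mode k for the linearized K-S equation

k = np.linspace(0.01, 3.0, 500)
growth = lam(k)

unstable = k[growth > 0]        # the band of linearly unstable wave-numbers
k_peak = k[np.argmax(growth)]   # most unstable mode, analytically k = 1/sqrt(2)
```

The check confirms that modes with \(0<|k|<1\) grow, modes with \(|k|>1\) decay, and the fastest growth occurs at \(k=1/\sqrt{2}\) with \(\lambda=1/4\).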
Thus the dynamics are dominated by a relatively few large scale modes. However, we expect all methods except the Euler method to work reasonably well regarding the stability analysis. The nature of the solutions to the K-S equation varies with the system size \(L\). For large \(L\), enough unstable Fourier modes exist to make the system chaotic. For small \(L\), insufficient Fourier modes exist, causing the system to approach a steady state solution. In this case, the ETD methods integrate the system very much more accurately than the IF methods, since the ETD methods assume in their derivation that the solution varies slowly in time. For the simulation tests, we choose two periodic initial conditions \[w_{1}(x,0) = \exp(\cos(x/2)),\ x\in[0,4\pi], \tag{5.21}\] \[w_{2}(x,0) = 1.7\cos(x/2)+0.1\sin(x/2)+0.6\cos(x)+2.4\sin(x),\ x\in[0,4\pi]. \tag{5.22}\] When evaluating the coefficients of the ETD and the ETD-RK methods via the 'Cauchy integral' approach [44, 45] (see SS4.2.2), we choose circular contours of radius \(R=1\). Each contour is centered at one of the elements that are on the diagonal matrix of the linear part of the semi-discretized model (5.20). The contours are sampled at 32 equally spaced points and approximated by (4.16). In figure 5.2, we show the numerical solution of the K-S equation (5.18) with the initial condition \(w_{1}(x,0)=\exp(\cos(x/2)),\ x\in[0,4\pi]\) (5.21), using \(N_{\mathcal{F}}=64\) grid points in the Fourier spatial discretization. We integrate the system (5.20) using the ETD4RK method (5.13) with time-step size \(\Delta t=2^{-10}\) and up to final time \(t=60\). The solution, in the figure, appears as a mesh plot and shows waves propagating, traveling periodically in time and persisting without change of shape. The computations are performed using Matlab code in a program described in Appendix A. 
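The Fourier pseudo-spectral time stepping used here can be sketched in a few lines. The Python fragment below is a minimal illustration (the thesis program is Matlab and is given in Appendix A; the step size and step count here are arbitrary): it advances (5.20) with the ETD1 update (5.2) applied mode by mode, handling the modes with \(k^{2}-k^{4}=0\) (namely \(k=0\) and \(|k|=1\)) by the limiting value \(\Delta t\) of the coefficient \((e^{L\Delta t}-1)/L\):

```python
import numpy as np

Nf = 64
Ldom = 4 * np.pi
x = Ldom * np.arange(Nf) / Nf
w = np.exp(np.cos(x / 2))                          # initial condition (5.21)

k = 2 * np.pi * np.fft.fftfreq(Nf, d=Ldom / Nf)    # wave-numbers (multiples of 1/2)
Lk = k**2 - k**4                                   # diagonal linear operator of (5.20)

dt, nsteps = 1e-3, 100
E = np.exp(Lk * dt)
small = np.abs(Lk * dt) < 1e-8                     # modes where (e^{L dt}-1)/L -> dt
phi1 = np.where(small, dt, (E - 1.0) / np.where(small, 1.0, Lk))

w_hat = np.fft.fft(w)
for _ in range(nsteps):
    nonlin = -0.5j * k * np.fft.fft(np.real(np.fft.ifft(w_hat))**2)
    w_hat = E * w_hat + phi1 * nonlin              # ETD1 step (5.2), mode by mode
w_end = np.real(np.fft.ifft(w_hat))
```

Because \(e^{L_{k}\Delta t}\) is evaluated exactly, the strongly damped high-\(k\) modes impose no stability restriction on the time step, in contrast to the explicit Euler method.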
In the following section, we present the results of integrating the system (5.20) for the two initial conditions (5.21) and (5.22), up to final time \(t=30\), utilizing the methods described in §5.2. Again, we use \(N_{\mathcal{F}}=64\) grid points in the Fourier spatial discretization. The results are supported by figures and analysis of the methods' efficiency. Figure 5.1: The growth rate \(\lambda(k)\) for perturbations of the form \(e^{\lambda t}e^{ikx}\) to the zero solution of the Kuramoto-Sivashinsky (K-S) equation (5.18). #### Computational Results The results of our experiments are presented in figures 5.3 and 5.5 for the initial condition (5.21), and in figures 5.4 and 5.6 for the initial condition (5.22). In figures 5.3 and 5.4, the numerical relative integrated error (5.17), of using each time discretization method to obtain the numerical solution of the K-S equation (5.18), is plotted as a function of the time step. The exact solution is approximated numerically using \(N_{\mathcal{F}}=64\) grid points in the Fourier spatial discretization. For the time discretization, we use the fourth-order Runge-Kutta method [14] with a very small time-step size. The plots (in figures 5.3 and 5.4) indicate the largest time-step size, i.e. the smallest number of steps, that each method requires to converge to a solution within a fixed given relative error in the figures. The first aspect to emphasize in such figures is that the plots confirm the expected order of the methods. Secondly, a fixed reduction in the time-step size will effectively improve the accuracy. Figure 5.2: Time evolution of the numerical solution of the K-S equation (5.18) up to \(t=60\) with the initial condition \(w_{1}(x,0)=\exp(\cos(x/2)),\ x\in[0,4\pi]\) (5.21).
When testing the Euler method (5.3), we find that this method, unsurprisingly, requires the smallest number of operations per time step of all the methods we test. However, because of the numerical stability constraints, the time-step size is limited. Tests show, in figures 5.3 and 5.4, that the Euler method performs well at very small time-step sizes, though it breaks down for time-step sizes larger than \(\Delta t\approx 2^{-16}\). This adds to the computation cost, as shown in figures 5.5 and 5.6. Therefore, the Euler method is not a contender for the "best" method. Larger time steps may be taken using the other methods, which are designed for stiff problems: for them there is no such severe stability restriction, and the time-step size selection is limited only by accuracy. For the other first-order methods (the ETD1 (5.2), the IFEULER (5.4) and the IMEX (5.5) methods), figure 5.3 indicates that, for the initial condition (5.21), all methods behave similarly and the corresponding errors lie approximately on the same line for all values of the time step. Furthermore, all methods are inaccurate (errors are of \(O(1)\)) for time-step sizes larger than \(\Delta t\approx 2^{-6}\). On the other hand, figure 5.4 reveals a different behavior of the methods for the initial condition (5.22): it shows a better performance and accuracy of the ETD1 method compared to the other first-order methods. Also, we find that, despite the time-step restriction imposed by the linear term, all methods remain stable at a time-step as large as \(\Delta t\approx 2^{-2}\), but the ETD1 method produces the most accurate solution. Considering the computational cost of the methods, it is clear from figure 5.5 that, for the initial condition (5.21), the methods have an almost identical computation cost per time step.
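The contrast between the Euler and ETD1 methods can be seen on a scalar model problem \(du/dt=cu+F(u,t)\). The sketch below (Python/NumPy; the stiff coefficient \(c=-50\) and the manufactured forcing, chosen so that \(u(t)=\cos t\) is the exact solution, are illustrative choices of ours) takes a time step beyond the Euler stability limit \(\Delta t<2/|c|\): the Euler iteration blows up, while the ETD1 method (5.2), which integrates the linear term exactly, remains stable and accurate.

```python
import numpy as np

def euler_step(u, t, dt, c, F):
    # explicit Euler for du/dt = c*u + F(u, t)
    return u + dt * (c * u + F(u, t))

def etd1_step(u, t, dt, c, F):
    # ETD1: the linear term is integrated exactly; F is held constant over the step
    return u * np.exp(c * dt) + F(u, t) * np.expm1(c * dt) / c

def integrate(step, u0, dt, n, c, F):
    u, t = u0, 0.0
    for _ in range(n):
        u, t = step(u, t, dt, c, F), t + dt
    return u

c = -50.0                                     # stiff linear coefficient (illustrative)
F = lambda u, t: -c * np.cos(t) - np.sin(t)   # manufactured so that u(t) = cos(t) exactly
dt, n = 0.1, 50                               # dt exceeds Euler's stability limit 2/|c| = 0.04

u_euler = integrate(euler_step, 1.0, dt, n, c, F)   # diverges
u_etd1 = integrate(etd1_step, 1.0, dt, n, c, F)     # stays close to cos(5)
```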
However, in figure 5.6 for the initial condition (5.22), the ETD1 method outperforms the other methods both in speed and in the accuracy of the obtained solution. For the second-order accurate methods, we consider the ETD2RK1, the ETD2RK2, the ETD2, the ETD2CP, the ETDC2 and the IFRK2 methods. Second-order convergence is confirmed in figure 5.3 for the initial condition (5.21). The performance of all second-order methods is very nearly equivalent here, and the errors for small time steps are almost identical. In addition, the variation in time consumption, for a given level of accuracy, is insignificant, see figure 5.5. However, the ETD2CP method (5.9) slightly outperforms the others in both accuracy and speed. On the other hand, for the initial condition (5.22), we find that of all comparable second-order methods, the ETD2RK2 method (5.8) is the most accurate, for a given time-step size, and the least time consuming for a given level of accuracy, see figures 5.4 and 5.6 respectively. The IFRK2 (5.11) and the ETD2CP methods do not do well for the initial condition (5.22): the ETD2CP method is the least accurate and the most costly in time. In addition, figure 5.6 shows that the ETD2 (5.6) and ETDC2 (5.10) methods consume about the same CPU time per time step, while the ETD2RK1 method (5.7) has a longer computation time. All second-order methods successfully integrate the system for time-step sizes less than \(\Delta t\approx 2^{-2}\) and \(\Delta t\approx 2^{-1}\) for the initial conditions (5.21) and (5.22) respectively. However, the ETD2CP method fails to be accurate for time-step sizes larger than \(\Delta t\approx 2^{-4}\) for the initial condition (5.22), see figure 5.4.
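As a point of reference for the second-order schemes compared here, the basic ETD2RK construction of Cox and Matthews (an ETD1 predictor followed by a corrector; the thesis's ETD2RK1 (5.7) and ETD2RK2 (5.8) are closely related variants) can be sketched for a scalar mode as follows. The test problem, with exact solution \(u(t)=\cos t\), is an illustrative choice of ours.

```python
import numpy as np

def etd2rk_step(u, t, h, c, F):
    """One step of the basic Cox-Matthews ETD2RK scheme for du/dt = c*u + F(u, t):
    an ETD1 predictor a, then a corrector built from F(a) - F(u)."""
    E = np.exp(c * h)
    a = u * E + F(u, t) * (E - 1.0) / c
    return a + (F(a, t + h) - F(u, t)) * (E - 1.0 - c * h) / (h * c**2)

def integrate(u0, h, n, c, F):
    u, t = u0, 0.0
    for _ in range(n):
        u, t = etd2rk_step(u, t, h, c, F), t + h
    return u

# Manufactured scalar test problem (illustrative): exact solution u(t) = cos(t).
c = -2.0
F = lambda u, t: -c * np.cos(t) - np.sin(t)
err = lambda h: abs(integrate(1.0, h, int(round(1.0 / h)), c, F) - np.cos(1.0))
```

As \(c\to 0\) the two coefficient functions reduce to \(h\) and \(h/2\), recovering Heun's method; halving \(h\) should reduce the error by a factor of about 4.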
For the fourth-order methods, as is evident from figures 5.3 and 5.4, the ETD4 (5.12), the ETD4RK (5.13) and the IFRK4 (5.14) methods behave in a similar way for the two initial conditions. The performance of the IFRK4 method resembles that of the ETD4 method, and the errors for small time steps are almost identical. Clearly these methods have a superior performance, as the accuracy is improved significantly compared to lower-order methods. Referring to figures 5.3 and 5.4, the most accurate method for a given time step is the ETD4RK method. The ETD4RK method has the advantages (relative to the other fourth-order methods) of being stable for larger time steps1, having fewer coefficients to evaluate via the Cauchy integral formula approach, and having no starting values to obtain. Also, the ETD4RK method uses slightly less CPU time than the ETD4 method for a given level of accuracy, while the most time-consuming is the IFRK4 method for the two initial conditions (5.21) and (5.22), see figures 5.5 and 5.6.

Figure 5.7: Relative errors versus CPU time for the K-S equation (5.18) with the initial condition \(w_{2}(x,0)=1.7\cos(x/2)+0.1\sin(x/2)+0.6\cos(x)+2.4\sin(x)\) (5.22).

Footnote 1: This agrees qualitatively with our analysis of the stability regions of the ETD4 and the ETD4RK methods in §3.

Second-order time discretization methods have often been used for obtaining numerical solutions of a wide range of PDEs. Reasons for their choice include the difficulties introduced by the combination of nonlinearity and stiffness of a PDE, the increased complexity of both analysis and implementation for higher-order methods, and the increased computer storage and CPU time that higher-order methods usually require.
However, when we carry out a comparison test between the performance of the ETD1 (5.2), the ETD2RK2 (5.8) and the ETD4RK (5.13) methods for solving the K-S equation (5.18) with the initial condition (5.22), we find that the fourth-order method can be very accurate and less costly than the second-order one (the same conclusion was reached by Kassam and Trefethen [44, 45], who found that it is entirely practical to solve nonlinear PDEs to high accuracy by fourth-order time-stepping methods). In our comparison test we plot, in figure 5.7, the accuracy of these three methods, measured by the relative integrated error (5.17), as a function of CPU time. In the figure, we can see that, for the same level of accuracy, the ETD4RK method is less costly than the ETD2RK2 method, whereas the ETD1 method is by far the most computationally expensive. Thus, the greater accuracy of the ETD4RK and the ETD4 methods more than compensates for the additional computational cost per time step.

#### Conclusion We have demonstrated how, for stiff problems such as the K-S equation (5.18), ETD methods provide an efficient alternative to standard explicit integrators. We have found that the \(s\)-step ETD methods (for \(s=1,2,4\)) all achieve order \(s\) and exhibit high accuracy with superior stability properties compared to the explicit Euler method, which imposes a ceiling on the time-step size selection. For time stepping our test problem with a second-order method, we can say that, in practice, considerations of accuracy and computational cost indicate that some of the methods are preferable to others, but all are completely satisfactory; the most efficient choice is the ETD2RK2 method. Higher-order methods are more advantageous still: they exhibit higher accuracy and maintain good stability. We have found that the ETD4RK method is marginally the best for the test problem considered.
Regarding accuracy and CPU time in the solution process, we can conclude that the ETD4RK method is clearly favored in most cases. It is found to be the most stable method with reasonable computational effort; even at fairly large time steps, it still maintains good stability and produces high accuracy. As a final point, this conclusion is limited to our studies of the Kuramoto-Sivashinsky (K-S) equation with the two initial conditions (5.21) and (5.22). The experiments have shown that the performance of the methods varies from one case to the other, and that the ETD and ETD-RK methods of [19] outperform the compared methods for solving the test model (5.18) for the initial condition (5.22). These results cannot be generalized, as they may differ for other choices of initial conditions and for other problems.

### 5.4 Non-Linear Schrodinger (NLS) Equation

The nonlinear Schrodinger (NLS) equation in one space dimension [10] \[i\frac{\partial u(x,t)}{\partial t}=\frac{\partial^{2}u(x,t)}{\partial x^{2}} +(V(x)+|u(x,t)|^{2})u(x,t), \tag{5.23}\] where \(V(x)\) is the potential function, arises in several different areas of physics, including multi-scale perturbation theory, electromagnetic waves in a plasma, and the propagation of intense optical light pulses in fibers. The equation gives the wave amplitude \(u(x,t)\) as a function of the independent variables \(x\) (space) and \(t\) (time), and it possesses several conservation laws, notably conservation of density, energy and momentum. In addition, it yields a rich variety of nonlinear wave structures, including solitons with arbitrary amplitude and velocity, several kinds of periodic nonlinear waves, and uniform wave-train solutions. The derivation of this equation for the propagation of a plane electromagnetic wave in a nonlinear medium can be found in [10, 42], and an introduction to its mathematical theory is given in [77].
The major application of the NLS equation (5.23) is to the analysis of the propagation of dispersive wave-packets in a nonlinear medium. This equation governs the envelope of wave-packets in the presence of the competing effects of linear dispersion (which tends to smear them out) and the nonlinear amplitude dependence of the material properties (which tends to compress the pulse) in a one-dimensional system. When these two competing effects balance, the formation of optical envelope solitons is possible. "Soliton solution" means that the envelope of the nonlinear wave takes the shape of a simple pulse. Solitons are localized waves and are often used to transmit information along optical fibers. They can be ordered, with the taller solitons moving faster and the shorter ones moving more slowly [13]. They also have certain characteristic properties, such as clean overtaking and clean collisions of two solitons, i.e. they retain their individual identities (which in addition persist over long distances) after a nonlinear interaction [77]. In our numerical experiments, we use the cubic nonlinear Schrodinger equation \[i\frac{\partial u(x,t)}{\partial t}=\frac{\partial^{2}u(x,t)}{\partial x^{2}} +|u(x,t)|^{2}u(x,t), \tag{5.24}\] with \(V(x)=0\) in equation (5.23). Equation (5.24) is an example of a problem whose linearization has imaginary eigenvalues: the dispersion relation (\(\lambda(k)=ik^{2}\) for wave-numbers \(k\)) obtained from a linear stability analysis shows that perturbations to the zero solution neither grow nor decay, but oscillate and travel at speed \(-k\). The stiffness in this problem comes from the term \(\partial^{2}u/\partial x^{2}\), which results in rapid oscillations of the high wave-number modes.
Transforming equation (5.24) to Fourier space, assuming that the solution satisfies periodic boundary conditions, gives \[\frac{d\hat{u}_{k}(t)}{dt}=i(k^{2}\hat{u}_{k}(t)-\mathbf{fft}(|u(t)|^{2}u(t) )), \tag{5.25}\] where \(\mathbf{fft}\) denotes the fast Fourier transform (the Matlab command of the same name). We focus primarily on the traveling soliton solutions [10] \[u(x,t)=a\text{sech}(b(x-vt))e^{i(c_{0}x+dt)}, \tag{5.26}\] where \(b=\frac{a}{\sqrt{2}}\), \(c_{0}=-\frac{1}{2}v\) and \(d=c_{0}^{2}-b^{2}\) are real numbers, as explicit exact solutions of the NLS equation (5.24), in testing the efficiency of the first, second and fourth-order time discretization methods stated in §5.2. The Euler method (5.3) is never stable for solving the NLS equation, for any time-step size. This is due to the imaginary eigenvalues of the linearized NLS equation, which lie outside the stability region of the method. Since the exact solutions are known, the numerical results really provide only a check on the numerical methods. The parameters are the speed \(v\) and the amplitude \(a\), which are independent. In particular, the larger the velocity \(v\), the more rapid the spatial variation in \(u(x,t)\). The amplitude of the wave \(u(x,t)\) vanishes at infinity, so, provided we solve on a sufficiently large domain, we can treat the problem as essentially periodic. These solitary waves are known as bright solitons [10]. We direct interested readers to the book by **Billingham** and **King** [10] for a different kind of soliton, the "dark soliton solutions" of the NLS equation. Note that a special case of (5.26) is the non-traveling soliton solution, with \(v=0\), given by \[u(x,t)=a\text{sech}(bx)e^{idt},\] with one complete period of soliton oscillation in time \(t=2\pi/|d|\), frequency \(d=-b^{2}\) and amplitude \(a=\sqrt{2}b\).
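In Python/NumPy (in place of the thesis's Matlab; the grid size, domain and the use of the non-traveling soliton below are illustrative choices of ours), the right-hand side of (5.25) can be assembled as follows. The check uses the special case noted above: for \(v=0\), \(a=\sqrt{2}\), \(b=1\) we have \(d=-b^{2}=-1\), so at \(t=0\) the exact solution satisfies \(u_{t}=-iu\), and the discrete right-hand side applied to \(\mathbf{fft}(u)\) should be close to \(\mathbf{fft}(-iu)\).

```python
import numpy as np

def nls_rhs(u_hat, k):
    """Right-hand side of the semi-discrete NLS system (5.25):
    d(u_hat)/dt = i * (k^2 * u_hat - fft(|u|^2 * u))."""
    u = np.fft.ifft(u_hat)
    return 1j * (k**2 * u_hat - np.fft.fft(np.abs(u)**2 * u))

# Periodic grid on [-10*pi, 10*pi) and the matching wave-numbers
N, L = 256, 20 * np.pi
x = -L / 2 + L * np.arange(N) / N
k = 2 * np.pi / L * np.fft.fftfreq(N, d=1.0 / N)

# Non-traveling soliton at t = 0: u = sqrt(2)*sech(x), for which u_t = -i*u
u0 = np.sqrt(2) / np.cosh(x)
residual = nls_rhs(np.fft.fft(u0), k) - np.fft.fft(-1j * u0)
```

Because the soliton decays exponentially, it is effectively periodic on this domain and the residual is small to spectral accuracy.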
When evaluating the coefficients in the ETD and the ETD-RK methods via the 'Cauchy integral' approach [44, 45] (see §4.2.2), we choose circular contours of radius \(R=1.2\). Each contour is centered at one of the diagonal elements of the matrix representing the linear part of the semi-discretized model (5.25). The contours are sampled at 32 equally spaced points and approximated by (4.16). The plots in figure 5.8 give a picture of the solutions (5.26) of the NLS equation (5.24), with the choice of wave amplitude \(a=\sqrt{2}\), \(b=1\) and speeds \(v=0\) and \(v=4\) shown in cases (a) and (b) respectively. For the spatial discretization we use \(N_{\mathcal{F}}=256\) grid points for \(x\in[-5\pi,5\pi]\) (for case (a), \(v=0\)) and \(x\in[-6\pi,10\pi]\) (for case (b), \(v=4\)). We integrate the system (5.25) using the ETD4RK method (5.13) with time-step size \(\Delta t=2^{-8}\) up to final time \(t=2\pi\) and \(t=6\) for cases (a) and (b) respectively. The solution appears in the figure as a waterfall plot and shows, in case (a), waves oscillating without change in form, while in case (b), the waves are oscillating and traveling with displacements in position.

Figure 5.8: Real part of the numerical complex solutions (5.26) of the NLS equation (5.24) subject to the initial condition \(u(x,t=0)=\sqrt{2}\mathrm{sech}(x)e^{-ivx/2}\) with (a) speed \(v=0\) and \(t=2\pi\) and (b) speed \(v=4\) and \(t=6\).

The results of our experiments in testing the efficiency of the numerical methods, for producing accurate solutions of the NLS equation (5.24), are illustrated in §5.4.1 and §5.4.2, and final conclusions are drawn in §5.4.3. #### Computational Results The comparison results of our experiments are presented in figures 5.9, 5.11 and 5.13, for the first-order, second-order and fourth-order methods stated in §5.2, respectively.
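The 'Cauchy integral' evaluation of the method coefficients used above can be illustrated on the simplest such coefficient, \(\varphi_{1}(z)=(e^{z}-1)/z\), which suffers cancellation when \(|z|\) is small. Since \(\varphi_{1}\) is entire, its value at \(z\) equals its mean over a circular contour centred at \(z\), approximated below with 32 equally spaced sample points (a Python/NumPy sketch; the function name is ours).

```python
import numpy as np

def phi1_contour(z, R=1.0, M=32):
    """Evaluate phi_1(z) = (exp(z) - 1)/z stably.  Direct evaluation loses
    accuracy to cancellation for small |z|; instead, average phi_1 over M
    equally spaced points on a circle of radius R centred at z (the points
    are offset by half a spacing, which keeps them off the real axis)."""
    s = z + R * np.exp(1j * np.pi * (2 * np.arange(M) + 1) / M)
    return np.mean((np.exp(s) - 1.0) / s)
```

The trapezoidal rule converges geometrically for periodic analytic integrands, so 32 points already give close to machine precision; the same averaging is applied to each diagonal element of the linear operator.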
We consider the soliton solutions (5.26) as exact solutions of the NLS equation (5.24) and plot, in these figures, the relative integrated error (5.17) of using each time discretization method for solving the test model, as a function of the time step. In figures 5.10, 5.12 and 5.14, we plot the accuracy as a function of the CPU time to give an insight into the computation timing. In each figure, we consider integrating the system (5.25) up to final time \(t=6\) for the initial condition \[u(x,t=0)=a\text{sech}(bx)e^{-ivx/2},\ a=\sqrt{2},\ b=1, \tag{5.27}\] with speed \(v=0\) in case (a), while cases (b) and (c) consider the speeds \(v=2,4\) respectively. The plots (in figures 5.9, 5.11 and 5.13) confirm the expected order of the methods and indicate the largest time-step size, i.e. the fewest steps, that each method requires to converge to a solution within a fixed given relative error. Improved accuracy is assured if we reduce the time-step size, but this considerably increases the computation cost, as illustrated in figures 5.10, 5.12 and 5.14, where we plot the methods' accuracy as a function of the CPU time. For the first and second-order methods, applied to the initial wave (5.27), the spatial discretization is performed using \(N_{\mathcal{F}}=256\) grid points on the space interval \(x\in[-5\pi,5\pi]\) for wave speed \(v=0\). Although the wave's amplitude is not periodic, it vanishes at infinity, so we can still treat the problem as periodic by enlarging the space domain. For the wave with speeds \(v=2,4\), the solutions travel, and hence we use \(N_{\mathcal{F}}=512\) grid points on the space interval \(x\in[-5\pi,15\pi]\). The spatial discretization in the case of the fourth-order methods varies. Generally, an estimate of the total accumulated error during a time integration consists of the truncation error of the local time step and the rounding errors of the space discretization.
Using fourth-order methods with a very small time-step size leads to a small truncation error, of \(O(\Delta t^{4})\); with spectral methods, however, the rounding errors are important too. Therefore, to ensure periodicity of the solution and an accurate approximation in the space discretization, the domain must be enlarged and the number of grid points increased. In the case of the initial condition (5.27) with speed \(v=0\), the computations are carried out on a space domain \(x\in[-10\pi,10\pi]\), using \(N_{\mathcal{F}}=512\) grid points. For speeds \(v=2,4\), we use \(N_{\mathcal{F}}=1024\) grid points on a spatial domain \(x\in[-10\pi,30\pi]\). The comparison results for the first-order methods, shown in figure 5.9 (a), indicate that, for the initial condition (5.27) with speed \(v=0\), the best accuracy for a given time step is achieved by the ETD1 method (5.2), followed by the IMEX method (5.5) (which takes essentially similar CPU times, for a given level of accuracy, as the ETD1 method, see figure 5.10 (a)). The IFEULER method (5.4) requires a smaller time step and consumes more CPU time than the other comparable first-order methods to produce the same accuracy. Figure 5.9 (b) shows similar results for the ETD1 method when the speed is increased to \(v=2\). Clearly the method's performance is superior to that of the other comparable first-order methods: it uses the largest time-step size to reach a desired accuracy. The IMEX and the IFEULER methods perform almost identically in achieving the same level of accuracy, with similar CPU times, as illustrated in figure 5.10 (b). For the large speed \(v=4\), the ETD1 method seems to perform slightly worse than the IFEULER method (the most accurate method) for a given time-step size, see figure 5.9 (c). In addition, a considerable variation in CPU times is noticed in figure 5.10 (c) when running the first-order methods on a high-speed (\(v=4\)) wave solution.
The IFEULER method consumes the least CPU time for a given error, while a large amount is used by the IMEX method. This is due to the fact that a much smaller step size is required for the IMEX method to obtain the same level of accuracy as the other comparable first-order methods. Further tests are done on the first-order methods for integrating the system at high speeds (\(v=6,8,\ldots\)). In these tests we find that the IFEULER method is the best for producing accurate solutions of the test model (5.24), for a given time-step size. Moreover, increasing the speed \(v\) has no effect on its performance, i.e. as the speed increases, we obtain the same quantitative results for the errors over the same range of time-step sizes. This behavior is in contrast with that of the ETD1 and the IMEX methods, which both suffer considerably as the wave speed increases (figures are not shown here, as the results resemble those in figure 5.9 (c) for the case of the speed \(v=4\); however, the errors for the ETD1 and the IMEX methods get larger as the speed increases (\(v=6,8,\ldots\)), for the same range of time-step sizes). The IMEX method needs a much smaller time-step size and much more CPU time to be accurate to the same level obtained by the other first-order competitor methods. In §5.4.2, we further investigate the computational advantages of the IFEULER method and the disadvantages of the ETD1 method.

Figure 5.9: Relative errors versus time step for (5.24), with the initial condition (5.27), with speeds (a) \(v=0\), (b) \(v=2\) and (c) \(v=4\), for the first-order methods.

Figure 5.10: Relative errors versus CPU time for (5.24), with the initial condition (5.27), with speeds (a) \(v=0\), (b) \(v=2\) and (c) \(v=4\), for the first-order methods.

Figure 5.11: Relative errors versus time step for (5.24), with the initial condition (5.27), with speeds (a) \(v=0\), (b) \(v=2\) and (c) \(v=4\), for the second-order methods.
When integrating the system (5.25) using second-order methods for the initial wave (5.27) with speed \(v=0\), we find that, for a given time-step size, the ETD2RK2 method (5.8) is the most accurate and the ETD2 method (5.6) the third least accurate, while the IFRK2 method (5.11) is the least accurate, see figure 5.11 (a). The performance of the ETD2CP (5.9) and the ETDC2 (5.10) methods is seen to be very similar to that of the ETD2RK2 and the ETD2 methods respectively. In the case of increasing the speed to \(v=2\), illustrated in figure 5.11 (b), the performance of the ETD2RK1 method (5.7) resembles that of the previous case (\(v=0\)), while that of the ETD2CP method deteriorates. For a given time-step size, the ETD2RK1 method is the second least accurate and the ETD2CP method the third least accurate. The ETD2, the ETDC2 and the IFRK2 methods have almost identical errors and are the least accurate methods for obtaining the numerical solution of the NLS equation (5.24). Among all comparable second-order methods, the ETD2RK2 shows the best performance overall (in the cases \(v=0,2,4,6\)) and is the most accurate method for a given time-step size (see figure 5.11 (c) for the case of \(v=4\), although the errors are equivalent to those of the IFRK2 method in the case of \(v=6\) (not shown)). Increasing the speed to \(v=4,6,\ldots\) has a significant effect in reducing the performance of the ETD2RK1, the ETDC2, the ETD2 and the ETD2CP methods. See the case of \(v=4\) in figure 5.11 (c), where these methods require a smaller time-step size than that used by the ETD2RK2 method to obtain a desired accuracy. On the other hand, increasing the speed has no impact on the performance of the IFRK2 method. This method is seen to have the same quantitative error for a given time-step size as the speed increases, see figure 5.11 (a), (b) and (c). For larger speeds \(v=6,\ldots\) and for a given time-step size, the IFRK2 method is the most accurate method.
Regarding CPU time consumption, figure 5.12 reveals that, generally, the variations in CPU time between the comparable second-order methods are insignificant (when required to be accurate to some specified level). However, figure 5.12 shows that, in cases (a) \(v=0\) and (b) \(v=2\), the IFRK2 method is the most expensive in timing, as it uses a smaller time-step size than the other second-order methods to reach a desired level of accuracy. In case (c) \(v=4\), we find that the ETD2RK2 method achieves a required accuracy level fastest, while the ETD2RK1 is the most time consuming. We finally consider the performance of the fourth-order methods. It can be seen in figure 5.13 that all methods show a clear fourth-order behavior. For speed \(v=0\) in the initial condition (5.27) (figure 5.13 (a)), the IFRK4 method (5.14) is slightly less accurate, for a given time-step size, than the ETD4 (5.12) and the ETD4RK (5.13) methods, which show the best performance in producing accurate numerical solutions (the errors are seen to be very similar for a given time step). As the speed increases (\(v=2,4,\ldots\)), we find that the IFRK4 method retains the same accuracy for a given time-step size (this is similar to the behavior of the IFEULER and the IFRK2 methods; the analysis used to explain the IFEULER method's performance in §5.4.2 can also be applied to the IFRK2 and the IFRK4 methods). In addition, the performance of the ETD4RK method deteriorates gradually as \(v\) increases, while that of the ETD4 method deteriorates dramatically (see figure 5.13 (b) and (c), of which case (c), for speed \(v=4\), shows that, for a given time-step size, the IFRK4 method is the most accurate for obtaining the numerical solution). Turning to figure 5.14, the exponential integrators rely on the fast evaluation of the exponential and related functions.
The computation of the coefficients of the ETD and the ETD-RK methods (which is done once, at the beginning of the integration, for each time-step size) has a noticeable effect on the CPU times, as it imposes a significant timing overhead when the methods use a large time-step size. As can be seen in figure 5.14 (a), (b) and (c), the differences in timing between the fourth-order methods, for large time-step sizes, become significant for a required level of accuracy. In general, the ETD4 method is the most computationally expensive, since most of the CPU time is spent on setting up the coefficients via the complex contour integration (it has four more coefficients than the ETD4RK method). In addition, as we decrease the time-step size, the variation in the time consumed by the ETD4 and ETD4RK methods is negligible for a considerable range of specified error tolerances. However, for small error tolerances and for non-traveling solitons (\(v=0\)), the ETD4 method in fact consumes the least CPU time, as it can use the largest time-step size to reach a desired accuracy. The IFRK4 method is marginally the slowest for \(v=0\), as shown in figure 5.14 (a). As the speed increases (\(v=2,4\), see figure 5.14 (b) and (c) respectively), the IFRK4 method maintains its speed in accomplishing the computations, whereas the computation times of the ETD4 and the ETD4RK methods increase gradually: for higher speeds these methods must use a smaller time-step size to obtain a level of accuracy similar to that obtained for \(v=0\). We note finally that, for accurate and economical computations, it is often advantageous to utilize fourth-order methods. The benefit of these methods is that they can use a much larger time-step size than the lower-order comparable integrators for an equivalent level of accuracy, and hence they are cheap.
#### Error Analysis of the ETD and the IF Methods Our analysis in this section investigates the computational advantage of the integrating factor (IF) methods and the disadvantage of the exponential time differencing (ETD) methods in solving the nonlinear Schrodinger (NLS) equation (5.24). The previous tests in §5.4.1 revealed that increasing the speed \(v=0,2,\ldots\) of the soliton solutions (5.26) has no effect on the IF methods' performance, i.e. as the speed increases, we obtain the same quantitative results for the errors over the same range of time-step sizes for an order-\(s\) IF method. This behavior is in contrast with that of the ETD methods, which suffer considerably as the wave speed increases, so that, for large speeds \(v=4,5,\ldots\) and a given time-step size, the IF methods are the best for producing accurate soliton solutions. The usual way to analyze the accuracy of a numerical method is to study the local truncation error in time. If we consider the model (5.1) \[\frac{du(t)}{dt}=cu(t)+F(u(t),t),\] then the local truncation errors of, for example, the ETD1 (5.2) and the IFEULER (5.4) methods are \[L.T.E_{1} \approx \frac{\Delta t^{2}}{2}dF(u(t),t)/dt, \tag{5.28a}\] \[L.T.E_{2} \approx \frac{\Delta t^{2}}{2}d(F(u(t),t)e^{-ct})/dt, \tag{5.28b}\] respectively. The derivation of equations (5.28a) and (5.28b) is given in Appendix B. For cases where the nonlinear term \(F(u(t),t)\) is slowly varying, i.e. the value of \(dF(u(t),t)/dt\) is small, the ETD1 method is highly accurate, while in cases where the term \(F(u(t),t)e^{-ct}\) is slowly varying (\(d(F(u(t),t)e^{-ct})/dt\) is small), the IFEULER method is the most accurate. To understand clearly how this analysis applies to the numerical methods in the case of the soliton solutions (5.26), we first apply it to simple exact periodic solutions of the NLS equation (5.24) of the form \[u(x,t)=Ae^{i(\omega t+Cx)},\] with amplitude \(A\) and frequency \(\omega=C^{2}-A^{2}\).
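The regimes identified by (5.28) can be reproduced with a single Fourier mode: for a plane wave \(u=Ae^{i(\omega t+Cx)}\) only one mode is active and \(|u|^{2}=A^{2}\), so the semi-discrete system (5.25) reduces to the scalar ODE \(du/dt=iC^{2}u-i|u|^{2}u\), i.e. \(c=iC^{2}\) and \(F=-i|u|^{2}u\) in (5.1). The sketch below (Python/NumPy; the step size, step count and the choice \(C=A\), a slightly sharpened version of case (a), are illustrative choices of ours) compares the ETD1 and IFEULER methods in the two regimes: \(C=A=1/2\) (so \(\omega=0\); here \(dF/dt=0\) along the solution and ETD1 reproduces it to rounding error) and \(C=8\), \(A=1\) (\(\omega=63\), where IFEULER is far more accurate).

```python
import numpy as np

def etd1_step(u, h, c, F):
    # ETD1 for du/dt = c*u + F(u)
    return u * np.exp(c * h) + F(u) * (np.exp(c * h) - 1.0) / c

def ifeuler_step(u, h, c, F):
    # integrating-factor Euler: forward Euler applied to v = exp(-c*t)*u
    return np.exp(c * h) * (u + h * F(u))

def solve(step, u0, h, n, c, F):
    u = u0
    for _ in range(n):
        u = step(u, h, c, F)
    return u

F = lambda u: -1j * abs(u)**2 * u   # nonlinear part of the single NLS mode
h, n = 1e-3, 100                    # 100 steps of size 10^-3 (illustrative)

# Regime (a'): C = A = 1/2, so omega = C^2 - A^2 = 0 and u(t) = 1/2 is constant
ua_etd1 = solve(etd1_step, 0.5 + 0j, h, n, 0.25j, F)
ua_if = solve(ifeuler_step, 0.5 + 0j, h, n, 0.25j, F)
err_a = abs(ua_etd1 - 0.5), abs(ua_if - 0.5)

# Regime (b): C = 8, A = 1, so omega = 63 and u(t) = exp(63j*t)
ub_exact = np.exp(63j * n * h)
ub_etd1 = solve(etd1_step, 1.0 + 0j, h, n, 64j, F)
ub_if = solve(ifeuler_step, 1.0 + 0j, h, n, 64j, F)
err_b = abs(ub_etd1 - ub_exact), abs(ub_if - ub_exact)
```

The observed error ordering reverses between the two regimes, mirroring the behavior reported for figure 5.15.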
According to (5.28), \[L.T.E_{1}\approx\Delta t^{2}\omega A^{3}e^{i(\omega t+Cx)}/2,\ \ \ \ L.T.E_{2} \approx-\Delta t^{2}A^{5}e^{i(Cx-A^{2}t)}/2.\] Hence, \(L.T.E_{1}\propto\omega A^{3}=(C^{2}-A^{2})A^{3}\) and \(L.T.E_{2}\propto A^{5}\). For cases in which \(\omega\) is small, i.e. \(C\) and \(A\) are quantitatively similar, the ETD1 method (5.2) should therefore show the best accuracy for a given time-step size, while if \(A\) is small and \(C\) is large, i.e. \(\omega\) is large, the IFEULER method (5.4) should be the best. In figure 5.15, we plot the relative integrated error (5.17) as a function of the time-step size, for two different values of \(\omega\). The choices of our exact solutions are \[u_{1}(x,t) = e^{i(-3t/16+x/4)}/2,\ x\in[0,8\pi],\] \[u_{2}(x,t) = e^{i(63t+8x)},\ x\in[0,\pi/4].\] We use \(N_{\mathcal{F}}=64\) grid points for the space discretization, and integrate the system (5.25) up to one period of time \(t=2\pi/|\omega|\), utilizing the ETD1 (5.2) and the IFEULER (5.4) methods. We evaluate the coefficients in the ETD1 method using the 'Cauchy integral' approach. Figure 5.15 case (a) confirms that, for the initial condition \[u_{1}(x,0)=e^{ix/4}/2,\] with the small value of \(\omega=-3/16\), the ETD1 method gives the best accuracy for a given time-step size, while case (b) of the figure shows that, for the initial condition \[u_{2}(x,0)=e^{i8x},\] with the small value of \(A=1\) and large value of \(\omega=63\), the IFEULER method is the most accurate method for solving the NLS equation (5.24).

Figure 5.15: Relative errors versus time step for solving the NLS equation (5.24) with the ETD1 (5.2) and the IFEULER (5.4) methods, subject to the initial conditions (a) \(u_{1}(x,0)=e^{ix/4}/2,\ x\in[0,8\pi]\) and (b) \(u_{2}(x,0)=e^{i8x},\ x\in[0,\pi/4]\).
The above analysis can also be applied in a similar way to the soliton solutions \[u(x,t)=a\text{sech}(b(x-vt))e^{i(c_{0}x+dt)}, \tag{5.29}\] of the NLS equation (5.24) \[\frac{\partial u(x,t)}{\partial t}=-i\Big{(}\frac{\partial^{2}u(x,t)}{ \partial x^{2}}+|u(x,t)|^{2}u(x,t)\Big{)}.\] For these solutions, the non-linear part takes the form \[F(u(t),t)=-i|u|^{2}u=-ia^{3}\text{sech}^{3}\big{(}a(x-vt)/\sqrt{2}\big{)}e^{i( -vx/2+(v^{2}/4-a^{2}/2)t)}, \tag{5.30}\] and the linear operator \((-i\partial^{2}u/\partial x^{2})\) corresponds to the term \(c=ik^{2}\) for waves with wave-number \(k\). If we look again at (5.28a), we find that if the soliton solutions (5.29) vary slowly in time, then the non-linear part \(F(u(t),t)\) (5.30) also varies slowly in time, and the ETD1 method (5.2) is highly accurate. As we increase the speed \(v\), however, the value of \(dF(u(t),t)/dt\) increases, and the ETD1 method becomes less accurate (this was seen in our earlier experiments illustrated in figure 5.9 (c) for the case of the speed \(v=4\)). On the other hand, if we look at (5.28b) again for the non-linear term \(F(u(t),t)\) (5.30), we find that if the term \(F(u(t),t)e^{-ct}\) varies slowly in time, then the IFEULER method (5.4) should be the most accurate for a given time step. Here, as we vary the value of the speed \(v\), the value of \(d(F(u(t),t)e^{-ct})/dt\) does not change, and therefore, the IFEULER method obtains the same quantitative results for the errors over the same range of time-step sizes (this is illustrated in figure 5.9 (a) \(v=0\), (b) \(v=2\) and (c) \(v=4\)). 
To make the above analysis more concrete, fix \(a=\sqrt{2}\) (as in our earlier experiments in §5.4.1) and assume that the dominant wave-number is approximately \(k=-v/2\Rightarrow c=iv^{2}/4\); hence, for the nonlinear term \(F(u(t),t)\) (5.30), \[dF(u(t),t)/dt\propto(v^{2}/4-a^{2}/2)a^{3}.\] Therefore, as the speed \(v\) increases, the nonlinear term \(F(u(t),t)\) varies rapidly and the value of \(dF(u(t),t)/dt\) increases, and according to (5.28a), the ETD1 method (5.2) becomes less accurate than the IFEULER method (5.4) for a given time step. We would expect the ETD1 method to be highly accurate when \(v=0\) and when \(v^{2}/4=a^{2}/2\), i.e. \(v=2\), which agrees with our results in figure 5.9 (b). On the other hand, the term \[F(u(t),t)e^{-ct}=-ia^{3}\mathrm{sech}^{3}(a(x-vt)/\sqrt{2})e^{i(-vx/2-a^{2}t/2)},\] and hence the value of \(d(F(u(t),t)e^{-ct})/dt\), is not influenced by increasing the speed \(v\); the truncation error (5.28b) does not change, and therefore the IFEULER method obtains the same quantitative results for the errors over the same range of time-step sizes. Finally, we note that the above investigation can also be used to explain the behavior of the second and fourth-order ETD and IF methods in solving the NLS equation (5.24), illustrated in figures 5.11 and 5.13 in our earlier experiments.

#### Conclusion We have implemented several competing exponential integrators for a large stiff system of ODEs arising from the space discretization of the NLS equation (5.24). Our simulations for soliton solutions have revealed considerably different performances of the compared numerical methods in different cases, which makes it clear that the best choice of method depends on the specific problem to be solved. However, all compared methods have been able to resolve oscillatory solitons to a required error tolerance without the severe time-step size restrictions of the standard schemes.
Experimentally, we have found that, in the non-traveling wave soliton solution case (\(v=0\) in (5.26)), the most efficient methods of those we have compared are the ETD and ETD-RK methods of **Cox** and **Matthews** [19]. The best of these methods are the first-order ETD1 (5.2), the ETD2RK2 (5.8) and the ETD4RK (5.13) methods. Similar conclusions have been found in the case of slowly traveling soliton solutions (i.e. the speed \(v=2\) in our tests). In addition, we have found, firstly, that the performance of the IFEULER (5.4), IFRK2 (5.11) and IFRK4 (5.14) methods is not influenced by varying the speed \(v\): if we compare the performance of IF methods of the same order, then as the speed increases we get the same quantitative results for the errors over the same range of time-step sizes. Secondly, to produce accurate numerical solutions of the NLS equation for larger speeds \(v\), the IF methods prove to be the most accurate.

### 5.5 Thin Film Equation

As a next step towards solving nonlinear PDEs, we include the fourth-order thin film equation [36] \[\frac{\partial\mathcal{H}(x,t)}{\partial t}=-\frac{\partial\mathcal{H}(x,t)}{ \partial x}+\frac{\partial}{\partial x}\Big{(}\mathcal{H}^{3}(x,t)\Big{(} \gamma\cos x-\alpha\Big{(}\frac{\partial^{3}\mathcal{H}(x,t)}{\partial x^{3} }+\frac{\partial\mathcal{H}(x,t)}{\partial x}\Big{)}\Big{)}\Big{)}, \tag{5.31}\] where the film thickness \(\mathcal{H}(x,t)\) is a \(2\pi\)-periodic function. The authors of [36] studied the time-dependent evolution equation (5.31) for the free surface of a thin viscous fluid film exterior to a rotating horizontal circular cylinder in a vertical gravitational field, with \(x\) the polar angle of a point on the cylinder.
The model is based on a lubrication approximation assuming that the film is very thin compared to the cylindrical radius, and includes the effect of cylindrical rotation \((-\partial\mathcal{H}/\partial x)\), gravity \((\partial(\gamma\mathcal{H}^{3}\cos x)/\partial x)\) and surface-tension \((-\partial(\alpha\mathcal{H}^{3}(\partial^{3}\mathcal{H}/\partial x^{3}+ \partial\mathcal{H}/\partial x))/\partial x)\) with corresponding parameters \(\gamma\) and \(\alpha\). The theoretical analysis and physical interpretation of equation (5.31) in [36] revealed that distinct physical mechanisms, governing a slow approach to a steady state, occur on different time-scales. Firstly, there is the fast process of rotating with the cylinder. Secondly, surface-tension squeezes the free fluid surface to a cylindrical shape. After this, oscillations decay exponentially on a slow time scale. Their numerical investigation revealed that the solution oscillates with time before eventually decaying to a steady state at large time. A very brief list of some recent research in fluid dynamics involving numerical simulations of this class of problems includes [6, 24, 88]. The authors of [6] obtained an evolution equation and analyzed the dynamics of a thin viscous film which lines a rigid cylindrical tube and surrounds a core of inviscid fluid considering flow in the 2D cross section of the tube. When solving the full nonlinear system numerically, they found that the film can evolve towards a steady solution of uniform thickness. In addition, a model for the evolution of a thin liquid film flowing on and coating a horizontal cylinder that is rotating uniformly about its axis is presented in [24]. The authors of [24] obtained solutions to the evolution equation with implicit numerical schemes based on finite differences. The results showed a wide range of possible behavior depending on the rotation rate. 
Solving the thin film equation (5.31) numerically is a challenging task since the numerical solution poses several problems:

1. The fourth-order term is very stiff: the stability constraint on the time step for explicit methods requires \(\Delta t\approx O(h^{4})\) for space step \(h\).
2. When integrating the semi-discretized system of equations, we find that
   * Applying fully implicit methods requires at each time step the solution of a system of nonlinear equations.
   * Applying the IMEX methods requires at each time step the calculation of either a pentadiagonal differentiation matrix inverse (in the case of discretizing with finite difference approximations [24]) or a full dense differentiation matrix inverse (in the case of a Fourier spectral approximation). For example, applying a first-order IMEX method gives \[\mathcal{H}_{n+1}=\cdots+(I+\alpha\Delta t\mathcal{H}_{n}^{3}D_{4})^{-1} \mathcal{H}_{n}+\cdots,\] where \(I\) is the identity matrix, \(D_{4}\) is the corresponding differentiation matrix for the fourth derivative and \(\mathcal{H}_{n}\) denotes the numerical approximation to \(\mathcal{H}(t_{n})\).
   * We cannot apply exponential integration methods directly to solve such problems, since these methods are designed for PDEs that can be split into linear and nonlinear parts.
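Point 1 can be made quantitative with a back-of-the-envelope estimate (an illustration, not a computation from the thesis). On a \(2\pi\)-periodic Fourier grid with \(N\) points, the linearized fourth-order term \(-\alpha h_{0}^{3}\,\partial^{4}/\partial x^{4}\) has stiffest eigenvalue of magnitude \(\alpha h_{0}^{3}(N/2)^{4}\), and an explicit Euler step is stable only for \(\Delta t<2/|\lambda_{\max}|\):

```python
import numpy as np

def euler_dt_limit(N, alpha=0.0048, h0=1.0):
    """Explicit-Euler stability limit dt < 2/|lambda_max| for the linearized
    fourth-order term -alpha*h0^3 * d^4/dx^4 on a 2*pi-periodic Fourier grid;
    alpha is the thesis' surface-tension value, h0 = 1 the mean thickness."""
    lam = alpha * h0**3 * (N // 2)**4   # magnitude of the stiffest eigenvalue
    return 2.0 / lam

for N in (32, 64, 128):
    h = 2 * np.pi / N                   # grid spacing
    dt_max = euler_dt_limit(N)
    print(f"N={N:4d}  h={h:.4f}  dt_max={dt_max:.2e}  dt_max/h^4={dt_max/h**4:.3f}")
```

The last column is constant, confirming \(\Delta t=O(h^{4})\): doubling the spatial resolution cuts the stable explicit step by a factor of 16.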
To facilitate numerical studies of the thin film equation (5.31), we set the perturbation \[\mathcal{H}(x,t)=h_{0}+u(x,t),\] where \(h_{0}\) is the mean film thickness, to be the solution of the equation and obtain \[\frac{\partial u(x,t)}{\partial t}=-\frac{\partial u(x,t)}{\partial x}+\frac{ \partial}{\partial x}\Big{(}(h_{0}+u(x,t))^{3}\Big{(}\gamma\cos x-\alpha\Big{(} \frac{\partial^{3}u(x,t)}{\partial x^{3}}+\frac{\partial u(x,t)}{\partial x} \Big{)}\Big{)}\Big{)}.\] After some algebraic manipulation to split the linear and nonlinear terms, we deduce \[\frac{\partial u(x,t)}{\partial t} = -\frac{\partial u(x,t)}{\partial x}-\alpha h_{0}^{3}\Big{(}\frac{ \partial^{4}u(x,t)}{\partial x^{4}}+\frac{\partial^{2}u(x,t)}{\partial x^{2}} \Big{)}-\gamma h_{0}^{3}\sin x \tag{5.32}\] \[+ \frac{\partial}{\partial x}\Big{(}((h_{0}+u(x,t))^{3}-h_{0}^{3}) \Big{(}\gamma\cos x-\alpha\Big{(}\frac{\partial^{3}u(x,t)}{\partial x^{3}}+ \frac{\partial u(x,t)}{\partial x}\Big{)}\Big{)}\Big{)},\] which is an exact reformulation of the original thin film equation (5.31). The perturbation \(u(x,t)\) is either small, leading to a weakly nonlinear PDE, or large, leading to a strongly nonlinear PDE.

Figure 5.16: Time evolution of the thin film equation (5.32) with \(\alpha=0.0048,\gamma=0.0532\) and initial film thickness \(\mathcal{H}(x,0)=1\) (a) solution at polar angle \(x=0\), in the time interval \(0\leq t\leq 1000\) (b) an approach to a steady state at \(t=1000\).

Discretizing the spatial derivatives of the thin film equation (5.32), with periodic boundary conditions, in Fourier space yields \[\frac{d\hat{u}(t)}{dt} = (-ik+\alpha h_{0}^{3}(k^{2}-k^{4}))\hat{u}(t)-\gamma h_{0}^{3}{\bf fft }(\sin x) \tag{5.33}\] \[+ ik[{\bf fft}((\gamma\cos x+i\alpha\Re({\bf ifft}((k^{3}-k)\hat{u} (t))))((u(t)+h_{0})^{3}-h_{0}^{3}))],\] where \(k\) is the wavenumber and \({\bf fft}\) and \({\bf ifft}\) are Matlab commands that represent the fast Fourier transform (FFT) and its inverse respectively.
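The semi-discretization can be sketched in a few lines. The NumPy version below is an illustration only (the thesis code is Matlab and works directly with the Fourier coefficients as in (5.33)); it evaluates the right-hand side of the perturbation form (5.32) pseudo-spectrally, computing derivatives in Fourier space and products in physical space:

```python
import numpy as np

def thinfilm_rhs(u, k, alpha=0.0048, gamma=0.0532, h0=1.0, x=None):
    """Pseudo-spectral right-hand side of the perturbation form (5.32)
    on a 2*pi-periodic grid (a sketch, not the thesis' Matlab code)."""
    uhat = np.fft.fft(u)
    dx = lambda m: np.real(np.fft.ifft((1j * k)**m * uhat))  # m-th derivative
    linear = -dx(1) - alpha * h0**3 * (dx(4) + dx(2)) - gamma * h0**3 * np.sin(x)
    flux = ((h0 + u)**3 - h0**3) * (gamma * np.cos(x) - alpha * (dx(3) + dx(1)))
    nonlinear = np.real(np.fft.ifft(1j * k * np.fft.fft(flux)))
    return linear + nonlinear

# Minimal check: at u = 0 only the forcing -gamma*h0^3*sin(x) survives.
N = 32
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)   # integer wave-numbers 0..N/2-1, -N/2..-1
rhs0 = thinfilm_rhs(np.zeros(N), k, x=x)
print(np.max(np.abs(rhs0 + 0.0532 * np.sin(x))))  # -> 0.0
```

The check mirrors the structure of (5.32): with \(u=0\) every derivative term and the nonlinear flux vanish, leaving only the gravitational forcing.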
The FFT is a complex transform, but in most applications the data \(u\) to be differentiated is real; hence only the real part \(\Re\) is taken in the spectral differentiation. The semi-discrete system (5.33) is a system of coupled ODEs in time. The stiffness in the system is due to the fact that the diagonal linear operator, with elements \(-ik+\alpha h_{0}^{3}(k^{2}-k^{4})\), has complex eigenvalues of which some have large negative real parts that represent decay, because of the strong dissipation (\(-\partial^{4}u/\partial x^{4}\)), on a time scale much shorter than that typical of the nonlinear terms. We solve the thin film equation (5.32) in the time interval \(0\leq t\leq 1000\) with periodic boundary conditions and initial film thickness \({\cal H}(x,0)=1+u(x,0)\), where \(u(x,0)=0\). We set \(\alpha=0.0048\) and \(\gamma=0.0532\) (taken from [36]) and use \(N_{\cal F}=32\) grid points in the Fourier spatial discretization. We utilize the ETD4RK method (5.13) with time-step size \(\Delta t\approx 2^{-6}\) for the time-discretization. When evaluating the coefficients of the ETD4RK method (and similarly for the ETD and the ETD-RK methods utilized for the comparison computations presented in §5.5.1) via the 'Cauchy integral' approach [44, 45] (see §4.2.2), we choose circular contours of radius \(R=1\). Each contour is centered at one of the diagonal elements of the matrix of the linear part of the semi-discretized PDE (5.33). The contours are sampled at 32 equally spaced points and approximated by (4.16). The numerical results are presented in figure 5.16, which shows in (a) that the solution, at the polar angle \(x=0\), oscillates with time before eventually decaying to a steady state, shown in (b), at large time. In addition, the figure demonstrates excellent agreement between our numerical results and those of [36], providing a check on the accuracy of the method.
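The 'Cauchy integral' evaluation used above can be illustrated on the scalar first-order coefficient \((e^{z}-1)/z\) appearing in ETD1 (written here as \(\varphi_{1}\), a common notation; the sketch follows the Kassam and Trefethen idea but is not the thesis code). Averaging the integrand over a circular contour of radius \(R=1\) centred at \(z\), sampled at 32 equally spaced points, avoids the cancellation that ruins the explicit formula near \(z=0\):

```python
import numpy as np

def phi1_contour(z, R=1.0, M=32):
    """Evaluate phi_1(z) = (exp(z)-1)/z by averaging over a circular contour
    of radius R centred at z; for equally spaced points on a circle the
    trapezium rule reduces to a plain mean, and no point is near the origin."""
    r = z + R * np.exp(2j * np.pi * np.arange(1, M + 1) / M)
    return np.mean((np.exp(r) - 1.0) / r)

# Near z = 0 the explicit formula loses digits to cancellation, while the
# contour average stays accurate (phi_1(0) = 1):
for z in (1e-12, 1e-7, 1.0):
    naive = (np.exp(z) - 1.0) / z
    print(f"z={z:g}: naive={naive:.12f}, contour={phi1_contour(z).real:.12f}")
```

For \(z=10^{-12}\) the naive value is already wrong in the fifth digit, whereas the contour value matches \(\varphi_{1}(z)\approx 1+z/2\) to near machine precision. This is also why, as noted above for the matrix case, the contour must not pass close to the origin.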
In §5.5.1, we solve numerically the thin film equation (5.32) up to final time \(t=20\) in a manner analogous to solving the former PDEs: the K-S (5.18) and the NLS (5.24) equations. We utilize the time-discretization methods listed in §5.2, and discuss the results of the comparison tests and state our conclusions in §5.5.2.

#### Computational Results

In our simulation tests, we numerically integrate the thin film equation (5.32) up to final time \(t=20\) with periodic boundary conditions and again the initial film thickness \(\mathcal{H}(x,0)=1+u(x,0)\), where \(u(x,0)=0\). We set \(\alpha=0.0048,\gamma=0.0532\) (taken from [36]) and again use \(N_{\mathcal{F}}=32\) grid points in the Fourier spatial discretization. We measure the efficiency of each time-discretization method listed in §5.2 in solving the test model by computing the numerical relative error (5.17). The exact solution is approximated by utilizing, for the time discretization, the ETD4RK method (5.13) with a very small time step. In figure 5.17, we plot in (a), (b) and (c) the accuracy of the first, second and fourth-order methods, respectively, as a function of the time step. The aim is to look for the most competitive method, i.e. the one that takes the fewest steps (the largest time-step size) to achieve a given error tolerance. Note that the accuracy improves as the time step decreases, and that the figures confirm the order expected for each method. In figure 5.18 (a), (b) and (c), we plot the first, second and fourth-order methods' accuracy as a function of CPU time, respectively, as a further factor for differentiating between the methods. Considering the first-order methods, it appears from figure 5.17 (a) that the IMEX and the EULER methods are not reliable methods for solving the thin film equation (5.32). In the plot, we find that these two methods are the least accurate for a given time-step size.
In addition, they are the most time consuming, see figure 5.18 (a), due to the very small time step used by these methods (compared to the larger time step used by the IFEULER and the ETD1 methods) to produce a solution that is accurate to any given level of accuracy. We find that the IFEULER method outperforms the EULER, the IMEX and the ETD1 methods in both accuracy and speed for any given level of accuracy. To produce solutions for the thin film equation with higher orders of accuracy in time, we utilize second-order methods. In figure 5.17 (b), we find that, for a given time-step size, the IFRK2 method produces more accurate solutions than the IFEULER method does. However, compared to the other second-order methods, it is the second most accurate and the most time consuming for any given level of accuracy, see figure 5.18 (b). Its performance resembles that of the ETD2RK1 and the ETD2CP methods, though the ETD2CP method is the second most costly method.

Figure 5.17: Relative errors versus time step for the thin film equation (5.32) with initial film thickness \(\mathcal{H}(x,0)=1\) (a) first-order methods (b) second-order methods (c) fourth-order methods.

Figure 5.18: Relative errors versus CPU time for the thin film equation (5.32) with initial film thickness \(\mathcal{H}(x,0)=1\) (a) first-order methods (b) second-order methods (c) fourth-order methods.

In addition, tests show that the most accurate method is the ETD2RK2 method, which also consumes the least time in solving the equation to any given level of accuracy, see figure 5.18 (b). Finally, we consider utilizing fourth-order methods, with fourth-order convergence, that guarantee higher order accuracy in time. As illustrated in figures 5.17 (c) and 5.18 (c), the IFRK4 method has proven to be a satisfactory method, being the most accurate and the least time consuming method for any given level of accuracy.
In addition, our simulations reveal that, whereas the ETD4 method fails to produce an accurate solution to the equation for time steps larger than \(\Delta t\approx 5\times 10^{-3}\), the ETD4RK method can use time steps of a maximum size \(\Delta t\approx 8\times 10^{-2}\) to produce a solution with an accuracy of \(10^{-7}\), see figure 5.17 (c). This indicates that the ETD4RK method has a larger stability region than the ETD4 method, whose stability region is in turn smaller than that of the ETD2 method. This agrees with our stability analysis in §3.3.

#### Conclusion

Problems in the fluid dynamics of thin films have been solved to demonstrate the effectiveness of exponential integrators. Under certain circumstances, we have found that, whereas the first-order IMEX, the EULER and the ETD4 methods are impractical for solving the nonlinear thin film equation (5.32), the IF and the ETD2RK2 methods have proven to be accurate and reliable. It would be interesting in future to analyze theoretically and understand the behavior of the numerical methods' performance in the experiments that have been conducted. Our conclusions have relied on only one case study, where we have considered one fixed value of the surface-tension and gravity parameters with an initially uniform film thickness. However, the thin film equation is strongly nonlinear, hence convergence and stability become solution-dependent issues, and our conclusions could differ greatly for different cases.

## Chapter 6 Conclusions

### 6.1 Overall Conclusions

This research aimed to employ **Exponential Time Differencing** (ETD) as a time-discretization method to solve stiff partial differential equations accurately. We considered the effectiveness of these methods for solving real application problems. Throughout this project, we also presented the modifications that these methods need in order to be effective.
In essence, for semi-linear time-dependent equations, these schemes provide a systematic coupling of the explicit treatment of nonlinearities and the exact integration of the stiff linear part of the equations. The thesis began with a review of the derivation of the explicit ETD method of arbitrary order \(s\), which includes the explicit formula for the methods' coefficients, and we presented the Runge-Kutta (ETD-RK) methods of **Cox** and **Matthews** [19] up to fourth order. We also derived the ETD2RK2 scheme (analogous to the "modified Euler" method [78]) as an example of the one-parameter family of ETD2RK\({}_{\jmath}\) schemes for \(\jmath\in\mathbb{R}^{+}\). We concluded that

* If the nonlinear part \(F(u(t),t)\) of the differential equation (3.3) is zero, the ETD integrators produce the exact solution to the ODE, and so the schemes are automatically A-stable.
* If the linear part is zero (\(c=0\) in (3.3)), the ETD and the ETD-RK integrators reduce to linear multi-step or classical explicit Runge-Kutta methods respectively.

This work raised the issue of defining other formulas for one-parameter families of \(s\)-order ETD-RK\({}_{\jmath}\) schemes in future studies. As a next step, we examined analytically the ETD and ETD-RK methods' stability properties, up to fourth order. Tests were illustrated with figures in which we computed and plotted the boundaries of the stability regions in two dimensions for negative and purely real values of the stiffness parameter in the test problem (3.24). The figures demonstrated that the stability regions of the ETD-RK methods are larger than those of the multi-step ETD methods, which agrees with the well-known fact that RK methods have larger stability regions than ordinary multi-step time-discretization methods of the same order.
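The growth of the stability regions with stiffness can already be seen from the ETD1 amplification factor alone. For the usual linear test problem \(du/dt=cu+\lambda u\), one ETD1 step multiplies \(u_{n}\) by \(e^{x}+y(e^{x}-1)/x\) with \(x=c\Delta t\) and \(y=\lambda\Delta t\). The sketch below (an illustration of this standard analysis, not the thesis code) locates the real-axis stability interval in \(y\) for several negative values of \(x\):

```python
import numpy as np

def etd1_growth(x, y):
    """Amplification factor of ETD1 for du/dt = c*u + lambda*u:
    u_{n+1} = (e^x + y*(e^x - 1)/x) * u_n, x = c*dt, y = lambda*dt."""
    return np.exp(x) + np.expm1(x) / x * y

# Scan the negative real y-axis: the stable interval widens as the
# stiffness parameter x = c*dt becomes more negative.
for x in (-0.1, -1.0, -10.0):
    y = np.linspace(-50, 0, 200001)
    stable = y[np.abs(etd1_growth(x, y)) <= 1.0]
    print(f"x={x:6.1f}: real-axis stability extends down to y={stable.min():.3f}")
```

For \(x\to 0^{-}\) the interval approaches the explicit-Euler interval \([-2,0]\), while for strongly negative \(x\) it stretches to roughly \(y=x\), which is the behavior described above: larger stiffness permits larger stable time steps.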
However, we found that the different types of \(s\)-order ETD-RK scheme (for example, the ETD2RK1 and the ETD2RK2 schemes) have different stability regions, in contrast to the stability regions of different formulas of an \(s\)-order RK method (which coincide). In addition, for any given value of the stiffness parameter, the stability regions of multi-step ETD methods get smaller as the order of the methods increases, which agrees with the stability characteristics of the ordinary multi-step methods. This work illustrates that the ETD and the ETD-RK methods have the advantage of avoiding the severe restrictions on the time-step size when compared with any conventional explicit method in solving a stiff system of ODEs. We found that the stability regions of the ETD and ETD-RK methods grow larger as the stiffness parameter decreases, which permits the use of a large time-step size and consequently rapid computations. Applying the ETD methods requires the computation of the coefficients, which are matrix exponentials and related matrix functions of the linear operators. A complication [19] arises in the computation of these coefficients, in addition to the difficulties already inherent in computing a matrix exponential [60]. For matrices which have eigenvalues equal to zero, the explicit formulas for the coefficients involve division by zero, while for matrices which have very small eigenvalues approaching zero, the coefficients suffer from rounding errors due to the large amount of cancellation in the formulas. At this stage of the research, the plan was to test various algorithms against each other and assess their accuracy, efficiency and ease of use, with a view to overcoming the numerical difficulties in approximating the ETD coefficients and to an efficient implementation of the ETD methods.
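The small-eigenvalue rounding problem, and one remedy in the scaling-and-squaring spirit, can be sketched on the scalar first coefficient \(\varphi_{1}(z)=(e^{z}-1)/z\). The code below is an illustration of the idea, not the thesis' exact type I algorithm: it evaluates a short Taylor series at the harmless small argument \(z/2^{s}\) and then undoes the scaling with the doubling identity \(\varphi_{1}(2w)=\varphi_{1}(w)(e^{w}+1)/2\), which follows from \(e^{2w}-1=(e^{w}-1)(e^{w}+1)\):

```python
import math
import numpy as np

def phi1_scaled(z, s=10, terms=8):
    """phi_1(z) = (e^z - 1)/z via scaling and squaring: Taylor series at
    w = z/2^s, then s applications of phi_1(2w) = phi_1(w)*(e^w + 1)/2."""
    w = z / 2.0**s
    p = sum(w**n / math.factorial(n + 1) for n in range(terms))  # phi_1(w)
    e = np.exp(w)
    for _ in range(s):
        p *= (e + 1.0) / 2.0   # phi_1(2w) from phi_1(w)
        e *= e                 # e^{2w} from e^{w}
    return p

for z in (1e-9, -1.0, -30.0):
    print(f"z={z:g}: scaled={phi1_scaled(z):.12g}, reference={np.expm1(z)/z:.12g}")
```

Here `np.expm1` provides a cancellation-free scalar reference; in the matrix setting no such primitive exists, which is why the algorithms compared below are needed.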
The algorithms studied in this thesis are the Taylor series, the Cauchy integral formula, the Scaling and Squaring algorithm, the Composite Matrix algorithm and the Matrix Decomposition algorithm for non-diagonal matrix cases. We now reiterate the main conclusions that were drawn from this work:

1. **Taylor Series:** This algorithm is known for its simplicity and ease of implementation. However, its efficiency deteriorates when approximating the ETD coefficients for large values (in magnitude) of the argument (the matrix norm in the matrix case).
2. **The Cauchy Integral Formula:** The algorithm was proposed by **Kassam** and **Trefethen** [44, 45] to evaluate the ETD coefficients by means of contour integration in the complex plane, approximated by the Trapezium rule (4.16). This algorithm turned out to be very accurate for diagonal matrix problems, but it can be inaccurate and time consuming for non-diagonal matrices with large norm. In addition, this algorithm requires prior knowledge of the eigenvalue of largest magnitude, and we must ensure that none of the points on the contour are close to or at the origin, otherwise the original problem of rounding errors reappears. We gave theoretical estimate formulas for the errors when using the Cauchy integral formula to approximate the ETD coefficients for matrices with large norm, and we used these formulas to estimate the number of points required to discretize the contour to achieve a relative error of some chosen tolerance. However, improvements to this algorithm have recently been developed [70].
3. **Scaling and Squaring Algorithm Type I:** This algorithm is one of the most effective and powerful algorithms for diagonal and non-diagonal matrix problems. However, it is the most complex to implement. In diagonal matrix cases, it is stable for small positive values and for all negative values on the diagonal. However, the algorithm's performance deteriorates for large positive values.
From our analysis, we found that the errors resulting from the scaling and squaring process in approximating, for large positive values, either the exponential of a diagonal matrix (when the algorithm's formulas include it) or the identity (4.27) (if the algorithm is based on it), are doubled at each scaling. The analysis in both cases also predicts that these errors increase linearly as the positive values increase. In non-diagonal matrix cases, the algorithm requires knowledge of the eigenvalue of largest magnitude. In the case of matrices with negative eigenvalues, we found that the performance of the algorithm, when it is based on the identities (4.27), (4.21) and (4.22), agrees well with that in the case of negative values of the diagonal matrix. This is due to the advantage that we do not need to compute a matrix exponential. However, when the algorithm is based on (4.20) - (4.22), or (4.23) - (4.25), or (4.27) and (4.24) - (4.25), we found that the errors resulting from the scaling and squaring process cause the performance of the algorithm to deteriorate for large norm matrices. This behavior is contrary to that in the case of negative values of the diagonal matrices, and it is consequently a task for future research to investigate. Note that we favored combining the algorithm with Taylor series rather than the popular Pade approximation. Firstly, we found that Pade approximations lead to larger rounding errors, due to cancellation problems, than Taylor series, and these errors are then amplified by the scaling and squaring process. Secondly, Pade approximations require a more expensive matrix inversion, in which the matrix can be very poorly conditioned with respect to inversion.
4. **Scaling and Squaring Algorithm Type II:** This algorithm gave accurate results when evaluating the coefficients in the first-order ETD1 method (3.14) for very small values (in magnitude) of the argument, utilizing the identity (4.48).
From our analysis, we found that there is no amplification of the errors, and the algorithm's accuracy remains the same at each scaling. However, this algorithm did not perform well when computing the coefficients in higher order ETD methods for very small values (in magnitude) of the argument, due to the amplification of rounding errors at each scaling, as our analysis suggested. Thus, the Scaling and Squaring type **II** algorithm is not generally useful.
5. **Composite Matrix Algorithm:** Implementing this algorithm involves taking the exponential of a specially constructed matrix, via the Matlab routine "_expm_", which is based on the Scaling and Squaring algorithm. The resulting matrix contains the values of the ETD coefficients, which can then be extracted easily. For small positive values and all negative values in diagonal problems, and for small norm matrices in non-diagonal problems, the algorithm proved to be successful in approximating the ETD coefficients accurately. But it is inaccurate and computationally expensive in time for non-diagonal matrices with large norm, due to the larger number of scaling and squaring steps, which become inaccurate.
6. **Matrix Decomposition Algorithm:** This algorithm simplifies the evaluation of a function of a non-diagonal matrix exponential to that of a diagonal matrix exponential whose elements are the eigenvalues of the non-diagonal matrix. This algorithm is remarkably accurate when compared with the explicit formula for the ETD coefficients, and is the cheapest algorithm in time. For small norm matrices, however, it is slightly less accurate than the Cauchy integral formula, the Scaling and Squaring type **I** and the Composite Matrix algorithms.
Tests on the second-order centered difference differentiation matrix for the first and second derivatives, and on the Chebyshev differentiation matrix for the second derivative, exhibit qualitatively similar results, except that the errors are typically larger for the Chebyshev matrix, due to its larger eigenvalues. The above results led us to agree with the quotation "practical implementations are dubious in the sense that implementation of a sole algorithm might not be entirely reliable for all classes of problems" [60]. However, in differentiating between the algorithms considered, we concluded that the Scaling and Squaring type **I** algorithm is an efficient algorithm for computing the ETD coefficients in both diagonal and non-diagonal matrix cases. It exhibits some loss of accuracy for large values of the scalar arguments and large norms of matrices, but this is much less severe than for the Taylor series and the Cauchy integral formula. Also, it compares favorably with the high computational cost of the Cauchy integral formula and the Composite Matrix algorithm in non-diagonal matrix cases. The Matrix Decomposition algorithm, in the conventional eigenvector approach, is also very efficient computationally, though it is slightly less accurate when the matrix norm is small, and is not applicable to matrices that do not have a complete set of linearly independent eigenvectors (where no invertible matrix of eigenvectors exists). The final part of this project aimed to conduct numerical comparison experiments on three stiff PDEs in one space dimension. We employed first, second and fourth-order ETD methods, including the ETD and the ETD-RK methods proposed by **Cox** and **Matthews** [19], and made some observations regarding their efficiency against other competing stiff integrators, including the first-order Implicit-Explicit (IMEX) method and first, second and fourth-order Integrating Factor (IF) methods.
The problems considered were: the dissipative time-dependent scalar **Kuramoto-Sivashinsky (K-S)** equation, the nonlinear dispersive **Schrodinger (NLS)** equation and the nonlinear (dissipative-dispersive) **Thin Film** equation. In the K-S and the NLS equations, the linear terms of the equations are primarily responsible for stiffness, whereas in the thin film equation the nonlinear terms are the stiffest. For the simulation tests, we chose periodic boundary conditions and applied a Fourier spectral approximation for the spatial discretization. In addition, we evaluated the coefficients of the ETD and the ETD-RK methods via the 'Cauchy integral' approach [44, 45]. Our simulations revealed considerably different performances of the compared numerical methods in different cases. Regarding accuracy and CPU time in solving the K-S equation (5.20), we concluded that the ETD4RK method (5.13) is marginally the best. It maintains good stability and produces high accuracy with reasonable computational effort. Furthermore, when solving the test model for the specific initial condition (5.22), we found that the ETD and ETD-RK methods of [19] outperformed the compared methods in both speed and accuracy. For non-traveling or slowly traveling wave soliton solutions of the NLS equation (5.24), we found that the most efficient methods of those we compared are the ETD and ETD-RK methods of [19]. However, the performance of these methods declines for solutions with larger speeds (fast traveling waves), and the IF methods then prove to be the most accurate. Our analysis revealed that, as the soliton wave-speed increases, the local truncation error of the ETD methods gets larger and the methods become less accurate.
On the other hand, the local truncation error of the IF methods does not change as the speed varies, and hence these methods maintain their performance; moreover, we obtain the same quantitative results for the errors (over the same range of time-step sizes) for an \(s\)-order IF method. To apply our numerical tests to the nonlinear thin film equation, we introduced a perturbation to split the equation into linear and nonlinear terms. For this equation we found that the first-order IMEX and the ETD4 methods are impractical, whereas the IF and the ETD2RK2 methods proved to be accurate and reliable. It would also be interesting, in future work, to analyze theoretically and understand the behavior of the numerical methods' performance. Further studies on the thin film equation should consider large and small perturbations to the constant solution. We expect exponential integrators to perform well when the perturbation is small. Large perturbations lead to nonlinear terms with a stiffer character, and hence the performance of the exponential integrators could deteriorate. In addition, we should consider cases of varying the surface-tension and gravity parameters in equation (5.32). For example, increasing the surface tension makes the decay of the amplitude of the higher oscillating Fourier modes more rapid, and the complexity of the time-dependent solutions increases rapidly. Overall, we deduced that all the compared methods exhibited the order of accuracy expected, and proved to be efficient alternatives to standard explicit integrators for computing solutions of stiff problems without severe time-step size restrictions. Additionally, we noted that, for accurate and economical computations, it is often advantageous to utilize fourth-order methods. The benefit of these methods is that they can use a much larger time-step size than lower order integrators for an equivalent level of accuracy, and hence they are cheaper overall.
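The earlier remark about surface tension can be quantified from the linear part of (5.33): the Fourier mode with wave-number \(k\) decays at rate \(\alpha h_{0}^{3}(k^{4}-k^{2})\), so increasing \(\alpha\) speeds up the decay of every mode proportionally, and the high modes are already far stiffer than the low ones. A small illustration (using the thesis' parameter value and, as an assumption for comparison, a tenfold increase, with \(h_{0}=1\)):

```python
import numpy as np

# Linear decay rates of the Fourier modes of the thin film equation (5.33):
# mode k decays like exp(-alpha*h0^3*(k^4 - k^2)*t); k = 1 is neutral.
h0 = 1.0
k = np.arange(2, 9)
for alpha in (0.0048, 0.048):   # thesis value, then 10x the surface tension
    rates = alpha * h0**3 * (k**4 - k**2)
    print(f"alpha={alpha}: decay rates for k=2..8:", np.round(rates, 3))
```

Already at \(k=8\) the linear decay is over 300 times faster than at \(k=2\), which is the time-scale separation (stiffness) described above; raising \(\alpha\) scales all of these rates up together.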
We also found that the ETD integrators rely on the fast evaluation of the exponential and related functions. The computation of the methods' coefficients, which is done once at the beginning of the integration for each time-step size, has a noticeable effect on the CPU times, as it imposes a significant timing overhead when the methods use a large time step. However, ETD schemes can be efficiently combined with spatial approximations to provide accurate smooth solutions for stiff or highly oscillatory semi-linear PDEs. These methods were shown to perform extremely well in solving various real application problems, while achieving high accuracy and maintaining good stability. The ETD-RK methods were demonstrated to be more stable and to allow larger time-step sizes than the multi-step ETD methods, and we also found that the lower order multi-step ETD methods are more stable than the higher order ones, which agrees very well with our stability analysis in §3. As a final point, we caution that our conclusions are restricted only to the cases studied. These results cannot be generalized, as they may differ for other choices of initial conditions and for other problems. It is clear that the best choice of method depends on the specific problem to be solved. Our research serves as a basis for more detailed theoretical and numerical investigations on time-discretization methods to be carried out in the future. It is hoped that the future investigation will serve dual roles: firstly, to confirm that the ETD methods can be ideal methods to cope with stiff systems in a wide range of applications; secondly, to develop time-discretization methods that can facilitate numerical studies of higher-order problems with nonlinear stiff terms arising from mathematical models of a diverse range of physical phenomena.
## Appendix A The Numerical Solution of the Kuramoto-Sivashinsky Equation

This **Matlab** program is used to obtain the numerical solution of the **Kuramoto-Sivashinsky (K-S)** equation (5.18), utilizing the ETD4RK method (5.13), and to produce figure 5.2.

```matlab
% Spatial grid
N = 64; L = 2*pi;
x = ([0:N-1]*2*L/N)';
dt = 2^(-10);

% Spectral differentiation matrices (column vectors, to match u)
D1 = i*(pi/L)*[0:N/2-1 0 -N/2+1:-1]';
D2 = D1.^2; D2((N/2)+1) = -(N*pi/(2*L))^2;
D4 = D2.^2;
c = -D2-D4;

% Evaluating the coefficients of the ETD4RK method
% using the Cauchy integral formula
R = 1; N1 = 32;
r = R*exp(2*i*pi*(1:N1)/N1);
c1 = c*dt; c2 = c1/2;
E1 = exp(c1); E = exp(c2);
for k = 1:N
    C1(k) = real(mean((dt/2)*((exp(c2(k)+r)-1)./(c2(k)+r))));
    C2(k) = real(mean(dt*((-4-c1(k)-r+exp(c1(k)+r).*(4-3*(c1(k)+r)+(c1(k)+r).^2))./(c1(k)+r).^3)));
    C3(k) = real(mean(dt*((2+c1(k)+r+(c1(k)+r-2).*exp(c1(k)+r))./(c1(k)+r).^3)));
    C4(k) = real(mean(dt*((-4-3*(c1(k)+r)-(c1(k)+r).^2+(4-c1(k)-r).*exp(c1(k)+r))./(c1(k)+r).^3)));
end

% Initial condition
u = exp(cos(x/2)); uhat = fft(u);

% Solve PDE
tmax = 60; nmax = round(tmax/dt);
nc = 60; nplt = floor(nmax/nc);
udata = u; tdata = 0;
min1 = min(u); max1 = max(u);
for n = 1:nmax
    t = n*dt;
    uhat1_x = D1.*fft(u.^2)/2;
    ahat = (E.*uhat)-(C1'.*uhat1_x);                  a = real(ifft(ahat));
    bhat = (E.*uhat)-(C1'.*D1.*fft(a.^2)/2);          b = real(ifft(bhat));
    chat = (E.*ahat)-(C1'.*(D1.*fft(b.^2)-uhat1_x));  C = real(ifft(chat));
    uhat = (E1.*uhat)-(C2'.*uhat1_x+C3'.*D1.*(fft(a.^2)+fft(b.^2))+C4'.*D1.*fft(C.^2)/2);
    u = real(ifft(uhat));
    if mod(n,nplt) == 0
        udata = [udata u]; tdata = [tdata t];
        min1 = [min1 min(u)]; max1 = [max1 max(u)];
    end
end

% Plot results
set(gcf,'renderer','zbuffer'), clf, drawnow
mesh(x,tdata,udata'), colormap(1e-6*[1 1 1])
xlabel x, ylabel t, zlabel u, grid on
axis([0 2*L 0 tmax floor(min(min1)) ceil(max(max1))])
set(gca,'ztick',[floor(min(min1)) ceil(max(max1))])
```

## Appendix B Derivation of the Local Truncation Errors

Local truncation errors, or
discretization errors are the errors a numerical algorithm makes by taking a finite number of steps in the computation. They are present even with infinite-precision arithmetic, because they are caused by truncating the infinite Taylor series that forms the algorithm. To derive the local truncation error \(L.T.E_{1}\) (5.28a)
\[L.T.E_{1}\approx\frac{\Delta t^{2}}{2}\,\frac{dF(u(t),t)}{dt},\]
of the ETD1 method (5.2)
\[u(t_{n+1})=u(t_{n})e^{c\Delta t}+(e^{c\Delta t}-1)F(u(t_{n}),t_{n})/c,\]
and \(L.T.E_{2}\) (5.28b)
\[L.T.E_{2}\approx\frac{\Delta t^{2}}{2}\,\frac{d(F(u(t),t)e^{-ct})}{dt},\]
of the IFEULER method (5.4)
\[u(t_{n+1})=(u(t_{n})+\Delta t\,F(u(t_{n}),t_{n}))e^{c\Delta t},\]
(both methods are applied to the model \(du(t)/dt=cu(t)+F(u(t),t)\) (5.1)), let us assume that the function \(u(t_{n+1})\) can be expanded formally in a Taylor series about \(t_{n}\) as follows,
\[u(t_{n+1})=u(t_{n})+\Delta t\left.\frac{du(t)}{dt}\right|_{t=t_{n}}+\frac{\Delta t^{2}}{2!}\left.\frac{d^{2}u(t)}{dt^{2}}\right|_{t=t_{n}}+\frac{\Delta t^{3}}{3!}\left.\frac{d^{3}u(t)}{dt^{3}}\right|_{t=t_{n}}+\cdots, \tag{B.1}\]
where
\[\begin{split}\frac{d^{2}u(t)}{dt^{2}}&=c\frac{du(t)}{dt}+\frac{dF(u(t),t)}{dt}\\ &=c^{2}u(t)+cF(u(t),t)+\frac{dF(u(t),t)}{dt},\end{split} \tag{B.2a}\]
\[\begin{split}\frac{d^{3}u(t)}{dt^{3}}&=c\frac{d^{2}u(t)}{dt^{2}}+\frac{d^{2}F(u(t),t)}{dt^{2}}\\ &=c^{3}u(t)+c^{2}F(u(t),t)+c\frac{dF(u(t),t)}{dt}+\frac{d^{2}F(u(t),t)}{dt^{2}},\end{split} \tag{B.2b}\]
\[\vdots\]
\[\begin{split}\frac{d^{m}u(t)}{dt^{m}}&=c\frac{d^{m-1}u(t)}{dt^{m-1}}+\frac{d^{m-1}F(u(t),t)}{dt^{m-1}}\\ &=c^{m}u(t)+c^{m-1}F(u(t),t)+c^{m-2}\frac{dF(u(t),t)}{dt}+\cdots+c\frac{d^{m-2}F(u(t),t)}{dt^{m-2}}+\frac{d^{m-1}F(u(t),t)}{dt^{m-1}}.\end{split} \tag{B.2c}\]
For the ETD1 method (5.2), expand \(u(t_{n+1})\) utilizing (B.1), and substitute the Taylor series expansion of the exponential function \(e^{c\Delta t}\) to deduce
\[u(t_{n+1})=u(t_{n})\left(1+c\Delta t+\frac{c^{2}\Delta t^{2}}{2!}+\cdots\right)+F(u(t_{n}),t_{n})\left(\Delta t+\frac{c\Delta t^{2}}{2!}+\cdots\right)+L.T.E_{1}.\]
Comparing this with the exact expansion (B.1), rewritten using (B.2a), the terms of order \(\Delta t^{0}\) and \(\Delta t\) cancel, and the leading remainder is
\[L.T.E_{1}=\frac{\Delta t^{2}}{2}\left.\frac{dF(u(t),t)}{dt}\right|_{t=t_{n}}+O(\Delta t^{3}),\]
which is (5.28a). An analogous expansion of the IFEULER method (5.4) gives (5.28b).
# Comparison of Different Methods for Computing Lyapunov Exponents

Karlheinz Geist, Ulrich Parlitz and Werner Lauterborn

Institut für Angewandte Physik, Technische Hochschule Darmstadt, Schlossgartenstrasse 7, D-6100 Darmstadt

###### Abstract

Different discrete and continuous methods for computing the Lyapunov exponents of dynamical systems are compared for their efficiency and accuracy. All methods are based either on the QR or on the singular value decomposition.
The relationship between the discrete methods is discussed in terms of the iteration algorithms and the decomposition procedures used. We give simple derivations of the differential equations for continuous methods proposed recently and show that they cannot be recommended because of their long computation time and numerical instabilities. The methods are tested with the damped and driven Toda chain and the driven van der Pol oscillator.

January 25, 1990

## 1 Introduction

The main aim of this paper is to compare different approaches for computing the spectrum of Lyapunov exponents that have been proposed during the last years.[1]-[7] We consider discrete (\(t\in\mathbf{Z}\)) or continuous (\(t\in\mathbf{R}\)) dynamical systems (\(M\), \(\phi\)) that are defined by a diffeomorphic flow map acting on an \(m\)-dimensional state space \(M\):
\[\phi^{t}:M\to M\;,\qquad\mathbf{x}\mapsto\phi^{t}(\mathbf{x})\;. \tag{1}\]
A continuous dynamical system is usually given by an ordinary differential equation
\[\dot{\mathbf{y}}=v(\mathbf{y})\;,\qquad\mathbf{y}=\mathbf{y}(\mathbf{x};\,t)=\phi^{t}(\mathbf{x})\in M\;,\quad t\in\mathbf{R}\;. \tag{2}\]
The computation of the Lyapunov exponents is based on the linearized flow map
\[D_{x}\phi^{t}:T_{x}M\to T_{\phi^{t}(\mathbf{x})}M\;,\qquad\mathbf{u}\mapsto D_{x}\phi^{t}(\mathbf{u})\;. \tag{3}\]
With respect to the orthonormal standard basis \(\{\mathbf{e}_{1},\,\cdots,\,\mathbf{e}_{m}\}\) in the tangent spaces \(T_{x}M\) and \(T_{\phi^{t}(\mathbf{x})}M\) the linearized flow map \(D_{x}\phi^{t}\) is given as the invertible \(m\times m\) flow matrix \(Y=Y(\mathbf{x};\,t)\). For discrete dynamical systems \(Y\) is obtained as the product of the Jacobi matrices of the map \(\phi^{t}\) (\(t=1\)) at the successive orbit points \(\mathbf{x}_{j}:=\phi^{j}(\mathbf{x})\) (\(1\leq j\leq n\)).
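In one dimension this product of Jacobi matrices reduces to a product of scalar derivatives, so the single Lyapunov exponent is the mean of \(\ln|f'|\) along the orbit. As a minimal illustration (our own, not one of the paper's test systems), the logistic map \(f(x)=4x(1-x)\) has the known exponent \(\ln 2\):

```python
import math

def logistic_lyapunov(x0=0.123, n=200_000, burn=1_000):
    """Lyapunov exponent of f(x) = 4x(1-x) as the mean of ln|f'(x_j)|.

    In one dimension the flow matrix Y is just the product of the
    derivatives f'(x_j) along the orbit, so (1/n) ln|Y| is this mean.
    """
    x = x0
    for _ in range(burn):                        # discard the transient
        x = 4.0 * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(4.0 - 8.0 * x))      # |f'(x)| = |4 - 8x|
        x = 4.0 * x * (1.0 - x)
    return acc / n
```

The estimate converges slowly (like \(1/\sqrt{n}\)) but reliably toward \(\ln 2\approx 0.693\); the higher-dimensional analogue is exactly the QR bookkeeping discussed below.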
When dealing with continuous systems the associated matrix variational equations
\[\dot{Y}=JY\;,\qquad Y(\mathbf{x};0)=I \tag{4}\]
have to be integrated simultaneously with the differential equations (2) in order to obtain \(D_{x}\phi^{t}\) as the \(m\times m\) matrix \(Y\). Here \(J=Dv(\mathbf{y}(\mathbf{x};\,t))=\big((\partial v_{i}/\partial y_{j})|_{\mathbf{y}(\mathbf{x};\,t)}\big)\) denotes the Jacobi matrix of partial derivatives of the vector field \(v\) at the point \(\mathbf{y}(\mathbf{x};\,t)\). The initial condition of the differential equation (4) in this case is the identity matrix \(I\). The _Lyapunov exponents_ \(\lambda_{i}\) are given by the logarithms of the eigenvalues \(\mu_{i}\) (\(1\leq i\leq m\)) of the positive and symmetric matrix
\[\Lambda_{\mathbf{x}}:=\lim_{t\to\infty}\left[\,Y(\mathbf{x};\,t)^{\mathrm{tr}}\,Y(\mathbf{x};\,t)\right]^{1/2t}\,, \tag{5}\]
where \(Y(\mathbf{x};\,t)^{\mathrm{tr}}\) denotes the transpose of \(Y(\mathbf{x};\,t)\). The existence of \(\Lambda_{\mathbf{x}}\) for \(\rho\)-almost all \(\mathbf{x}\in M\) (\(\rho\) denotes an ergodic \(\phi\)-invariant probability measure on the state space \(M\)) is based on the multiplicative ergodic theorem proved by Oseledec in 1968.[8] The Lyapunov exponents are \(\rho\)-almost everywhere constant and describe the way nearby trajectories converge or diverge in the state space of a dynamical system by measuring the mean logarithmic growth rates of tangent vectors.

A similar derivation yields the desired differential equations when only \(k\) (\(k<m\)) Lyapunov exponents are to be computed. In this case rectangular \(m\times k\) matrices \(Q\) are considered and the matrix identity \(QQ^{\rm tr}=I\) cannot be used anymore. Therefore this second approach leads to more complicated differential equations than the first one. We discuss the problems that occur when these algorithms are used.
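Definitions (4) and (5) can be checked directly on a linear vector field \(\dot{\mathbf{y}}=A\mathbf{y}\), for which \(J=A\) is constant and the exponents are the real parts of the eigenvalues of \(A\). A small self-contained sketch (our own hypothetical example, not from the paper): integrate \(\dot{Y}=AY\) with classical RK4 and extract \(\lambda_{i}\) from the eigenvalues of \(Y^{\rm tr}Y\) as in (5).

```python
import math

# Hypothetical 2x2 example: the upper-triangular A has eigenvalues -1 and -2,
# so the Lyapunov exponents of ydot = A y are -1 and -2.
A = [[-1.0, 2.0], [0.0, -2.0]]

def matmul(X, Z):
    return [[sum(X[i][k] * Z[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add_scaled(X, Z, s):
    return [[X[i][j] + s * Z[i][j] for j in range(2)] for i in range(2)]

def rk4_step(Y, dt):
    """One classical RK4 step for the variational equation Ydot = A Y."""
    k1 = matmul(A, Y)
    k2 = matmul(A, add_scaled(Y, k1, dt / 2))
    k3 = matmul(A, add_scaled(Y, k2, dt / 2))
    k4 = matmul(A, add_scaled(Y, k3, dt))
    return [[Y[i][j] + dt / 6 * (k1[i][j] + 2 * k2[i][j] + 2 * k3[i][j] + k4[i][j])
             for j in range(2)] for i in range(2)]

def lyapunov_from_flow(t_end=30.0, dt=0.01):
    Y = [[1.0, 0.0], [0.0, 1.0]]                  # Y(x; 0) = I as in Eq. (4)
    for _ in range(int(t_end / dt)):
        Y = rk4_step(Y, dt)
    # Eigenvalues mu_i of Y^tr Y (2x2 closed form); lambda_i -> ln(mu_i)/(2t), Eq. (5)
    G = matmul([list(row) for row in zip(*Y)], Y)  # G = Y^tr Y
    tr = G[0][0] + G[1][1]
    dY = Y[0][0] * Y[1][1] - Y[0][1] * Y[1][0]     # det Y, so det G = (det Y)^2
    mu1 = (tr + math.sqrt(max(tr * tr - 4 * dY * dY, 0.0))) / 2
    mu2 = dY * dY / mu1                            # avoids cancellation in det G
    return math.log(mu1) / (2 * t_end), math.log(mu2) / (2 * t_end)
```

At finite \(t\) the estimates carry an \(O(1/t)\) correction; the example also shows the practical problem discussed below: the entries of \(Y^{\rm tr}Y\) already span fifty orders of magnitude at \(t=30\), which is why direct use of (5) collapses numerically for longer times.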
In § 4 another method for computing the spectrum of Lyapunov exponents is considered that is based on the singular value decomposition (SVD) of the flow matrix \(Y\). In contrast to the discrete versions of the QR algorithm discussed in § 3.1, for the SV decomposition no iterative procedure is known to the authors that avoids the typical numerical collapse. Therefore we present here only the continuous method introduced by Greene and Kim[10] with slight modifications that are necessary to avoid numerical overflow problems. It shows the same numerical inefficiency as the continuous methods based on the QR decomposition. A further disadvantage of this method is the fact that its differential equations become singular when the Lyapunov spectrum to be computed is degenerate, i.e., \(\lambda_{i}\simeq\lambda_{i+1}\) for at least one \(1\leq i\leq m-1\), which is the normal case for periodic and quasiperiodic attractors (see, e.g., Geist and Lauterborn[14, 15]). The continuous methods based on the SV decomposition are therefore not suitable for the computation of Lyapunov diagrams like those shown in Refs. [14], [15]. We conclude with a final discussion of our experiences with the different methods.

## 2 Matrix decompositions

### 2.1 Singular value decomposition

Let
\[Y=UFV^{\rm tr} \tag{7}\]
be the _singular value decomposition_ (SVD) of \(Y=Y(\mathbf{x};t)\) into the product of the orthogonal matrices \(U\) and \(V\) and the diagonal matrix \(F=\mathrm{diag}(\sigma_{1}(t),\cdots,\sigma_{m}(t))\).[16] The diagonal elements \(\sigma_{i}(t)\) (\(1\leq i\leq m\)) of \(F\) are called the _singular values_ of \(Y\). The SVD is unique up to permutations of corresponding columns, rows and diagonal elements of the matrices \(U\), \(V\) and \(F\).
In those cases where all singular values are different a unique decomposition can be achieved by the additional request of a strictly monotonically decreasing singular value spectrum, i.e., \(\sigma_{1}(t)>\sigma_{2}(t)>\cdots>\sigma_{m}(t)\). Multiplying Eq. (7) with the transpose \(Y^{\rm tr}=VFU^{\rm tr}\) from the left shows that the squares of the singular values \(\sigma_{i}(t)\) of \(Y\) are the eigenvalues of the matrix \(Y^{\rm tr}\,Y\) (see, e.g., Lorenz[17]). Therefore Eq. (5) implies the relation
\[\lambda_{i}=\ln\mu_{i}=\lim_{t\to\infty}\ln[\sigma_{i}^{2}(t)]^{1/2t}=\lim_{t\to\infty}\frac{1}{t}\ln[\sigma_{i}(t)] \tag{8}\]
between the Lyapunov exponents \(\lambda_{i}\), the eigenvalues \(\mu_{i}\) of \(\Lambda_{\mathbf{x}}\) and the singular values \(\sigma_{i}(t)\) (\(1\leq i\leq m\)). The singular value decomposition permits an impressive geometric illustration of the meaning of the Lyapunov exponents: multiplying Eq. (7) with \(V\) from the right shows that \(Y\) maps the unit sphere onto an ellipsoid whose principal axes have the lengths \(\sigma_{i}(t)\); the mean exponential growth rates of the principal axes give the Lyapunov exponents.

### 2.2 QR decomposition

Another way to look at the Lyapunov spectrum is to ask how the volumes \(V_{k}\) of \(k\)-dimensional parallelepipeds \([\mathbf{P}^{1}(t),\cdots,\mathbf{P}^{k}(t)]\) (\(1\leq k\leq m\)) in the \(m\)-dimensional tangent space \(T_{\phi^{t}(\mathbf{x})}M\) grow (or shrink) in time. The axes of the parallelepipeds are given by \(\mathbf{P}^{i}(t):=D_{x}\phi^{t}(\mathbf{O}^{i})=Y\mathbf{O}^{i}\), where \(\{\mathbf{O}^{1},\cdots,\mathbf{O}^{m}\}\) denotes an orthonormal basis of \(T_{x}M\) chosen at random. The orthonormal basis vectors \(\mathbf{O}^{i}\) (\(1\leq i\leq m\)) define orthogonal \(m\times k\) matrices \(O:=(\mathbf{O}^{1},\cdots,\mathbf{O}^{k})\) (\(1\leq k\leq m\)).
It turns out that the sum of the first \(k\) Lyapunov exponents \(\lambda_{i}\) (\(1\leq i\leq k\leq m\)) gives the desired growth rates
\[\lambda^{k+}=\lim_{t\to\infty}\frac{1}{t}\ln[V_{k}]=\sum_{i=1}^{k}\lambda_{i}\;, \tag{10}\]
when the Lyapunov exponents constitute a monotonically decreasing sequence (see, e.g., Benettin et al.[4]). The volume \(V_{k}\) (\(1\leq k\leq m\)) can be computed with the help of the uniquely defined _QR decomposition_
\[P=QR=(\mathbf{Q}^{1},\cdots,\mathbf{Q}^{k})\left(\begin{array}{cccc}R_{11}&\ast&\cdots&\ast\\ 0&R_{22}&\ddots&\vdots\\ \vdots&\ddots&\ddots&\ast\\ 0&\cdots&0&R_{kk}\end{array}\right) \tag{11}\]
of the \(m\times k\) parallelepiped matrix \(P:=(\mathbf{P}^{1},\cdots,\mathbf{P}^{k})\) into the product of an orthogonal \(m\times k\) matrix \(Q\) (\(\mathbf{Q}^{i\,\rm tr}\mathbf{Q}^{j}=\delta_{ij}\) for \(1\leq i,j\leq k\)) and an upper triangular \(k\times k\) matrix \(R\) with positive diagonal elements \(R_{ii}>0\) (\(1\leq i\leq k\)) (see Fig. 1(b)):
\[V_{k}=\prod_{i=1}^{k}R_{ii}\;. \tag{12}\]
Substituting the volumes \(V_{k}\) (\(1\leq k\leq m\)) in Eq. (10) by the products (12) yields:
\[\lambda_{i}=\lim_{t\to\infty}\frac{1}{t}\ln[R_{ii}]\;.\quad(1\leq i\leq m) \tag{13}\]

Fig. 1: Geometric illustration of (a) the SV and (b) the QR decomposition.

For almost all orthonormal bases \(\{\mathbf{O}^{1},\,\cdots,\,\mathbf{O}^{m}\}\) the diagonal elements \(R_{ii}\) (\(1\leq i\leq m\)) of \(R\) are, in the limit \(t\to\infty\), ordered according to their size. In the following we take \(\mathbf{O}^{i}:=\mathbf{e}_{i}\) as it is done usually. In particular cases however this special choice of \(\{\mathbf{O}^{1},\,\cdots,\,\mathbf{O}^{m}\}\) can destroy the asymptotic monotony of the \(R_{ii}\) (\(1\leq i\leq m\)) (for an example see § 5).
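The relation \(\lambda_{i}=\lim_{t\to\infty}\frac{1}{t}\ln[R_{ii}]\) above is the basis of the discrete methods: push an orthonormal frame through the Jacobian, re-orthogonalize with a QR step, and accumulate the \(\ln R_{ii}\). As a minimal illustration (our own, using the Hénon map rather than the systems tested in § 5), a Gram-Schmidt QR step per iteration yields both exponents; since \(|\det J|=b\) at every point, \(\lambda_{1}+\lambda_{2}=\ln b\) serves as a consistency check.

```python
import math

def henon_lyapunov(a=1.4, b=0.3, n=20_000):
    """Both Lyapunov exponents of the Henon map via stepwise QR (Gram-Schmidt).

    Each step: map the orthonormal frame (q1 q2) through the Jacobian,
    re-factor it as QR, and accumulate ln(R_ii).
    """
    x, y = 0.1, 0.1
    for _ in range(1_000):                     # transient onto the attractor
        x, y = 1 - a * x * x + y, b * x
    q1, q2 = (1.0, 0.0), (0.0, 1.0)            # orthonormal start frame
    s1 = s2 = 0.0
    for _ in range(n):
        jac = ((-2 * a * x, 1.0), (b, 0.0))    # Jacobian of the Henon map
        p1 = (jac[0][0] * q1[0] + jac[0][1] * q1[1],
              jac[1][0] * q1[0] + jac[1][1] * q1[1])
        p2 = (jac[0][0] * q2[0] + jac[0][1] * q2[1],
              jac[1][0] * q2[0] + jac[1][1] * q2[1])
        # Gram-Schmidt QR of the 2x2 parallelepiped matrix P = (p1 p2)
        r11 = math.hypot(p1[0], p1[1])
        q1 = (p1[0] / r11, p1[1] / r11)
        r12 = q1[0] * p2[0] + q1[1] * p2[1]
        w = (p2[0] - r12 * q1[0], p2[1] - r12 * q1[1])
        r22 = math.hypot(w[0], w[1])
        q2 = (w[0] / r22, w[1] / r22)
        s1 += math.log(r11)
        s2 += math.log(r22)
        x, y = 1 - a * x * x + y, b * x
    return s1 / n, s2 / n
```

For the standard parameters the largest exponent comes out near the commonly quoted value 0.42, and the sum reproduces \(\ln 0.3\) to machine precision, which is a useful sanity check for any QR-based implementation.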
## 3 QR decomposition based methods

### 3.1 Discrete methods

The discretization \(t_{j}:=j\cdot\Delta t\) (\(0\leq j\leq n\)) of the continuous time variable \(t\) (\(t=n\cdot\Delta t\)) enables the stepwise computation of the QR decomposition of the \(m\times k\) parallelepiped matrix \(P\) (\(1\leq k\leq m\)), where \(\Delta t\) has to be chosen sufficiently small to avoid the numerical problems mentioned above. The different algorithms for computing the QR decomposition of \(P\) for discretized continuous systems or iterated maps are summarized in the following diagram: (14) The diagram (14) is commutative, since the flow matrix \(Y=Y^{n-1}\cdots Y^{0}\) can be expressed as the product of the matrices \(Y^{j}\) at the successive orbit points \(\mathbf{x}_{j}:=\phi^{t_{j}}(\mathbf{x}_{0})\in M\) (\(0\leq j\leq n-1\)) by the chain rule, and \(P=QR\) and \(Y^{j-1}Q^{j-1}=Q^{j}R^{j-1}\) (\(1\leq j\leq n\)) by definition of the QR decomposition (11). The diagonal "maps" \(P^{j}\) (\(0\leq j\leq n-1\)) are only important for continuous dynamical systems as will be discussed below. In that case the matrices \(Y^{j}\) and \(P^{j}\) (\(0\leq j\leq n-1\)) can be obtained by integrating the matrix variational equations (4) with the \(m\times m\) unit matrix \(I\) (\(I_{ij}=\delta_{ij}\) for \(1\leq i,j\leq m\)) or the orthogonal \(m\times k\) matrices \(Q^{j}\) as initial conditions.

### 3.2 Continuous methods

#### 3.2.1 Differential equations for the complete Lyapunov spectrum

Substituting the QR decomposition \(Y=QR\) into the matrix variational equations (4) gives
\[\dot{Q}R+Q\dot{R}=JQR\;, \tag{19}\]
and multiplying (19) with \(Q^{\rm tr}\) from the left and with \(R^{-1}\) from the right yields
\[Q^{\rm tr}\dot{Q}-Q^{\rm tr}JQ=-\dot{R}R^{-1}\,. \tag{20}\]
The right-hand side of Eq. (20) is an upper triangular matrix. The components of the skew-symmetric matrix
\[S:=Q^{\rm tr}\dot{Q} \tag{21}\]
are therefore given by the equation
\[S_{ij}=\left\{\begin{array}{ll}(Q^{\rm tr}JQ)_{ij}\,,&i>j\\ 0\,,&i=j\\ -(Q^{\rm tr}JQ)_{ji}\,.&i<j\end{array}\right. \tag{22}\]
The matrix \(S\) may be used to define the desired differential equation for \(Q\):
\[\dot{Q}=QS\,.
 \tag{23}\]
By (20) and (22) the equations for the diagonal elements of \(R\) are given by
\[\frac{\dot{R}_{ii}}{R_{ii}}=(Q^{\rm tr}JQ)_{ii}\,.\quad(1\leq i\leq m) \tag{24}\]
To determine the Lyapunov exponents \(\lambda_{i}\) only the logarithms \(\rho_{i}:=\ln(R_{ii})\) of the diagonal elements of \(R\) are of interest. According to (24) they fulfill the equations
\[\dot{\rho}_{i}=(Q^{\rm tr}JQ)_{ii}\,.\quad(1\leq i\leq m) \tag{25}\]
Thus to compute the spectrum of Lyapunov exponents only Eqs. (23) and (25) have to be solved simultaneously with the equations of motion (2). The quantities \(\rho_{i}(t)/t\) converge to the Lyapunov exponents \(\lambda_{i}\) (\(1\leq i\leq m\)) in the limit \(t\to\infty\).

#### 3.2.2 Differential equations for the largest \(k\) Lyapunov exponents

In the derivation of differential equation (23) for the orthogonal matrix \(Q\) from the definition (21) of \(S\) we have used the matrix identity \(QQ^{\rm tr}=I\). This is not possible in the case where only the largest \(k\) Lyapunov exponents are to be computed. Then \(R\) is a \(k\times k\) matrix and \(Q\) is an \(m\times k\) matrix. Therefore the identity \(Q^{\rm tr}Q=I\) holds but not \(QQ^{\rm tr}=I\). In the following a derivation of the differential equations for \(Q\) and the diagonal elements \(R_{ii}\) (\(1\leq i\leq k\)) of \(R\) is given where the identity \(QQ^{\rm tr}=I\) is not used. Equation (19) directly implies the differential equation
\[\dot{Q}=JQ-QW \tag{26}\]
for \(Q\), where \(W:=\dot{R}R^{-1}\) is a \(k\times k\) upper triangular matrix. From (26) it follows:
\[W=Q^{\rm tr}JQ-Q^{\rm tr}\dot{Q}=Q^{\rm tr}JQ-S\,. \tag{27}\]
As \(W\) is upper triangular and \(S\) skew-symmetric it is easy to see that the equations
\[W_{ij}=\left\{\begin{array}{ll}(Q^{\rm tr}JQ)_{ij}+(Q^{\rm tr}JQ)_{ji}\,,&i<j\\ (Q^{\rm tr}JQ)_{ii}\,,&i=j\\ 0\,,&i>j\end{array}\right. \tag{28}\]
for the components of \(W\) hold.
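The complete-spectrum equations (22), (23) and (25) can be integrated with any standard ODE solver. A compact sketch (our own toy example, not one of the paper's test systems): for a constant symmetric Jacobian \(J\) the exponents are simply the eigenvalues of \(J\), so the result is easy to verify.

```python
import math

# Our own toy example: constant symmetric "Jacobian" J; its eigenvalues
# +1 and -1 are the Lyapunov exponents of the linear flow Ydot = J Y.
J = ((0.0, 1.0), (1.0, 0.0))

def rhs(s):
    """Right-hand side of Eqs. (22), (23) and (25) for m = 2.
    State s = (Q00, Q01, Q10, Q11, rho_1, rho_2)."""
    Q = ((s[0], s[1]), (s[2], s[3]))
    # M = Q^tr J Q
    M = [[sum(Q[k][i] * J[k][l] * Q[l][j] for k in range(2) for l in range(2))
          for j in range(2)] for i in range(2)]
    s21 = M[1][0]                        # Eq. (22): S_21 = (Q^tr J Q)_21, S_12 = -S_21
    # Eq. (23): Qdot = Q S with S = [[0, -s21], [s21, 0]]
    return (Q[0][1] * s21, -Q[0][0] * s21,
            Q[1][1] * s21, -Q[1][0] * s21,
            M[0][0], M[1][1])            # Eq. (25): rhodot_i = (Q^tr J Q)_ii

def rk4_step(s, dt):
    k1 = rhs(s)
    k2 = rhs(tuple(x + dt / 2 * k for x, k in zip(s, k1)))
    k3 = rhs(tuple(x + dt / 2 * k for x, k in zip(s, k2)))
    k4 = rhs(tuple(x + dt * k for x, k in zip(s, k3)))
    return tuple(x + dt / 6 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(s, k1, k2, k3, k4))

def exponents(t_end=50.0, dt=0.01):
    s = (1.0, 0.0, 0.0, 1.0, 0.0, 0.0)   # Q = I, rho = 0
    for _ in range(int(t_end / dt)):
        s = rk4_step(s, dt)
    return s[4] / t_end, s[5] / t_end    # rho_i(t)/t -> lambda_i
```

Since \(S\) is skew-symmetric, the flow (23) keeps \(Q\) orthogonal up to the integration error, and \(\rho_{i}(t)/t\) approaches the ordered exponents with an \(O(1/t)\) transient.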
Knowing \(W\) as a function of \(J\) and \(Q\), the differential equation (26) can be solved to obtain \(Q(t)\). The differential equations for the logarithms \(\rho_{i}(t)\) of the diagonal elements \(R_{ii}\) of \(R\) are given in terms of \(W\):
\[\dot{\rho}_{i}=W_{ii}\;.\quad(1\leq i\leq k) \tag{29}\]
For \(k\) larger than a critical value the computation of the largest \(k\) exponents is more expensive than the determination of the complete Lyapunov spectrum (see Fig. 2).

Figure 2: Operations count for the two continuous QR methods for computing the Lyapunov exponents given in § 3.2. In the Eqs. (26), (28) and (29) for the \(m\times k\) case \(n_{\rm add}(m,k):=[2m^{2}k+mk^{2}+mk^{2}-k^{2}-k]/2\) additions and \(n_{\rm mult}(m,k):=[mk(2m+3k+k^{2})]/2\) multiplications occur. In the \(m\times m\) case (Eqs. (22), (23) and (25)) \(n_{\rm add}(m):=[5m^{3}-4m^{2}-m]/2\) additions and \(n_{\rm mult}(m):=[5m^{3}+m^{2}]/2\) multiplications are to be computed. Thus for \(k\) larger than the critical value \(k_{\rm add}(m):=\max\{k\in\mathbf{N}\,|\,n_{\rm add}(m,k)\leq n_{\rm add}(m)\}\) (\(k_{\rm mult}(m):=\max\{k\in\mathbf{N}\,|\,n_{\rm mult}(m,k)\leq n_{\rm mult}(m)\}\)) the computation of the largest \(k\) exponents needs more additions (multiplications) than the determination of the complete spectrum. The dependence of the quantities \(k_{\rm add}\) and \(k_{\rm mult}\) on the state space dimension \(m\) is shown in the figure.

## 4 Singular value decomposition based continuous method

Similar to the continuous QR method we will now formulate differential equations for the quantities that are needed to compute the Lyapunov spectrum in terms of the singular value decomposition (SVD). To avoid computational difficulties with the exponentially increasing or decreasing diagonal elements \(\sigma_{i}\) (\(1\leq i\leq m\)) of the matrix \(F\) we consider the diagonal matrix
\[E:=\ln(F)=\mathop{\rm diag}\nolimits(\varepsilon_{1},\cdots,\varepsilon_{m}) \tag{30}\]
with elements \(\varepsilon_{i}:=\ln(\sigma_{i})\) (\(1\leq i\leq m\)). Differentiation with respect to time yields
\[\dot{E}=F^{-1}\dot{F}=F^{-1}\dot{U}^{\rm tr}UF+F^{-1}U^{\rm tr}JUF+V^{\rm tr}\dot{V}\;, \tag{31}\]
where the derivative \(\dot{F}\) of \(F\) is given by substituting the flow matrix \(Y\) in the matrix variational equations (4) by its singular value decomposition \(Y=UFV^{\rm tr}\) (7). To eliminate \(V\) in Eq. (31) the sum \(\dot{E}+\dot{E}^{\rm tr}=2\dot{E}\) is computed, where the term \(V^{\rm tr}\dot{V}+\dot{V}^{\rm tr}V\) vanishes due to the orthogonality of \(V\). With the abbreviations
\[A:=U^{\rm tr}\dot{U}=-\dot{U}^{\rm tr}U\;,\qquad B:=-F^{-1}AF\;,\qquad C:=U^{\rm tr}JU\;,\qquad D:=F^{-1}CF \tag{32}\]
this yields the following differential equation for \(E\):
\[2\dot{E}=B+B^{\rm tr}+D+D^{\rm tr}. \tag{33}\]
The right-hand side of Eq. (33) depends on the matrices \(J\), \(F=\exp(E)\), \(U\) and \(\dot{U}\). To separate the time derivatives \(\dot{E}\) and \(\dot{U}\) of \(E\) and \(U\) the components of the matrices \(B\) and \(D\) have to be considered. They are given by the equations
\[B_{ij}=-A_{ij}\frac{\sigma_{j}}{\sigma_{i}}\;,\qquad D_{ij}=C_{ij}\frac{\sigma_{j}}{\sigma_{i}}\;. \tag{34}\]
The orthogonality of \(U\) implies that \(A\) is skew-symmetric and thus \(B_{ii}=-A_{ii}=0\) (\(1\leq i\leq m\)). The diagonal elements \(\dot{\varepsilon}_{i}=\dot{\sigma}_{i}/\sigma_{i}\) of \(\dot{E}\) therefore fulfill the equation
\[\dot{\varepsilon}_{i}=C_{ii}\;, \tag{35}\]
which can be used to compute the quantities \(\varepsilon_{i}(t)/t\stackrel{{t\to\infty}}{{\longrightarrow}}\lambda_{i}\) (\(1\leq i\leq m\)). By means of the off-diagonal elements in Eq.
(33) the \(m(m-1)/2\) equations \[0=B_{ij}+B_{ji}+D_{ij}+D_{ji}=-\,A_{ij}\frac{\sigma_{j}}{\sigma_{i}}-A_{ji}\frac{\sigma_{i}}{\sigma_{j}}+C_{ij}\frac{\sigma_{j}}{\sigma_{i}}+C_{ji}\frac{\sigma_{i}}{\sigma_{j}} \tag{36}\] are obtained. Using the skew symmetry of \(A\) they can be solved for the off-diagonal elements of \(A\): \[A_{ij}=\frac{C_{ij}+C_{ji}h_{ij}}{1-h_{ij}}\;,\;\;\;(i>j) \tag{37}\] with \[h_{ij}:=\exp(2(\varepsilon_{i}-\varepsilon_{j}))\;, \tag{38}\] so that the time evolution of \(U\) is governed by the differential equation \[\dot{U}=UA\;. \tag{39}\] These equations diverge for (almost) degenerate Lyapunov spectra because \(\lambda_{i}=\lambda_{j}\) (\(1\leq i,j\leq m\), \(i\neq j\)) implies \(\lim_{t\to\infty}h_{ij}(t)=1\). For this reason and the fact that the continuous SVD method needs even more operations than the continuous QR method we have not investigated the \(m\times k\) case, although the SV decomposition is well defined for rectangular matrices, too.[16]

## 5 Numerical results

The implemented methods for computing Lyapunov exponents are given in Table I. Four of these methods are discrete QR methods where different iteration algorithms (Eqs. (17) and (18)) and different procedures for the QR decomposition are used. The GS method is based on the usual Gram-Schmidt orthonormalization procedure, whereas in the RGS method the orthogonalization is repeated (see Appendix A.1). In the case of the H1 and the H2 method Householder transformations are used for the QR decomposition. H1 is based on the treppen-iteration algorithm (17) whereas H2 is given by the diagonal algorithm (18). The continuous QR method CQR for the complete Lyapunov spectrum is implemented as described in § 3.2.1. A continuous singular value method CSV has also been tested for a special low-dimensional example, as will be discussed in the following.
The first dynamical system used to compare these methods is a damped and driven Toda chain of \(N=15\) unit masses \(m_{i}=1\) with periodic boundary conditions \(q_{0}=q_{N}\), \(q_{N+1}=q_{1}\) and the \([(2N-2)+1]\)-dimensional state space \(M=\mathbf{R}^{2N-2}\times S^{1}\): \[\begin{pmatrix}\dot{d}_{i}\\ \dot{v}_{i}\\ \dot{\theta}\end{pmatrix}=V\begin{pmatrix}d_{i}\\ v_{i}\\ \theta\end{pmatrix}:=\begin{pmatrix}v_{i}-v_{i+1}\\ \left[(K_{i-1}-K_{i})-(D_{i-1}-D_{i})+F_{i}\right]/m_{i}\\ \omega_{0}/2\pi\end{pmatrix}. \tag{40}\] Here \(q_{i}\) denotes the elongation of the \(i\)-th mass \(m_{i}\) from its equilibrium position, \(v_{i}:=\dot{q}_{i}\) its velocity, \(d_{i}:=q_{i}-q_{i+1}\) the relative elongations of adjacent masses, \(K_{i}:=\exp(d_{i})-1\) the exponential restoring force of the spring between the \(i\)-th and \((i+1)\)-th mass, \(D_{i}:=(v_{i+1}-v_{i})\,d\) the internal dissipation force proportional to the relative velocities of the masses with damping coefficient \(d\), and \(F_{i}(t):=a\,\sin(\omega_{0}t)\,\delta_{i,1}\) a single-frequency external force with driving amplitude \(a\) and frequency \(\omega_{0}\). This system possesses quasiperiodic solutions with two and three incommensurate frequencies as well as high-dimensional strange attractors.[14, 15, 21] Their Lyapunov spectra are computed to demonstrate the properties of the different algorithms.

\begin{table} \begin{tabular}{c|c|c|c|c} \hline Method & Type & Algorithm & \multicolumn{2}{c}{Decomposition of \(Y\)} \\ & & & Type & Method \\ \hline GS & discrete & (18) & QR & (A.1) \\ \hline RGS & discrete & (18) & QR & (A.2) \\ \hline H1 & discrete & (18) & QR & (A.3)\(\sim\)(A.6) \\ \hline H2 & discrete & (17) & QR & (A.3)\(\sim\)(A.5) \\ \hline CQR & continuous & (22), (23), (25) & QR & — \\ \hline CSV & continuous & (32), (35), (37)\(\sim\)(39) & SV & — \\ \hline \end{tabular} \end{table} Table I: Classification of the implemented methods for computing Lyapunov exponents.

Figure 3(a) shows the temporal convergence of the numerical estimates of the complete spectrum of Lyapunov exponents computed with the GS method for a three-frequency quasiperiodic solution of the (\(N=15\))-chain (see Fig. 11 in Ref. [15]). Due to the oscillations connected with the structure of the attractor it takes a long time until they arrive at a sufficiently well-defined limit (Fig. 3(b)). The same Lyapunov spectrum has also been computed with the RGS, H1, H2 and CQR method. The corresponding results for the largest and smallest Lyapunov exponent after 500 and 2000 periods of the driving are given in Table 2. The differences between the discrete methods are much smaller than the fluctuations of the numerical estimates \(\lambda_{i}(n)\), which converge in the limit \(n\to\infty\) to the Lyapunov exponents \(\lambda_{i}\). Figure 4 shows the temporal convergence of the numerical estimates of the first, third and fifth Lyapunov exponent of a high-dimensional strange attractor of the Toda chain computed with the discrete methods of Table I. The Lyapunov exponents computed with the GS and the RGS method agree within the resolution of the plot. The differences between the other methods are of the order of the fluctuations of the \(\lambda_{i}(n)\).
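The reorthonormalization idea shared by the discrete QR methods above can be sketched in a few lines. As a hypothetical stand-in for the Toda chain we use the Hénon map (not one of the systems treated in the text), whose Jacobian is available in closed form; the function name and parameters are illustrative only:

```python
import numpy as np

def henon_lyapunov(n=50000, a=1.4, b=0.3):
    """Sketch of a discrete QR method for a map: the Jacobian is applied to
    an orthonormal frame Q, the product is re-decomposed as Q R, and the
    Lyapunov exponents are the time averages of log|R_ii|."""
    x, y = 0.1, 0.1
    Q = np.eye(2)
    s = np.zeros(2)
    for _ in range(n):
        J = np.array([[-2.0 * a * x, 1.0],   # Jacobian of the Henon map
                      [b, 0.0]])             # (x, y) -> (1 - a x^2 + y, b x)
        x, y = 1.0 - a * x * x + y, b * x
        Q, R = np.linalg.qr(J @ Q)           # reorthonormalize the frame
        s += np.log(np.abs(np.diag(R)))
    return s / n
```

Because \(|\det J| = b\) at every step, the two estimates must sum to \(\ln b\) up to rounding, which is a convenient consistency check on any such implementation.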
\begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline Method & \(\lambda_{1}\) (\(n=500\)) & \(\lambda_{1}\) (\(n=2000\)) & \(\lambda_{m}\) (\(n=500\)) & \(\lambda_{m}\) (\(n=2000\)) \\ \hline GS & \(-0.3878\times 10^{-4}\) & \(-0.7811\times 10^{-4}\) & \(-0.1803\) & \(-0.1812\) \\ \hline RGS & \(-0.3878\times 10^{-4}\) & \(-0.7811\times 10^{-4}\) & \(-0.1803\) & \(-0.1812\) \\ \hline H1 & \(-0.3880\times 10^{-4}\) & \(-0.7810\times 10^{-4}\) & \(-0.1803\) & \(-0.1812\) \\ \hline H2 & \(-0.3875\times 10^{-4}\) & \(-0.7803\times 10^{-4}\) & \(-0.1803\) & \(-0.1812\) \\ \hline CQR & \(-0.2218\times 10^{-4}\) & — & \(-0.1804\) & — \\ \hline \end{tabular} \end{table} Table 2: Numerical estimates of the Lyapunov exponents \(\lambda_{1}\) and \(\lambda_{m}\) after \(n=500\) and \(n=2000\) periods \(T_{0}=2\pi/\omega_{0}\) of the driving for the same attractor as in Fig. 3.

Figure 3: (a) Numerical estimates of the Lyapunov exponents for a \(T^{3}\)-solution of a damped and driven Toda chain of \(N=15\) masses at the driving amplitude \(a=3.5\), the driving frequency \(\omega_{0}=1.1237\) and the damping coefficient \(d=0.1\) as they develop with the number \(n\) of periods \(T_{0}:=2\pi/\omega_{0}\) of the driving. (b) The second and the third Lyapunov exponent show oscillations with periods \(T_{2}\simeq 1/\omega_{2}\simeq 3.4\) and \(T_{3}\simeq 1/\omega_{3}\simeq 69.4\), where \(1\), \(\omega_{2}\simeq 0.29786\) and \(\omega_{3}\simeq 0.01440\) are the normalized basic frequencies of the corresponding 3-dimensional double torus \(2\,T^{3}\) in the state space \(M\). For a more detailed investigation of this attractor see Ref. [15].
Table 3 shows the mean relative computation times. In consequence of the large number of operations the continuous QR method needs up to a factor 40 (!) more CPU-time than the discrete standard methods (see Table 3). Therefore the computations have been stopped after 500 periods of the driving, because this already took about three hours on a CRAY X-MP/24 with 64-bit arithmetic. Furthermore the differential equations for \(Q\) do not assure that \(Q\) remains orthogonal during its time evolution, and thus nonorthogonal perturbations due to round-off errors grow in time (see Fig. 5). To overcome this difficulty Greene and Kim[13] proposed to add an additional correction term \(\nu[Q^{\rm tr}Q-I]\) to the right-hand side of the differential equation (23) or (26). For the very special class of dynamical systems with time independent and symmetric Jacobi matrices \(J\) they showed that this method preserves the orthogonality of the matrix \(Q\). It should be noted that the necessity for excluding the growth of nonorthogonal perturbations leads to a further increase of the computation time.

In some periodically driven low-dimensional systems of the form \[\ddot{x}+g(x,\,\dot{x})=h(t)=h(t+T_{0})\;, \tag{41}\] e.g., the driven van der Pol oscillator,[23] \[g(x,\,\dot{x})=d(x^{2}-1)\,\dot{x}+x\;,\] \[h(t)=a\,\cos(\omega_{0}t)\;,\]

Figure 4: Numerical estimates of the Lyapunov exponents \(\lambda_{1}>\lambda_{3}>0>\lambda_{5}\) for a high-dimensional strange attractor of a damped and driven Toda chain of \(N=15\) masses (\(\omega_{0}=0.9\), \(a=6.0\) and \(d=0.1\); see Fig. 13 in Ref. [21]).
\begin{table} \begin{tabular}{c|c} Method & CPU-Time \\ GS & 1.00 \\ RGS & 1.02 \\ H1 & 1.07 \\ H2 & 1.17 \\ CQR & 40.00 \\ \end{tabular} \end{table} Table 3: Mean relative computation times of the discrete (GS, RGS, H1 and H2) and continuous (CQR) QR decomposition based methods.

\[\omega_{0}=2\pi/T_{0}\;, \tag{42}\] these problems may be overcome, because the QR decomposition of the flow matrix \(Y\) can be done explicitly, yielding: \[\left(\begin{array}{ccc}Y_{11}&Y_{12}&Y_{13}\\ Y_{21}&Y_{22}&Y_{23}\\ 0&0&1\end{array}\right)=\left(\begin{array}{ccc}\cos\alpha&-\sin\alpha&0\\ \sin\alpha&\cos\alpha&0\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}R_{11}&R_{12}&R_{13}\\ 0&R_{22}&R_{23}\\ 0&0&1\end{array}\right). \tag{43}\] In this case the orthogonal matrix \(Q\) may be parameterized by a single angle \(\alpha\) and the differential equation (23) or (26) for \(Q\) can be replaced by the differential equation \[\dot{\alpha}=-\sin^{2}\alpha-\left(\frac{\partial g}{\partial x}\cos\alpha+\frac{\partial g}{\partial\dot{x}}\sin\alpha\right)\cos\alpha \tag{44}\] for \(\alpha\). Therefore no problems with the orthogonality of \(Q\) can occur, but the computation time is still about a factor of two higher than for the discrete standard methods. Note that the simple form of Eq. (43) follows from the special choice \(O=I\) for the orthogonal matrix \(O\) (see § 2.2). It implies \(R_{33}(t)=1\), i.e., \(\lambda_{3}=0\). Thus only the nontrivial Lyapunov exponents \(\lambda_{i}=\lim_{t\to\infty}\ln[R_{ii}(t)]/t\) (\(i=1,2\)) are (automatically) ordered according to their size. This exceptional role of the trivial exponent occurs in connection with all periodically driven dynamical systems. In contrast to the QR decomposition (43) the singular value decomposition (7) of the flow matrix \(Y\) does not yield simple orthogonal matrices \(U\) and \(V\) that are parameterized (only) by a single angle.
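The right-hand side of Eq. (44) is nothing but the element \((Q^{\rm tr}JQ)_{21}\) of the projected Jacobian written out for the rotation \(Q(\alpha)\) of Eq. (43), and that identity is easy to verify numerically. In the sketch below, `gx` and `gxd` stand for the partial derivatives \(\partial g/\partial x\) and \(\partial g/\partial\dot{x}\) (treated as free test values, an assumption for illustration only):

```python
import numpy as np

def alpha_dot(alpha, gx, gxd):
    """Right-hand side of Eq. (44)."""
    c, s = np.cos(alpha), np.sin(alpha)
    return -s * s - (gx * c + gxd * s) * c

def qtr_j_q_21(alpha, gx, gxd):
    """Element (Q^tr J Q)_{21} for the rotation Q(alpha) and the Jacobian
    of the variational equations of system (41)."""
    c, s = np.cos(alpha), np.sin(alpha)
    Q = np.array([[c, -s], [s, c]])
    J = np.array([[0.0, 1.0], [-gx, -gxd]])
    return (Q.T @ J @ Q)[1, 0]
```

Both functions agree to machine precision for arbitrary arguments, which is the kind of spot check that quickly exposes sign errors in hand-derived angle equations.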
To compute the Lyapunov exponents for the complete three-dimensional system one has therefore to solve the general differential equation (39) for the components of the matrix \(U\). This leads to high computational costs and a loss of the orthogonality of \(U\) during the computation. These problems can be avoided by first reducing the dimension of the problem from three to two and then performing the singular value decomposition, because \(2\times 2\) orthogonal matrices can always be parameterized by a single angle. The reduction of the dimension results from the fact that the reduced two-dimensional set of

Figure 5: Check of the orthogonality of \(Q(\mathbf{x};t)\) for the strange attractor of Fig. 4. (a) Euclidean norm of the matrix \(Q^{\rm tr}Q-I\) and (b) divergence of the vector field \(V\) minus the sum of the Lyapunov exponents as they develop with the number \(n\) of periods of the driving. These two quantities have to vanish for all times because \(Q\) is orthogonal and \(\sum_{i=1}^{m}\lambda_{i}=\mathop{\rm div}V(\mathbf{x})=\mathop{\rm Tr}(J(\mathbf{x};t))=-2N\,d=-3.0\) has to be constant for all \(\mathbf{x}\in M\), \(t\in\mathbf{R}\) by Liouville's theorem (see, e.g., Arnol'd[22]).

variational equations \[\dot{\tilde{Y}}=\left(\begin{array}{cc}0&1\\ -\frac{\partial g}{\partial x}&-\frac{\partial g}{\partial\dot{x}}\end{array}\right)\tilde{Y}\;,\;\;\;\;\tilde{Y}(0)=I \tag{45}\] suffices to compute the two nontrivial Lyapunov exponents \(\lambda_{1}\) and \(\lambda_{2}\) (see Appendix B). The SV decomposition \(\tilde{Y}=\tilde{U}\tilde{F}\tilde{V}^{\rm tr}\) of the reduced flow matrix \(\tilde{Y}\) is given by \[\left(\begin{array}{cc}\tilde{Y}_{11}&\tilde{Y}_{12}\\ \tilde{Y}_{21}&\tilde{Y}_{22}\end{array}\right)=\left(\begin{array}{cc}\cos\beta&-\sin\beta\\ \sin\beta&\cos\beta\end{array}\right)\left(\begin{array}{cc}\sigma_{1}&0\\ 0&\sigma_{2}\end{array}\right)\left(\begin{array}{cc}\cos\gamma&\sin\gamma\\ -\sin\gamma&\cos\gamma\end{array}\right). \tag{46}\] Substituting \(\tilde{U}\) in Eq.
(39) yields \[\dot{\beta}\left(\begin{array}{cc}-\sin\beta&-\cos\beta\\ \cos\beta&-\sin\beta\end{array}\right)=\left(\begin{array}{cc}\cos\beta&-\sin\beta\\ \sin\beta&\cos\beta\end{array}\right)A\;, \tag{47}\] i.e., \[\dot{\beta}=A_{21}=-A_{12}\;. \tag{48}\] The desired differential equations for the angle \(\beta\) and the diagonal elements of \(E=\ln(F)\) are therefore given as \[\dot{\beta}=\frac{C_{21}+C_{12}h_{21}}{1-h_{21}}\;, \tag{49}\] \[\dot{\varepsilon}_{1}=C_{11}\;,\] \[\dot{\varepsilon}_{2}=C_{22}\;, \tag{50}\] with \[h_{21}=\exp(2(\varepsilon_{2}-\varepsilon_{1}))\;\;\;(\stackrel{{ t\to\infty}}{{\longrightarrow}}0\;\;\;{\rm if}\;\;\;\lambda_{1}>\lambda_{2}) \tag{51}\] and \[C_{11}=\sin\beta\cos\beta-\left(\frac{\partial g}{\partial x}\cos\beta+\frac{\partial g}{\partial\dot{x}}\sin\beta\right)\sin\beta\;,\] \[C_{12}=\cos^{2}\beta+\left(\frac{\partial g}{\partial x}\sin\beta-\frac{\partial g}{\partial\dot{x}}\cos\beta\right)\sin\beta\;,\] \[C_{21}=-\sin^{2}\beta-\left(\frac{\partial g}{\partial x}\cos\beta+\frac{\partial g}{\partial\dot{x}}\sin\beta\right)\cos\beta\;,\] \[C_{22}=-\sin\beta\cos\beta+\left(\frac{\partial g}{\partial x}\sin\beta-\frac{\partial g}{\partial\dot{x}}\cos\beta\right)\cos\beta\;. \tag{52}\] In contrast to the QR case (Eq. (44)) the right-hand side of Eq. (49) depends via the term \(h_{21}\) on the diagonal elements of \(E\) (or \(F\)) (see Eq. (51)). For nondegenerate Lyapunov exponents \(\lambda_{1}>\lambda_{2}\) the term \(h_{21}\) vanishes for \(t\to\infty\) and Eq. (49) becomes identical to Eq. (44). Already after a short time both continuous methods then yield nearly identical results for the quantities \(\rho_{i}\) and \(\varepsilon_{i}\) that are used to compute the Lyapunov exponents (see Eq. (29) or (35), respectively).
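The components (52) are just the entries of \(C=U^{\rm tr}JU\) written out for the rotation \(U(\beta)\) of Eq. (46) and the Jacobian of the reduced variational equations (45), so they can be checked numerically in the same way as Eq. (44). Again `gx` and `gxd` are free test values standing in for the partial derivatives of \(g\) (an assumption for illustration only):

```python
import numpy as np

def C_closed_form(beta, gx, gxd):
    """The four components of C as written out in Eq. (52)."""
    c, s = np.cos(beta), np.sin(beta)
    return np.array([
        [s * c - (gx * c + gxd * s) * s,   c * c + (gx * s - gxd * c) * s],
        [-s * s - (gx * c + gxd * s) * c, -s * c + (gx * s - gxd * c) * c],
    ])

def C_projected(beta, gx, gxd):
    """C = U^tr J U with the rotation U(beta) and the Jacobian of Eq. (45)."""
    c, s = np.cos(beta), np.sin(beta)
    U = np.array([[c, -s], [s, c]])
    J = np.array([[0.0, 1.0], [-gx, -gxd]])
    return U.T @ J @ U
```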
This fast convergence is demonstrated for a strange attractor of the driven van der Pol oscillator in Fig. 6. The numerical estimates computed with the SV method do not converge faster to the Lyapunov exponents than those computed with the QR methods.

## 6 Conclusions

Different continuous and discrete methods for computing Lyapunov exponents are compared with respect to their efficiency and accuracy. All algorithms are based either on the QR decomposition or the singular value decomposition. The continuous methods (that are only applicable to differential equations) show several disadvantages. First, they need much more computer time than the discrete methods. Secondly, the orthogonality of the matrices \(Q\) or \(U\) is destroyed during the computation if no countermeasures are taken (which further increase the number of operations). Thirdly, the computation of only the largest \(k\) exponents is not necessarily cheaper than the determination of the whole spectrum. Fourthly, the continuous singular value method diverges for attractors with (almost) degenerate Lyapunov spectra, which very often occur in dynamical systems (e.g., periodic windows, quasiperiodic oscillations (see, for example, Fig. 3), near bifurcation points, etc.[14, 15]). For these four reasons the continuous methods cannot be recommended.

Figure 6: Numerical estimates of the nontrivial Lyapunov exponents \(\lambda_{1}\) and \(\lambda_{2}\) of a strange attractor of the driven van der Pol oscillator (41), (42) for \(d=5\), \(a=5\) and \(\omega_{0}=2.466\). The solid and dashed curves show the results obtained with the CQR and CSV algorithm, respectively. Within the resolution of the plot both curves lie upon each other up to the first peak of \(\lambda_{2}\). The dotted curves were computed with the discrete GS method. The large magnitude of the negative exponent is reminiscent of the destroyed strongly attracting torus in the 3-dimensional state space \(M:=\mathbf{R}^{2}\times S^{1}\) and leads to a fast convergence of arbitrarily chosen tangent vectors to the direction of maximal expansion. This example is therefore well suited to study the numerical properties of the different methods for computing Lyapunov exponents. A Poincaré plot of this attractor showing its very thin extension normal to the expanding direction (which is consistent with the Lyapunov dimension of \(D_{L}=2.014\)) is given in Ref. [23].

The quantities \((Q^{\rm tr}JQ)_{ii}\), \(W_{ii}\) and \(C_{ii}\) (\(1\leq i\leq m\)) occurring in the differential equations (25), (29) and (35) for the Lyapunov exponents can be viewed as "local" divergence rates on the attractor. They depend not only on the point \(\mathbf{x}\in M\) of the state space \(M\) but also on the "history" of the orthogonal matrix \(Q\) or \(U\).[32] These divergence rates occur in a natural way when considering continuous methods and may be viewed as the continuous and high-dimensional analogues of the nonuniformity factor NUF defined for one-dimensional maps by Nicolis, Mayer-Kress and Haubs.[33] In practice, however, they can be computed with discrete methods, too.

All discrete methods investigated so far are based on the QR decomposition because no discrete algorithm using the singular value decomposition (SVD) is known to the authors. The discrete QR methods differ with respect to the iteration algorithm used (Eq. (17) or (18)) and the procedure for computing the QR decomposition (Gram-Schmidt orthonormalization, repeated GS orthonormalization or Householder transformations). For iterated maps and the investigation of time series[9, 20-27] it is natural to use algorithm (17), whereas in the case of differential equations only the methods depending on (18) avoid the solution of "superfluous" variational equations.
Our investigation of the Toda chain as an example of a high-dimensional continuous dynamical system shows that the differences between the results obtained with the QR decomposition procedures listed above are at most of the order of the fluctuations of the numerical estimates \(\lambda_{i}(n)\) (\(\lambda_{i}(n)\to\lambda_{i}\) for \(n\to\infty\)) of the Lyapunov exponents \(\lambda_{i}\) (\(1\leq i\leq m\)). Therefore, especially for differential equations where one can reduce the time steps \(\Delta t\) between successive orthonormalizations, the choice of the QR decomposition procedure is not critical. This might be different for some maps with extreme contraction ratios. In that case repeated GS orthonormalizations or Householder transformations with their superior numerical properties are recommended.

## Acknowledgements

All computations have been carried out on the CRAY X-MP/24 of the ZIB (Konrad-Zuse-Zentrum für Informationstechnik Berlin) and the SPERRY 1100/82 and VAX 8650 of the GWDG (Gesellschaft für wissenschaftliche Datenverarbeitung mbH, Göttingen). We thank the members of the Nonlinear Dynamics Group at the Institut für Angewandte Physik, Technische Hochschule Darmstadt, for many valuable discussions, especially U. Dressler and J. Holzfuss for the critical reading of the manuscript and the coworkers of the GWDG for their persistent help. This work was supported by the "Stiftung Volkswagenwerk, Forschungsprojekt Strukturbildung in gekoppelten nichtlinearen Schwingungssystemen" and the "Sonderforschungsbereich SFB-Nr. 185, Nichtlineare Dynamik, Instabilitäten und Strukturbildung in physikalischen Systemen" of the DFG (Deutsche Forschungsgemeinschaft).

## Appendix A Computation of the QR Decomposition

In the following sections two procedures for computing the QR decomposition of the parallelepiped matrix \(P\) (see § 2.2) are recalled.
### Gram-Schmidt orthonormalization

The column vectors \(\mathbf{Q}^{j}\) of \(Q\) can be determined recursively by orthogonal projection of the column vectors \(\mathbf{P}^{j}\) of \(P\) onto the already computed column vectors \(\mathbf{Q}^{1},\cdots,\mathbf{Q}^{j-1}\) (\(2\leq j\leq k\)): \[R_{11} := \|\mathbf{P}^{1}\|\,,\] \[\mathbf{Q}^{1} := \mathbf{P}^{1}/R_{11}\,,\] \[R_{ij} := \langle\mathbf{Q}^{i}|\mathbf{P}^{j}\rangle\,,\] \[\bar{\mathbf{Q}}^{j} := \mathbf{P}^{j}-\sum\limits_{i=1}^{j-1}R_{ij}\mathbf{Q}^{i}\,,\] \[R_{jj} := \|\bar{\mathbf{Q}}^{j}\|\,,\] \[\mathbf{Q}^{j} := \bar{\mathbf{Q}}^{j}/R_{jj}\,.\;\;\;\;(1\leq i\leq j-1,\,2\leq j\leq k) \tag{A.1}\] Here \(\|\mathbf{x}\|:=\langle\mathbf{x}|\mathbf{x}\rangle^{1/2}\) denotes the Euclidean norm and \(\langle\mathbf{x}|\mathbf{y}\rangle:=\sum_{i=1}^{m}x_{i}y_{i}\) the inner product of the vectors \(\mathbf{x}:=(x_{1},\cdots,x_{m})^{\rm tr}\) and \(\mathbf{y}:=(y_{1},\cdots,y_{m})^{\rm tr}\in\mathbf{R}^{m}\). For nearly parallel column vectors \(\mathbf{P}^{j}\) of \(P\) the lengths \(\|\bar{\mathbf{Q}}^{j}\|\) of the difference vectors \(\bar{\mathbf{Q}}^{j}\) (\(2\leq j\leq k\)) are very small. The computation of \(Q\) with algorithm (A.1) can therefore lead to errors, which can be avoided by an additional reorthogonalization: \[R_{11} := \|\mathbf{P}^{1}\|\,,\] \[\mathbf{Q}^{1} := \mathbf{P}^{1}/R_{11}\,,\] \[\bar{R}_{ij} := \langle\mathbf{Q}^{i}|\mathbf{P}^{j}\rangle\,,\] \[\bar{\mathbf{Q}}^{j} := \mathbf{P}^{j}-\sum\limits_{i=1}^{j-1}\bar{R}_{ij}\mathbf{Q}^{i}\,,\] \[\tilde{R}_{ij} := \langle\mathbf{Q}^{i}|\bar{\mathbf{Q}}^{j}\rangle\,,\] \[\tilde{\mathbf{Q}}^{j} := \bar{\mathbf{Q}}^{j}-\sum\limits_{i=1}^{j-1}\tilde{R}_{ij}\mathbf{Q}^{i}\,,\] \[R_{ij} := \bar{R}_{ij}+\tilde{R}_{ij}\,,\] \[R_{jj} := \|\tilde{\mathbf{Q}}^{j}\|\,,\] \[\mathbf{Q}^{j} := \tilde{\mathbf{Q}}^{j}/R_{jj}\,,\;\;\;\;(1\leq i\leq j-1,\,2\leq j\leq k) \tag{A.2}\] (see Daniel, Gragg, Kaufman and Stewart[28] and Stoer[29]).
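The two procedures above translate directly into code. The following is a minimal sketch (not the original CRAY implementation): classical Gram-Schmidt as in (A.1), with an optional second projection pass per column as in (A.2):

```python
import numpy as np

def gram_schmidt_qr(P, reorthogonalize=False):
    """QR decomposition P = Q R by classical Gram-Schmidt (A.1); with
    reorthogonalize=True one correction pass per column is added, as in
    algorithm (A.2)."""
    m, k = P.shape
    Q = np.zeros((m, k))
    R = np.zeros((k, k))
    for j in range(k):
        coef = Q[:, :j].T @ P[:, j]        # R_ij = <Q^i | P^j>
        v = P[:, j] - Q[:, :j] @ coef      # Q-bar^j
        R[:j, j] = coef
        if reorthogonalize:                # second projection pass of (A.2)
            corr = Q[:, :j].T @ v
            v = v - Q[:, :j] @ corr
            R[:j, j] += corr               # R_ij = Rbar_ij + Rtilde_ij
        R[j, j] = np.linalg.norm(v)        # R_jj = ||v||
        Q[:, j] = v / R[j, j]
    return Q, R
```

For well-conditioned columns both variants behave the same; the reorthogonalization pays off when the columns of \(P\) are nearly parallel, which is exactly the situation produced by a strongly contracting tangent flow.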
### Householder transformations

A QR decomposition procedure characterized by large numerical stability has been found by Householder in 1958.[30] The \(m\times k\) matrix \(P\equiv P^{(0)}\) is transformed with the help of the symmetric and orthogonal _Householder transformations_

## References

* [1] J. M. Greene and J.-S. Kim, Physica D**24** (1987), 213. * [2] I. Goldhirsch, P.-L. Sulem and S. A. Orszag, Physica D**27** (1987), 311. * [3] I. Shimada and T. Nagashima, Prog. Theor. Phys. **61** (1979), 1605. * [4] G. Benettin, L. Galgani, A. Giorgilli and J.-M. Strelcyn, Meccanica **15** (1980), 9, 21. * [5] A. Wolf, J. B. Swift, H. L. Swinney and J. A. Vastano, Physica D**16** (1985), 285. * [6] J.-P. Eckmann and D. Ruelle, Rev. Mod. Phys. **57** (1985), 617. * [7] U. Parlitz, Ph. D. Thesis, Georg-August-Universität Göttingen, Göttingen (1987). * [8] V. I. Oseledec, Trans. Moscow Math. Soc. **19** (1968), 197. * [9] R. A. Johnson, K. J. Palmer and G. R. Sell, SIAM J. Math. Anal. **18** (1987), 1. * [10] G. Paladin and A. Vulpiani, Phys. Rep. **156** (1987), 147. * [11] W. Lauterborn and U. Parlitz, J. Acoust. Soc. Am. **84** (1988), 1975. * [12] J. M. Greene and J.-S. Kim, Physica D**36** (1989), 83. * [13] M. Rokni and B. S. Berger, Quart. Appl. Math. **45** (1987), 789. * [14] K. Geist, Ph. D. Thesis, Georg-August-Universität Göttingen, Göttingen (1989). * [15] K. Geist and W. Lauterborn, Physica D**41** (1990), 1. * [16] J. J. Dongarra, C. B. Moler, J. R. Bunch and G. W. Stewart, LINPACK User's Guide (SIAM, Philadelphia, Pennsylvania, 1979). * [17] E. N. Lorenz, Physica D**13** (1984), 90. * [18] H. Grauert and W. Fischer, _Differential- und Integralrechnung II_, 3rd ed. (Springer, Berlin, 1978), Satz 2.5, p. 194. * [19] U. Dressler, Phys. Rev. A**38** (1988), 2103. * [20] U. Dressler, Ph. D. Thesis, Georg-August-Universität Göttingen, Göttingen (1989). * [21] K. Geist and W. Lauterborn, Physica D**81** (1988), 103. * [22] V. I.
Arnol'd, _Gewöhnliche Differentialgleichungen_ (Springer, Berlin, 1980). * [23] U. Parlitz and W. Lauterborn, Phys. Rev. A**36** (1987), 1428. * [24] J.-P. Eckmann, S. O. Kamphorst, D. Ruelle and S. Ciliberto, Phys. Rev. A**34** (1986), 4971. * [25] M. Sano and Y. Sawada, Phys. Rev. Lett. **55** (1985), 1082. * [26] J. Holzfuss, Ph. D. Thesis, Georg-August-Universität Göttingen, Göttingen (1987). * [27] J. Holzfuss and W. Lauterborn, Phys. Rev. A**39** (1989), 2146. * [28] J. Daniel, W. B. Gragg, L. Kaufman and G. W. Stewart, Math. Comp. **30** (1976), 772. * [29] J. Stoer, _Einführung in die Numerische Mathematik I: unter Berücksichtigung von Vorlesungen von F. L. Bauer_, Heidelberger Taschenbücher, vol. 105, 3rd ed. (Springer, Berlin, 1979). * [30] A. S. Householder, J. Assoc. Comput. Mach. **5** (1958), 339. * [31] R. Mennicken and E. Wagenführer, _Numerische Mathematik 1_, rororo vieweg, vol. 28 (Rowohlt Taschenbuch Verlag GmbH, Reinbek bei Hamburg, 1977). * [32] U. Dressler and G. Mayer-Kress, personal communication. * [33] J. S. Nicolis, G. Mayer-Kress and G. Haubs, Z. Naturforsch. A**38** (1983), 1157.

# Quantifying chaos in dynamical systems with Lyapunov exponents

Michael van Opstall

Department of Mathematics, University of California, Riverside, CA 92521, USA michael.van.opstall@ucr.edu

###### Abstract.

In this paper, we analyze the dynamics of a four dimensional mechanical system which exhibits sensitive dependence on initial conditions. The aim of the paper is to introduce the basic ideas of chaos theory while assuming only a course in ordinary differential equations as a prerequisite.

Key words and phrases: Chaos, Differential Equations, Dynamical Systems 1991 Mathematics Subject Classification: 34D08 This paper was written while the author was an undergraduate at Hope College.

## 1. Introduction

Dynamical systems, in short, are systems which exhibit change. As such, the field of dynamical systems is varied and rich.
Many dynamical systems can be modeled by systems of differential equations or discrete difference equations. Such systems are called _deterministic_. Examples of such systems include those of classical mechanics. _Sensitive dependence on initial conditions_ is a phenomenon where a slight separation between the initial conditions of a system grows exponentially. Deterministic dynamical systems that exhibit sensitive dependence on initial conditions are known as _chaotic_. Many physical systems are chaotic, from the driven simple pendulum to the more complex system modeled in this paper. Dynamical systems are classified as _discrete_ or _continuous_. A discrete dynamical system (given by one or more difference equations) is one in which a function \(f\) is iterated on an initial condition \(x_{0}\). The set of all points generated by iterating \(f\) beginning with \(x_{0}\) is known as the _orbit_ of \(x_{0}\) under \(f\). A continuous system is generally given by one or more differential equations. Continuous orbits are known as _trajectories_. There are several difficulties in working with chaotic systems. Systems of differential equations that behave chaotically are always nonlinear. This nonlinearity makes an analytic solution of these equations difficult. In addition to the nonlinearity, a continuous system which exhibits sensitive dependence on initial conditions must have dimension at least three (that is, it must have three independent variables). The system discussed in the present paper has dimension four, and hence cannot be easily visualized. Despite these difficulties, the fundamental concepts of the subject are accessible to anyone who has taken a course in ordinary differential equations.

### One-Dimensional Discrete Systems

Despite their simple nature, systems in a single variable can be used to model many things. One good example is the logistic map \(x_{n+1}=\mu x_{n}(1-x_{n})\), which is used as a simple model for population growth.
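The logistic map is easy to experiment with numerically. A minimal sketch (the function names are illustrative only): iterating from an arbitrary seed converges to the attracting fixed point \(x^{*}=1-1/\mu\) when \(\mu=2.9\), while near the repelling fixed point \(x^{*}=3/4\) of the \(\mu=4\) map small deviations grow:

```python
def logistic(x, mu):
    """One step of the logistic map x_{n+1} = mu x_n (1 - x_n)."""
    return mu * x * (1.0 - x)

def orbit(x0, mu, n):
    """Return the first n+1 points of the orbit of x0 under the map."""
    xs = [x0]
    for _ in range(n):
        xs.append(logistic(xs[-1], mu))
    return xs

# At mu = 2.9 orbits approach the attracting fixed point x* = 1 - 1/mu,
# since |f'(x*)| = |2 - mu| = 0.9 < 1 there.
```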
Some features of dynamical systems are easiest to demonstrate in single-dimensional systems, so a few are described here. The orbit of the function is computed according to the relation \(x_{n+1}=f(x_{n})\). The logistic map described above is an example of a one-dimensional discrete system. A point where the function's value is unaffected by further iteration (i.e. \(x_{n+1}=x_{n}\)) is called a _fixed point_. A fixed point which is approached by orbits is known as an _attractor_ and one from which orbits diverge is a _repeller_. Figure 1 represents a logistic map with an attracting fixed point (\(\mu=2.9\)) and a chaotic logistic map with a repelling fixed point (\(\mu=4.0\)). One way to quantify chaotic behavior in a system is to measure the divergence between orbits of two points with small initial separation. Let \(f^{n}\) denote the \(n\)th iterate of a function \(f\). Then, for two different initial conditions, \(x\) and \(x+\varepsilon\), the separation between these orbits is given by \(\left|f^{n}(x+\varepsilon)-f^{n}(x)\right|\), as a function of the number of iterations. If we assume that the separation of the trajectories grows (or shrinks) exponentially, we have \[\left|f^{n}(x+\varepsilon)-f^{n}(x)\right|\approx\varepsilon e^{n\lambda},\] and \(\lambda\) is called the Lyapunov exponent. If we take the initial separation, \(\varepsilon\), between trajectories to be small, we obtain \[\lambda\approx\frac{1}{n}\log\left|\frac{f^{n}(x+\varepsilon)-f^{n}(x)}{\varepsilon}\right|\approx\frac{1}{n}\log\left|\frac{df^{n}}{dx}\right| \tag{1}\] Noting that \(x_{n}=f(x_{n-1})=f^{n}(x_{0})\), we find \(df^{n}/dx\) using the chain rule: \[\frac{df^{n}}{dx}=f^{\prime}(x_{n-1})\cdot f^{\prime}(x_{n-2})\cdots f^{\prime}(x_{0})=\prod_{m=0}^{n-1}f^{\prime}(x_{m})\]

Figure 1. The logistic map with \(\mu=2.9\), which has an attracting fixed point (left), and with \(\mu=4.0\), which has a repelling fixed point (right).
From this, we arrive at our final formula for the Lyapunov exponent of a one-dimensional discrete system: \[\lambda=\lim_{n\to\infty}\frac{1}{n}\log\left|\prod_{m=0}^{n-1}f^{\prime}(x_{m})\right|=\lim_{n\to\infty}\frac{1}{n}\sum_{m=0}^{n-1}\log\left|f^{\prime}(x_{m})\right| \tag{2}\] This exponent represents the average exponential rate of divergence of nearby orbits. A zero exponent implies that nearby orbits separate more slowly than exponentially (for example, linearly). A positive exponent indicates sensitive dependence on initial conditions, as points initially close together will diverge exponentially along neighboring trajectories. Negative exponents are found in systems where trajectories converge, so the initial separation between two points will decrease in time. Noting that the formula for a single-dimensional Lyapunov exponent is simply an average of the logarithm of the size of the derivative, a corresponding integral formula can be obtained. The average becomes an expected value of \(\log\left|f^{\prime}(x)\right|\) with respect to the invariant density \(\rho(x)\) of the map, and we have \[\lambda=\int\rho(x)\log\left|f^{\prime}(x)\right|dx. \tag{3}\]

### Higher Dimensional Systems

For many real systems, a single-dimensional model is inadequate. Unfortunately, along with a better, multi-dimensional model, we gain more problems in calculating the Lyapunov exponent of a system. The equation derived for single variable discrete systems does not directly apply, and nonlinear differential equations pose problems, as they are difficult or impossible to solve. We must often resort to numerical methods to solve these problems.

#### 1.2.1. Phase Space

The _phase space_ of a system is the \(n\)-dimensional space in which the points of an \(n\)-dimensional system reside. A graph of trajectories in the phase space is known as a _phase diagram_. For two dimensional systems, the phase space lies in the plane (known as the phase plane), and is easily visualized.
For higher dimensional systems, however, the phase space is often projected into two dimensions for easy viewing. In our system (described below), the four variables defining the phase space were paired to produce two phase diagrams. A two-dimensional phase diagram often plots the velocity of a body against its position. #### 1.2.2. Attractors In the one-dimensional case, points to which orbits converged were known as attracting fixed points. The fixed point is a special case of an _attractor_. In higher dimensional spaces, trajectories with small initial separation are sometimes pulled together into a single trajectory, an attractor. In these higher dimensional systems, these attractors can be curves or surfaces. An attractor in a chaotic system is known as a _strange attractor_. ### Lyapunov Exponents In an \(n\)-dimensional dynamical system, we have \(n\) Lyapunov exponents. Each \(\lambda_{k}\) represents the divergence of \(k\)-volume (\(k=1\): length, \(k=2\): area, etc.). The sign of the Lyapunov exponents indicates the behavior of nearby trajectories. A negative exponent indicates that neighboring trajectories converge to the same trajectory. A positive exponent indicates that neighboring trajectories diverge. When trajectories diverge exponentially, a slight error in measurement of the initial point could be catastrophic, as the error grows exponentially as well. If \(\varepsilon\) in equation (1) is taken to be the slight error in measuring a system's state, this error eventually grows in accordance with the Lyapunov exponent. Figure 2 represents the three types of trajectory behavior. Any measurement taken has some error. The Lyapunov exponent affords us a measure of how quickly this error grows. If the Lyapunov exponent is negative, error actually decreases. Consider the damped pendulum; a slight error in measurement does not lead to a large overall error since the pendulum eventually comes to rest.
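To make the positive-exponent case concrete in more than one dimension, here is a minimal sketch (our illustration, not part of the original study) estimating the largest exponent of the two-dimensional Hénon map by carrying a tangent vector along an orbit with the map's Jacobian and renormalizing it at each step; published values put this exponent near 0.42:

```python
import math

# Largest Lyapunov exponent of the Henon map (a = 1.4, b = 0.3), estimated
# by evolving a tangent vector with the Jacobian and renormalizing each step.
a, b = 1.4, 0.3

def henon(x, y):
    return 1.0 - a * x * x + y, b * x

x, y = 0.1, 0.1
for _ in range(1000):            # settle onto the attractor
    x, y = henon(x, y)

vx, vy = 1.0, 0.0                # tangent vector
total = 0.0
n = 100_000
for _ in range(n):
    # Jacobian of the map at (x, y) is [[-2*a*x, 1], [b, 0]]
    vx, vy = -2.0 * a * x * vx + vy, b * vx
    norm = math.hypot(vx, vy)
    total += math.log(norm)      # accumulate the local stretching rate
    vx, vy = vx / norm, vy / norm  # renormalize to unit length
    x, y = henon(x, y)

lam = total / n
print(lam)   # approximately 0.42
```

The periodic renormalization here is the one-vector version of the Gram-Schmidt procedure discussed later in the text.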
We are primarily interested in systems where one (or more) of the Lyapunov exponents is positive. In accordance with our informal definition of chaos (behavior of a system exhibiting sensitive dependence on initial conditions), we can define a chaotic system as one with at least one positive Lyapunov exponent. Predictability in such a system is lost, and measurement error grows exponentially. ## 2. Experimental Setup ### Physical Setup The physical dynamical system we studied consisted of a pendulum of length \(\ell\) and mass \(m\) attached to a block of mass \(M\) oscillating on the end of a spring with spring constant \(k\). This apparatus was forced with forcing function \(f(t)=A\cos\varphi t+\sqrt{\ell^{2}-A^{2}\sin^{2}\varphi t}\), which is the motion of a piston driven by a camshaft of length \(\ell\) displaced \(A\) units from the axis of rotation, rotating with frequency \(\varphi\). Sensors connected to a Realtime VAX recorded the position of the cart and the angular displacement of the pendulum (\(x\) and \(\theta\), respectively). Data for the cart's velocity and the pendulum's angular velocity (\(v\) and \(\omega\), respectively) were generated by taking numerical "derivatives" (actually the slope between neighboring points). A diagram of our system appears in Figure 3. ### Phase Diagrams After recording data for different frequencies of forcing, two-dimensional phase plots were produced for \(v\) vs. \(x\) and \(\omega\) vs. \(\theta\). The system was chaotic at high driving frequencies. The phase diagrams are given in Figure 4. Figure 5 illustrates chaotic and periodic time series. The problem of experimental noise is quite evident in these figures. Spurious data points can cause problems when calculating the Lyapunov exponents [2]. ### Equations of Motion Equations of motion for this system can be obtained using the Lagrangian method. The Lagrangian \(L\) is defined as the difference between the kinetic and potential energy of a dynamical system. Figure 2. Left: Convergence of trajectories (\(\lambda<0\)); center: two concentric circular trajectories (\(\lambda=0\)); right: divergence of trajectories (\(\lambda>0\)). The position \(\vec{r_{1}}(t)\) of the cart and \(\vec{r_{2}}(t)\) of the pendulum are given by: \[\vec{r_{1}}(t) = x\vec{i}\] \[\vec{r_{2}}(t) = (x+\ell\sin\theta)\vec{i}-\ell\cos\theta\vec{j}.\] The square of the velocity of each body is calculated by taking the square of the magnitude of the derivative of the position function: \[\left\|\,\vec{r_{1}}^{\prime}(t)\right\|^{2} = \dot{x}^{2}\] \[\left\|\,\vec{r_{2}}^{\prime}(t)\right\|^{2} = \dot{x}^{2}+2\dot{x}\ell\dot{\theta}\cos\theta+\ell^{2}\dot{\theta }^{2}.\] From the position and velocity functions, we get the Lagrangian: \[L=\frac{1}{2}M\dot{x}^{2}+\frac{1}{2}m(\dot{x}^{2}+2\dot{x}\ell\dot{\theta}\cos \theta+\ell^{2}\dot{\theta}^{2})-\frac{1}{2}k(x-f)^{2}+mg\ell\cos\theta\] Figure 4. Phase plots of angular velocity vs. angular position for the pendulum. The left is periodic and the right is a portion of the chaotic phase diagram. Figure 3. A schematic diagram of our physical system, where \(f(t)\) is the displacement of the forcing piston at time \(t\). The final equations of motion are derived according to the Euler-Lagrange equations \(\frac{d}{dt}\left(\frac{\partial L}{\partial\dot{q}_{a}}\right)-\frac{ \partial L}{\partial q_{a}}=0\), where the \(q_{a}\) are the coordinates of the system (\(x\) and \(\theta\) in our case). These equations yield the final equations of motion for this system: \[(M+m)\ddot{x}-m\ell\dot{\theta}^{2}\sin\theta+m\ell\ddot{\theta} \cos\theta+k(x-f) = 0 \tag{4}\] \[\ell\ddot{\theta}+\ddot{x}\cos\theta+g\sin\theta = 0. \tag{5}\] These equations can be further broken down into a system of four first-order differential equations, suitable for numerical integration. ## 3.
Calculating Lyapunov Exponents Given a system of differential equations, numerical integration affords us a method for determining the theoretical value of the system's Lyapunov exponents. This method, described in detail by Wolf et al. [2] and outlined below, is useful for determining positive Lyapunov exponents for chaotic systems. The system of equations of motion must be converted to strictly first-order equations. For an \(n\)-dimensional system, \(n\) copies of the \(n\) linearized equations are needed. This linearization is accomplished by multiplying the Jacobian matrix of partial derivatives of the \(n\) nonlinear functions by a column vector of the variables (comparable to approximating a function by its tangent line). Each set of linearized equations determines a point in \(n\)-space with a separation from the nonlinear trajectory. We start with a sphere of states, centered on the nonlinear trajectory with linearized trajectories tangent to the sphere's surface. This sphere is really nothing more than an orthonormal frame of vectors, but taking these vectors to form a sphere aids in visualization of the process. The initial state vectors defined in each direction are chosen to be orthonormal, each one perpendicular to the others and of unit length. Now the system is allowed to evolve over time. After a short time, the sphere of vectors has become an ellipsoid, with all vectors approaching the direction of greatest growth. This presents a problem. If this is allowed to continue indefinitely, all the vectors will collapse onto the same vector and become indistinguishable. Figure 5. Time series diagrams of angular position vs. time. The left is periodic and the right chaotic. Additionally, if
the largest vector continues to grow without limit, it will soon approach the size of the attractor (the metric diameter of the set of points that make up the attractor), at which point the attractor folds back onto itself (vectors which have grown too large collapse to small vectors), causing a miscalculation (we lose the fact that the vector has grown, and perhaps note incorrectly that it has shrunk). These are the two major problems in calculating Lyapunov exponents. Both are solved simultaneously by renormalizing periodically using Gram-Schmidt orthonormalization. After a specified time, the vectors are measured, then orthogonalized and brought back to a very small length. The process is repeated after orthonormalization so an average can be taken. The largest Lyapunov exponent (recall that an \(n\)-dimensional system has \(n\) Lyapunov exponents) can be calculated by measuring the length of the largest vector \(n\) times over a period of \(t\) seconds, where \(\ell_{m}\) is the length of the vector at measurement \(m\). Then the Lyapunov exponent is: \[\lambda=\frac{1}{n}\sum_{m=0}^{n-1}\log\frac{\ell_{m+1}}{\ell_{m}}. \tag{6}\] To find the other exponents, one must monitor the evolution of area or \(k\)-volume in the phase space, using a formula analogous to the one above with the lengths \(\ell\) replaced by areas (or volumes) \(A\). The calculation then yields the sum of the first \(k\) exponents, where \(k\) is the number of dimensions of the space being measured (i.e. \(k=1\) represents length, \(k=2\) represents area). ## 4. Conclusions The calculation of Lyapunov exponents from collected data is similar to the process outlined above for differential equations, but pitfalls abound. Experimental noise is an issue, and phase space reconstruction (see [4]) must be considered for systems where fewer than all of the phase variables can be measured.
Since trajectories are built from experimental data rather than equations, however, no numerical integration is required. See [2] for greater detail on the calculation of Lyapunov exponents. Although many modern definitions of chaos are given in terms of orbits and periodicity, the definition given in the present paper is adequate for applications in many systems of classical mechanics. The aim of this paper is to encourage other projects. Examples of other chaotic systems include the driven simple pendulum, the double pendulum, or a multiple mass-spring system. These systems can be constructed in most college or university physics labs. We are grateful to Professor Paul De Young from the Hope College physics department for setting up our physical system. In addition to observation of chaotic behavior, such a project reinforces or introduces several other skills in ordinary differential equations. Modeling using the Lagrangian or Newton's laws can be practiced in these more complex systems. Additionally, chaos is an excellent context in which to introduce nonlinear equations and phase diagramming. Indeed, for simpler systems, it is possible to involve a computer algebra system such as Maple or Mathematica in drawing phase plots. ## References * [1] G.L. Baker and J.P. Gollub, _Chaotic Dynamics: An Introduction_, Cambridge University Press, 1990. * [2] A. Wolf, J.B. Swift, H.L. Swinney, J.A. Vastano, "Determining Lyapunov Exponents from a Time Series," _Physica 16D_ (1985) 285-317. * [3] N.H. Packard, J.P. Crutchfield, J.D. Farmer, and R.S. Shaw, "Geometry from a Time Series," _Physical Review Letters_ (1980) 712-715. * [4] J.-C. Roux, R.H. Simoyi, H.L. Swinney, "Observation of a Strange Attractor," _Physica 8D_ (1983) 257-266. * [5] H.D.I. Abarbanel, R. Brown, J.J. Sidorowich, L.S. Tsimring, "The Analysis of Observed Chaotic Data in Physical Systems," _Reviews of Modern Physics_ (1993) 1331-1392. * [6] G. Benettin, L. Galgani, A. Giorgilli, J.-M.
Strelcyn, "Lyapunov Characteristic Exponents for Smooth Dynamical Systems and for Hamiltonian Systems; A Method for Computing all of Them," _Meccanica_ (1980) 9-19. * [7] P. Bugl, _Differential Equations: Matrices and Models_, Prentice Hall, 1995. # A practical method for calculating largest Lyapunov exponents from small data sets Michael T. Rosenstein, James J. Collins and Carlo J. De Luca NeuroMuscular Research Center and Department of Biomedical Engineering, Boston University, 44 Cummington Street, Boston, MA 02215, USA ###### Abstract Detecting the presence of chaos in a dynamical system is an important problem that is solved by measuring the largest Lyapunov exponent. Lyapunov exponents quantify the exponential divergence of initially close state-space trajectories and estimate the amount of chaos in a system. We present a new method for calculating the largest Lyapunov exponent from an experimental time series. The method follows directly from the definition of the largest Lyapunov exponent and is accurate because it takes advantage of all the available data. We show that the algorithm is fast, easy to implement, and robust to changes in the following quantities: embedding dimension, size of data set, reconstruction delay, and noise level. Furthermore, one may use the algorithm to calculate simultaneously the correlation dimension. Thus, one sequence of computations will yield an estimate of both the level of chaos and the system complexity. ## 1 Introduction Over the past decade, distinguishing deterministic chaos from noise has become an important problem in many diverse fields, e.g., physiology [18], economics [11]. This is due, in part, to the availability of numerical algorithms for quantifying chaos using experimental time series. In particular, methods exist for calculating correlation dimension (\(D_{2}\)) [20], Kolmogorov entropy [21], and Lyapunov characteristic exponents [15; 17; 32; 39].
Dimension gives an estimate of the system complexity; entropy and characteristic exponents give an estimate of the level of chaos in the dynamical system. The Grassberger-Procaccia algorithm (GPA) [20] appears to be the most popular method used to quantify chaos. This is probably due to the simplicity of the algorithm [16] and the fact that the same intermediate calculations are used to estimate both dimension and entropy. However, the GPA is sensitive to variations in its parameters, e.g., number of data points [28], embedding dimension [28], reconstruction delay [3], and it is usually unreliable except for long, noise-free time series. Hence, the practical significance of the GPA is questionable, and the Lyapunov exponents may provide a more useful characterization of chaotic systems. For time series produced by dynamical systems, the presence of a positive characteristic exponent indicates chaos. Furthermore, in many applications it is sufficient to calculate only the largest Lyapunov exponent (\(\lambda_{1}\)). However, the existing methods for estimating \(\lambda_{1}\) suffer from at least one of the following drawbacks: (1) unreliable for small data sets, (2) computationally intensive, (3) relatively difficult to implement. For this reason, we have developed a new method for calculating the largest Lyapunov exponent. The method is reliable for small data sets, fast, and easy to implement. "Easy to implement" is largely a subjective quality, although we believe it has had a notable positive effect on the popularity of dimension estimates. The remainder of this paper is organized as follows. Section 2 describes the Lyapunov spectrum and its relation to Kolmogorov entropy. A synopsis of previous methods for calculating Lyapunov exponents from both system equations and experimental time series is also given. In section 3 we describe the new approach for calculating \(\lambda_{1}\) and show how it differs from previous methods.
Section 4 presents the results of our algorithm for several chaotic dynamical systems as well as several non-chaotic systems. We show that the method is robust to variations in embedding dimension, number of data points, reconstruction delay, and noise level. Section 5 is a discussion that includes a description of the procedure for calculating \(\lambda_{1}\) and \(D_{2}\) simultaneously. Finally, section 6 contains a summary of our conclusions. ## 2 Background For a dynamical system, sensitivity to initial conditions is quantified by the Lyapunov exponents. For example, consider two trajectories with nearby initial conditions on an attracting manifold. When the attractor is chaotic, the trajectories diverge, on average, at an exponential rate characterized by the largest Lyapunov exponent [15]. This concept is also generalized for the _spectrum_ of Lyapunov exponents, \(\lambda_{i}\) (\(i=1,\,2,\ldots,n\)), by considering a small \(n\)-dimensional sphere of initial conditions, where \(n\) is the number of equations (or, equivalently, the number of state variables) used to describe the system. As time (\(t\)) progresses, the sphere evolves into an ellipsoid whose principal axes expand (or contract) at rates given by the Lyapunov exponents. The presence of a positive exponent is sufficient for diagnosing chaos and represents local instability in a particular direction. Note that for the existence of an attractor, the overall dynamics must be dissipative, i.e., globally stable, and the total rate of contraction must outweigh the total rate of expansion. Thus, even when there are several positive Lyapunov exponents, the sum across the entire spectrum is negative. Wolf et al. [39] explain the Lyapunov spectrum by providing the following geometrical interpretation. First, arrange the \(n\) principal axes of the ellipsoid in the order of most rapidly expanding to most rapidly contracting.
It follows that the associated Lyapunov exponents will be arranged such that \[\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}\;, \tag{1}\] where \(\lambda_{1}\) and \(\lambda_{n}\) correspond to the most rapidly expanding and contracting principal axes, respectively. Next, recognize that the length of the first principal axis is proportional to \(\mathrm{e}^{\lambda_{1}t}\); the area determined by the first two principal axes is proportional to \(\mathrm{e}^{(\lambda_{1}+\lambda_{2})t}\); and the volume determined by the first \(k\) principal axes is proportional to \(\mathrm{e}^{(\lambda_{1}+\lambda_{2}+\cdots+\lambda_{k})t}\). Thus, the Lyapunov spectrum can be defined such that the exponential growth of a \(k\)-volume element is given by the sum of the \(k\) largest Lyapunov exponents. Note that information created by the system is represented as a change in the volume defined by the expanding principal axes. The sum of the corresponding exponents, i.e., the positive exponents, equals the Kolmogorov entropy (\(K\)) or mean rate of information gain [15]: \[K=\sum_{\lambda_{i}>0}\lambda_{i}\;. \tag{2}\] When the equations describing the dynamical system are available, one can calculate the entire Lyapunov spectrum [5, 34]. (See [39] for example computer code.) The approach involves numerically solving the system's \(n\) equations for \(n+1\) nearby initial conditions. The growth of a corresponding set of vectors is measured, and as the system evolves, the vectors are repeatedly reorthonormalized using the Gram-Schmidt procedure. This guarantees that only one vector has a component in the direction of most rapid expansion, i.e., the vectors maintain a proper phase-space orientation. In experimental settings, however, the equations of motion are usually unknown and this approach is not applicable. 
Furthermore, experimental data often consist of time series from a single observable, and one must employ a technique for attractor reconstruction, e.g., method of delays [27, 37], singular value decomposition [8]. As suggested above, one cannot calculate the entire Lyapunov spectrum by choosing arbitrary directions for measuring the separation of nearby initial conditions. One must measure the separation along the _Lyapunov directions_ which correspond to the principal axes of the ellipsoid previously considered. These Lyapunov directions are dependent upon the system flow and are defined using the Jacobian matrix, i.e., the tangent map, at each point of interest along the flow [15]. Hence, one must preserve the proper phase space orientation by using a suitable approximation of the tangent map. This requirement, however, becomes unnecessary when calculating only the largest Lyapunov exponent. If we assume that there exists an ergodic measure of the system, then the multiplicative ergodic theorem of Oseledec [26] justifies the use of arbitrary phase space directions when calculating the largest Lyapunov exponent with smooth dynamical systems. We can expect (with probability 1) that two randomly chosen initial conditions will diverge exponentially at a rate given by the largest Lyapunov exponent [6, 15]. In other words, we can expect that a random vector of initial conditions will converge to the most unstable manifold, since exponential growth in this direction quickly dominates growth (or contraction) along the other Lyapunov directions. Thus, the largest Lyapunov exponent can be defined using the following equation, where \(d(t)\) is the average divergence at time \(t\) and \(C\) is a constant that normalizes the initial separation: \[d(t)=C\,\mathrm{e}^{\lambda_{1}t}\;. \tag{3}\] For experimental applications, a number of researchers have proposed algorithms that estimate the largest Lyapunov exponent [1, 10, 12, 16, 17, 29, 33, 38, 39, 40], the positive Lyapunov spectrum, i.e., only positive exponents [39], or the complete Lyapunov spectrum [7, 9, 13, 15, 32, 35, 41]. Each method can be considered as a variation of one of several earlier approaches [15, 17, 32, 39] and as suffering from at least one of the following drawbacks: (1) unreliable for small data sets, (2) computationally intensive, (3) relatively difficult to implement. These drawbacks motivated our search for an improved method of estimating the largest Lyapunov exponent. ## 3 Current approach The first step of our approach involves reconstructing the attractor dynamics from a single time series. We use the method of delays [27, 37] since one goal of our work is to develop a fast and easily implemented algorithm. The reconstructed trajectory, \(\mathbf{X}\), can be expressed as a matrix where each row is a phase-space vector. That is, \[\mathbf{X}=\left(\begin{array}{cccc}\mathbf{X}_{1}&\mathbf{X}_{2}&\cdots& \mathbf{X}_{M}\end{array}\right)^{\mathrm{T}}\;, \tag{4}\] where \(\mathbf{X}_{i}\) is the state of the system at discrete time \(i\). For an \(N\)-point time series, \(\{x_{1},\)\(x_{2},\)\(\ldots,\)\(x_{N}\}\), each \(\mathbf{X}_{i}\) is given by \[\mathbf{X}_{i}=\left(\begin{array}{cccc}x_{i}&x_{i+J}&\cdots&x_{i+(m-1)J}\end{array}\right)\;, \tag{5}\] where \(J\) is the _lag_ or _reconstruction delay_, and \(m\) is the _embedding dimension_. Thus, \(\mathbf{X}\) is an \(M\times m\) matrix, and the constants \(m\), \(M\), \(J\), and \(N\) are related as \[M=N-(m-1)J. \tag{6}\] The embedding dimension is usually estimated in accordance with Takens' theorem, i.e., \(m>2n\), although our algorithm often works well when \(m\) is below the Takens criterion.
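The reconstruction of eqs. (4)-(6) takes only a few lines of code. A sketch (our addition; the sine series and the values of \(m\) and \(J\) are purely illustrative):

```python
import numpy as np

def delay_embed(x, m, J):
    """Build the M x m trajectory matrix of eq. (4) from a scalar time
    series x, with embedding dimension m and reconstruction delay J,
    so that M = N - (m - 1) * J as in eq. (6)."""
    N = len(x)
    M = N - (m - 1) * J
    return np.array([x[i : i + (m - 1) * J + 1 : J] for i in range(M)])

x = np.sin(0.05 * np.arange(1000))   # stand-in time series
X = delay_embed(x, m=3, J=11)
print(X.shape)                        # (978, 3), since 1000 - 2*11 = 978
```

Each row of `X` is one delay vector \(\mathbf{X}_i=(x_i,\,x_{i+J},\,\ldots,\,x_{i+(m-1)J})\).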
A method used to choose the lag via the correlation sum was addressed by Liebert and Schuster [23] (based on [19]). Nevertheless, determining the proper lag is still an open problem [4]. We have found a good approximation of \(J\) to equal the lag where the autocorrelation function drops to \(1-1/e\) of its initial value. Calculating this \(J\) can be accomplished using the fast Fourier transform (FFT), which requires far less computation than the approach of Liebert and Schuster. Note that our algorithm also works well for a wide range of lags, as shown in section 4.3. After reconstructing the dynamics, the algorithm locates the _nearest neighbor_ of each point on the trajectory. The nearest neighbor, \(\mathbf{X}_{\hat{j}}\), is found by searching for the point that minimizes the distance to the particular _reference point_, \(\mathbf{X}_{j}\). This is expressed as \[d_{j}(0)=\min_{\mathbf{X}_{\hat{j}}}\,\|\mathbf{X}_{j}-\mathbf{X}_{\hat{j}}\|\;, \tag{7}\] where \(d_{j}(0)\) is the initial distance from the \(j\)th point to its nearest neighbor, and \(\|\ \|\) denotes the Euclidean norm. We impose the additional constraint that nearest neighbors have a temporal separation greater than the mean period of the time series#1: Footnote *1: We estimated the mean period as the reciprocal of the mean frequency of the power spectrum, although we expect any comparable estimate, e.g., using the median frequency of the magnitude spectrum, to yield equivalent results. \[|j-\hat{j}|>\text{mean period}. \tag{8}\] This allows us to consider each pair of neighbors as nearby initial conditions for different trajectories. The largest Lyapunov exponent is then estimated as the mean rate of separation of the nearest neighbors. To this point, our approach for calculating \(\lambda_{1}\) is similar to previous methods that track the exponential divergence of nearest neighbors. However, it is important to note some differences: (1) The algorithm by Wolf et al.
[39] fails to take advantage of all the available data because it focuses on one "fiducial" trajectory. A single nearest neighbor is followed and repeatedly replaced when its separation from the reference trajectory grows beyond a certain limit. Additional computation is also required because the method approximates the Gram-Schmidt procedure by replacing a neighbor with one that preserves its phase space orientation. However, as shown in section 2, this preservation of phase-space orientation is unnecessary when calculating only the largest Lyapunov exponent. (2) If a nearest neighbor precedes (temporally) its reference point, then our algorithm can be viewed as a "prediction" approach. (In such instances, the predictive model is a simple delay line, the prediction is the location of the nearest neighbor, and the prediction error equals the separation between the nearest neighbor and its reference point.) However, other prediction methods use more elaborate schemes, e.g., polynomial mappings, adaptive filters, neural networks, that require much more computation. The amount of computation for the Wales method [38] (based on [36]) is also greater, although it is comparable to the present approach. We have found the Wales algorithm to give excellent results for discrete systems derived from difference equations, e.g., logistic, Hénon, but poor results for continuous systems derived from differential equations, e.g., Lorenz, Rössler. (3) The current approach is principally based on the work of Sato et al. [33] which estimates \(\lambda_{1}\) as \[\lambda_{1}(i)=\frac{1}{i\,\Delta t}\,\,\frac{1}{(M-i)}\,\sum_{j=1}^{M-i}\ln \frac{d_{j}(i)}{d_{j}(0)}\, \tag{9}\] where \(\Delta t\) is the sampling period of the time series, and \(d_{j}(i)\) is the distance between the \(j\)th pair of nearest neighbors after \(i\) discrete-time steps, i.e., \(i\,\Delta t\) seconds. (Recall that \(M\) is the number of reconstructed points as given in eq. (6).)
In order to improve convergence (with respect to \(i\)), Sato et al. [33] give an alternate form of eq. (9): \[\lambda_{1}(i,\,k)=\frac{1}{k\,\Delta t}\,\frac{1}{(M-k)}\sum_{j=1}^{M-k}\ln \frac{d_{j}(i+k)}{d_{j}(i)}\;. \tag{10}\] In eq. (10), \(k\) is held constant, and \(\lambda_{1}\) is extracted by locating the plateau of \(\lambda_{1}(i,\,k)\) with respect to \(i\). We have found that locating this plateau is sometimes problematic, and the resulting estimates of \(\lambda_{1}\) are unreliable. As discussed in section 5.3, this difficulty is due to the normalization by \(d_{j}(i)\). The remainder of our method proceeds as follows. From the definition of \(\lambda_{1}\) given in eq. (3), we assume the \(j\)th pair of nearest neighbors diverge approximately at a rate given by the largest Lyapunov exponent: \[d_{j}(i)\approx C_{j}\,{\rm e}^{\lambda_{1}(i\,\Delta t)}\;, \tag{11}\] where \(C_{j}\) is the initial separation. By taking the logarithm of both sides of eq. (11), we obtain \[\ln\,d_{j}(i)\approx\ln\,C_{j}+\lambda_{1}(i\,\Delta t)\;. \tag{12}\] Eq. (12) represents a set of approximately parallel lines (for \(j=1\), \(2\), \(\ldots\), \(M\)), each with a slope roughly proportional to \(\lambda_{1}\). The largest Lyapunov exponent is easily and accurately calculated using a least-squares fit to the "average" line defined by \[y(i)=\frac{1}{\Delta t}\,\left\langle\ln\,d_{j}(i)\right\rangle\;, \tag{13}\] where \(\langle\,\,\,\,\rangle\) denotes the average over all values of \(j\). This process of averaging is the key to calculating accurate values of \(\lambda_{1}\) using small, noisy data sets. Note that in eq. (11), \(C_{j}\) performs the function of normalizing the separation of the neighbors, but as shown in eq. (12), this normalization is unnecessary for estimating \(\lambda_{1}\). By avoiding the normalization, the current approach gains a slight computational advantage over the method by Sato et al.
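Putting the pieces together, the whole procedure can be sketched compactly: delay embedding (eqs. (4)-(6)), the constrained nearest-neighbor search (eqs. (7)-(8)), tracking of \(\ln d_j(i)\), and the averaged least-squares fit (eqs. (12)-(13)). This is our reconstruction of the method, not the authors' code; the brute-force \(O(M^2)\) neighbor search and the logistic-map parameters below are illustrative choices only.

```python
import numpy as np

def rosenstein_lambda1(x, m, J, mean_period, dt, fit_len):
    # eqs. (4)-(6): delay embedding of the scalar series x
    N = len(x)
    M = N - (m - 1) * J
    X = np.array([x[i : i + (m - 1) * J + 1 : J] for i in range(M)])
    # eqs. (7)-(8): nearest neighbor with a temporal-separation constraint
    nbr = np.empty(M, dtype=int)
    for j in range(M):
        d = np.linalg.norm(X - X[j], axis=1)
        d[np.abs(np.arange(M) - j) <= mean_period] = np.inf
        nbr[j] = np.argmin(d)
    # follow each pair for fit_len steps, recording ln d_j(i)
    logd = np.full((M, fit_len), np.nan)
    for j in range(M):
        k = nbr[j]
        steps = min(M - j, M - k, fit_len)
        sep = np.linalg.norm(X[j:j + steps] - X[k:k + steps], axis=1)
        good = sep > 0
        logd[j, :steps][good] = np.log(sep[good])
    # eq. (13): average over j, then a least-squares slope over i
    y = np.nanmean(logd, axis=0) / dt
    return np.polyfit(np.arange(fit_len), y, 1)[0]

# demo on the chaotic logistic map (true lambda1 = ln 2 ~ 0.693)
x = np.empty(2000)
x[0] = 0.3
for n in range(1999):
    x[n + 1] = 4.0 * x[n] * (1.0 - x[n])

lam = rosenstein_lambda1(x, m=2, J=1, mean_period=10, dt=1.0, fit_len=5)
print(lam)   # positive, of order ln 2
```

With such a short fit region the estimate is rough, but it illustrates the key point of the section: averaging the log-divergence curves over all pairs before fitting, rather than normalizing each curve by \(d_j(i)\).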
The new algorithm for calculating largest Lyapunov exponents is outlined in fig. 1. This method is easy to implement and fast because it uses a simple measure of exponential divergence that circumvents the need to approximate the tangent map. The algorithm is also attractive from a practical standpoint because it does not require large data sets and it simultaneously yields the correlation dimension (discussed in section 5.5). Furthermore, the method is accurate for small data sets because it takes advantage of all the available data. In the next section, we present the results for several dynamical systems. Figure 1: Flowchart of the practical algorithm for calculating largest Lyapunov exponents. ## 4 Experimental results Table 1 summarizes the chaotic systems primarily examined in this paper. The differential equations were solved numerically using a fourth-order Runge-Kutta integration with a step size equal to \(\Delta t\) as given in table 1. For each system, the initial point was chosen near the attractor and the transient points were discarded. In all cases, the \(x\)-coordinate time series was used to reconstruct the dynamics. Fig. 2 shows a typical plot (solid curve) of \(\langle\ln d_{j}(i)\rangle\) versus \(i\,\Delta t\); the dashed line has a slope equal to the theoretical value of \(\lambda_{1}\). After a short transition, there is a long linear region that is used to extract the largest Lyapunov exponent. The curve saturates at longer times since the system is bounded in phase space and the average divergence cannot exceed the "length" of the attractor. The remainder of this section contains tabulated results from our algorithm under different conditions. The corresponding plots are meant to give the reader qualitative information about the facility of extracting \(\lambda_{1}\) from the data. That is, the more prominent the linear region, the easier one can extract the correct slope. (Repeatability is discussed in section 5.2.)
### Embedding dimension Since we normally have no a priori knowledge concerning the dimension of a system, it is imperative that we evaluate our method for different embedding dimensions. Table 2 and fig. 3 show our findings for several values of \(m\). In all but three cases (\(m=1\) for the Hénon, Lorenz and Rössler systems), the error was less than \(\pm 10\%\), and most errors were less than \(\pm 5\%\). It is apparent that satisfactory results are obtained only when \(m\) is at least equal to the topological dimension of the system, i.e., \(m\geq n\). This is due to the fact that chaotic systems are effectively stochastic when embedded in a phase space that is too small to accommodate the true dynamics. Notice that the algorithm performs quite well when \(m\) is below the Takens criterion. Therefore, it seems one may choose the smallest embedding dimension that yields a convergence of the results. \begin{table} \begin{tabular}{l l l l l} System [ref.] & Equations & Parameters & \(\Delta t\) (s) & Expected \(\lambda_{1}\) [ref.] \\ Logistic [15] & \(x_{i+1}=\mu x_{i}(1-x_{i})\) & \(\mu=4.0\) & 1 & 0.693 [15] \\ Hénon [22] & \(x_{i+1}=1-ax_{i}^{2}+y_{i}\) & \(a=1.4\) & 1 & 0.418 [39] \\ & \(y_{i+1}=bx_{i}\) & \(b=0.3\) & & \\ Lorenz [24] & \(\dot{x}=\sigma(y-x)\) & \(\sigma=16.0\) & 0.01 & 1.50 [39] \\ & \(\dot{y}=x(R-z)-y\) & \(R=45.92\) & & \\ & \(\dot{z}=xy-bz\) & \(b=4.0\) & & \\ Rössler [31] & \(\dot{x}=-y-z\) & \(a=0.15\) & 0.10 & 0.090 [39] \\ & \(\dot{y}=x+ay\) & \(b=0.20\) & & \\ & \(\dot{z}=b+z(x-c)\) & \(c=10.0\) & & \\ \end{tabular} \end{table} Table 1: Chaotic dynamical systems with theoretical values for the largest Lyapunov exponent, \(\lambda_{1}\). The sampling period is denoted by \(\Delta t\). Figure 2: Typical plot of \(\langle\ln(\text{divergence})\rangle\) versus time for the Lorenz attractor. The solid curve is the calculated result; the slope of the dashed curve is the expected result.
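The time series in Table 1 are straightforward to regenerate. A sketch (our addition) of the fourth-order Runge-Kutta integration for the Lorenz system, using the parameters and step size from the table; the initial point and transient length are illustrative choices:

```python
import numpy as np

# Lorenz system with the Table 1 parameters, integrated by classical RK4.
sigma, R, b = 16.0, 45.92, 4.0
dt = 0.01

def lorenz(s):
    x, y, z = s
    return np.array([sigma * (y - x), x * (R - z) - y, x * y - b * z])

def rk4_step(s, h):
    k1 = lorenz(s)
    k2 = lorenz(s + 0.5 * h * k1)
    k3 = lorenz(s + 0.5 * h * k2)
    k4 = lorenz(s + h * k3)
    return s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.array([1.0, 1.0, 1.0])
for _ in range(1000):          # discard the transient
    s = rk4_step(s, dt)

xs = []                        # x-coordinate time series, as in the paper
for _ in range(5000):
    s = rk4_step(s, dt)
    xs.append(s[0])
xs = np.array(xs)
```

The resulting `xs` plays the role of the \(N\)-point scalar series \(\{x_1,\ldots,x_N\}\) fed to the reconstruction step.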
### Length of time series Next we consider the performance of our algorithm for time series of various lengths. As shown in table 3 and fig. 4, the present method also works well when \(N\) is small (\(N=100\)–1000 for the examined systems). Again, the error was less than \(\pm 10\%\) in almost all cases. (The greatest difficulty occurs with the Rössler attractor. For this system, we also found a 20–25% negative bias in the results for \(N=3000\)–5000.) To our knowledge, the lower limit of \(N\) used in each case is less than the smallest value reported in the literature. (The only exception is due to Briggs [7], who examined the Lorenz system with \(N=600\). However, Briggs reported errors for \(\lambda_{1}\) that ranged from 54% to 132% for this particular time series length.) We also point out that the literature [1, 9, 13, 15, 35] contains results for values of \(N\) that are an order of magnitude greater than the largest values used here. It is important to mention that quantitative analyses of chaotic systems are usually sensitive to not only the data size (in samples), but also the observation time (in seconds). Hence, we examined the interdependence of \(N\) and \(N\,\Delta t\) for the Lorenz system. Fig. 5 shows the output of our algorithm for three different sampling conditions: (1) \(N=5000\), \(\Delta t=0.01\) s (\(N\,\Delta t=50\) s); (2) \(N=1000\), \(\Delta t=0.01\) s (\(N\,\Delta t=10\) s); and (3) \(N=1000\), \(\Delta t=0.05\) s (\(N\,\Delta t=50\) s). The latter two Fig. 3: Effects of embedding dimension. For each plot, the solid curves are the calculated results, and the slope of the dashed curve is the expected result. See table 2 for details. (a) Logistic map. (b) Hénon attractor. (c) Lorenz attractor. (d) Rössler attractor. time series were derived from the former by using the first 1000 points and every fifth point, respectively. 
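The two derived series amount to simple slicing of the original record (shown here on a placeholder array standing in for the 5000-point Lorenz series):

```python
import numpy as np

x1 = np.zeros(5000)   # stand-in for case (1): N = 5000, dt = 0.01 s
dt1 = 0.01

x2 = x1[:1000]        # case (2): first 1000 points, same dt; N*dt = 10 s
x3 = x1[::5]          # case (3): every 5th point; N*dt stays at 50 s
dt3 = 5 * dt1         # effective sampling period becomes 0.05 s
print(len(x2), len(x3))  # 1000 1000
```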
As expected, the best results were obtained with a relatively long observation time and closely-spaced samples (case (1)). However, we saw comparable results with the long observation time and widely-spaced samples (case (3)). As long as \(\Delta t\) is small enough to ensure a minimum number of points per orbit of the attractor (approximately \(n\) to \(10n\) points [39]), it is better to decrease \(N\) by reducing the sampling rate and not the observation time. ### Reconstruction delay As commented in section 3, determining the proper reconstruction delay is still an open problem. For this reason, it is necessary to test our algorithm with different values of \(J\). (See table 4 and fig. 6.) Since discrete maps are most faithfully reconstructed with a delay equal to one, it is not surprising that the best results were seen with the lag equal to one for the logistic and Hénon systems (errors of \(-1.7\%\) and \(-2.2\%\), respectively). For the Lorenz and Rössler systems, the algorithm performed well (error \(\leq 7\%\)) with all lags except the extreme ones (\(J=1\), 41 for Lorenz; \(J=2\), 26 for Rössler). Thus, we expect satisfactory results whenever the lag is determined using any common method such as those based on the autocorrelation function or the correlation sum. Notice that the smallest errors were obtained for the lag where the autocorrelation function drops to \(1-1/\mathrm{e}\) of its initial value. ### Additive noise Next, we consider the effects of additive noise, i.e., measurement or instrumentation noise. 
This \begin{table} \begin{tabular}{l r r r r r} \hline System & \(N\) & \(J\) & \(m\) & Calculated \(\lambda_{1}\) & \% error \\ \hline Logistic & 500 & 1 & 1 & \(0.675\) & \(-2.6\) \\ & & & 2 & \(0.681\) & \(-1.7\) \\ & & & 3 & \(0.680\) & \(-1.9\) \\ & & & 4 & \(0.680\) & \(-1.9\) \\ & & & 5 & \(0.651\) & \(-6.1\) \\ Hénon & 500 & 1 & 1 & \(0.195\) & \(-53.3\) \\ & & & 2 & \(0.409\) & \(-2.2\) \\ & & & 3 & \(0.406\) & \(-2.9\) \\ & & & 4 & \(0.399\) & \(-4.5\) \\ & & & 5 & \(0.392\) & \(-6.2\) \\ Lorenz & 5000 & 11 & 1 & – & \\ & & & 3 & \(1.531\) & \(2.1\) \\ & & & 5 & \(1.498\) & \(-0.1\) \\ & & & 7 & \(1.562\) & \(4.1\) \\ & & & 9 & \(1.560\) & \(4.0\) \\ Rössler & 2000 & 8 & 1 & – & \\ & & & 3 & \(0.0879\) & \(-2.3\) \\ & & & 5 & \(0.0864\) & \(-4.0\) \\ & & & 7 & \(0.0853\) & \(-5.2\) \\ & & & 9 & \(0.0835\) & \(-7.2\) \\ \hline \end{tabular} \end{table} Table 2: Experimental results for several embedding dimensions. The number of data points, reconstruction delay, and embedding dimension are denoted by \(N\), \(J\), and \(m\), respectively. We were unable to extract \(\lambda_{1}\) with \(m\) equal to one for the Lorenz and Rössler systems because the reconstructed attractors are extremely noisy in a one-dimensional embedding space. 
\begin{table} \begin{tabular}{l r r r r r} \hline System & \(N\) & \(J\) & \(m\) & Calculated \(\lambda_{1}\) & \% error \\ \hline Logistic & 100 & 1 & 2 & \(0.659\) & \(-4.9\) \\ & 200 & & & \(0.705\) & \(1.7\) \\ & 300 & & & \(0.695\) & \(0.3\) \\ & 400 & & & \(0.692\) & \(-0.1\) \\ & 500 & & & \(0.686\) & \(-1.0\) \\ Hénon & 100 & 1 & 2 & \(0.426\) & \(1.9\) \\ & 200 & & & \(0.416\) & \(-0.5\) \\ & 300 & & & \(0.421\) & \(0.7\) \\ & 400 & & & \(0.409\) & \(-2.2\) \\ & 500 & & & \(0.412\) & \(-1.4\) \\ Lorenz & 1000 & 11 & 3 & \(1.751\) & \(16.7\) \\ & 2000 & & & \(1.345\) & \(-10.3\) \\ & 3000 & & & \(1.372\) & \(-8.5\) \\ & 4000 & & & \(1.392\) & \(-7.2\) \\ & 5000 & & & \(1.523\) & \(1.5\) \\ Rössler & 400 & 8 & 3 & \(0.0351\) & \(-61.0\) \\ & 800 & & & \(0.0655\) & \(-27.2\) \\ & 1200 & & & \(0.0918\) & \(2.0\) \\ & 1600 & & & \(0.0984\) & \(9.3\) \\ & 2000 & & & \(0.0879\) & \(-2.3\) \\ \hline \end{tabular} \end{table} Table 3: Experimental results for several time series lengths. The number of data points, reconstruction delay, and embedding dimension are denoted by \(N\), \(J\), and \(m\), respectively. was accomplished by examining several time series produced by a superposition of white noise and noise-free data (noise-free up to the computer precision). Before superposition, the white noise was scaled by an appropriate factor in order to achieve a desired signal-to-noise ratio (SNR). The SNR is the ratio of the power (or, equivalently, the variance) in the noise-free signal to that of the pure-noise signal. A signal-to-noise ratio greater than about 1000 can be regarded as low noise and an SNR less than about 10 as high noise. The results are shown in table 5 and fig. 7. We expect satisfactory estimates of \(\lambda_{1}\) except in extremely noisy situations. With low noise, the Fig. 4: Effects of time series lengths. For each plot, the solid curves are the calculated results, and the slope of the dashed curve is the expected result. 
See table 3 for details. (a) Logistic map. (b) Hénon attractor. (c) Lorenz attractor. (d) Rössler attractor. Fig. 5: Results for the Lorenz system using three different sampling conditions. Case (1): \(N=5000\), \(\Delta t=0.01\) s (\(N\,\Delta t=50\) s); case (2): \(N=1000\), \(\Delta t=0.01\) s (\(N\,\Delta t=10\) s); and case (3): \(N=1000\), \(\Delta t=0.05\) s (\(N\,\Delta t=50\) s). The slope of the dashed curve is the expected result. error was smaller than \(\pm\)10% in each case. At moderate noise levels (SNR ranging from about 100 to 1000), the algorithm performed reasonably well with an error that was generally near \(\pm\)25%. As expected, the poorest results were seen with the highest noise levels (SNR less than or equal to 10). (We believe that the improved performance with the logistic map and low signal-to-noise ratios is merely coincidental. The reader should equate the shortest linear regions in fig. 7 with the highest noise and greatest uncertainty in estimating \(\lambda_{1}\).) It seems one cannot expect to estimate the largest Lyapunov exponent in high-noise environments; however, the clear presence of a positive slope still affords one the qualitative confirmation of a positive exponent (and chaos). It is important to mention that the adopted noise model represents a "worst-case" scenario because white noise contaminates a signal across an infinite bandwidth. (Furthermore, we consider signal-to-noise ratios that are substantially lower than most values previously reported in the literature.) Fortunately, some of the difficulties are remedied by filtering, which is expected to preserve the exponential divergence of nearest neighbors [39]. Whenever we remove noise while leaving the signal intact, we can expect an improvement in system predictability and, hence, in our ability to detect chaos. 
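The noise superposition behind table 5 amounts to scaling white noise against the clean signal's variance before adding it; a sketch (function name and toy signal are ours):

```python
import numpy as np

def add_noise(clean, snr, rng):
    """Superpose white noise scaled so that
    var(clean) / var(noise) equals the target SNR."""
    noise = rng.standard_normal(len(clean))
    noise *= np.sqrt(np.var(clean) / (snr * np.var(noise)))
    return clean + noise

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0.0, 20.0 * np.pi, 2000))
noisy = add_noise(clean, snr=100.0, rng=rng)
achieved = np.var(clean) / np.var(noisy - clean)
print(round(achieved, 3))  # 100.0
```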
In practice, however, caution is warranted because the underlying signal may have some frequency content in the stopband or the filter may substantially alter the phase in the passband. ### Two positive Lyapunov exponents As described in section 2, it is unnecessary to preserve phase space orientation when calculating the largest Lyapunov exponent. In order to provide experimental verification of this theory, we consider the performance of our algorithm with two systems that possess more than one positive exponent: Rössler-hyperchaos [30] and Mackey-Glass [25]. (See table 6 for details.) The results are shown in table 7 and fig. 8. For both systems, the errors were typically less than \(\pm\)10%. From these results, we conclude that the algorithm measures exponential divergence along the most unstable manifold and not along some other Lyapunov direction. However, notice the predominance of a negative bias in the errors presented in sections 4.1-4.4. We believe that over short time scales, some nearest neighbors explore Lyapunov directions other than that of the largest Lyapunov exponent. Thus, a small underestimation (less than 5%) of \(\lambda_{1}\) is expected. ### Non-chaotic systems As stated earlier, distinguishing deterministic chaos from noise has become an important problem. 
It follows that effective algorithms for detecting chaos must accurately characterize both chaotic and non-chaotic systems; a reliable algorithm is not "fooled" by difficult systems such as correlated noise. \begin{table} \begin{tabular}{l c c c c c} System & \(N\) & \(J\) & \(m\) & Calculated \(\lambda_{1}\) & \% error \\ \hline Logistic & 500 & 1* & 2 & \(0.681\) & \(-1.7\) \\ & & 2 & & \(0.678\) & \(-2.2\) \\ & & 3 & & \(0.672\) & \(-3.0\) \\ & & 4 & & \(0.563\) & \(-18.8\) \\ & & 5 & & \(0.622\) & \(-10.2\) \\ Hénon & 500 & 1* & 2 & \(0.409\) & \(-2.2\) \\ & & 2 & & \(0.406\) & \(-2.9\) \\ & & 3 & & \(0.391\) & \(-6.5\) \\ & & 4 & & \(0.338\) & \(-19.1\) \\ & & 5 & & \(0.330\) & \(-21.1\) \\ Lorenz & 5000 & 1 & 3 & \(1.640\) & \(9.3\) \\ & & 11* & & \(1.561\) & \(4.1\) \\ & & 21 & & \(1.436\) & \(-4.3\) \\ & & 31 & & \(1.423\) & \(-5.1\) \\ & & 41 & & \(1.321\) & \(-11.9\) \\ Rössler & 2000 & 2 & 3 & \(0.0699\) & \(-22.3\) \\ & & 8* & & \(0.0873\) & \(-3.0\) \\ & & 14 & & \(0.0864\) & \(-4.0\) \\ & & 20 & & \(0.0837\) & \(-7.0\) \\ & & 26 & & \(0.0812\) & \(-9.8\) \\ \hline \end{tabular} \end{table} Table 4: Experimental results for several reconstruction delays. The number of data points, reconstruction delay, and embedding dimension are denoted by \(N\), \(J\), and \(m\), respectively. The asterisks denote the values of \(J\) that were obtained by locating the lag where the autocorrelation function drops to \(1-1/\mathrm{e}\) of its initial value. Hence, we further establish the utility of our method by examining its performance with the following non-chaotic systems: two-torus, white noise, bandlimited noise, and "scrambled" Lorenz. For each system, a 2000-point time series was generated. The two-torus is an example of a quasiperiodic, deterministic system. 
The corresponding time series, \(x(i)\), was created by a superposition of two sinusoids with incommensurate frequencies: \[x(i)=\sin(2\pi f_{1}\cdot i\;\Delta t)+\sin(2\pi f_{2}\cdot i\;\Delta t)\;, \tag{14}\] where \(f_{1}=1.732051\approx\sqrt{3}\;\mathrm{Hz}\), \(f_{2}=2.236068\approx\sqrt{5}\;\mathrm{Hz}\), and the sampling period was \(\Delta t=0.01\;\mathrm{s}\). White noise and bandlimited noise are stochastic systems that are analogous to discrete and continuous chaotic systems, respectively. The "scrambled" Lorenz also represents a continuous stochastic system, and the data set was generated by randomizing the phase information from the Lorenz attractor. This procedure yields a time series of correlated noise with spectral characteristics identical to those of the Lorenz attractor. For quasiperiodic and stochastic systems we expect flat plots of \(\langle\ln d_{j}(i)\rangle\) versus \(i\;\Delta t\). That is, on average the nearest neighbors should neither diverge nor converge. Additionally, with the stochastic systems we expect an initial "jump" from a small separation at \(t=0\). The results are shown in fig. 9, and as expected, the curves are mostly flat. However, notice the regions that could be mistaken as appropriate for extracting a positive Lyapunov exponent. Fortunately, our empirical Figure 6: Effects of reconstruction delay. For each plot, the solid curves are the calculated results, and the slope of the dashed curve is the expected result. See table 4 for details. (a) Logistic map. (b) Hénon attractor. (c) Lorenz attractor. (d) Rössler attractor. Figure 8: Results for systems with two positive Lyapunov exponents. For each plot, the solid curves are the calculated results, and the slope of the dashed curve is the expected result. See table 7 for details. (a) Rössler-hyperchaos. (b) Mackey-Glass. Figure 7: Effects of noise level. For each plot, the solid curves are the calculated results, and the slope of the dashed curve is the expected result. 
See table 5 for details. (a) Logistic map. (b) Hénon attractor. (c) Lorenz attractor. (d) Rössler attractor. results suggest that one may still detect non-chaotic systems for the following reasons: 1. The anomalous scaling region is not linear since the divergence of nearest neighbors is not exponential. 2. For stochastic systems, the anomalous scaling region flattens with increasing embedding dimension. Finite-dimensional systems exhibit a convergence once the embedding dimension is large enough to accommodate the dynamics, whereas stochastic systems fail to show a convergence because they appear more ordered in higher embedding spaces. With the two-torus, we attribute the lack of convergence to the finite precision "noise" in the data set. (Notice the small average divergence even at \(i\,\Delta t=1\).) Strictly speaking, we can only distinguish high-dimensional systems from low-dimensional ones, although in most applications a high-dimensional system may be considered random, i.e., infinite-dimensional. \begin{table} \begin{tabular}{l l l l l} System [ref.] & Equations & Parameters & \(\Delta t\) (s) & Expected \(\lambda_{1}\), \(\lambda_{2}\) [ref.] \\ Rössler-hyperchaos [30] & \(\dot{x}=-y-z\) & \(a=0.25\) & \(0.1\) & \(\lambda_{1}=0.111\) [39] \\ & \(\dot{y}=x+ay+w\) & \(b=3.0\) & & \(\lambda_{2}=0.021\) [39] \\ & \(\dot{z}=b+xz\) & \(c=0.05\) & & \\ & \(\dot{w}=cw-dz\) & \(d=0.5\) & & \\ Mackey-Glass [25] & \(\dot{x}=\frac{ax(t-s)}{1+[x(t-s)]^{c}}-bx(t)\) & \(a=0.2\) & \(0.75\) & \(\lambda_{1}=4.37\)E\(-3\) [39] \\ & & \(b=0.1\) & & \(\lambda_{2}=1.82\)E\(-3\) [39] \\ & & \(c=10.0\) & & \\ & & \(s=31.8\) & & \\ \end{tabular} \end{table} Table 6: Chaotic systems with two positive Lyapunov exponents (\(\lambda_{1}\), \(\lambda_{2}\)). To obtain a better representation of the dynamics, the numerical integrations were performed using a step size 100 times smaller than the sampling period, \(\Delta t\). 
The resulting time series were then downsampled by a factor of 100 to achieve the desired \(\Delta t\). \begin{table} \begin{tabular}{l r r r r r r} \hline System & \(N\) & \(J\) & \(m\) & SNR & Calculated \(\lambda_{1}\) & \% error \\ \hline Logistic & 500 & 1 & 2 & 1 & 0.704 & 1.6 \\ & & & & 10 & 0.779 & 12.4 \\ & & & & 100 & 0.856 & 23.5 \\ & & & & 1000 & 0.621 & \(-10.4\) \\ & & & & 10000 & 0.628 & \(-9.4\) \\ Hénon & 500 & 1 & 2 & 1 & 0.643 & 53.8 \\ & & & & 10 & 0.631 & 51.0 \\ & & & & 100 & 0.522 & 24.9 \\ & & & & 1000 & 0.334 & \(-20.1\) \\ & & & & 10000 & 0.385 & \(-7.9\) \\ Lorenz & 5000 & 11 & 3 & 1 & 0.645 & \(-57.0\) \\ & & & & 10 & 1.184 & \(-21.1\) \\ & & & & 100 & 1.110 & \(-26.0\) \\ & & & & 1000 & 1.273 & \(-15.1\) \\ & & & & 10000 & 1.470 & \(-2.0\) \\ Rössler & 2000 & 8 & 3 & 1 & 0.0106 & \(-88.2\) \\ & & & & 10 & 0.0394 & \(-56.7\) \\ & & & & 100 & 0.0401 & \(-55.4\) \\ & & & & 1000 & 0.0659 & \(-26.8\) \\ & & & & 10000 & 0.0836 & \(-7.1\) \\ \end{tabular} \end{table} Table 5: Experimental results for several noise levels. The number of data points, reconstruction delay, and embedding dimension are denoted by \(N\), \(J\), and \(m\), respectively. The signal-to-noise ratio (SNR) is the ratio of the power in the noise-free signal to that of the pure-noise signal. ## 5 Discussion ### Eckmann-Ruelle requirement In a recent paper, Eckmann and Ruelle [14] discuss the data-set size requirement for estimating dimensions and Lyapunov exponents. Their analysis for Lyapunov exponents proceeds as follows. When measuring the rate of divergence of trajectories with nearby initial conditions, one requires a number of neighbors for a given reference point. These neighbors should lie in a ball of radius \(r\), where \(r\) is small with respect to the diameter (\(d\)) of the reconstructed attractor. Thus, \[\frac{r}{d}=\rho\ll 1. \tag{15}\] (Eckmann and Ruelle suggest \(\rho\) to be a maximum of about 0.1.) 
Furthermore, the number of candidates for neighbors, \(\Gamma(r)\), should be much greater than one: \[\Gamma(r)\gg 1. \tag{16}\] Next, recognize that \[\Gamma(r)\approx\mathrm{const.}\times r^{D}\, \tag{17}\] and \[\Gamma(d)\approx N\, \tag{18}\] where \(D\) is the dimension of the attractor, and \(N\) is the number of data points. Using eqs. (16)-(18), we obtain the following relation: Figure 9: Effects of embedding dimension for non-chaotic systems. (a) Two-torus. (b) White noise. (c) Bandlimited noise. (d) "Scrambled" Lorenz. \[\Gamma(r)\approx N\Big{(}\frac{r}{d}\Big{)}^{D}\gg 1\;. \tag{19}\] Finally, eqs. (15) and (19) are combined to give the Eckmann-Ruelle requirement for Lyapunov exponents: \[\log N>D\,\log(1/\rho)\;. \tag{20}\] For \(\rho=0.1\), eq. (20) directs us to choose \(N\) such that \[N>10^{D}\;. \tag{21}\] This requirement was met with all time series considered in this paper. Notice that any rigorous definition of "small data set" should be a function of dimension. However, for comparative purposes we regard a small data set as one that is small with respect to those previously considered in the literature. ### Repeatability When using the current approach for estimating largest Lyapunov exponents, one is faced with the following issue of repeatability: Can one consistently locate the region for extracting \(\lambda_{1}\) without a guide, i.e., without a priori knowledge of the correct slope in the linear region? To address this issue, we consider the performance of our algorithm with multiple realizations of the Lorenz attractor. Three 5000-point time series from the Lorenz attractor were generated by partitioning one 15000-point data set into disjoint time series. Fig. 10 shows the results using a visual format similar to that first used by Abraham et al. [2] for estimating dimensions. 
Each curve is a plot of slope versus time, where the slope is calculated from a least-squares fit to 51-point segments of the \(\langle\ln d_{j}(i)\rangle\) versus \(i\,\Delta t\) curve. We observe a clear and repeatable plateau from about \(i\,\Delta t=0.6\) to about \(i\,\Delta t=1.6\). By using this range to define the region for extracting \(\lambda_{1}\), we obtain a reliable estimate of the largest Lyapunov exponent: \(\lambda_{1}=1.57\pm 0.03\). (Recall that the theoretical value is 1.50.) ### Relation to the Sato algorithm As stated in section 3, the current algorithm is principally based on the work of Sato et al. [33]. More specifically, our approach can be considered as a generalization of the Sato algorithm. To show this, we first rewrite eq. (10) using \(\langle\;\;\rangle\) to denote the average over all values of \(j\): \[\lambda_{1}(i,\,k)=\frac{1}{k\,\Delta t}\left\langle\ln\,\frac{d_{j}(i+k)}{d_{ j}(i)}\right\rangle. \tag{22}\] This equation is then rearranged and expressed in terms of the output from the current algorithm, \(y(i)\) (from eq. (13)): \[\lambda_{1}(i,\,k) =\frac{1}{k\,\Delta t}\left[\langle\ln\,d_{j}(i+k)\rangle-\langle \ln\,d_{j}(i)\rangle\right]\] \[\approx\frac{1}{k}\,\left[y(i+k)-y(i)\right]. \tag{23}\] Figure 10: Plot of the local slope \(\mathrm{d}\langle\ln d_{j}(i)\rangle/\mathrm{d}(i\,\Delta t)\) versus \(i\,\Delta t\) using our algorithm with three 5000-point realizations of the Lorenz attractor. Eq. (23) is interpreted as a finite-differences numerical differentiation of \(y(i)\), where \(k\) specifies the size of the differentiation interval. Next, we attempt to derive \(y(i)\) from the output of the Sato algorithm by summing \(\lambda_{1}(i,k)\). That is, we define \(y^{\prime}(i^{\prime})\) as \[y^{\prime}(i^{\prime}) = \sum_{i=0}^{i^{\prime}}\lambda_{1}(i,k) \tag{24}\] \[= \frac{1}{k}\left(\sum_{i=0}^{i^{\prime}}y(i+k)-\sum_{i=0}^{i^{ \prime}}y(i)\right).\] By manipulating this equation, we can show that eq. 
(23) is not invertible: \[y^{\prime}(i^{\prime}) = \frac{1}{k}\left(\sum_{i=0}^{i^{\prime}+k}y(i)-\sum_{i=0}^{k-1}y( i)-\sum_{i=0}^{i^{\prime}}y(i)\right) \tag{25}\] \[= \frac{1}{k}\left(\sum_{i=i^{\prime}+1}^{i^{\prime}+k}y(i)-\sum_{i =0}^{k-1}y(i)\right)\] \[= \frac{1}{k}\sum_{i=i^{\prime}+1}^{i^{\prime}+k}y(i)+\text{const}.\] If we disregard the constant in eq. (25), \(y^{\prime}(i^{\prime})\) is equivalent to \(y(i)\) smoothed by a \(k\)-point moving-average filter. The difficulty with the Sato algorithm is that the proper value of \(k\) is not usually apparent a priori. When choosing \(k\), one must consider the tradeoff between long, noisy plateaus of \(\lambda_{1}(i,k)\) (for small \(k\)) and short, smooth plateaus (for large \(k\)). In addition, since the transformation from \(y(i)\) to \(\lambda_{1}(i,k)\) is not invertible, choosing \(k\) by trial-and-error requires the repeated evaluation of eq. (22). With our algorithm, however, smoothing is usually unnecessary, and \(\lambda_{1}\) is extracted from a least-squares fit to the longest possible linear region. For those cases where smoothing is needed, a long filter length may be chosen since one knows the approximate location of the plateau after examining a plot of \(\langle\ln d_{j}(i)\rangle\) versus \(i\,\Delta t\). (For example, one may choose a filter length equal to about one-half the length of the noisy linear region.) ### Computational improvements In some instances, the speed of the method may be increased by measuring the separation of nearest neighbors using a smaller embedding dimension. For example, we reconstructed the Lorenz attractor in a three-dimensional phase space and located the nearest neighbors. The separations of those neighbors were then measured in a one-dimensional space by comparing only the first coordinates of each point. There was nearly a threefold savings in time for this portion of the algorithm. 
However, additional fluctuations were seen in the plots of \(\langle\ln d_{j}(i)\rangle\) versus \(i\,\Delta t\), making it more difficult to locate the region for extracting the slope. Similarly, the computational efficiency of the algorithm may be improved by disregarding every other reference point. We observed that many temporally adjacent reference points also have temporally adjacent nearest neighbors. Thus, two pairs of trajectories may exhibit identical divergence patterns (excluding a time shift of one sampling period), and it may be unnecessary to incorporate the effects of both pairs. Note that this procedure still satisfies the Eckmann-Ruelle requirement by maintaining the pool of nearest neighbors. ### Simultaneous calculation of correlation dimension In addition to calculating the largest Lyapunov exponent, the present algorithm allows one to calculate the correlation dimension, \(D_{2}\). Thus, one sequence of computations will yield an estimate of both the level of chaos and the system complexity. This is accomplished by taking advantage of the numerous distance calculations performed during the nearest-neighbors search. The Grassberger-Procaccia algorithm [20] estimates dimension by examining the scaling properties of the correlation sum, \(C_{m}(r)\). For a given embedding dimension, \(m\), \(C_{m}(r)\) is defined as \[C_{m}(r)=\frac{2}{M(M-1)}\,\sum_{i<k}\theta(r-\|\mathbf{X}_{i}-\mathbf{X}_{k}\|)\,, \tag{26}\] where \(\theta(\,\cdot\,)\) is the Heaviside function. Therefore, \(C_{m}(r)\) is interpreted as the fraction of pairs of points that are separated by a distance less than or equal to \(r\). Notice that the previous equation and eq. (7) of our algorithm require the same distance computations (disregarding the constraint in eq. (8)). By exploiting this redundancy, we obtain a more complete characterization of the system using a negligible amount of additional computation. 
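A direct \(O(M^{2})\) sketch of eq. (26) makes the reuse of the pairwise distances explicit; the toy point cloud below is ours, chosen so the expected scaling exponent is known:

```python
import numpy as np

def correlation_sum(X, r):
    """C_m(r), eq. (26): fraction of distinct point pairs whose
    separation is less than or equal to r."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    iu = np.triu_indices(len(X), k=1)   # each pair counted once
    return np.mean(d[iu] <= r)

# D2 is the small-r slope of log C_m(r) versus log r; for points
# distributed uniformly on the unit square the slope is near 2.
X = np.random.default_rng(2).random((500, 2))
r = np.logspace(-1.7, -0.7, 5)
C = np.array([correlation_sum(X, ri) for ri in r])
D2 = np.polyfit(np.log(r), np.log(C), 1)[0]
```

In practice one would reuse the distance matrix already computed for the nearest-neighbor search rather than recomputing it per radius, which is the redundancy noted above.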
## 6 Summary We have presented a new method for calculating the largest Lyapunov exponent from experimental time series. The method follows directly from the definition of the largest Lyapunov exponent and is accurate because it takes advantage of all the available data. The algorithm is fast because it uses a simple measure of exponential divergence and works well with small data sets. In addition, the current approach is easy to implement and robust to changes in the following quantities: embedding dimension, size of data set, reconstruction delay, and noise level. Furthermore, one may use the algorithm to calculate simultaneously the correlation dimension. ## Acknowledgements This work was supported by the Rehabilitation Research and Development Service of Veterans Affairs. ## References * [1] H.D.I. Abarbanel, R. Brown and J.B. Kadtke, Prediction in chaotic nonlinear systems: methods for time series with broadband Fourier spectra, Phys. Rev. A 41 (1990) 1782. * [2] N.B. Abraham, A.M. Albano, B. Das, G. De Guzman, S. Yong, R.S. Gioggia, G.P. Puccioni and J.R. Tredicce, Calculating the dimension of attractors from small data sets, Phys. Lett. A 114 (1986) 217. * [3] A.M. Albano, J. Muench, C. Schwartz, A.I. Mees and P.E. Rapp, Singular-value decomposition and the Grassberger-Procaccia algorithm, Phys. Rev. A 38 (1988) 3017. * [4] A.M. Albano, A. Passamante and M.E. Farrell, Using higher-order correlations to define an embedding window, Physica D 54 (1991) 85. * [5] G. Benettin, C. Froeschle and J.P. Scheidecker, Kolmogorov entropy of a dynamical system with increasing number of degrees of freedom, Phys. Rev. A 19 (1979) 2454. * [6] G. Benettin, L. Galgani and J.-M. Strelcyn, Kolmogorov entropy and numerical experiments, Phys. Rev. A 14 (1976) 2338. * [7] K. Briggs, An improved method for estimating Lyapunov exponents of chaotic time series, Phys. Lett. A 151 (1990) 27. * [8] D.S. Broomhead and G.P. 
King, Extracting qualitative dynamics from experimental data, Physica D 20 (1986) 217. * [9] R. Brown, P. Bryant and H.D.I. Abarbanel, Computing the Lyapunov spectrum of a dynamical system from observed time series, Phys. Rev. A 43 (1991) 2787. * [10] M. Casdagli, Nonlinear prediction of chaotic time series, Physica D 35 (1989) 335. * [11] P. Chen, Empirical and theoretical evidence of economic chaos, Sys. Dyn. Rev. 4 (1988) 81. * [12] J. Deppisch, H.-U. Bauer and T. Geisel, Hierarchical training of neural networks and prediction of chaotic time series, Phys. Lett. A 158 (1991) 57. * [13] J.-P. Eckmann, S.O. Kamphorst, D. Ruelle and S. Ciliberto, Lyapunov exponents from time series, Phys. Rev. A 34 (1986) 4971. * [14] J.-P. Eckmann and D. Ruelle, Fundamental limitations for estimating dimensions and Lyapunov exponents in dynamical systems, Physica D 56 (1992) 185. * [15] J.-P. Eckmann and D. Ruelle, Ergodic theory of chaos and strange attractors, Rev. Mod. Phys. 57 (1985) 617. * [16] S. Ellner, A.R. Gallant, D. McCaffrey and D. Nychka, Convergence rates and data requirements for Jacobian-based estimates of Lyapunov exponents from data, Phys. Lett. A 153 (1991) 357. * [17] J.D. Farmer and J.J. Sidorowich, Predicting chaotic time series, Phys. Rev. Lett. 59 (1987) 845. * [18] G.W. Frank, T. Lookman, M.A.H. Nerenberg, C. Essex, J. Lemieux and W. Blume, Chaotic time series analysis of epileptic seizures, Physica D 46 (1990) 427. * [19] A.M. Fraser and H.L. Swinney, Independent coordinates for strange attractors from mutual information, Phys. Rev. A 33 (1986) 1134. * [20] P. Grassberger and I. Procaccia, Characterization of strange attractors, Phys. Rev. Lett. 50 (1983) 346. * [21] P. Grassberger and I. Procaccia, Estimation of the Kolmogorov entropy from a chaotic signal, Phys. Rev. A 28 (1983) 2991. * [22] M. Hénon, A two-dimensional mapping with a strange attractor, Commun. Math. Phys. 50 (1976) 69. * [23] W. Liebert and H.G. 
Schuster, Proper choice of the time delay for the analysis of chaotic time series, Phys. Lett. A 142 (1989) 107. * [24] E.N. Lorenz, Deterministic nonperiodic flow, J. Atmos. Sci. 20 (1963) 130. * [25] M.C. Mackey and L. Glass, Oscillation and chaos in physiological control systems, Science 197 (1977) 287. * [26] V.I. Oseledec, A multiplicative ergodic theorem. Lyapunov characteristic numbers for dynamical systems, Trans. Moscow Math. Soc. 19 (1968) 197. * [27] N.H. Packard, J.P. Crutchfield, J.D. Farmer and R.S. Shaw, Geometry from a time series, Phys. Rev. Lett. 45 (1980) 712. * [28] J.B. Ramsey and H.-J. Yuan, The statistical properties of dimension calculations using small data sets, Nonlinearity 3 (1990) 155. * [29] F. Rauf and H.M. Ahmed, Calculation of Lyapunov exponents through nonlinear adaptive filters, Proc. IEEE Int. Symp. on Circuits and Systems (Singapore, 1991). * [30] O.E. Rössler, An equation for hyperchaos, Phys. Lett. A 71 (1979) 155. * [31] O.E. Rössler, An equation for continuous chaos, Phys. Lett. A 57 (1976) 397. * [32] M. Sano and Y. Sawada, Measurement of the Lyapunov spectrum from a chaotic time series, Phys. Rev. Lett. 55 (1985) 1082. * [33] S. Sato, M. Sano and Y. Sawada, Practical methods of measuring the generalized dimension and the largest Lyapunov exponent in high dimensional chaotic systems, Prog. Theor. Phys. 77 (1987) 1. * [34] I. Shimada and T. Nagashima, A numerical approach to ergodic problem of dissipative dynamical systems, Prog. Theor. Phys. 61 (1979) 1605. * [35] R. Stoop and J. Parisi, Calculation of Lyapunov exponents avoiding spurious elements, Physica D 50 (1991) 89. * [36] G. Sugihara and R.M. May, Nonlinear forecasting as a way of distinguishing chaos from measurement error in time series, Nature 344 (1990) 734. * [37] F. Takens, Detecting strange attractors in turbulence, Lecture Notes in Mathematics, Vol. 898 (1981) p. 366. * [38] D.J. 
Wales, Calculating the rate loss of information from chaotic time series by forecasting, Nature 350 (1991) 485. * [39] A. Wolf, J.B. Swift, H.L. Swinney and J.A. Vastano, Determining Lyapunov exponents from a time series, Physica D 16 (1985) 285. * [40] J. Wright, Method for calculating a Lyapunov exponent, Phys. Rev. A 29 (1984) 2924. * [41] X. Zeng, R. Eykholt and R.A. Pielke, Estimating the Lyapunov-exponent spectrum from short time series of low precision, Phys. Rev. Lett. 66 (1991) 3229. # On the State Space Geometry of the Kuramoto-Sivashinsky Flow in a Periodic Domain+ Footnote †: Received by the editors October 17, 2007; accepted for publication (in revised form) by D. Barkley October 14, 2009; published electronically January 6, 2010. [http://www.siam.org/journals/siads/9-1/70562.html](http://www.siam.org/journals/siads/9-1/70562.html) Predrag Cvitanovic School of Physics, Georgia Institute of Technology, Atlanta, GA 30332-0430 (cvitanovic@physics.gatech.edu, siminos@gatech.edu). The third author was partly supported by NSF grant DMS-0807574. Ruslan L. Davidchack Department of Mathematics, University of Leicester, University Road, Leicester LE1 7RH, UK (rld8@mcs.le.ac.uk). This author was partly supported by EPSRC under grant GR/S98986/01. and Berkooz [26] offer a delightful discussion of why this system deserves study as a staging ground for studying turbulence in full-fledged Navier-Stokes boundary shear flows. Flows described by partial differential equations (PDEs) are said to be infinite-dimensional because if one writes them down as a set of ordinary differential equations (ODEs), a set of infinitely many ODEs is needed to represent the dynamics of one PDE. 
Even though their state space is thus infinite-dimensional, the long-time dynamics of viscous flows, such as Navier-Stokes, and of PDEs modeling them, such as the Kuramoto-Sivashinsky equation, exhibit apparent "low-dimensional" dynamical behavior when the dissipation is high and the spatial extent of the system small. For some of these the asymptotic dynamics is known to be confined to a finite-dimensional _inertial manifold_, though the rigorous upper bounds on its dimension are not of much use in practice. For large spatial extent the complexity of the spatial motions also needs to be taken into account. Systems whose spatial correlations decay sufficiently fast, and for which the attractor dimension and the number of positive Lyapunov exponents diverge with system size, are said [28, 42, 10] to be extensive, "spatio-temporally chaotic," or "weakly turbulent." Conversely, for small system sizes an accurate description might require a large set [20] of coupled ODEs, but the dynamics can still be low-dimensional in the sense that it is characterized by one or a few positive Lyapunov exponents. There is no wide range of scales involved, nor decay of spatial correlations, and the system is in this sense only "chaotic." For the subset of physicists and mathematicians who study idealized "fully developed," "homogeneous" turbulence, the generally accepted usage is that a "turbulent" fluid is characterized by a range of scales and an energy cascade describable by statistical assumptions [16]. What experimentalists, engineers, geophysicists, and astrophysicists actually observe looks nothing like fully developed turbulence. In physically driven wall-bounded shear flows, the turbulence is dominated by unstable _coherent structures_, that is, localized recurrent vortices, rolls, streaks, and the like. The statistical assumptions fail, and a dynamical systems description from first principles is called for [26].
The set of invariant solutions investigated here is embedded into a finite-dimensional inertial manifold [14] in a nontrivial, nonlinear way. "Geometry" in the title of this paper refers to our attempt to systematically triangulate this set in terms of dynamically invariant solutions (equilibria, periodic orbits, \(\ldots\)) and their unstable manifolds, in a way independent of the PDE representation and of the numerical simulation algorithm. The goal is to describe a given turbulent flow quantitatively, not to model it qualitatively by a low-dimensional model. For the case investigated here, the state space representation dimension \(d\sim 10^{2}\) is set by requiring that the exact invariant solutions that we compute be accurate to \(\sim 10^{-5}\). Here comes our quandary. If we ban the words turbulence and spatiotemporal chaos from our study of small-extent systems, the relevance of what we do to larger systems is obscured. The exact unstable coherent structures that we determine pertain not only to the spatially small chaotic systems, but also to the spatially large spatiotemporally chaotic systems and the spatially very large turbulent systems. So, for lack of more precise nomenclature, we take the liberty of using the terms chaos, spatiotemporal chaos, and turbulence interchangeably. In previous work, the state space geometry and the natural measure for this system have been studied [6, 38, 39] in terms of unstable periodic solutions restricted to the antisymmetric subspace of the KS dynamics. The focus in this paper is on the role that continuous symmetries play in spatiotemporal dynamics. The notion of exact periodicity in time is replaced by the notion of relative spatiotemporal periodicity, and relative equilibria and relative periodic orbits here play the role that the equilibria and periodic orbits played in the earlier studies. Our search for relative periodic orbits in the KS system was inspired by the investigation of Lopez et al.
[41] into relative periodic orbits of the complex Ginzburg-Landau equation. However, there is a vast literature on relative periodic orbits, dating back to their first appearance in Poincare's study of the three-body problem [5, 48], where the Lagrange points are the relative equilibria. Such orbits arise in the dynamics of systems with continuous symmetries, such as motions of rigid bodies, gravitational \(N\)-body problems, molecules, and nonlinear waves. Recently Viswanath [49] has found both relative equilibria and relative periodic orbits in the plane Couette problem. A Hopf bifurcation of a traveling wave [1, 2, 35] induces a small time-dependent modulation. Brown and Kevrekidis [4] study bifurcation branches of periodic orbits and relative periodic orbits in the KS system in great detail. For our system size (\(\alpha=49.04\) in their notation) they identify a periodic orbit branch. In this context relative periodic orbits are referred to as "modulated traveling waves." For fully chaotic flows we find this notion too narrow. We compute 60,000 periodic orbits and relative periodic orbits that are in no sense small "modulations" of other solutions; hence our preference for the well-established notion of a "relative periodic orbit." Building upon the pioneering work of [33, 23, 4], we undertake here a study of the KS dynamics for a specific system size, \(L=22\), sufficiently large to exhibit many of the features typical of turbulent dynamics observed in large KS systems, but small enough to lend itself to a detailed exploration of the equilibria and relative equilibria, their stable/unstable manifolds, determination of a large number of relative periodic orbits, and a preliminary exploration of the relation between the observed spatiotemporal turbulent patterns and the relative periodic orbits. In the presence of a continuous symmetry, any solution belongs to a group orbit of equivalent solutions.
The problem: if one is to generalize the periodic orbit theory to this setting, one needs to understand what is meant by solutions being nearby (shadowing) when each solution belongs to a manifold of equivalent solutions. In a forthcoming publication [46] we resolve this puzzle by implementing symmetry reduction. Here we demonstrate that, for relative periodic orbits visiting the neighborhood of equilibria, if one picks any particular solution, the universe of all other solutions is rigidly fixed through a web of heteroclinic connections between them. This insight, garnered from the study of the 1-dimensional KS PDE, is more remarkable still when applied to the plane Couette flow [20], with 3-dimensional velocity fields and two translational symmetries. The main results presented here are the following: (a) dynamics visualized through physical, symmetry-invariant observables, such as "energy," dissipation rate, etc., and through projections onto dynamically invariant, PDE-discretization-independent state space coordinate frames (section 3); (b) the existence of a rigid "cage" built by heteroclinic connections between equilibria (section 4); (c) the preponderance of unstable relative periodic orbits and their likely role as the skeleton underpinning spatiotemporal turbulence in systems with continuous symmetries (section 6).

## 2 Kuramoto-Sivashinsky equation

The KS system [37, 47], which arises in the description of the stability of flame fronts, reaction-diffusion systems, and many other physical settings [33], is one of the simplest nonlinear PDEs that exhibit spatiotemporally chaotic behavior. In the formulation adopted here, the time evolution of the _flame front velocity_ \(u=u(x,t)\) on a periodic domain \(u(x,t)=u(x+L,t)\) is given by

\[u_{t}=F(u)=-\frac{1}{2}(u^{2})_{x}-u_{xx}-u_{xxxx}\,,\qquad x\in\left[-\frac{L}{2},\frac{L}{2}\right]. \tag{2.1}\]

Here \(t\geq 0\) is the time, and \(x\) is the spatial coordinate.
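The right-hand side \(F(u)\) just stated is easily evaluated pseudospectrally: differentiate in Fourier space, form the quadratic term in real space. A minimal Python sketch (the function and parameter names are ours, and dealiasing is omitted); since \(F(u)\) is a total \(x\)-derivative, its spatial mean vanishes, which the sketch checks:

```python
import numpy as np

def ks_rhs(u, L):
    """Pseudospectral right-hand side of u_t = -(u^2/2)_x - u_xx - u_xxxx
    on an n-point periodic grid of length L (a sketch; no dealiasing)."""
    n = u.size
    q = 2 * np.pi * np.fft.rfftfreq(n, d=L / n)   # wavenumbers q_k = k/Ltilde
    a = np.fft.rfft(u)
    lin = (q**2 - q**4) * a                       # -u_xx - u_xxxx
    nl = -0.5j * q * np.fft.rfft(u * u)           # -(u^2/2)_x
    return np.fft.irfft(lin + nl, n)

L, n = 22.0, 64
x = np.arange(n) * L / n
u = np.cos(2 * np.pi * x / L) + 0.3 * np.sin(4 * np.pi * x / L)
# F(u) is a total x-derivative, so its mean over the periodic domain vanishes
mean_drift = abs(ks_rhs(u, L).mean())
```

For a tiny-amplitude single mode the nonlinearity is negligible and the output reduces to the linear growth rate of that mode, which gives a second easy sanity check on such an implementation.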
The subscripts \(x\) and \(t\) denote partial derivatives with respect to \(x\) and \(t\). In what follows we shall state the results of all calculations either in units of the "dimensionless system size" \(\tilde{L}\) or of the system size \(L=2\pi\tilde{L}\). Figure 1 presents a typical turbulent evolution for KS. All numerical results presented in this paper are for the system size \(\tilde{L}=22/2\pi=3.5014\dots\), for which a structurally stable chaotic attractor is observed (see Figure 4). Spatial periodicity \(u(x,t)=u(x+L,t)\) makes it convenient to work in the Fourier space,

\[u(x,t)=\sum_{k=-\infty}^{+\infty}a_{k}(t)e^{ikx/\tilde{L}}\,, \tag{2.2}\]

with the 1-dimensional PDE (2.1) replaced by an infinite set of ODEs for the complex Fourier coefficients \(a_{k}(t)\):

\[\dot{a}_{k}=v_{k}(a)=(q_{k}^{2}-q_{k}^{4})\,a_{k}-i\frac{q_{k}}{2}\sum_{m=-\infty}^{+\infty}a_{m}a_{k-m}\,, \tag{2.3}\]

where \(q_{k}=k/\tilde{L}\). Since \(u(x,t)\) is real, \(a_{k}=a_{-k}^{*}\), and it suffices to evolve the \(k>0\) coefficients.

Figure 1: A typical spatiotemporally chaotic solution of the KS equation, system size \(L=20\pi\sqrt{2}\approx 88.86\). The \(x\) coordinate is scaled with the most unstable wavelength \(2\pi\sqrt{2}\), which is approximately also the mean wavelength of the turbulent flow. The color bar indicates the color scheme for \(u(x,t)\), used also for the subsequent figures of this type.

Due to the hyperviscous damping \(u_{xxxx}\), long-time solutions of the KS equation are smooth, \(a_{k}\) drops off fast with \(k\), and truncations of (2.3) to \(16\leq N\leq 128\) terms yield accurate solutions for the system sizes considered here (see Appendix A). Robustness of the long-time dynamics of the KS system as a function of the number of Fourier modes kept in truncations of (2.3) is, however, a subtle issue. Adding an extra mode to a truncation of the system introduces a small perturbation in the space of dynamical systems.
However, due to the lack of structural stability as a function of both the truncation \(N\) and the system size \(L\), a small variation in a system parameter can (and often will) throw the dynamics into a different asymptotic state. For example, an asymptotic attractor which appears to be chaotic in an \(N\)-dimensional state space truncation can collapse into an attractive cycle for \((N+1)\) dimensions. The selection of a parameter value \(L\) for which structurally stable chaotic dynamics exists and can be studied is therefore rather subtle. We have found that the value \(L=22\) studied in section 4 satisfies these requirements. In particular, all of the equilibria and relative equilibria persist and remain unstable when \(N\) is increased from 32 (the value we use in our numerical investigations) to 64 and 128. Nearly all of the relative periodic orbits we have found for this system also exist and remain unstable for larger values of \(N\), as well as for smaller values of the integration step size (see Appendix C for details).

### Symmetries of the KS equation

The KS equation is Galilean invariant: if \(u(x,t)\) is a solution, then \(u(x-ct,t)-c\), with \(c\) an arbitrary constant speed, is also a solution. Without loss of generality, in our calculations we shall set the mean velocity of the front to zero,

\[\int dx\,u=0. \tag{2.4}\]

As \(\dot{a}_{0}=0\) in (2.3), \(a_{0}\) is a conserved quantity, fixed to \(a_{0}=0\) by the condition (2.4). \(G\), the group of actions \(g\in G\) on a state space (reflections, translations, etc.), is a symmetry of the KS flow (2.1) if \(g\,u_{t}=F(g\,u)\). The KS equation is time translationally invariant, and space translationally invariant on the periodic domain under the \(O(2)\) group \(\{\tau_{\ell/L},R\}\): if \(u(x,t)\) is a solution, then \(\tau_{\ell/L}\,u(x,t)=u(x+\ell,t)\) is an equivalent solution for any shift \(-L/2<\ell\leq L/2\), as is the reflection ("parity" or "inversion")

\[R\,u(x)=-u(-x)\,.
\tag{2.5}\]

The translation operator action on the Fourier coefficients (2.2), represented here by a complex-valued vector \(a=\{a_{k}\in\mathbb{C}\ |\ k=1,2,\ldots\}\), is given by

\[\tau_{\ell/L}\,a=\mathbf{g}(\ell)\,a, \tag{2.6}\]

where \(\mathbf{g}(\ell)=\mathrm{diag}(e^{iq_{k}\,\ell})\) is a complex-valued diagonal matrix, which amounts to a rotation of the \(k\)th mode complex plane by the angle \(k\,\ell/\tilde{L}\). The reflection acts on the Fourier coefficients by complex conjugation,

\[R\,a=-a^{*}\,. \tag{2.7}\]

Reflection generates the dihedral subgroup \(D_{1}=\{1,R\}\) of \(O(2)\). Let \(\mathbb{U}\) be the space of real-valued velocity fields, periodic and square integrable on the interval \(\Omega=[-L/2,L/2]\),

\[\mathbb{U}=\{u\in L^{2}(\Omega)\mid u(x)=u(x+L)\}\,. \tag{2.8}\]

A continuous symmetry maps each state \(u\in\mathbb{U}\) to a manifold of functions with identical dynamic behavior. The relation \(R^{2}=1\) induces the linear decomposition \(u(x)=u^{+}(x)+u^{-}(x)\), \(u^{\pm}(x)=P^{\pm}u(x)\in\mathbb{U}^{\pm}\), into irreducible subspaces \(\mathbb{U}=\mathbb{U}^{+}\oplus\mathbb{U}^{-}\), where

\[P^{+}=\frac{1+R}{2}\,,\qquad P^{-}=\frac{1-R}{2} \tag{2.9}\]

are the antisymmetric/symmetric projection operators. Applying \(P^{+}\), \(P^{-}\) to the KS equation (2.1), we have [33]

\[u^{+}_{t} =-(u^{+}u^{+}_{x}+u^{-}u^{-}_{x})-u^{+}_{xx}-u^{+}_{xxxx}\,, \tag{2.10}\] \[u^{-}_{t} =-(u^{+}u^{-}_{x}+u^{-}u^{+}_{x})-u^{-}_{xx}-u^{-}_{xxxx}\,.\]

If \(u^{-}=0\), the KS flow is confined to the antisymmetric \(\mathbb{U}^{+}\) subspace,

\[u^{+}_{t}=-u^{+}u^{+}_{x}-u^{+}_{xx}-u^{+}_{xxxx}\,, \tag{2.11}\]

but otherwise the nonlinear terms in (2.10) mix the two subspaces. Any rational shift \(\tau_{1/m}u(x)=u(x+L/m)\) generates a discrete cyclic subgroup \(C_{m}\) of \(O(2)\), also a symmetry of the KS system. Reflection together with \(C_{m}\) generates another symmetry of the KS system, the dihedral subgroup \(D_{m}\) of \(O(2)\).
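The Fourier-space actions (2.6) and (2.7) can be checked directly against their configuration-space definitions on a discretized profile. A minimal NumPy sketch (grid size, test profile, and names are our own choices):

```python
import numpy as np

# Check that translation tau_{l/L} u(x) = u(x+l) acts on the Fourier
# coefficients as the phase rotation a_k -> e^{i q_k l} a_k, and that
# reflection R u(x) = -u(-x) acts as a_k -> -conj(a_k).
L, n = 22.0, 64
x = np.arange(n) * L / n
q = 2 * np.pi * np.fft.rfftfreq(n, d=L / n)

ck, sk = [0.7, -0.4, 0.2], [0.1, 0.5, -0.3]
def profile(xx):
    """A smooth real periodic test profile with zero mean."""
    return sum(c * np.cos(2 * np.pi * k * xx / L) + s * np.sin(2 * np.pi * k * xx / L)
               for k, (c, s) in enumerate(zip(ck, sk), start=1))

u = profile(x)
a = np.fft.rfft(u)
ell = L / 7.0                                   # an arbitrary shift

# translation as a diagonal phase rotation in Fourier space
err_shift = np.max(np.abs(np.fft.irfft(np.exp(1j * q * ell) * a, n) - profile(x + ell)))
# reflection as minus complex conjugation
err_refl = np.max(np.abs(np.fft.irfft(-np.conj(a), n) - (-profile(-x))))
```

Both errors are at the level of floating-point roundoff for a band-limited profile, since the discrete Fourier transform represents such a profile exactly.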
The only nonzero Fourier components of a solution invariant under \(C_{m}\) are \(a_{jm}\neq 0\), \(j=1,2,\dots\), while for a solution invariant under \(D_{m}\) we also have the condition \(\operatorname{Re}a_{j}=0\) for all \(j\). Restriction to a \(D_{m}\)-invariant subspace reduces the dimensionality of the state space and aids the computation of equilibria and periodic orbits within it. For example, the \(1/2\)-cell translation

\[\tau_{1/2}\,u(x)=u\left(x+\frac{L}{2}\right) \tag{2.12}\]

and reflection generate the \(O(2)\) subgroup \(D_{2}=\{1,R,\tau,\tau R\}\), which decomposes the state space into four irreducible subspaces (for brevity, here \(\tau=\tau_{1/2}\)):

\[\begin{array}{rlccc} & &\tau&R&\tau R\\ P^{(1)}&=\tfrac{1}{4}(1+\tau+R+\tau R)&S&S&S\\ P^{(2)}&=\tfrac{1}{4}(1+\tau-R-\tau R)&S&A&A\\ P^{(3)}&=\tfrac{1}{4}(1-\tau+R-\tau R)&A&S&A\\ P^{(4)}&=\tfrac{1}{4}(1-\tau-R+\tau R)&A&A&S\,. \end{array} \tag{2.13}\]

Here \(P^{(j)}\) is the projection operator onto the \(u^{(j)}\) irreducible subspace, and the last three columns refer to the symmetry (\(S\)) or antisymmetry (\(A\)) of the \(u^{(j)}\) functions under reflection and \(1/2\)-cell shift. By the same argument that identified (2.11) as an invariant subspace of KS, here the KS flow stays within the \(\mathbb{U}^{S}=\mathbb{U}^{(1)}+\mathbb{U}^{(2)}\) irreducible \(D_{1}\) subspace of \(u\) profiles symmetric under \(1/2\)-cell shifts. While in general the bilinear term \((u^{2})_{x}\) mixes the irreducible subspaces of \(D_{n}\), for \(D_{2}\) there are four subspaces invariant under the flow [33]:

\(\{0\}\): the \(u(x)=0\) equilibrium;

\(\mathbb{U}^{+}=\mathbb{U}^{(1)}+\mathbb{U}^{(3)}\): the reflection \(D_{1}\) irreducible space of antisymmetric \(u(x)\);

\(\mathbb{U}^{S}=\mathbb{U}^{(1)}+\mathbb{U}^{(2)}\): the shift \(D_{1}\) irreducible space of \(L/2\) shift symmetric \(u(x)\);

\(\mathbb{U}^{(1)}\): the \(D_{2}\) irreducible space of \(u(x)\) invariant under \(x\mapsto L/2-x\), \(u\mapsto-u\).
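On a uniform grid the operators in (2.13) become matrices, and their projector algebra can be verified numerically. A sketch with our own matrix conventions (half-cell shift as an index roll, reflection as a signed index reversal):

```python
import numpy as np

# Matrix representations of tau = tau_{1/2} and R on an n-point periodic
# grid, and the four D2 projection operators of (2.13).
n = 8
I = np.eye(n)
T = np.roll(I, -n // 2, axis=1)          # (tau u)_j = u_{j+n/2}
R = np.zeros((n, n))
for j in range(n):
    R[j, (n - j) % n] = -1.0             # (R u)_j = -u_{-j}

signs = [(1, 1), (1, -1), (-1, 1), (-1, -1)]       # rows of (2.13)
P = [0.25 * (I + st * T + sr * R + st * sr * T @ R) for st, sr in signs]
```

Because \(\tau^{2}=R^{2}=1\) and \(\tau\) commutes with \(R\), the four operators are mutually orthogonal projectors that resolve the identity, which is exactly what makes the decomposition into irreducible subspaces well defined.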
With the continuous translational symmetry eliminated within each subspace, there are no relative equilibria or relative periodic orbits, and one can focus on the equilibria and periodic orbits alone, as was done for \(\mathbb{U}^{+}\) in [6, 38, 39]. In the Fourier representation, the \(u\in\mathbb{U}^{+}\) antisymmetry amounts to having purely imaginary coefficients, since \(a_{-k}=a_{k}^{*}=-a_{k}\). The 2-element discrete subgroup \(\{1,\tau_{1/2}\}\) generated by the \(1/2\) cell-size shift \(\tau_{1/2}\) is of particular interest because, in the \(\mathbb{U}^{+}\) subspace, the translational invariance of the full system reduces to invariance under the discrete translation (2.12) by half a spatial period \(L/2\). Each of the above dynamically invariant subspaces is unstable under small perturbations, and generic solutions of the KS equation belong to the full space. Nevertheless, since all equilibria of the KS flow studied in this paper lie in the \(\mathbb{U}^{+}\) subspace (see section 4), \(\mathbb{U}^{+}\) plays an important role for the global geometry of the flow. The linear stability matrices of these equilibria have eigenvectors both in and outside of \(\mathbb{U}^{+}\) and need to be computed in the full state space.

### Equilibria and relative equilibria

Equilibria (steady solutions) are the fixed-profile, time-invariant solutions,

\[u(x,t)=u_{q}(x). \tag{2.14}\]

Due to the translational symmetry, the KS system also allows for relative equilibria (traveling waves, rotating waves), characterized by a fixed profile \(u_{q}(x)\) moving with a constant speed \(c\), that is,

\[u(x,t)=u_{q}(x-ct)\,. \tag{2.15}\]

Here the suffix \(q\) labels a particular invariant solution. Because of the reflection symmetry (2.5), the relative equilibria come in counter-traveling pairs \(u_{q}(x-ct)\), \(-u_{q}(-x+ct)\).
The relative equilibrium condition for the KS PDE (2.1) is the ODE

\[\tfrac{1}{2}(u^{2})_{x}+u_{xx}+u_{xxxx}=c\,u_{x}\,, \tag{2.16}\]

which can be analyzed as a dynamical system in its own right. Integrating once, we get

\[\tfrac{1}{2}u^{2}-cu+u_{x}+u_{xxx}=E. \tag{2.17}\]

This equation can be interpreted as a three-dimensional dynamical system with the spatial coordinate \(x\) playing the role of "time," and the integration constant \(E\) can be interpreted as "energy"; see section 3. For \(E>0\) there is a rich \(E\)-dependent dynamics, with fractal sets of bounded solutions investigated in depth by Michelson [43]. For \(\tilde{L}<1\) the only equilibrium of the system is the globally attracting constant solution \(u(x,t)=0\), denoted \(\mathrm{E}_{0}\) from now on. With increasing system size \(L\) the system undergoes a series of bifurcations. The resulting equilibria and relative equilibria are described in the classical papers of Kevrekidis, Nicolaenko, and Scovel [33] and Greene and Kim [23], among others. The relevant bifurcations up to the system size investigated here are summarized in Figure 2: at \(\tilde{L}=22/2\pi=3.5014\dots\), the equilibria are the constant solution \(\mathrm{E}_{0}\); the equilibrium \(\mathrm{E}_{1}\), called GLMRT by Greene and Kim [40, 23]; the 2- and 3-cell states \(\mathrm{E}_{2}\) and \(\mathrm{E}_{3}\); and the pairs of relative equilibria \(\mathrm{TW}_{\pm 1}\), \(\mathrm{TW}_{\pm 2}\). All equilibria lie in the antisymmetric subspace \(\mathbb{U}^{+}\), while \(\mathrm{E}_{2}\) is also invariant under \(D_{2}\), and \(\mathrm{E}_{3}\) under \(D_{3}\). In the Fourier representation the time dependence of the relative equilibria is

\[a_{k}(t)e^{-itcq_{k}}=a_{k}(0).
\tag{2.18}\]

Differentiating with respect to time, we obtain the Fourier space version of the relative equilibrium condition (2.16),

\[v_{k}(a)-iq_{k}ca_{k}=0, \tag{2.19}\]

which we solve for the (time-independent) \(a_{k}\) and \(c\).

Figure 2: The energy (2.17) of the equilibria and relative equilibria that exist up to \(L=22\), \(\tilde{L}=3.5014\dots\), plotted as a function of the system size \(\tilde{L}=L/2\pi\) (additional equilibria, not present at \(L=22\), are given in [23]). Solid curves denote the \(n\)-cell solutions \(\mathrm{E}_{2}\) and \(\mathrm{E}_{3}\), dotted curves the GLMRT equilibrium \(\mathrm{E}_{1}\), and dashed curves the relative equilibria \(\mathrm{TW}_{\pm 1}\) and \(\mathrm{TW}_{\pm 2}\). The parameter \(\alpha\) of [33, 23] is related to the system size by \(\tilde{L}=\sqrt{\alpha/4}\).

Periods of spatially periodic equilibria are \(L/n\) with integer \(n\). Every time the system size crosses \(\tilde{L}=n\), \(n\)-cell states are generated through pitchfork bifurcations off the \(u=0\) equilibrium. Due to the translational invariance of the KS equation, they form invariant circles in the full state space. In the \(\mathbb{U}^{+}\) subspace considered here, they correspond to \(2n\) points, each shifted by \(L/2n\). For sufficiently small \(L\) the number of equilibria is small, and they are concentrated on the low wavenumber end of the Fourier spectrum. In a periodic box of size \(L\), both equilibria and relative equilibria are periodic solutions embedded in the 3-dimensional \((u,u_{x},u_{xx})\) space, conveniently represented as loops; see Figure 3(d). In this representation the continuous translation symmetry is automatic: a rotation in the \([0,L]\) periodic domain only moves the points along the loop. For an equilibrium the points are stationary in time; for a relative equilibrium they move in time, but in either case the loop remains invariant.
So we do not have the problem that we encounter in the Fourier representation, where, seen from the frame of one of the equilibria, the rest trace out circles under the action of the continuous symmetry translations. From (2.3) we see that the origin \(u(x,t)=0\) has the Fourier modes as its linear stability eigenvectors (see Appendix B). The \(|k|<\tilde{L}\) long wavelength perturbations of the flat-front equilibrium are linearly unstable, while for \(|k|\) sufficiently larger than \(\tilde{L}\) the short wavelength perturbations are strongly contractive. The high \(k\) eigenvalues, corresponding to rapid variations of the flame front, decay so fast that the corresponding eigendirections are physically irrelevant. Indeed, [50] shows that the chaotic solutions of spatially extended dissipative systems evolve within an inertial manifold spanned by a finite number of physical modes, hyperbolically isolated from a set of residual degrees of freedom with high \(k\), themselves individually isolated from each other. The most unstable mode, nearest to \(|k|=\tilde{L}/\sqrt{2}\), sets the scale of the mean wavelength \(\sqrt{2}\) of the KS turbulent dynamics; see Figure 1.

Figure 3: (a) E\({}_{1}\), (b) E\({}_{2}\), and (c) E\({}_{3}\) equilibria. The E\({}_{0}\) equilibrium is the \(u(x)=0\) solution. (d) \((u,u_{x},u_{xx})\) representation of the (red) E\({}_{1}\), (green) E\({}_{2}\), (blue) E\({}_{3}\) equilibria, (purple) \(\mathrm{TW}_{+1}\), and (orange) \(\mathrm{TW}_{-1}\) relative equilibria; \(L=22\) system size.

### Relative periodic orbits, symmetries, and periodic orbits

The KS equation (2.1) is time translationally invariant, and space translationally invariant under the 1-dimensional Lie group of \(O(2)\) rotations: if \(u(x,t)\) is a solution, then \(u(x+\ell,t)\) and \(-u(-x,t)\) are equivalent solutions for any \(-L/2<\ell\leq L/2\).
As a result of the invariance under \(\tau_{\ell/L}\), the KS equation can have relative periodic orbit solutions with a profile \(u_{p}(x)\), period \(T_{p}\), and a nonzero shift \(\ell_{p}\),

\[\tau_{\ell_{p}/L}u(x,T_{p})=u(x+\ell_{p},T_{p})=u(x,0)=u_{p}(x)\,. \tag{2.20}\]

Relative periodic orbits (2.20) are periodic in the frame corotating with phase velocity \(c_{p}=\ell_{p}/T_{p}\) (see Figure 16), but in the stationary frame their trajectories are quasi-periodic. Due to the reflection symmetry (2.5) of the KS equation, every relative periodic orbit \(u_{p}(x)\) with shift \(\ell_{p}\) has a symmetric partner \(-u_{p}(-x)\) with shift \(-\ell_{p}\). Due to the invariance under reflections, the KS equation can also have relative periodic orbits _with reflection_, characterized by a profile \(u_{p}(x)\) and period \(T_{p}\),

\[Ru(x+\ell,T_{p})=-u(-x-\ell,T_{p})=u(x+\ell,0)=u_{p}(x), \tag{2.21}\]

giving a family of equivalent solutions parameterized by \(\ell\) (as the choice of the reflection point is arbitrary, the shift can take any value in \(-L/2<\ell\leq L/2\)). Armbruster, Guckenheimer, and Holmes [2, 1] and Brown and Kevrekidis [4] (see also [35]) link the birth of relative periodic orbits to an infinite period global bifurcation involving a heteroclinic loop connecting equilibria or a bifurcation of relative equilibria, and also report the creation of relative periodic orbit branches through bifurcation of periodic orbits. As \(\ell\) is continuous in the interval \([-L/2,L/2]\), the likelihood of a relative periodic orbit with \(\ell_{p}=0\) shift is zero, unless an exact periodicity is enforced by a discrete symmetry, such as the dihedral symmetries discussed above. If the shift \(\ell_{p}\) of a relative periodic orbit with period \(T_{p}\) is such that \(\ell_{p}/L=m/n\) is rational, then the orbit is periodic with period \(nT_{p}\). The likelihood of finding such periodic orbits is also zero.
However, due to the invariance of the KS equation under the dihedral \(D_{n}\) and cyclic \(C_{n}\) subgroups, the following types of periodic orbits are possible:

(a) The periodic orbit lies within a subspace pointwise invariant under the action of \(D_{n}\) or \(C_{n}\). For instance, for \(D_{1}\) this is the \(\mathbb{U}^{+}\) antisymmetric subspace, \(-u_{p}(-x)=u_{p}(x)\), and \(u(x,T_{p})=u(x,0)=u_{p}(x)\). The periodic orbits found in [6, 39] are all in \(\mathbb{U}^{+}\), as the dynamics there is restricted to the antisymmetric subspace. For \(L=22\) the dynamics in \(\mathbb{U}^{+}\) is dominated by attracting (within the subspace) heteroclinic connections, and thus we have no periodic orbits of this type, nor in any other of the \(D_{n}\)-invariant subspaces; see section 4.

(b) The periodic orbit satisfies

\[u(x,t+T_{p})=\gamma u(x,t) \tag{2.22}\]

for some group element \(\gamma\in O(2)\) such that \(\gamma^{m}=e\) for some integer \(m\), so that the orbit repeats after time \(mT_{p}\) (see [22] for a general discussion of conditions on the symmetry of a periodic orbit). If an orbit is of the reflection type (2.21), \(R\tau_{\ell/L}u(x,T_{p})=-u(-x-\ell,T_{p})=u(x,0)\), then it is preperiodic to a periodic orbit with period \(2T_{p}\). Indeed, since \((R\tau_{\ell/L})^{2}=R^{2}=1\) and the KS solutions are time translation invariant, it follows from (2.21) that

\[u(x,2T_{p})=R\tau_{\ell/L}u(x,T_{p})=(R\tau_{\ell/L})^{2}u(x,0)=u(x,0).\]

Thus any shift acquired during the time interval from \(0\) to \(T_{p}\) is compensated by the opposite shift during the evolution from \(T_{p}\) to \(2T_{p}\). All periodic orbits we have found for \(L=22\) are of type (2.22) with \(\gamma=R\). Preperiodic orbits with \(\gamma\in C_{n}\) have been found by Brown and Kevrekidis [4] for KS system sizes larger than ours, but we have not found any for \(L=22\).
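The shift-compensation argument can be verified on the Fourier coefficients, where the group actions take the simple forms given earlier: translation is a diagonal phase rotation and reflection is minus complex conjugation. A small numeric check (the mode count and shift value are arbitrary choices of ours):

```python
import numpy as np

# (R tau_{l/L})^2 = 1 on the Fourier coefficients: a reflection-type relative
# periodic orbit closes exactly after two periods, for any shift ell.
Ltilde = 22.0 / (2 * np.pi)
q = np.arange(1, 17) / Ltilde                  # q_k = k/Ltilde, k = 1..16
rng = np.random.default_rng(1)
a = rng.standard_normal(16) + 1j * rng.standard_normal(16)

def tau(ell, b):
    """Translation by ell: a_k -> e^{i q_k ell} a_k."""
    return np.exp(1j * q * ell) * b

def refl(b):
    """Reflection: a_k -> -conj(a_k)."""
    return -np.conj(b)

ell = 3.7                                      # an arbitrary shift
g = lambda b: refl(tau(ell, b))                # 'shift, then reflect' map
closure_err = np.max(np.abs(g(g(a)) - a))
```

A single application of the map moves a generic state, but the square is the identity, independently of \(\ell\), which is the preperiodicity stated above.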
Preperiodic orbits are a hallmark of any dynamical system with a discrete symmetry, where they have a natural interpretation as periodic orbits in the fundamental domain [13, 12].

## 3 Energy transfer rates

In physical settings where the observation times are much longer than the dynamical turnover and Lyapunov times (statistical mechanics, quantum physics, turbulence), periodic orbit theory [12] provides highly accurate predictions of measurable long-time averages, such as the dissipation and the turbulent drag [20]. Physical predictions have to be independent of a particular choice of ODE representation of the PDE under consideration and, most importantly, invariant under all symmetries of the dynamics. In this section we discuss a set of such physical observables for the 1-dimensional KS flow, invariant under reflections and translations. They offer a representation of the dynamics in which the symmetries are explicitly factored out. We shall use these observables in section 8 to visualize a set of solutions. The space average of a function \(a=a(x,t)=a(u(x,t))\) on the interval \(L\),

\[\left\langle a\right\rangle=\frac{1}{L}\oint dx\,a(x,t), \tag{3.1}\]

is in general time-dependent. Its mean value is given by the time average

\[\overline{a}=\lim_{t\to\infty}\frac{1}{t}\int_{0}^{t}d\tau\left\langle a\right\rangle=\lim_{t\to\infty}\frac{1}{t}\int_{0}^{t}\frac{1}{L}\oint d\tau\,dx\,a(x,\tau)\,. \tag{3.2}\]

The mean value of \(a=a(u_{q})\equiv a_{q}\) evaluated on an equilibrium or relative equilibrium \(u(x,t)=u_{q}(x-ct)\), labeled by \(q\) as in (2.15), is

\[\overline{a}_{q}=\left\langle a\right\rangle_{q}=a_{q}. \tag{3.3}\]

Evaluation of the infinite time average (3.2) on a function of a periodic orbit or relative periodic orbit \(u_{p}(x,t)=u_{p}(x+\ell_{p},\,t+T_{p})\) requires only a single \(T_{p}\) traversal,

\[\overline{a}_{p}=\frac{1}{T_{p}}\int_{0}^{T_{p}}d\tau\left\langle a\right\rangle.
\tag{3.4}\]

Equation (2.1) can be written as

\[u_{t}=-V_{x},\qquad V(x,t)=\tfrac{1}{2}u^{2}+u_{x}+u_{xxx}\,. \tag{3.5}\]

If \(u\) is the "flame-front velocity," then \(E\), defined in (2.17), can be interpreted as the mean energy density. So, even though KS is a phenomenological small-amplitude equation, the time-dependent \(L^{2}\) norm of \(u\),

\[E=\frac{1}{L}\oint dx\,V(x,t)=\frac{1}{L}\oint dx\,\frac{u^{2}}{2}\,, \tag{3.6}\]

has a physical interpretation [23] as the average "energy" density of the flame front. This analogy to the mean kinetic energy density for the Navier-Stokes equation motivates what follows. The energy (3.6) is intrinsic to the flow and independent of the particular ODE basis set chosen to represent the PDE. However, as the Fourier amplitudes are eigenvectors of the translation operator, in the Fourier space the energy is a diagonalized quadratic norm,

\[E=\sum_{k=-\infty}^{\infty}E_{k},\qquad E_{k}=\frac{1}{2}|a_{k}|^{2}\,, \tag{3.7}\]

explicitly invariant term by term under the translations (2.6) and reflections (2.5). Take the time derivative of the energy density (3.6), substitute (2.1), and integrate by parts. Total derivatives vanish by the spatial periodicity on the \(L\) domain:

\[\dot{E} =\left\langle u_{t}\,u\right\rangle=-\left\langle\left(\frac{u^{2}}{2}+u_{x}+u_{xxx}\right)_{x}u\right\rangle \tag{3.8}\] \[=\left\langle\frac{u_{x}\,u^{2}}{2}+u_{x}^{2}+u_{x}\,u_{xxx}\right\rangle.\]

The first term in (3.8) vanishes by integration by parts, \(3\left\langle u_{x}\,u^{2}\right\rangle=\left\langle(u^{3})_{x}\right\rangle=0\), and integrating the third term by parts yet again, one gets [23] that the energy variation

\[\dot{E}=P-D,\qquad P=\left\langle u_{x}^{2}\right\rangle,\quad D=\left\langle u_{xx}^{2}\right\rangle \tag{3.9}\]

balances the power \(P\) pumped in by the antidiffusion \(u_{xx}\) against the energy dissipation rate \(D\) due to the hyperviscosity \(u_{xxxx}\) in the KS equation (2.1).
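The balance (3.9) is easy to confirm numerically for a band-limited profile, evaluating all derivatives spectrally. A sketch with our own discretization choices:

```python
import numpy as np

# Numerical check of Edot = P - D, with P = <u_x^2> and D = <u_xx^2>,
# where u_t is the KS right-hand side evaluated pseudospectrally.
L, n = 22.0, 128
x = np.arange(n) * L / n
q = 2 * np.pi * np.fft.rfftfreq(n, d=L / n)

u = np.cos(2*np.pi*x/L) - 0.7*np.sin(6*np.pi*x/L) + 0.2*np.cos(8*np.pi*x/L)
a = np.fft.rfft(u)
ux = np.fft.irfft(1j * q * a, n)
uxx = np.fft.irfft(-q**2 * a, n)

P = np.mean(ux**2)                 # power pumped in by antidiffusion
D = np.mean(uxx**2)                # hyperviscous dissipation rate
ut = np.fft.irfft((q**2 - q**4) * a - 0.5j * q * np.fft.rfft(u * u), n)
Edot = np.mean(ut * u)             # d/dt <u^2/2>
balance_err = abs(Edot - (P - D))
```

The cubic term drops out exactly, just as \(\left\langle(u^{3})_{x}\right\rangle=0\) does in the derivation above, so the residual is pure roundoff.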
The time averaged energy density \(\overline{E}\) computed on a typical orbit goes to a constant, so the mean values (3.2) of drive and dissipation exactly balance each other:

\[\overline{\dot{E}}=\lim_{t\to\infty}\frac{1}{t}\int_{0}^{t}d\tau\,\dot{E}=\overline{P}-\overline{D}=0. \tag{3.10}\]

In particular, the equilibria and relative equilibria fall onto the diagonal in Figure 14(a) below, and so do the time averages computed on periodic orbits and relative periodic orbits:

\[\overline{E}_{p}=\frac{1}{T_{p}}\int_{0}^{T_{p}}d\tau\,E(\tau),\qquad\overline{P}_{p}=\frac{1}{T_{p}}\int_{0}^{T_{p}}d\tau\,P(\tau)=\overline{D}_{p}. \tag{3.11}\]

In the Fourier basis (2.2) the conservation of energy on average takes the form

\[0=\sum_{k=-\infty}^{\infty}\,(q_{k}^{2}-q_{k}^{4})\,\overline{E}_{k},\qquad E_{k}(t)=\frac{1}{2}|a_{k}(t)|^{2}\,. \tag{3.12}\]

The large \(k\) convergence of this series is insensitive to the system size \(L\); \(\overline{E}_{k}\) has to decrease much faster than \(q_{k}^{-4}\). Deviation of \(E_{k}\) from this bound for small \(k\) determines the active modes. For equilibria, an \(L\)-independent bound on \(E\) is given by Michelson [43]. The best current bound [18, 3] on the long-time limit of \(E\) as a function of the system size \(L\) scales as \(E\propto L^{2}\).

## 4 Geometry of state space with \(L=22\)

We now turn to exploring Hopf's vision numerically, on a specific KS system. An instructive example is offered by the dynamics of the \(L=22\) system, to which we specialize for the rest of this paper. The size of this small system is \(\sim 2.5\) mean wavelengths (\(\tilde{L}/\sqrt{2}=2.4758\ldots\)), and the competition between states with wavenumbers \(2\) and \(3\) leads to what, in the context of boundary shear flows, would be called [24] the "empirically observed sustained turbulence," but in the present context may equally well be characterized as a "chaotic attractor." A typical long orbit is shown in Figure 4.
The asymptotic attractor structure of small systems like the one studied here is very sensitive to system parameter variations, and, as is true of any realistic unsteady flow, there is no rigorous way of establishing that this turbulence is sustained for all time, rather than being merely a very long transient on its way to an attracting periodic state. For large system size, as shown in Figure 1, it is hard to imagine a scenario under which attracting periodic states (which, as shown in [17], do exist) would have significantly large immediate basins of attraction. Regardless of the (non)existence of a \(t\to\infty\) chaotic attractor, study of the invariant unstable solutions and the associated Smale horseshoe structures in a system's state space offers valuable insights into the observed unstable "coherent structures." Figure 4: A typical chaotic orbit of the KS flow, system size \(L=22\). Because of the strong \(k^{4}\) contraction, for a small system size the long-time dynamics is confined to a low-dimensional inertial manifold [30]. Indeed, numerically the covariant Lyapunov vectors [21] of the \(L=22\) chaotic attractor separate into eight "physical" vectors with small Lyapunov exponents \((\lambda_{j})=(0.048,\,0,\,0,\,-0.003,\,-0.189,\,-0.256,\,-0.290,\,-0.310)\) and the remaining 54 "hyperbolically isolated" vectors with rapidly decreasing exponents \((\lambda_{j})=(-1.963,\,-1.967,\,-5.605,\,-5.605,\,-11.923,\,-11.923,\,\ldots)\approx-(j/\tilde{L})^{4}\), in full agreement with the investigations by Yang et al. [50] of KS equations for large system sizes. The chaotic dynamics mostly takes place close to an 8-dimensional manifold, with strong contraction in the remaining dimensions. The two zero exponents are due to the time and space translational symmetries of the KS equation, and the two corresponding dimensions can be quotiented out by means of discrete-time Poincare sections and \(O(2)\) group orbit slices.
It was shown in [6, 39] that within unstable-manifold curvilinear coordinate frames, the dynamics on the attractor can sometimes be well approximated by local 1- or 2-dimensional Poincare return maps. Hence a relatively small number of real Fourier modes, such as the 62 to 126 used in calculations presented here, suffices to obtain invariant solutions numerically accurate to within \(10^{-5}\). We next investigate the properties of equilibria and relative equilibria and determine numerically a large set of the short period relative periodic orbits for KS in a periodic cell of size \(L=22\). ## 5 Equilibria and relative equilibria for \(L=22\) In addition to the trivial equilibrium \(u=0\) (denoted E\({}_{0}\)), we find three equilibria with dominant wavenumber \(k\) (denoted E\({}_{k}\)) for \(k=1,2,3\). All equilibria, shown in Figure 3, are symmetric with respect to the reflection symmetry (2.5). In addition, E\({}_{2}\) and E\({}_{3}\) are symmetric with respect to translation (2.6), by \(L/2\) and \(L/3\), respectively. E\({}_{2}\) and E\({}_{3}\) essentially lie in the 2nd and 3rd Fourier component complex planes, with small deformations of the \(k=2j\) and \(k=3j\) harmonics, respectively. The stability of the equilibria is characterized by the eigenvalues \(\lambda_{j}\) of the stability matrix. The leading 10 eigenvalues for each equilibrium are listed in Table 1; those with \(\mu_{j}>-2.5\) are also plotted in Figure 5. We have computed (available upon request) the corresponding eigenvectors as well. As an equilibrium with \(\mathrm{Re}\,\lambda_{j}>0\) is unstable in the direction of the corresponding eigenvector \(\mathbf{e}^{(j)}\), the eigenvectors provide flow-intrinsic (PDE discretization-independent) coordinates which we use for visualization of unstable manifolds and homo/heteroclinic connections between equilibria. Figure 5: Leading equilibrium stability eigenvalues, \(L=22\) system size.
We find such coordinate frames, introduced by Gibson and coworkers [20, 19], better suited to visualization of nontrivial solutions \begin{table} \begin{tabular}{c c c c c} \hline \(\mathrm{E}_{1}\) & \(\mu_{j}\) & \(\nu_{j}\) & Symmetry & \(\tau_{1/4}\mathrm{E}_{n}\) Symmetry \\ \hline \(\lambda_{1,2}\) & \(0.1308\) & \(0.3341\) & - & - \\ \(\lambda_{3,4}\) & \(0.0824\) & \(0.3402\) & \(\mathbb{U}^{+}\) & \(\mathbb{U}^{(1)}\) \\ \(\lambda_{5}\) & \(0\) & & - & - \\ \(\lambda_{6,7}\) & \(-0.2287\) & \(0.1963\) & \(\mathbb{U}^{+}\) & \(\mathbb{U}^{(1)}\) \\ \(\lambda_{8}\) & \(-0.2455\) & & - & - \\ \(\lambda_{9}\) & \(-2.0554\) & & \(\mathbb{U}^{+}\) & \(\mathbb{U}^{(1)}\) \\ \(\lambda_{10}\) & \(-2.0619\) & & - & - \\ \(\mathrm{E}_{2}\) & & & & \\ \hline \(\lambda_{1,2}\) & \(0.1390\) & \(0.2384\) & \(\mathbb{U}^{+}\) & \(\mathbb{U}^{(1)}\) \\ \(\lambda_{3}\) & \(0\) & & \(\tau_{1/2}\) & \(\tau_{1/2}\) \\ \(\lambda_{4,5}\) & \(-0.0840\) & \(0.1602\) & \(\mathbb{U}^{(1)}\) & \(\mathbb{U}^{+}\) \\ \(\lambda_{6}\) & \(-0.1194\) & & \(\tau_{1/2}\) & \(\tau_{1/2}\) \\ \(\lambda_{7,8}\) & \(-0.2711\) & \(0.3563\) & \(\mathbb{U}^{+}\), \(\mathbb{U}^{(1)}\), \(\tau_{1/2}\) & \(\mathbb{U}^{+}\), \(\mathbb{U}^{(1)}\), \(\tau_{1/2}\) \\ \(\lambda_{9}\) & \(-2.0130\) & & \(\mathbb{U}^{(1)}\) & \(\mathbb{U}^{+}\) \\ \(\lambda_{10}\) & \(-2.0378\) & & \(\mathbb{U}^{+}\) & \(\mathbb{U}^{(1)}\) \\ \(\mathrm{E}_{3}\) & & & & \\ \hline \(\lambda_{1}\) & \(0.0933\) & & \(\mathbb{U}^{+}\) & \(\mathbb{U}^{(1)}\) \\ \(\lambda_{2}\) & \(0.0933\) & & - & - \\ \(\lambda_{3}\) & \(0\) & & \(\tau_{1/3}\) & \(\tau_{1/3}\) \\ \(\lambda_{4}\) & \(-0.4128\) & & \(\mathbb{U}^{+}\), \(\tau_{1/3}\) & \(\mathbb{U}^{(1)}\), \(\tau_{1/3}\) \\ \(\lambda_{5,6}\) & \(-0.6108\) & \(0.3759\) & \(\mathbb{U}^{+}\) & \(\mathbb{U}^{(1)}\) \\ \(\lambda_{7,8}\) & \(-0.6108\) & \(0.3759\) & - & - \\ \(\lambda_{9}\) & \(-1.6641\) & & - & - \\ \(\lambda_{10}\) & \(-1.6641\) & & \(\mathbb{U}^{+}\) & 
\(\mathbb{U}^{(1)}\) \\ \(\mathrm{TW}_{\pm 1}\) & & & & \\ \hline \(\lambda_{1,2}\) & \(0.1156\) & \(0.8173\) & - & - \\ \(\lambda_{3,4}\) & \(0.0337\) & \(0.4189\) & - & - \\ \(\lambda_{5}\) & \(0\) & & - & - \\ \(\lambda_{6}\) & \(-0.2457\) & & - & - \\ \(\lambda_{7,8}\) & \(-0.3213\) & \(0.9813\) & - & - \\ \(\mathrm{TW}_{\pm 2}\) & & & & \\ \hline \(\lambda_{1}\) & \(0.3370\) & & - & - \\ \(\lambda_{2}\) & \(0\) & & - & - \\ \(\lambda_{3,4}\) & \(-0.0096\) & \(0.6288\) & - & - \\ \(\lambda_{5,6}\) & \(-0.2619\) & \(0.5591\) & - & - \\ \(\lambda_{7,8}\) & \(-0.3067\) & \(0.0725\) & - & - \\ \hline \end{tabular} \end{table} Table 1: Leading eigenvalues \(\lambda_{j}=\mu_{j}\pm i\,\nu_{j}\) and symmetries of the corresponding eigenvectors of KS equilibria and relative equilibria for the \(L=22\) system size. We have used as our reference states those that lie within the antisymmetric subspace \(\mathbb{U}^{+}\), and also listed the symmetries of the \(L/4\) translated ones. than the more standard Fourier mode (eigenvectors of the \(u(x,t)=0\) solution) projections. The eigenvalues of \(\mathrm{E}_{0}\) are determined by the linear part of the KS equation (A.4): \(\lambda_{k}=(k/\tilde{L})^{2}-(k/\tilde{L})^{4}\). For \(L=22\), there are three pairs of unstable eigenvalues, corresponding, in decreasing order, to the three unstable modes \(k=2\), \(3\), and \(1\). For each mode, the corresponding eigenvectors lie in the plane spanned by \(\mathrm{Re}\,a_{k}\) and \(\mathrm{Im}\,a_{k}\). Table 1 lists the symmetries of the stability eigenvectors of equilibria \(\mathrm{E}_{1}\) to \(\mathrm{E}_{3}\). Consistent with the bifurcation diagram of Figure 2, we find two pairs of relative equilibria (2.15) with velocities \(c=\pm 0.73699\) and \(\pm 0.34954\), which we label \(\mathrm{TW}_{\pm 1}\) and \(\mathrm{TW}_{\pm 2}\), for "traveling waves." The profiles of the two relative equilibria and their time evolution with eventual decay into the chaotic attractor are shown in Figure 6.
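The counting of unstable modes of \(\mathrm{E}_{0}\) can be reproduced in a few lines from the dispersion relation \(\lambda_{k}=(k/\tilde{L})^{2}-(k/\tilde{L})^{4}\) (a sketch; only the formula and \(L=22\) are taken from the text):

```python
import numpy as np

L = 22.0
Ltilde = L / (2 * np.pi)                     # the tilde-L of the text

k = np.arange(1, 8)
lam = (k / Ltilde) ** 2 - (k / Ltilde) ** 4  # E_0 stability eigenvalues

unstable = k[lam > 0]                        # modes with positive growth rate
ordering = unstable[np.argsort(lam[lam > 0])[::-1]]  # most unstable first
```

Running this confirms that exactly the modes \(k=1,2,3\) are unstable, and that their growth rates order them as \(k=2,3,1\), as stated above.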
The leading eigenvalues of \(\mathrm{TW}_{\pm 1}\) and \(\mathrm{TW}_{\pm 2}\) are listed in Table 1. Table 2 lists the equilibrium energy \(E\), the local Poincare section return time \(T\), the radially expanding Floquet multiplier \(\Lambda_{e}\), and the least contracting Floquet multiplier \(\Lambda_{c}\) for all \(L=22\) equilibria and relative equilibria. The return time \(T=2\pi/\nu_{e}\) is given by the imaginary part of the leading complex eigenvalue, the expansion multiplier per one turn of the most unstable spiral-out by \(\Lambda_{e}\approx\exp(\mu_{e}T)\), and the contraction rate along the slowest contracting stable eigendirection by \(\Lambda_{c}\approx\exp(\mu_{c}T)\). For E\({}_{3}\) and TW\({}_{\pm 2}\), whose leading eigenvalues are real, we use \(T=1/\lambda_{1}\) as the characteristic time scale. While the complex eigenvalues set the time scales of recurrences, this time scale is useful for comparing the leading expanding and the slowest contracting multipliers. We learn that the shortest "turn-over" time is \(\approx 10\)-\(20\), and that if there exist horseshoe sets of unstable periodic orbits associated with these equilibria, they have unstable multipliers of order \(\Lambda_{e}\sim 5\)-\(10\), and that they are surprisingly thin in the folding direction, with contracting multipliers of order \(10^{-2}\), as also observed in [39]. Figure 6: Relative equilibria: \(\mathrm{TW}_{+1}\) with velocity \(c=0.737\) and \(\mathrm{TW}_{+2}\) with velocity \(c=0.350\). The upper panels show the relative equilibria profiles. The lower panels show evolution of slightly perturbed relative equilibria and their decay into generic turbulence. Each relative equilibrium has a reflection symmetric partner related by \(u(x)\to-u(-x)\) traveling with velocity \(-c\).
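The Table 2 entries are simple functions of the Table 1 eigenvalues; for \(\mathrm{E}_{1}\), for instance, a consistency check using the tabulated values:

```python
import numpy as np

# E_1 leading eigenvalues from Table 1
mu_e, nu_e = 0.1308, 0.3341   # most unstable spiral-out pair, lambda_{1,2}
mu_c = -0.2287                # slowest contracting pair, lambda_{6,7}

T = 2 * np.pi / nu_e          # local Poincare section return time
Lambda_e = np.exp(mu_e * T)   # expansion multiplier per turn
Lambda_c = np.exp(mu_c * T)   # contraction multiplier per turn
```

This reproduces the \(\mathrm{E}_{1}\) row of Table 2: \(T\approx 18.81\), \(\Lambda_{e}\approx 11.70\), and a contraction multiplier of order \(10^{-2}\).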
### Unstable manifolds of equilibria and their heteroclinic connections As shown in Table 1, the E\({}_{1}\) equilibrium has two unstable planes within which the solutions are spiralling out (that is, two pairs of complex conjugate eigenvalues). E\({}_{2}\) has one such plane, while E\({}_{3}\) has two real positive eigenvalues, so the solutions move radially away from the equilibrium within the plane spanned by the corresponding eigenvectors. Since E\({}_{1}\) has a larger unstable subspace, it is expected to have much less influence on the long-time dynamics than E\({}_{2}\) and E\({}_{3}\). Many methods have been developed for visualization of stable and unstable manifolds; see [34] for a survey. For high-dimensional contracting flows, visualization of stable manifolds is impossible, unless the system can be restricted to an approximate low-dimensional inertial manifold, as, for example, in [29]. The unstable manifold visualization also becomes harder as its dimension increases. Here we concentrate on visualizations of 1- and 2-dimensional unstable manifolds. Our visualization is unsophisticated compared to the methods of [34], yet sufficient for our purposes since, as we shall see, the unstable manifolds we study terminate in another equilibrium, and thus there is no need to track them for long times. To construct an invariant manifold containing solutions corresponding to the pair of unstable complex conjugate eigenvalues, \(\lambda=\mu\pm i\nu\), \(\mu>0\), we start with a set of initial conditions near equilibrium E\({}_{k}\), \[a(0)=a_{\rm E_{k}}+\epsilon\exp(\delta){\bf e}^{(j)}, \tag{5.1}\] where \(\delta\) takes a set of values uniformly distributed in the interval \([0,2\pi\mu/\nu]\), \({\bf e}^{(j)}\) is a unit vector in the unstable plane, and \(\epsilon>0\) is small.
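The interval \([0,2\pi\mu/\nu]\) is exactly one radial e-folding of the spiral: over one turn, of period \(2\pi/\nu\), a linearized orbit's radius grows by \(e^{2\pi\mu/\nu}\), so exponentially spaced initial radii \(\epsilon e^{\delta}\) tile the local unstable manifold without gaps or overlaps. A toy check on a linear spiral-out, using the leading \(\mathrm{E}_{1}\) pair of Table 1 as the rates (the count of samples and \(\epsilon\) are illustrative):

```python
import numpy as np

# leading E_1 eigenvalue pair from Table 1: lambda = mu +/- i nu
mu, nu, eps = 0.1308, 0.3341, 1e-4

turn = 2 * np.pi / nu                  # period of one turn of the spiral
growth = np.exp(mu * turn)             # radial growth factor per turn

# initial radii eps*exp(delta), delta in [0, 2*pi*mu/nu), as in the formula above
delta = np.linspace(0.0, 2 * np.pi * mu / nu, 32, endpoint=False)
r0 = eps * np.exp(delta)

# after one turn, the innermost initial point lands exactly on the radius
# where the (excluded) endpoint delta = 2*pi*mu/nu would have started
r_one_turn = r0 * growth
```

The rings of initial conditions thus map onto each other under the linearized flow, which is why one fundamental interval of \(\delta\) suffices to trace out the whole manifold.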
\begin{table} \begin{tabular}{l|c c c c} & \(E\) & \(T\) & \(\Lambda_{e}\) & \(\Lambda_{c}\) \\ \hline E\({}_{1}\) & 0.2609 & 18.81 & 11.70 & 0.01 \\ E\({}_{2}\) & 0.4382 & 26.35 & 39.00 & 0.11 \\ E\({}_{3}\) & 1.5876 & 10.72 & 2.72 & 0.01 \\ TW\({}_{\pm 1}\) & 0.4649 & 7.69 & 2.43 & 0.15 \\ TW\({}_{\pm 2}\) & 0.6048 & 2.97 & 2.72 & 0.97 \\ \hline \end{tabular} \end{table} Table 2: Properties of equilibria and relative equilibria determining the system dynamics in their vicinity. \(T\) is the characteristic time scale of the dynamics, \(\Lambda_{e}\) and \(\Lambda_{c}\) are the leading expansion and contraction multipliers, and \(E\) is the energy (3.6). The manifold starting within the first unstable plane of E\({}_{1}\), with eigenvalues \(0.1308\pm i\,0.3341\), is shown in Figure 7. It appears to fall directly into the chaotic attractor. The behavior of the manifold starting within the second unstable plane of \(\mathrm{E}_{1}\), with eigenvalues \(0.0824\pm i\,0.3402\), is remarkably different: As can be seen in Figure 8, almost all orbits within the manifold converge to the equilibrium \(\mathrm{E}_{2}\). The manifold also contains a heteroclinic connection from \(\mathrm{E}_{1}\) to \(\mathrm{E}_{3}\), and is bordered by the \(\lambda_{1}\)-eigendirection unstable manifold of \(\mathrm{E}_{3}\). Figure 8: The left panel shows the unstable manifold of equilibrium \(\mathrm{E}_{1}\) starting within the plane corresponding to the second pair of unstable eigenvalues. The coordinate axes \(v_{1}\), \(v_{2}\), and \(v_{3}\) are projections onto three orthonormal vectors \(\mathbf{v}_{1}\), \(\mathbf{v}_{2}\), and \(\mathbf{v}_{3}\), respectively, constructed from vectors \(\mathrm{Re}\,\mathbf{e}^{(3)}\), \(\mathrm{Im}\,\mathbf{e}^{(3)}\), and \(\mathrm{Re}\,\mathbf{e}^{(6)}\) by Gram–Schmidt orthogonalization. The right panel shows spatial representation of three orbits. Orbits \(B\) and \(C\) pass close to the equilibrium \(\mathrm{E}_{3}\).
Figure 7: The left panel shows the unstable manifold of equilibrium \(\mathrm{E}_{1}\) starting within the plane corresponding to the first pair of unstable eigenvalues. The coordinate axes \(v_{1}\), \(v_{2}\), and \(v_{3}\) are projections onto three orthonormal vectors \(\mathbf{v}_{1}\), \(\mathbf{v}_{2}\), and \(\mathbf{v}_{3}\), respectively, constructed from vectors \(\mathrm{Re}\,\mathbf{e}^{(1)}\), \(\mathrm{Im}\,\mathbf{e}^{(1)}\), and \(\mathrm{Re}\,\mathbf{e}^{(6)}\) by Gram–Schmidt orthogonalization. The right panel shows spatial representation of two orbits \(A\) and \(B\). The change of color from blue to red indicates increasing values of \(u(x)\), as in the colorbar of Figure 1. The 2-dimensional unstable manifold of \(\mathrm{E}_{2}\) is shown in Figure 9. All orbits within the manifold, except for the heteroclinic connections from \(\mathrm{E}_{2}\) to \(\mathrm{E}_{3}\), converge to \(\mathrm{E}_{2}\) shifted by \(L/4\), so this manifold, minus the heteroclinic connections, can be viewed as a homoclinic connection. The equilibrium \(\mathrm{E}_{3}\) has a pair of equal real unstable eigenvalues. Therefore, within the plane spanned by the corresponding eigenvectors, the orbits move radially away from the equilibrium. In order to trace out the unstable manifold, we start with a set of initial conditions within the unstable plane, \[a(0)=a_{\mathrm{E}_{3}}+\epsilon(\mathbf{v}_{1}\cos\phi+\mathbf{v}_{2}\sin\phi)\,, \quad\phi\in[0,2\pi]\,, \tag{5.2}\] where \(\mathbf{v}_{1}\) and \(\mathbf{v}_{2}\) are orthonormal vectors within the plane spanned by the two unstable eigenvectors. The unstable manifold of \(\mathrm{E}_{3}\) is shown in Figure 11. The 3-fold symmetry of the manifold is related to the symmetry of \(\mathrm{E}_{3}\) with respect to translation by \(L/3\). Figure 9: _The left panel shows the \(2\)-dimensional unstable manifold of equilibrium \(\mathrm{E}_{2}\). The coordinate axes \(v_{1}\), \(v_{2}\), and \(v_{3}\) are projections onto three orthonormal vectors \(\mathbf{v}_{1}\), \(\mathbf{v}_{2}\), and \(\mathbf{v}_{3}\), respectively, constructed from vectors \(\mathrm{Re}\,\mathbf{e}^{(1)}\), \(\mathrm{Im}\,\mathbf{e}^{(1)}\), and \(\mathrm{Re}\,\mathbf{e}^{(7)}\) by Gram–Schmidt orthogonalization. The right panel shows spatial representation of three orbits. Orbits \(B\) and \(C\) pass close to the equilibrium \(\mathrm{E}_{3}\). See Figure 10 for a different visualization._ Figure 10: (a) _(blue/green) The unstable manifold of the \(\mathrm{E}_{2}\) equilibrium, projection in the coordinate axes of Figure 9. (black line) The circle of \(\mathrm{E}_{2}\) equilibria related by the translation invariance. (purple line) The circle of \(\mathrm{E}_{3}\) equilibria. (red) The heteroclinic connection from the \(\mathrm{E}_{2}\) equilibrium to the \(\mathrm{E}_{3}\) equilibrium splits the manifold into two parts, colored blue and green._ (b) \(\mathrm{E}_{2}\) equilibrium to \(\mathrm{E}_{3}\) equilibrium heteroclinic connection, \((\mathrm{Re}\,\mathbf{e}^{(2)},\mathrm{Re}\,\mathbf{e}^{(3)},(\mathrm{Im}\, \mathbf{e}^{(2)}+\mathrm{Im}\,\mathbf{e}^{(3)})/\sqrt{2})\) _projection. Here we omit the unstable manifold of \(\mathrm{E}_{2}\), keeping only a few neighboring trajectories in order to indicate the unstable manifold of \(\mathrm{E}_{3}\). The \(\mathrm{E}_{2}\) and \(\mathrm{E}_{3}\) families of equilibria arising from the continuous translational symmetry of the KS equation on a periodic domain are indicated by the two circles._
The manifold contains heteroclinic orbits connecting \(\mathrm{E}_{3}\) to three different points of the circle of translated \(\mathrm{E}_{2}\) equilibrium solutions. Note also that the segments of orbits \(B\) and \(C\) between \(\mathrm{E}_{3}\) and \(\mathrm{E}_{2}\) in Figures 8 and 9 represent the same heteroclinic connections as orbits \(B\) and \(C\) in Figure 11. Heteroclinic connections are nongeneric for high-dimensional systems, but can be robust in systems with continuous symmetry; see [36] for a review. Armbruster, Guckenheimer, and Holmes [2] study a fourth-order truncation of KS dynamics on the center-unstable manifold of \(\mathrm{E}_{2}\) close to a bifurcation off the constant \(u(x,t)=0\) solution and prove existence of a heteroclinic connection; see also [1]. Kevrekidis, Nicolaenko, and Scovel [33] study the dynamics numerically and establish the existence of a robust heteroclinic connection for a range of parameters close to the onset of the 2-cell branch in terms of the symmetry and a flow-invariant subspace. We adopt their arguments to explain the new heteroclinic connections shown in Figure 12 that we have found for \(L=22\). For our system size there are exactly two representatives of the \(\mathrm{E}_{2}\) family that lie in the intersection of \(\mathbb{U}^{+}\) and \(\mathbb{U}^{(1)}\), related to each other by an \(L/4\) shift. Denote them by \(\mathrm{E}_{2}\) and \(\tau_{1/4}\mathrm{E}_{2}\), respectively. The unstable eigenplane of \(\mathrm{E}_{2}\) lies in \(\mathbb{U}^{+}\), while that of \(\tau_{1/4}\mathrm{E}_{2}\) lies in \(\mathbb{U}^{(1)}\); cf. Table 1. The \(\mathrm{E}_{3}\) family members that live in \(\mathbb{U}^{+}\) have one of their unstable eigenvectors (the one related to the heteroclinic connection to the \(\mathrm{E}_{2}\) family) in \(\mathbb{U}^{+}\), while the other does not lie in any symmetry-invariant subspace.
Figure 11: The left panel shows the \(2\)-dimensional unstable manifold of equilibrium \(\mathrm{E}_{3}\). The coordinate axes \(v_{1}\), \(v_{2}\), and \(v_{3}\) are projections onto three orthonormal vectors \(\mathbf{v}_{1}\), \(\mathbf{v}_{2}\), and \(\mathbf{v}_{3}\), respectively, constructed from vectors \(\mathbf{e}^{(1)}\), \(\mathbf{e}^{(2)}\), and \(\mathbf{e}^{(4)}\) by Gram–Schmidt orthogonalization. The black line shows a family of \(\mathrm{E}_{2}\) equilibria related by translational symmetry. The right panel shows spatial representation of three orbits. Orbits \(B\) and \(C\) are two different heteroclinic orbits connecting \(\mathrm{E}_{3}\) to the same point on the \(\mathrm{E}_{2}\) line. Similarly, for the \(\mathrm{E}_{1}\) family we observe that the equilibria in \(\mathbb{U}^{+}\) have an unstable plane in \(\mathbb{U}^{+}\) (again related to the heteroclinic connection) and a second one with no symmetry. Thus \(\tau_{1/4}\mathrm{E}_{2}\) appears as a sink within \(\mathbb{U}^{+}\), while all other equilibria appear as sources. This explains the heteroclinic connections from \(\mathrm{E}_{1}\), \(\mathrm{E}_{2}\), and \(\mathrm{E}_{3}\) to \(\tau_{1/4}\mathrm{E}_{2}\). Observing that \(\tau_{1/4}\mathbb{U}^{+}=\mathbb{U}^{(1)}\) and taking into account Table 1, we understand that within \(\mathbb{U}^{(1)}\) we have connections from \(\tau_{1/4}\mathrm{E}_{2}\) (and members of the \(\mathrm{E}_{1}\) and \(\mathrm{E}_{3}\) families) to \(\mathrm{E}_{2}\) and the formation of a heteroclinic loop. Due to the translational invariance of the KS system, there is a heteroclinic loop for any two points of the \(\mathrm{E}_{2}\) family related by a \(\tau_{1/4}\)-shift. ## 6 Relative periodic orbits for \(L=22\) The relative periodic orbits satisfy the condition (2.20), \(u(x+\ell_{p},T_{p})=u(x,0)\), where \(T_{p}\) is the period and \(\ell_{p}\) the phase shift.
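In the Fourier representation, both group actions entering such symmetry conditions act mode by mode: a shift by \(\ell\) multiplies \(a_{k}\) by the phase \(e^{iq_{k}\ell}\) (up to the sign convention of (2.6)), and the reflection \(u(x)\to-u(-x)\) sends \(a_{k}\to-a_{k}^{*}\). A quick FFT sanity check, with an illustrative field and a shift by an integer number of grid spacings (our choices, for exact checkability):

```python
import numpy as np

L, N = 22.0, 64
x = np.arange(N) * L / N
u = np.sin(2 * np.pi * x / L) + 0.5 * np.cos(4 * np.pi * x / L + 0.3)

a = np.fft.fft(u) / N                       # amplitudes of u(x) = sum_k a_k exp(i q_k x)
q = 2 * np.pi * np.fft.fftfreq(N, d=1.0 / N) / L

# shift by ell = m*L/N grid spacings: u(x) -> u(x + ell)
m = 5
ell = m * L / N
a_shift = np.fft.fft(np.roll(u, -m)) / N    # modes pick up the phase exp(i q_k ell)

# reflection u(x) -> -u(-x) on the periodic grid x_n = n L / N
a_refl = np.fft.fft(-np.roll(u[::-1], 1)) / N   # modes map to -conj(a_k)
```

For shifts that are not multiples of the grid spacing, the phase multiplication in Fourier space is still exact, which is why searches for relative periodic orbits are naturally formulated on the Fourier amplitudes.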
We have limited our search to orbits with \(T_{p}<200\) and found over 30,000 relative periodic orbits with \(\ell_{p}>0\). The details of the algorithm used and the search strategy employed are given in Appendix C. Each relative periodic orbit with phase shift \(\ell_{p}>0\) has a reflection symmetric partner \(u_{p}(x)\to-u_{p}(-x)\) with phase shift \(-\ell_{p}\). The small period relative periodic orbits outline the coarse structure of the chaotic attractor, while the longer period relative periodic orbits resolve the finer details of the dynamics. The four orbits with the shortest periods that we have found are shown in Figure 13(a)-(d). The shortest relative periodic orbit, with \(T_{p}=16.4\), is also the most unstable, with one positive Floquet exponent equal to 0.328. The other short orbits are less unstable, with the largest Floquet exponent in the range 0.018-0.073, typical of the long-time attractor average. We have found relative periodic orbits which stay close to the unstable manifold of \(\mathrm{E}_{2}\). As is illustrated in Figure 13(e)-(h), all such orbits have shift \(\ell_{p}\approx L/4\), similar to the shift of orbits within the unstable manifold of \(\mathrm{E}_{2}\), which start at \(\mathrm{E}_{2}\) and converge to \(\tau_{1/4}\mathrm{E}_{2}\) (see Figure 9). Figure 12: Heteroclinic connections on \(\mathbb{U}^{+}\): (red) The unstable manifold of the \(\mathrm{E}_{1}\) equilibrium. (blue/green) The unstable manifold of the \(\mathrm{E}_{2}\) equilibrium. (black) Heteroclinic connections from the \(\mathrm{E}_{3}\) equilibrium to the \(\tau_{1/4}\mathrm{E}_{2}\) equilibrium, where \(\tau_{1/m}u(x)=u(x+L/m)\) is a rational shift (2.6). Projection from \(128\) dimensions onto the plane given by the vectors \(a_{\mathrm{E}_{2}}-a_{\tau_{1/4}\mathrm{E}_{2}}\) and \(a_{\mathrm{E}_{3}}-a_{\tau_{1/2}\mathrm{E}_{3}}\).
This confirms that the cage of unstable manifolds of equilibria plays an important role in organizing the chaotic dynamics of the KS equation. ## 7 Preperiodic orbits As discussed in section 2.3, a relative periodic orbit will be periodic, that is, \(\ell_{p}=0\), if it either (a) lives within the \(\mathbb{U}^{+}\) antisymmetric subspace, \(-u(-x,0)=u(x,0)\), or (b) returns to its reflection or its discrete rotation after a period, \(u(x,t+T_{p})=\gamma u(x,t)\), \(\gamma^{m}=e\), and is thus periodic with period \(mT_{p}\). The dynamics of KS flow in the antisymmetric subspace and periodic orbits with symmetry (a) have been investigated previously [6, 38, 39]. The KS flow does not have any periodic orbits of this type for \(L=22\). Using the algorithm and strategy described in Appendix C, we have found over 30,000 preperiodic orbits with \(T_{p}<200\) which possess symmetry of type (b) with \(\gamma=R\in D_{1}\). Some of the shortest such orbits that we have found are shown in Figure 13(i)-(l). Several were found as repeats of preperiodic orbits during searches for relative periodic orbits with nonzero shifts, while most have been found as solutions of the preperiodic orbit condition (2.21) with reflection, which takes the form \[-\mathbf{g}(-\ell)a^{*}(T_{p})=a(0) \tag{7.1}\] in the Fourier space representation (compare this to the condition (C.1) for relative periodic orbits). ## 8 Energy transfer rates for \(L=22\) In Figure 14 we plot (3.9), the time-dependent \(\dot{E}\), in the power input \(P\) versus dissipation rate \(D\) plane, for \(L=22\) equilibria and relative equilibria, a selected relative periodic orbit, and for a typical turbulent long-time trajectory. Projections from the \(\infty\)-dimensional state space onto the 3-dimensional \((E,P,D)\) representation of the flow, such as Figures 14 and 15, can be misleading.
The most one can say is that if points are clearly separated in an \((E,P,D)\) plot (for example, in Figure 14, the E\({}_{1}\) equilibrium is outside the recurrent set), they are also separated in the full state space. The converse is not true--states of very different topology can have similar energies. An example is the relative periodic orbit \((T_{p},\ell_{p})=(32.8,10.96)\) (see Figure 13(b)), which is the least unstable short relative periodic orbit that we have detected in this system. It appears to be well embedded within the turbulent flow. The mean power \(\overline{P_{p}}\) evaluated as in (3.11) (see Figure 14) is numerically quite close to the long-time turbulent time average \(\overline{P}\). A similarly close prediction of the mean dissipation rate in plane Couette flow from a single-period periodic orbit computed by Kawahara and Kida [32] has led to optimistic hopes that turbulence is different from low-dimensional chaos, insofar as the determination of one special periodic orbit could yield all long-time averages. Regrettably, this is not true--as always, here too one needs a hierarchy of periodic orbits of increasing length to obtain accurate predictions [12]. For any given relative periodic orbit a convenient visualization is offered by the _mean velocity frame_, that is, a reference frame that rotates with velocity \(c_{p}=\ell_{p}/T_{p}\). In the mean velocity frame a relative periodic orbit becomes a periodic orbit, as in Figure 16(b). However, each relative periodic orbit has its own mean velocity frame, and thus sets of relative periodic orbits are difficult to visualize simultaneously. Figure 14: (a) _Power input \(P\) vs. dissipation rate \(D\), and (b) energy \(E\) vs. power input \(P\), for several equilibria and relative equilibria, a relative periodic orbit, and a typical turbulent long-time trajectory. Projections of the heteroclinic connections are given in Figure 15. System size \(L=22\)._
## 9 Summary In this paper we study the Kuramoto-Sivashinsky flow as a staging ground for testing dynamical systems approaches to moderate Reynolds number turbulence in full-fledged (_not_ a few-modes model), infinite-dimensional state space PDE settings [26], and present a detailed geometrical portrait of dynamics in the KS state space for the \(L=22\) system size, the smallest system size for which this system empirically exhibits "sustained turbulence." Compared to earlier work [6, 38, 39, 41], the main advances here are the new insights into the role that continuous symmetries, discrete symmetries, low-dimensional unstable manifolds of equilibria, and the connections between equilibria play in organizing the flow. The key new features of the translationally invariant KS system on a periodic domain are the attendant continuous families of relative equilibria (traveling waves) and relative periodic orbits. We have now understood the preponderance of solutions of relative type, and lost fear of them: A large number of unstable relative periodic orbits and periodic orbits has been determined here numerically. Figure 15: Two projections of the \((E,P,\dot{E})\) representation of the flow. (a) Heteroclinic connections from \(\mathrm{E}_{2}\) to \(\mathrm{E}_{3}\) (green), from \(\mathrm{E}_{1}\) to \(\mathrm{E}_{3}\) (red), and from \(\mathrm{E}_{3}\) to \(\mathrm{E}_{2}\) (shades of blue), superimposed over a generic long-time turbulent trajectory (grey). (b) A plot of \(\dot{E}=P-D\) yields a clearer visualization than (a). System size \(L=22\). Visualization of infinite-dimensional state space flows, especially in the presence of continuous symmetries, is not straightforward.
At first glance, turbulent dynamics visualized in the state space appears hopelessly complex, but under detailed examination it is much less so than feared: For strongly dissipative flows (KS, Navier-Stokes) it is pieced together from low-dimensional local unstable manifolds connected by fast transient interludes. In this paper we offer two low-dimensional visualizations of such flows: (1) projections onto 2- or 3-dimensional, PDE representation-independent, dynamically invariant frames, and (2) projections onto the physical, symmetry-invariant but time-dependent, energy transfer rates. Relative periodic orbits require a reformulation of the periodic orbit theory [11], as well as a rethinking of the dynamical systems approaches to constructing symbolic dynamics, outstanding problems that we hope to address in the near future [46, 45]. What we have learned from the \(L=22\) system is that many of these relative periodic orbits appear organized by the unstable manifold of E\({}_{2}\), closely following the homoclinic loop formed between E\({}_{2}\) and \(\tau_{1/4}\)E\({}_{2}\). In the spirit of the parallel studies of boundary shear flows [24], the KS system size \(L=22\) was chosen as the smallest system size for which KS empirically exhibits "sustained turbulence." This is convenient both for the analysis of the state space geometry and for numerical reasons, but the price is high--much of the observed dynamics is specific to this unphysical, externally imposed periodicity. What needs to be understood is the nature of equilibrium and relative periodic orbit solutions in the \(L\to\infty\) limit, and the structure of the \(L=\infty\) periodic orbit theory.
In summary, KS equilibria (and plane Couette flow; see [20]), relative equilibria, periodic orbits, and relative periodic orbits embody Hopf's vision [27]: together they form the repertoire of recurrent spatio-temporal patterns explored by turbulent dynamics. Figure 16: The relative periodic orbit with \((T_{p},\ell_{p})=(33.5,4.04)\) from Figure 13(c) which appears well embedded within the turbulent flow: (a) A stationary state space projection, traced for four periods \(T_{p}\). The coordinate axes \(v_{1}\), \(v_{2}\), and \(v_{3}\) are those of Figure 9. (b) In the comoving mean velocity frame. ## Appendix A Integrating the KS equation numerically The KS equation in terms of Fourier modes, \[\hat{u}_{k}=\mathcal{F}[u]_{k}=\frac{1}{L}\int_{0}^{L}u(x,t)e^{-iq_{k}x}dx,\qquad u (x,t)=\mathcal{F}^{-1}[\hat{u}]=\sum_{k\in\mathbb{Z}}\hat{u}_{k}e^{iq_{k}x}, \tag{A.1}\] is given by \[\dot{\hat{u}}_{k}=\left(q_{k}^{2}-q_{k}^{4}\right)\hat{u}_{k}-\frac{iq_{k}}{2} \mathcal{F}[(\mathcal{F}^{-1}[\hat{u}])^{2}]_{k}\,. \tag{A.2}\] Since \(u\) is real, the Fourier modes are related by \(\hat{u}_{-k}=\hat{u}_{k}^{*}\). The above system is truncated as follows: The Fourier transform \(\mathcal{F}\) is replaced by its discrete equivalent \[a_{k}=\mathcal{F}_{N}[u]_{k}=\sum_{n=0}^{N-1}u(x_{n})e^{-iq_{k}x_{n}},\qquad u( x_{n})=\mathcal{F}_{N}^{-1}[a]_{n}=\frac{1}{N}\sum_{k=0}^{N-1}a_{k}e^{iq_{k}x_ {n}}, \tag{A.3}\] where \(x_{n}=nL/N\) and \(a_{N-k}=a_{k}^{*}\). Since \(a_{0}=0\) due to Galilean invariance, and setting \(a_{N/2}=0\) (assuming \(N\) is even), the number of independent variables in the truncated system is \(N-2\): \[\dot{a}_{k}=v_{k}(a)=\left(q_{k}^{2}-q_{k}^{4}\right)a_{k}-\frac{iq_{k}}{2} \mathcal{F}_{N}[(\mathcal{F}_{N}^{-1}[a])^{2}]_{k}\,, \tag{A.4}\] where \(k=1,\ldots,N/2-1\), although in the Fourier transform we need to use \(a_{k}\) over the full range of \(k\) values from \(0\) to \(N-1\).
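The stiffness of the \(q_{k}^{4}\) term makes exponential integrators a natural choice for the truncated Fourier system above. A compact numpy sketch of a KS time stepper in the spirit of the ETDRK4 scheme referenced in this appendix, with the scheme coefficients evaluated by contour averaging in the style of Kassam and Trefethen (resolution, stepsize, and the absence of dealiasing are our own illustrative simplifications, not the paper's production setup):

```python
import numpy as np

def ks_etdrk4(u0, L=22.0, h=0.25, nsteps=400):
    """Advance u_t = -u*u_x - u_xx - u_xxxx (periodic, domain size L) by ETDRK4."""
    N = u0.size
    v = np.fft.fft(u0)                            # Fourier amplitudes (numpy scaling)
    q = 2 * np.pi * np.fft.fftfreq(N, d=1.0 / N) / L
    Lop = q**2 - q**4                             # linear operator, diagonal in k

    E, E2 = np.exp(h * Lop), np.exp(h * Lop / 2)
    # scheme coefficients via averaging over a contour enclosing the spectrum
    M = 16
    r = np.exp(1j * np.pi * (np.arange(1, M + 1) - 0.5) / M)
    LR = h * Lop[:, None] + r[None, :]
    Q  = h * np.real(np.mean((np.exp(LR / 2) - 1) / LR, axis=1))
    f1 = h * np.real(np.mean((-4 - LR + np.exp(LR) * (4 - 3 * LR + LR**2)) / LR**3, axis=1))
    f2 = h * np.real(np.mean((2 + LR + np.exp(LR) * (-2 + LR)) / LR**3, axis=1))
    f3 = h * np.real(np.mean((-4 - 3 * LR - LR**2 + np.exp(LR) * (4 - LR)) / LR**3, axis=1))

    g = -0.5j * q                                  # nonlinearity: -(i q_k / 2) F[u^2]
    nonlin = lambda w: g * np.fft.fft(np.real(np.fft.ifft(w)) ** 2)

    for _ in range(nsteps):
        Nv = nonlin(v)
        a = E2 * v + Q * Nv;  Na = nonlin(a)
        b = E2 * v + Q * Na;  Nb = nonlin(b)
        c = E2 * a + Q * (2 * Nb - Nv);  Nc = nonlin(c)
        v = E * v + Nv * f1 + 2 * (Na + Nb) * f2 + Nc * f3
    return np.real(np.fft.ifft(v))
```

With the numpy FFT scaling, the evolution equation for `v` is identical mode by mode to the truncated system above, since the extra factor of \(N\) commutes with both the diagonal linear term and the quadratic nonlinearity after the forward/inverse transform pair.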
As \(a_{k}\in\mathbb{C}\), (A.4) represents a system of ODEs in \(\mathbb{R}^{N-2}\). The discrete Fourier transform \(\mathcal{F}_{N}\) can be computed by FFT. In Fortran and C, the FFTW library [15] can be used. In order to find the fundamental matrix of the solution, or to compute Lyapunov exponents of the KS flow, one needs to solve the equation for a displacement vector \(b\) in the tangent space: \[\dot{b}=\frac{\partial v(a)}{\partial a}b. \tag{A.5}\] Since \(\mathcal{F}_{N}\) is a linear operator, it is easy to show that \[\dot{b}_{k}=\left(q_{k}^{2}-q_{k}^{4}\right)b_{k}-iq_{k}\mathcal{F}_{N}[ \mathcal{F}_{N}^{-1}[a]\otimes\mathcal{F}_{N}^{-1}[b]]_{k}, \tag{A.6}\] where \(\otimes\) indicates the componentwise product of two vectors; that is, \(a\otimes b=\operatorname{diag}(a)\,b=\operatorname{diag}(b)\,a\). This equation needs to be solved simultaneously with (A.4). Equations (A.4) and (A.6) were solved using the exponential time differencing fourth order Runge-Kutta method (ETDRK4) [7, 31]. ## Appendix B Determining stability properties of equilibria, traveling waves, and relative periodic orbits Let \(f^{t}\) be the flow map of the KS equation; that is, \(f^{t}(a)=a(t)\) is the solution of (A.4) with initial condition \(a(0)=a\). The stability properties of the solution \(f^{t}(a)\) are determined by the fundamental matrix \(J(a,t)\) consisting of partial derivatives of \(f^{t}(a)\) with respect to \(a\).
Since \(a\) and \(f^{t}\) are complex-valued vectors, the real-valued matrix \(J(a,t)\) contains partial derivatives taken separately with respect to the real and imaginary parts of \(a\), that is,
\[J(a,t)=\frac{\partial f^{t}(a)}{\partial a}=\left(\begin{array}{cccc}\frac{\partial f^{t}_{R,1}}{\partial a_{R,1}}&\frac{\partial f^{t}_{R,1}}{\partial a_{I,1}}&\frac{\partial f^{t}_{R,1}}{\partial a_{R,2}}&\\ \frac{\partial f^{t}_{I,1}}{\partial a_{R,1}}&\frac{\partial f^{t}_{I,1}}{\partial a_{I,1}}&\frac{\partial f^{t}_{I,1}}{\partial a_{R,2}}&\ldots\\ \frac{\partial f^{t}_{R,2}}{\partial a_{R,1}}&\frac{\partial f^{t}_{R,2}}{\partial a_{I,1}}&\frac{\partial f^{t}_{R,2}}{\partial a_{R,2}}&\\ &\vdots&&\ddots\end{array}\right), \tag{B.1}\]
where \(a_{k}=a_{R,k}+ia_{I,k}\) and \(f^{t}_{k}=f^{t}_{R,k}+if^{t}_{I,k}\). The partial derivatives \(\frac{\partial f^{t}}{\partial a_{R,j}}\) and \(\frac{\partial f^{t}}{\partial a_{I,j}}\) are determined by solving (A.6) with initial conditions \(b_{k}(0)=b_{N-k}(0)=1+0i\) and \(b_{k}(0)=-b_{N-k}(0)=0+1i\), respectively, for \(k=j\) and \(b_{k}(0)=0\) otherwise. The stability of a periodic orbit with period \(T_{p}\) is determined by the location of the eigenvalues of \(J(a_{p},T_{p})\) with respect to the unit circle in the complex plane.
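As a cross-check on integrating the tangent-space equation, the fundamental matrix can also be approximated column-by-column by finite differences of the flow map itself. A minimal sketch follows; the helper name and the linear toy flow (for which \(J(a,t)=e^{At}\) exactly) are ours, not the paper's.

```python
import numpy as np

def flow_jacobian_fd(flow, a, t, eps=1e-6):
    """Fundamental matrix J(a,t): column j is d f^t(a)/d a_j, approximated by
    central differences of the flow map (a cheap cross-check on solving the
    tangent-space equation alongside the flow)."""
    n = len(a)
    J = np.empty((n, n))
    for j in range(n):
        da = np.zeros(n)
        da[j] = eps
        J[:, j] = (flow(a + da, t) - flow(a - da, t))/(2*eps)
    return J

# toy check: for the linear flow a(t) = e^{At} a(0), the Jacobian is e^{At} itself
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
w, V = np.linalg.eig(A)
expm = lambda t: np.real(V @ np.diag(np.exp(w*t)) @ np.linalg.inv(V))
J = flow_jacobian_fd(lambda a, t: expm(t) @ a, np.array([0.3, -0.2]), 1.5)
```

For a strongly chaotic flow the finite-difference step `eps` must be balanced against the exponential growth of perturbations, which is why the tangent-space integration is preferred in practice.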
Because of the translation invariance, the stability of a relative periodic orbit is determined by the eigenvalues of the matrix \({\bf g}(\ell_{p})\,J(a_{p},T_{p})\), where \({\bf g}(\ell)\) is the action of the translation operator introduced in (2.6), which in real-valued representation takes the form of a block diagonal matrix with the \(2\times 2\) blocks \[\left(\begin{array}{cc}\cos q_{k}\ell&\sin q_{k}\ell\\ -\sin q_{k}\ell&\cos q_{k}\ell\end{array}\right),\quad k=1,2,\ldots,N/2-1\,.\] For an equilibrium solution \(a_{q}\), \(f^{t}(a_{q})=a_{q}\), and so the fundamental matrix \(J(a_{q},t)\) can be expressed in terms of the time-independent stability matrix \(A(a_{q})\) as follows: \[J(a_{q},t)=e^{A(a_{q})t},\] where \[A(a_{q})=\frac{\partial v}{\partial a}\bigg{|}_{a=a_{q}}.\] (B.2) Using the real-valued representation of (B.1), the partial derivatives of \(v(a)\) with respect to the real and imaginary parts of \(a\) are given by \[\frac{\partial v_{k}}{\partial a_{R,j}}=\left(q_{k}^{2}-q_{k}^{4} \right)\delta_{kj}-iq_{k}{\cal F}_{N}[{\cal F}_{N}^{-1}[a]\otimes{\cal F}_{N}^ {-1}[b_{R}^{(j)}]]_{k},\] (B.3) \[\frac{\partial v_{k}}{\partial a_{I,j}}=\left(q_{k}^{2}-q_{k}^{4} \right)i\delta_{kj}-iq_{k}{\cal F}_{N}[{\cal F}_{N}^{-1}[a]\otimes{\cal F}_{N} ^{-1}[b_{I}^{(j)}]]_{k},\] where \(b_{R}^{(j)}\) and \(b_{I}^{(j)}\) are complex-valued vectors such that \(b_{R,k}^{(j)}=b_{R,N-k}^{(j)}=1+0i\) and \(b_{I,k}^{(j)}=-b_{I,N-k}^{(j)}=0+1i\) for \(k=j\) and \(b_{R,k}^{(j)}=b_{I,k}^{(j)}=0\) otherwise. 
In terms of \(a_{R,k}\) and \(a_{I,k}\) we have
\[\begin{aligned}\frac{\partial v_{R,k}}{\partial a_{R,j}}&=\left(q_{k}^{2}-q_{k}^{4}\right)\delta_{kj}+q_{k}(a_{I,k+j}+a_{I,k-j}),\\ \frac{\partial v_{R,k}}{\partial a_{I,j}}&=-q_{k}(a_{R,k+j}-a_{R,k-j}),\\ \frac{\partial v_{I,k}}{\partial a_{R,j}}&=-q_{k}(a_{R,k+j}+a_{R,k-j}),\\ \frac{\partial v_{I,k}}{\partial a_{I,j}}&=\left(q_{k}^{2}-q_{k}^{4}\right)\delta_{kj}-q_{k}(a_{I,k+j}-a_{I,k-j}),\end{aligned} \tag{B.4}\]
where \(\delta_{kj}\) is the Kronecker delta. The stability of equilibria is characterized by the sign of the real parts of the eigenvalues of \(A(a_{q})\). The stability of a relative equilibrium is determined in the comoving reference frame, where the fundamental matrix takes the form \(\mathbf{g}(c_{q}t)\,J(a_{q},t)\). The stability matrix of a relative equilibrium is thus equal to \(A(a_{q})+c_{q}\mathcal{L}\), where \(\mathcal{L}_{kj}=iq_{k}\delta_{kj}\) is the Lie algebra generator of translations, which in the real-valued representation is block diagonal with the \(2\times 2\) blocks
\[\left(\begin{array}{cc}0&-q_{k}\\ q_{k}&0\end{array}\right),\quad k=1,2,\ldots,N/2-1\,.\]

## Appendix C Levenberg-Marquardt searches for relative periodic orbits

To find relative periodic orbits of the KS flow, we use multiple shooting and the Levenberg-Marquardt (LM) algorithm implemented in the routine lmder from the MINPACK software package [44]. In order to find periodic orbits, a system of nonlinear algebraic equations needs to be solved. For flows, this system is underdetermined, so, traditionally, it is augmented with a constraint that restricts the search space to be transversal to the flow (otherwise, most of the popular solvers of systems of nonlinear algebraic equations, e.g., those based on Newton's method, cannot be used). When detecting relative periodic orbits, a constraint is added for each continuous symmetry of the flow. For example, when detecting relative periodic orbits of the complex Ginzburg-Landau equation, Lopez et al. [41] introduce three additional constraints.
Our approach differs from those used previously in that we do not introduce any constraints. Being an optimization solver, the LM algorithm has no problem with solving an underdetermined system of equations, and, even though lmder explicitly requires the number of equations to be no smaller than the number of variables, the additional equations can simply be set identically equal to zero [8, 9]. In fact, there is numerical evidence that, when implemented with additional constraints, the solver usually takes more steps to converge from the same seed, or fails to converge at all [8, 9]. In what follows we give a detailed description of the algorithm and the search strategy which we have used to find a large number of relative periodic orbits defined in (20) and preperiodic orbits defined in (21). When searching for relative periodic orbits of the truncated KS equation (A.4), we need to solve the system of \(N-2\) equations
\[\mathbf{g}(\ell)f^{T}(a)-a=0, \tag{C.1}\]
with \(N\) unknowns \((a,T,\ell)\), where \(f^{t}\) is the flow map of the KS equation. In the case of preperiodic orbits, the system has the form
\[-\mathbf{g}(-\ell)[f^{T}(a)]^{*}-a=0 \tag{C.2}\]
(see (21)). We have tried two different implementations of multiple shooting. The emphasis was on simplicity, so, even though both implementations worked equally well, each had its own minor drawbacks. In the first implementation, we fix the total number of steps within each shooting stage and change the numerical integrator step size \(h\) in order to adjust the total integration time to a desired value \(T\). Let \((\hat{a},\hat{T},\hat{\ell})\) be the starting guess for a relative periodic orbit obtained through a close return within a chaotic attractor (see below).
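The zero-padding trick for underdetermined systems can be seen in miniature with a hand-rolled LM iteration standing in for lmder; the circle equation below is a hypothetical toy (one genuine equation in two unknowns, padded with an identically zero residual and a zero Jacobian row), and all names are ours.

```python
import numpy as np

def lm_underdetermined(F, J, x0, mu=1e-3, iters=50):
    """Minimal Levenberg-Marquardt loop.  F may contain identically zero
    padding rows, as when an underdetermined system is padded to square;
    the damping term keeps the normal equations nonsingular anyway."""
    x = x0.astype(float)
    for _ in range(iters):
        f, Jx = F(x), J(x)
        # damped normal equations: (J^T J + mu I) dx = -J^T f
        dx = np.linalg.solve(Jx.T @ Jx + mu*np.eye(len(x)), -Jx.T @ f)
        if np.linalg.norm(F(x + dx)) < np.linalg.norm(f):
            x, mu = x + dx, mu/3     # accept step, relax damping
        else:
            mu *= 2                  # reject step, increase damping
    return x

# one equation, two unknowns: a point on the unit circle, padded with a zero row
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, 0.0])
J = lambda x: np.array([[2*x[0], 2*x[1]], [0.0, 0.0]])
x = lm_underdetermined(F, J, np.array([2.0, 1.0]))
```

The padded row contributes nothing to \(J^{T}J\) or \(J^{T}f\), so the iteration behaves exactly as on the original underdetermined system, which is why the formal square-system requirement of lmder is harmless.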
We require that the initial integration step size not exceed \(h_{0}\), so we set the number of integration steps to \(n=\lceil\hat{T}/h_{0}\rceil\), where \(\lceil x\rceil\) denotes the smallest integer not less than \(x\). The integration step size is then \(h=T/n\). With the number of shooting stages equal to \(m\), the system (C.1) is rewritten as follows:
\[\begin{aligned}F^{(1)}&=f^{\tau}(a^{(1)})-a^{(2)}=0,\\ F^{(2)}&=f^{\tau}(a^{(2)})-a^{(3)}=0,\\ &\;\;\vdots\\ F^{(m-1)}&=f^{\tau}(a^{(m-1)})-a^{(m)}=0,\\ F^{(m)}&=\mathbf{g}(\ell)f^{\tau^{\prime}}(a^{(m)})-a^{(1)}=0,\end{aligned} \tag{C.3}\]
where \(\tau=\lfloor n/m\rfloor h\) (\(\lfloor x\rfloor\) is the largest integer not exceeding \(x\)), \(\tau^{\prime}=nh-(m-1)\tau\), and \(a^{(j)}=f^{(j-1)\tau}(a)\), \(j=1,\ldots,m\). For the detection of preperiodic orbits, the last equation in (C.3) should be replaced with
\[F^{(m)}=-\mathbf{g}(-\ell)[f^{\tau^{\prime}}(a^{(m)})]^{*}-a^{(1)}=0.\]
With the fundamental matrix of (C.3) written as
\[J=\left(\,\frac{\partial F^{(j)}}{\partial a^{(k)}}\quad\frac{\partial F^{(j)}}{\partial T}\quad\frac{\partial F^{(j)}}{\partial\ell}\,\right),\quad j,k=1,\ldots,m\,, \tag{C.4}\]
the partial derivatives with respect to \(a^{(k)}\) can be calculated using the solution of (A.6) as described in Appendix B. The partial derivatives with respect to \(T\) are given by
\[\frac{\partial F^{(j)}}{\partial T}=\left\{\begin{array}{ll}\frac{\partial f^{\tau}(a^{(j)})}{\partial\tau}\frac{\partial\tau}{\partial T}=v(f^{\tau}(a^{(j)}))\lfloor n/m\rfloor/n\,,&j=1,\ldots,m-1\,,\\ \mathbf{g}(\ell)\,v(f^{\tau^{\prime}}(a^{(j)}))\left(1-\frac{m-1}{n}\lfloor n/m\rfloor\right),&j=m\,.\end{array}\right. \tag{C.5}\]
Note that, even though \(\partial f^{t}(a)/\partial t=v(f^{t}(a))\), this derivative should not be evaluated using the equation for the vector field \(v\).
The reason for this is that, since the flow \(f^{t}\) is approximated by a numerical solution, the derivative of the numerical solution with respect to the step size \(h\) may differ from the vector field \(v\), especially for larger step sizes. We evaluate the derivative by a forward difference using numerical integration with step sizes \(h\) and \(h+\delta\):
\[\frac{\partial f^{jh}(a)}{\partial t}\approx\frac{1}{j\delta}\left[f^{j(h+\delta)}(a)-f^{jh}(a)\right],\quad j\in\mathbb{Z}^{+}\,, \tag{C.6}\]
with \(t=jh\) and \(\delta=10^{-7}\) for double precision calculations. The partial derivatives \(\partial F^{(j)}/\partial\ell\) are all equal to zero except for \(j=m\), where
\[\frac{\partial F^{(m)}}{\partial\ell}=\frac{d\mathbf{g}}{d\ell}f^{\tau^{\prime}}(a^{(m)})=\operatorname{diag}(iq_{k}e^{iq_{k}\,\ell})f^{\tau^{\prime}}(a^{(m)})\,. \tag{C.7}\]
This fundamental matrix, augmented with two rows of zeros (corresponding to the two identically zero equations appended to (C.3)), is supplied to lmder in order to make the number of equations formally equal to the number of variables, as discussed above. In the second implementation, we keep \(h\) and \(\tau\) fixed and vary only \(\tau^{\prime}=T-(m-1)\tau\). In this case, we need to be able to evaluate the numerical solution of the KS equation not only at times \(t_{j}=jh\), \(j=1,2,\dots\), but at any intermediate time as well. We do this by a cubic polynomial interpolation through the points \(f^{t_{j}}(a)\) and \(f^{t_{j+1}}(a)\) with slopes \(v(f^{t_{j}}(a))\) and \(v(f^{t_{j+1}}(a))\). The difference from the first implementation is that the partial derivatives \(\partial F^{(j)}/\partial T\) vanish for \(j=1,\dots,m-1\), while
\[\frac{\partial F^{(m)}}{\partial T}=\mathbf{g}(\ell)\,v(f^{\tau^{\prime}}(a^{(m)}))\,, \tag{C.8}\]
which, for consistency, needs to be evaluated from the cubic polynomial, not from the flow equation evaluated at \(f^{\tau^{\prime}}(a^{(m)})\).
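The chain structure of the multistage system (C.3) can be illustrated on a toy flow. In this sketch, which uses our own names, \(\mathbf{g}\) is taken to be the identity (no symmetry) and all stages use the same \(\tau\), unlike the final stage of (C.3); the flow map is a plane rotation standing in for \(f^{t}\).

```python
import numpy as np

def flow(a, t, omega=1.0):
    """Toy flow map: rotation with angular velocity omega (stands in for f^t)."""
    c, s = np.cos(omega*t), np.sin(omega*t)
    return np.array([c*a[0] - s*a[1], s*a[0] + c*a[1]])

def shooting_residual(stages, T, m):
    """Multistage residual F^{(1)},...,F^{(m)} in the spirit of (C.3), with
    g = identity: stages is an (m, 2) array of shooting points a^{(j)}."""
    tau = T/m
    F = []
    for j in range(m - 1):
        F.append(flow(stages[j], tau) - stages[j + 1])   # stage-to-stage match
    F.append(flow(stages[m - 1], tau) - stages[0])       # close the loop
    return np.concatenate(F)
```

For the true period the residual vanishes up to round-off; for a wrong period only the loop-closing equation is violated, which is exactly the structure the LM solver exploits.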
For detecting relative periodic orbits of the KS flow with \(L=22\), we used \(N=32\), \(h=0.25\) (or \(h_{0}=0.25\) within the first implementation), and a number of shooting stages such that \(\tau\approx 40.0\). While both implementations were equally successful in detecting periodic orbits of KS flow, we found the second implementation more convenient. The following search strategy was adopted: The search for relative periodic orbits with \(T\in[10,200]\) was conducted within a rectangular region containing the chaotic attractor. To generate a seed, a random point was selected within the region, and the flow (A.4) was integrated for a transient time \(t=40\), sufficient for an orbit to settle on the attractor at some point \(\hat{a}\). This point was taken to be the seed location. In order to find orbits with different periods, the time interval \([10,200]\) was subdivided into windows of length 10, i.e., \([t_{\min},t_{\max}]\), where \(t_{\min}=10j\) and \(t_{\max}=10(j+1)\), with \(j=1,2,\dots,19\). To determine the seed time \(\hat{T}\) and shift \(\hat{\ell}\), we located an approximate global minimum of \(\|\mathbf{g}(\ell)f^{t}(a)-a\|\) (or of \(\|-\mathbf{g}(-\ell)[f^{t}(a)]^{*}-a\|\) in the case of preperiodic orbits) as a function of \(t\in[t_{\min},t_{\max}]\) and \(\ell\in(-L/2,L/2]\). We did this simply by finding the minimum value of the function on a grid of points with resolution \(h\) in time and \(L/50\) in \(\ell\). Approximately equal numbers of seeds were generated for the detection of relative periodic orbits and preperiodic orbits and within each time window. The hit rate, i.e., the fraction of seeds that converged to relative periodic orbits or preperiodic orbits, varied from about 70% for windows with \(t_{\max}\leq 80\) to about 30% for windows with \(t_{\min}\geq 160\). The total number of hits for relative periodic orbits and preperiodic orbits was over \(10^{6}\) each. 
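The close-return seed search can be sketched as a brute-force grid scan over \((t,\ell)\); the translation \(\mathbf{g}(\ell)\) is applied via Fourier phases. The names and the traveling-wave test field are ours, chosen so the minimum of the residual is known in closed form.

```python
import numpy as np

def shift(u, ell, L):
    """g(ell): translate a periodic field, u(x) -> u(x - ell), via Fourier phases."""
    N = len(u)
    q = 2*np.pi*np.fft.fftfreq(N, d=1.0/N)/L
    return np.real(np.fft.ifft(np.fft.fft(u)*np.exp(-1j*q*ell)))

def closest_return(u0, traj, times, L, nell=50):
    """Grid search for the (t, ell) minimizing ||g(ell) u(t) - u0|| over a
    trajectory sampled at the given times; ell grid has resolution L/nell."""
    ells = -L/2 + L*np.arange(1, nell + 1)/nell      # grid over (-L/2, L/2]
    best = (np.inf, 0.0, 0.0)
    for t, u in zip(times, traj):
        for ell in ells:
            r = np.linalg.norm(shift(u, ell, L) - u0)
            if r < best[0]:
                best = (r, t, ell)
    return best
```

For a traveling wave \(u(x,t)=u_0(x-ct)\), the scan should recover \(\ell\approx -ct\) at whichever sampled \(t\) lands closest to the \(\ell\) grid, with a residual limited only by the grid resolution.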
Each newly found orbit was compared, after factoring out the translation and reflection symmetries, to those already detected. As the search progressed, we found fewer and fewer new orbits, with the numbers first saturating for smaller period orbits. At the end of the search we could find very few new orbits with periods \(T<120\). Thus we found over 30,000 distinct prime relative periodic orbits with \(\ell>0\) and over 30,000 distinct prime preperiodic orbits with \(T<200\). In Figure 17 we show the numbers of detected relative periodic orbits and preperiodic orbits with periods less than \(T\). The figure shows that the numbers of relative periodic orbits and preperiodic orbits are approximately equal and that they grow exponentially with increasing \(T\) up to \(T\sim 130\), so that we are mostly missing orbits with \(T>130\). The straight line fits to the logarithm of the numbers of orbits in the interval \(T\in[70,120]\), represented by the lines in Figure 17, indicate that the total numbers of relative periodic orbits and preperiodic orbits with \(T<200\) could be over \(10^{5}\) each. To test the structural stability of the detected orbits and their relevance to the full KS PDE, the numerical accuracy was improved by increasing the number of Fourier modes (\(N=64\)) and reducing the step size (\(h=0.1\)). Only a handful of orbits failed this higher-resolution test. These orbits were not included in the list of the 60,000+ orbits detected. ## References * [1]D. Armbruster, J. Guckenheimer, and P. Holmes, _Heteroclinic cycles and modulated travelling waves in systems with \(O(2)\) symmetry_, Phys. D, 29 (1988), pp. 257-282. * [2]D. Armbruster, J. Guckenheimer, and P. Holmes, _Kuramoto-Sivashinsky dynamics on the center-unstable manifold_, SIAM J. Appl. Math., 49 (1989), pp. 676-691. * [3]J. C. Bronski and T. N. Gambill, _Uncertainty estimates and \(L_{2}\) bounds for the Kuramoto-Sivashinsky equation_, Nonlinearity, 19 (2006), pp. 2023-2039. 
arXiv:math/0508481 * [4]H. S. Brown and I. G. Kevrekidis, _Modulated traveling waves for the Kuramoto-Sivashinsky equation_, in Pattern Formation: Symmetry Methods and Applications, D. Benest and C. Froeschle, eds., Fields Inst. Commun. 5, AMS, Providence, RI, 1996, pp. 45-66. * [5]A. Chenciner, _Three Body Problem_, Web article, 2007; online at [http://scholarpedia.org/article/Three_body_problem](http://scholarpedia.org/article/Three_body_problem). * [6]F. Christiansen, P. Cvitanovic, and V. Putkaradze, _Spatiotemporal chaos in terms of unstable recurrent patterns_, Nonlinearity, 10 (1997), pp. 55-70. * [7]S. M. Cox and P. C. Matthews, _Exponential time differencing for stiff systems_, J. Comput. Phys., 176 (2002), pp. 430-455. * [8]J. J. Crofts, _Efficient Method for Detection of Periodic Orbits in Chaotic Maps and Flows_, Ph.D. thesis, Department of Mathematics, University of Leicester, Leicester, UK, 2007. arXiv:nlin.CD/0706.1940 * [9]J. J. Crofts and R. L. Davidchack, _On the use of stabilizing transformations for detecting unstable periodic orbits in high-dimensional flows_, Chaos, 19 (2009), paper 033138. * [10]M. C. Cross and P. C. Hohenberg, _Pattern formation outside of equilibrium_, Rev. Mod. Phys, 65 (1993), pp. 851-1112. * [11]P. Cvitanovic, _Continuous symmetry reduced trace formulas_, in Chaos: Classical and Quantum, P. Cvitanovic, R. Artuso, R. Mainieri, G. Tanner, and G. Vattay, eds., Web book, Niels Bohr Institute, Copenhagen, Denmark, [http://ChaosBook.org/-predrag/papers/trace.pdf](http://ChaosBook.org/-predrag/papers/trace.pdf), 2007. * [12]P. Cvitanovic, R. Artuso, R. Mainieri, G. Tanner, and G. Vattay, eds., _Chaos: Classical and Quantum_, Niels Bohr Institute, Copenhagen, 2008; [http://ChaosBook.org/version12](http://ChaosBook.org/version12). * [13]P. Cvitanovic and B. Eckhardt, _Symmetry decomposition of chaotic dynamics_, Nonlinearity, 6 (1993), pp. 277-311. arXiv:chao-dyn/9303016 * [14]C. Foias, B. Nicolaenko, G. R. Sell, and R. 
Temam, _Inertial manifold for the Kuramoto-Sivashinsky equation_, C. R. Acad. Sci. Paris Ser. I, 301 (1985), pp. 285-288. * [15]M. Frigo and S. G. Johnson, _The design and implementation of FFTW\(3\)_, Proc. IEEE, 93 (2005), pp. 216-231. * [16]U. Frisch, _Turbulence_, Cambridge University Press, Cambridge, UK, 1996. * [17]U. Frisch, Z. S. She, and O. Thual, _Viscoelastic behavior of cellular solutions to the Kuramoto-Sivashinsky model_, J. Fluid Mech., 168 (1986), pp. 221-240. * [18]L. Giacomelli and F. Otto, _New bounds for the Kuramoto-Sivashinsky equation_, Comm. Pure Appl. Math., 58 (2005), pp. 297-318. * [19]J. F. Gibson, _Movies of Plane Couette_, Technical report, Georgia Institute of Technology, Atlanta, GA, 2008, online at [http://ChaosBook.org/tutorials](http://ChaosBook.org/tutorials). * [20]J. F. Gibson, J. Halcrow, and P. Cvitanovic, _Visualizing the geometry of state-space in plane Couette flow_, J. Fluid Mech., 611 (2008), pp. 107-130. arXiv:0705.3957 * [21]F. Ginelli, P. Poggi, A. Turchi, H. Chate, R. Livi, and A. Politi, _Characterizing dynamics with covariant Lyapunov vectors_, Phys. Rev. Lett., 99 (2007), paper 130601. arXiv:0706.0510 * [22]M. Golubitsky and I. Stewart, _The Symmetry Perspective_, Birkhauser Boston, Cambridge, MA, 2002. * [23]J. M. Greene and J. S. Kim, _The steady states of the Kuramoto-Sivashinsky equation_, Phys. D, 33 (1988), pp. 99-120. * [24]J. Hamilton, J. Kim, and F. Waleffe, _Regeneration mechanisms of near-wall turbulence structures_, J. Fluid Mech., 287 (1995), pp. 317-348. * [25]B. Hof, C. W. H. van Doorne, J. Westerweel, F. T. M. Nieuwstadt, H. Faisst, B. Eckhardt, H. Wedin, R. R. Kerswell, and F. Waleffe, _Experimental observation of nonlinear traveling waves in turbulent pipe flow_, Science, 305 (2004), pp. 1594-1598. * [26]P. Holmes, J. L. Lumley, and G. Berkooz, _Turbulence, Coherent Structures, Dynamical Systems and Symmetry_, Cambridge University Press, Cambridge, UK, 1996. * [27]E. 
Hopf, _A mathematical example displaying features of turbulence_, Comm. Appl. Math., 1 (1948), pp. 303-322. * [28]J. M. Hyman, B. Nicolaenko, and S. Zaleski, _Order and complexity in the Kuramoto-Sivashinsky model of weakly turbulent interfaces_, Phys. D, 23 (1986), pp. 265-292. * [29]M. E. Johnson, M. S. Jolly, and I. G. Kevrekidis, _The Oseberg transition: Visualization of global bifurcations for the Kuramoto-Sivashinsky equation_, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 11 (2001), pp. 1-18. * [30]M. Jolly, R. Rosa, and R. Temam, _Evaluating the dimension of an inertial manifold for the Kuramoto-Sivashinsky equation_, Adv. Differential Equations, 5 (2000), pp. 31-66. * [31]A.-K. Kassam and L. N. Trefethen, _Fourth-order time-stepping for stiff PDEs_, SIAM J. Sci. Comput., 26 (2005), pp. 1214-1233. * [32]G. Kawahara and S. Kida, _Periodic motion embedded in plane Couette turbulence: Regeneration cycle and burst_, J. Fluid Mech., 449 (2001), pp. 291-300. * [33]I. G. Kevrekidis, B. Nicolaenko, and J. C. Scovel, _Back in the saddle again: A computer assisted study of the Kuramoto-Sivashinsky equation_, SIAM J. Appl. Math., 50 (1990), pp. 760-790. * [34]B. Krauskopf, H. Osinga, E. Dodel, M. Henderson, J. Guckenheimer, A. Vladimirsky, M. Dellnitz, and O. Junge, _A survey of methods for computing (un)stable manifolds of vector fields_, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 15 (2005), pp. 763-791. * [35]M. Krupa, _Bifurcations of relative equilibria_, SIAM J. Math. Anal., 21 (1990), pp. 1453-1486. * [36]M. Krupa, _Robust heteroclinic cycles_, J. Nonlinear Sci., 7 (1997), pp. 129-176. * [37]Y. Kuramoto and T. Tsuzuki, _Persistent propagation of concentration waves in dissipative media far from thermal equilibrium_, Progr. Theoret. Phys., 55 (1976), p. 365. * [38]Y. Lan, _Dynamical Systems Approach to One-Dimensional Spatiotemporal Chaos--A Cyclist's View_, Ph.D. 
thesis, School of Physics, Georgia Institute of Technology, Atlanta, GA, 2004; available online at [http://etd.gatech.edu/theses/available/etd-10282004-154606/](http://etd.gatech.edu/theses/available/etd-10282004-154606/). * [39]Y. Lan and P. Cvitanovic, _Unstable recurrent patterns in Kuramoto-Sivashinsky dynamics_, Phys. Rev. E, 78 (2008), paper 026208. arXiv.org:0804.2474 * [40]R. E. LaQuey, S. M. Mahajan, P. H. Rutherford, and W. M. Tang, _Nonlinear saturation of the trapped-ion mode_, Phys. Rev. Lett., 34 (1974), pp. 391-394. * [41]V. Lopez, P. Boyland, M. T. Heath, and R. D. Moser, _Relative periodic solutions of the complex Ginzburg-Landau equation_, SIAM J. Appl. Dyn. Syst., 4 (2005), pp. 1042-1075. * [42]P. Manneville, _Dissipative Structures and Weak Turbulence_, Academic Press, Boston, 1990. * [43]D. Michelson, _Steady solutions of the Kuramoto-Sivashinsky equation_, Phys. D, 19 (1986), pp. 89-111. * [44]J. J. More, B. S. Garbow, and K. E. Hillstrom, _User Guide for MINPACK-1_, report ANL-80-74, Argonne National Laboratory, Argonne, IL, 1980. * [45]E. Siminos, _Recurrent Spatio-Temporal Structures in Presence of Continuous Symmetries_, Ph.D. thesis, School of Physics, Georgia Institute of Technology, Atlanta, GA, 2009; available online at [http://ChaosBook.org/projects/Siminos/thesis.pdf](http://ChaosBook.org/projects/Siminos/thesis.pdf). * [46]E. Siminos, P. Cvitanovic, and R. L. Davidchack, _Recurrent spatio-temporal structures of translationally invariant Kuramoto-Sivashinsky flow_, manuscript, 2009. * [47]G. I. Sivashinsky, _Nonlinear analysis of hydrodynamical instability in laminar flames--I. Derivation of basic equations_, Acta Astronaut., 4 (1977), pp. 1177-1206. * [48]V. Szebehely, _Theory of Orbits_, Academic Press, New York, 1967. * [49]D. Viswanath, _Recurrent motions within plane Couette turbulence_, J. Fluid Mech., 580 (2007), pp. 339-358. arXiv:physics/0604062 * [50]H.-L. Yang, K. A. Takeuchi, F. Ginelli, H. Chate, and G. 
Radons, _Hyperbolicity and the effective dimension of spatially-extended dissipative systems_, Phys. Rev. Lett., 102 (2009), paper 074102. arXiv:0807.5073

# Error analysis of QR algorithms for computing Lyapunov exponents†

† Received October 25, 2000. Accepted for publication October 25, 2001. Recommended by W. B. Gragg.

Edward J. McDonald, Department of Mathematics, University of Strathclyde, Glasgow, G1 1XH, Scotland. Email: ta.emcd@maths.strath.ac.uk.

Desmond J. Higham, Department of Mathematics, University of Strathclyde, Glasgow, G1 1XH, Scotland. Email: djh@maths.strath.ac.uk. This research was supported by the Engineering and Physical Sciences Research Council of the UK under grant GR/M42206.

###### Abstract

Lyapunov exponents give valuable information about long term dynamics. The discrete and continuous QR algorithms are widely used numerical techniques for computing approximate Lyapunov exponents, although they are not yet supported by a general error analysis. Here, a rigorous convergence theory is developed for both the discrete and continuous QR algorithms applied to a constant coefficient linear system with real distinct eigenvalues. For the discrete QR algorithm, the problem essentially reduces to one of linear algebra for which the timestepping and linear algebra errors uncouple and precise convergence rates are obtained. For the continuous QR algorithm, the stability, rather than the local accuracy, of the timestepping algorithm is relevant, and hence the overall convergence rate is independent of the stepsize. In this case it is vital to use a timestepping method that preserves orthogonality in the ode system. We give numerical results to illustrate the analysis. Further numerical experiments and a heuristic argument suggest that the convergence properties carry through to the case of complex conjugate eigenvalue pairs.
AMS subject classifications. 65L05, 65F15.

## 1 Introduction

Several authors have derived numerical algorithms for the computation of Lyapunov exponents of ordinary differential equations (odes); see [4, 5] for an overview. However, there is little error analysis to justify the use of these algorithms. In this work, we consider the discrete and continuous QR algorithms, and in order to establish a rigorous convergence result we restrict attention to a simple class of test problems: linear, constant coefficient systems. In this case the Lyapunov exponents reduce to the real parts of the eigenvalues of the Jacobian matrix, and the discrete and continuous QR algorithms become closely related to the orthogonal iteration process in numerical linear algebra. In order to develop a convergence theory for the discrete QR algorithm, it is necessary to deal simultaneously with the limits \(\Delta t\to 0\) and \(T\to\infty\), where \(\Delta t\) is the timestep used for the ode solver and \([0,T]\) is the truncation of \([0,\infty)\). We resolve this by allowing \(T\) to behave like a negative power of \(\Delta t\). For the continuous QR algorithm, the use of a timestepping method that preserves orthogonality is vital for convergence. In §2 we define the QR algorithms, state our assumptions, and give the main convergence theorems. These theorems are proved in §3. Section 4 presents some numerical results that back up the theory. Although the convergence results apply only for the case of real distinct eigenvalues, we also perform numerical tests involving complex conjugate pairs. It appears that the algorithms remain convergent in this case, despite the fact that only diagonal entries are used (rather than \(2\times 2\) blocks). This effect is illustrated in detail. Section 5 discusses the results.
With the notable exception of [4], there has been relatively little attention paid to Lyapunov exponent algorithms in the numerical analysis literature, and hence a general convergence theory is lacking. In [4], a number of fundamental results are given that quantify the error under various simplifying assumptions. In particular, a semi-heuristic discussion of the convergence of the QR algorithms on constant coefficient odes is given [4, pp. 412-413]. However, the discussion does not mention how to resolve the \(\Delta t\to 0\), \(T\to\infty\) issue, and _rates_ of convergence are not given. Our work can be regarded as an attempt to make rigorous that discussion in [4]. Overall, we aim to provide a _rigorous_ analysis of the _rate_ of convergence of the QR algorithms on a tractable class of test problems. The analysis makes use of convergence theory from numerical linear algebra, but also relies on results from the classical numerical ode literature and more recent ideas from geometric integration.

## 2 Motivation and Convergence Results

We begin by describing the algorithms on time-dependent linear systems. The algorithms may also be applied to nonlinear systems after linearizing along a solution trajectory [4]. For the \(n\)-dimensional linear system
\[\dot{y}(t)=A(t)y(t), \tag{2.1}\]
we let \(Y(t)\in\mathbb{R}^{n\times n}\) denote the fundamental solution matrix for (2.1), so that \(\dot{Y}(t)=A(t)Y(t)\) and \(Y(0)=I\). A continuous QR factorization of \(Y(t)\) then gives
\[Y(t)=Q(t)R(t), \tag{2.2}\]
where \(Q(t)\in\mathbb{R}^{n\times n}\) is orthogonal and \(R(t)\in\mathbb{R}^{n\times n}\) is upper triangular with positive diagonal entries. (Throughout this work, we will ask for positive diagonal entries in a triangular QR factor; this makes the QR factorization of a nonsingular matrix unique [6].)
Under appropriate regularity assumptions, it may be shown [4] that the Lyapunov exponents for the system (2.1) satisfy
\[\lambda^{[k]}=\lim_{t\to\infty}\frac{1}{t}\log R_{kk}(t),\quad 1\leq k\leq n. \tag{2.3}\]

### 2.1 Discrete QR Algorithm

The discrete QR algorithm for (2.1) is based on the following process. Choose a sequence \(0=t_{0}<t_{1}<t_{2}<\cdots\), with \(\lim_{j\to\infty}t_{j}=\infty\). Set \(Q_{0}=I\) and for \(j=0,1,2,\ldots\) let
\[\dot{Z}_{j}(t)=A(t)Z_{j}(t),\quad Z_{j}(t_{j})=Q_{j},\quad t_{j}\leq t\leq t_{j+1}, \tag{2.4}\]
and take the QR factorization
\[Z_{j}(t_{j+1})=Q_{j+1}R_{j+1}. \tag{2.5}\]
To see why (2.4) and (2.5) are useful, let \(F_{j}(t)\in\mathbb{R}^{n\times n}\) be such that \(\dot{F_{j}}(t)=A(t)F_{j}(t)\) and \(F_{j}(t_{j})=I\). Then
\[Z_{j}(t_{j+1})=F_{j}(t_{j+1})Q_{j}\quad\mathrm{and}\quad Y(t_{j+1})=F_{j}(t_{j+1})Y(t_{j}). \tag{2.6}\]
It follows from (2.5) and (2.6) that
\[Y(t_{j+1})=Z_{j}(t_{j+1}){Q_{j}}^{T}Y(t_{j})=Q_{j+1}R_{j+1}{Q_{j}}^{T}Y(t_{j}).\]
Continuing this argument we find that
\[Y(t_{j+1})=Q_{j+1}R_{j+1}{Q_{j}}^{T}Q_{j}R_{j}{Q_{j-1}}^{T}Y(t_{j-1})=\cdots=Q_{j+1}R_{j+1}R_{j}\cdots R_{1}.\]
Hence, (2.4) and (2.5) contain the information needed to construct the QR factorization of \(Y(t)\) at each point \(t_{j}\), and so, from (2.3),
\[\lambda^{[k]}=\lim_{N\to\infty}\frac{1}{t_{N}}\log\left(\prod_{j=1}^{N}(R_{j})_{kk}\right),\quad 1\leq k\leq n. \tag{2.7}\]
To convert (2.4) and (2.5) into a numerical algorithm, two types of approximation are introduced.

1. The ode system (2.4) is solved numerically. We will suppose that a constant spacing \(\Delta t:=t_{j+1}-t_{j}\) is used and a one-step numerical method with stepsize \(\Delta t\) is applied for each iteration.
2. The infinite time interval is truncated, so that a finite number of iteration steps is used.

### 2.2 Continuous QR Algorithm

The continuous QR algorithm proceeds as follows.
Differentiating (2.2) we have
\[\dot{Y}=\dot{Q}R+Q\dot{R}=AQR, \tag{2.8}\]
from which we obtain
\[Q^{T}\dot{Q}-Q^{T}AQ=-\dot{R}R^{-1}. \tag{2.9}\]
Note that \(\dot{R}R^{-1}\) is upper triangular and, since \(Q^{T}Q=I\), \(Q^{T}\dot{Q}\) is skew-symmetric. It thus follows from (2.9) that \(Q^{T}\dot{Q}=H(t,Q)\), where
\[H_{ij}=\left\{\begin{array}{ll}(Q^{T}AQ)_{ij},&i>j,\\ 0,&i=j,\\ -(Q^{T}AQ)_{ji},&i<j.\end{array}\right. \tag{2.10}\]
Thus, the matrix system
\[\dot{Q}(t)=Q(t)H(t,Q(t)) \tag{2.11}\]
can be solved to obtain \(Q(t)\). From (2.9),
\[\dot{R}=(Q^{T}AQ-Q^{T}\dot{Q})R,\]
so using (2.10) and the skew-symmetry of \(Q^{T}\dot{Q}\) this gives
\[\dot{R}_{ii}=(Q^{T}AQ)_{ii}R_{ii},\quad i=1,\ldots,n. \tag{2.12}\]
Therefore,
\[\lambda^{[k]}=\lim_{t\to\infty}\frac{1}{t}\log R_{kk}(t)=\lim_{t\to\infty}\frac{1}{t}\int_{0}^{t}(Q^{T}(s)A(s)Q(s))_{kk}\,ds. \tag{2.13}\]
In order to implement the continuous QR algorithm numerically, two types of approximation are required.

1. The nonlinear ode system (2.11) is solved numerically. We assume that a constant stepsize \(\Delta t:=t_{j}-t_{j-1}\) is used.
2. The integral in (2.13) is approximated numerically over a finite range \([0,T]\). Following [4] we use the composite trapezoidal rule.

Note also that solutions of the ode system (2.11) preserve orthogonality: if \(Q(0)^{T}Q(0)=I\), then \(Q(t)^{T}Q(t)=I\) for all \(t>0\). It is natural to ask for this property to be maintained by the numerical method. (Indeed, we will show that this is vital for convergence.) We consider two classes of numerical method that preserve orthogonality.

1. **Projected Runge-Kutta (PRK) methods**. Here, a Runge-Kutta method is applied over each timestep, and the (generally non-orthogonal) solution is perturbed to an orthogonal one. This can be done by replacing the matrix by its orthogonal polar factor, which corresponds to a projection in the Frobenius norm.
Alternatively, the matrix can be replaced by its orthogonal QR factor, a process that is closely related to the Frobenius norm projection [7].

2. **Gauss-Legendre-Runge-Kutta (GLRK) methods**. These are one-step methods that automatically preserve orthogonality of the numerical solution.

Both types of integrator have been examined in [3]. We note that orthogonal integration can be viewed within the much more general framework of Lie group methods; see [8].

### 2.3 Convergence Results

In order to prove sharp convergence results for the algorithms, we restrict attention to the case where \(A(t)\) is constant, \(A(t)\equiv A\), where \(A\) has real, distinct eigenvalues \(\{\lambda^{[k]}\}_{k=1}^{n}\), ordered so that
\[\exp(\lambda^{[1]})>\exp(\lambda^{[2]})>\cdots>\exp(\lambda^{[n]}). \tag{2.14}\]
In this case \(\{\lambda^{[k]}\}_{k=1}^{n}\) are also the Lyapunov exponents of the ode. We also assume that for each \(1\leq k\leq n-1\) no vector in the space spanned by the first \(k\) eigenvectors of \(A\) is orthogonal to the space spanned by the first \(k\) columns of the identity matrix. This is an extremely mild assumption that generalizes the traditional assumption made about the starting vector in the power method; see, for example, [2, page 158].

#### 2.3.1 Convergence of Discrete QR Algorithm

We let \(Z_{j}\) denote the approximation to \(Z_{j}(t_{j})\) produced by the one-step numerical method on (2.4), and we suppose that
\[Z_{j+1}=S(\Delta tA)Z_{j},\]
where \(S(z)\) is a rational function such that
\[S(z)=\exp(z)\left(1+\mathcal{O}\left(z^{p+1}\right)\right)\quad\text{for some integer }p\geq 1. \tag{2.15}\]
This covers the case where the numerical method is an (explicit or implicit) Runge-Kutta formula of order \(p\). The discrete QR algorithm for computing an approximation \(\ell^{[k]}\) to \(\lambda^{[k]}\) then has the following form, with \(Q_{0}=I\).
**Discrete QR algorithm**

for \(j=0,1,\ldots,N-1\)
\(\quad S(\Delta tA)Q_{j}=:Q_{j+1}R_{j+1}\)  (QR factorization)
end

\[\ell^{[k]}:=\frac{1}{T}\log\prod_{j=1}^{N}(R_{j})_{kk},\quad\text{where}\ T:=N\Delta t. \tag{2.16}\]

In analysing the error \(|\lambda^{[k]}-\ell^{[k]}|\) there are two limits to be considered. We must allow \(\Delta t\to 0\) in order to reduce the error of the ode solver, but we must also allow \(T\to\infty\) to reduce the error from truncating the time interval. Hence, in contrast to standard finite-time convergence theory [9], we require \(N\to\infty\) faster than \(\Delta t\to 0\). We can accomplish this by setting \[T=K\Delta t^{-\alpha}, \tag{2.17}\] where \(K,\alpha>0\) are constants. (So the number of timesteps is \(N=K\Delta t^{-(\alpha+1)}\).) In this framework we consider the single limit \(\Delta t\to 0\). In practice this corresponds to repeating the discrete QR algorithm with a smaller \(\Delta t\) _and_ a larger time interval \(T\). The result that we prove is stated below.

**Theorem 2.1**: _With the notation and assumptions above, there exists a constant \(C\) such that, for all sufficiently small \(\Delta t\),_ \[|\ell^{[k]}-\lambda^{[k]}|\leq C\left(\Delta t^{\alpha}+\Delta t^{p}\right), \quad 1\leq k\leq n. \tag{2.18}\]

Proof: See §3.2.

Our proof of Theorem 2.1 relies on the underlying convergence theory for orthogonal iteration [2, 11]--this is also equivalent to the analysis for the QR algorithm [1, 15]. However, the application of that theory is not entirely straightforward, since we must study (a variant of) orthogonal iteration on a matrix that is parametrized by \(\Delta t\). In particular, the naturally arising linear contraction factor \(r^{[k]}_{\Delta t}\), which is defined in §3.2, has the property that \(r^{[k]}_{\Delta t}\to 1\) as \(\Delta t\to 0\). This, however, is balanced by the fact that the number \(N\) of iterations increases rapidly as \(\Delta t\to 0\), and, as shown in (3.13) below, \((r^{[k]}_{\Delta t})^{N}\to 0\).
(This also emphasizes that both limits \(\Delta t\to 0\) and \(T\to\infty\) must be addressed in a convergence theory.)

#### 2.3.2 Convergence of Continuous QR Algorithm

The continuous QR algorithm for computing an approximation \(\ell^{[k]}\) to \(\lambda^{[k]}\) can be summarized as follows, with \(Q_{0}=I\).

**Continuous QR algorithm**

Solve (2.11) numerically to obtain \(\{Q_{j}\approx Q(t_{j})\}_{j=0}^{N}\).
\[\ell^{[k]}=\frac{1}{T}\frac{\Delta t}{2}\sum_{j=1}^{N}\left[(Q_{j-1}^{T}AQ_{j-1})_{kk}+(Q_{j}^{T}AQ_{j})_{kk}\right],\quad\mathrm{where}\;T=N\Delta t. \tag{2.19}\]

The following convergence theorem holds.

**Theorem 2.2**: _Suppose the ode (2.11) is solved using a PRK or GLRK method of classical order \(p\geq 1\). Then, with the notation and assumptions above, there exists a constant \(C\) such that, for sufficiently small \(\Delta t\),_ \[|\ell^{[k]}-\lambda^{[k]}|\leq\frac{C}{T},\quad 1\leq k\leq n. \tag{2.20}\]

Proof: See §3.3.

Note that \(\Delta t\) does not appear in the error bound (2.20). This emphasizes that the structural properties of the ode method (orthogonality preservation and stability) are relevant, but not the precise classical order of convergence.

### 3.1 Schur Matrix and Orthogonal Iteration

We begin by reviewing some relevant concepts from numerical linear algebra. See [2, 6, 10] for more details.

**Definition 3.1**: Given \(B\in\mathbb{R}^{n\times n}\) with distinct, real eigenvalues \(\{\mu^{[k]}\}_{k=1}^{n}\) ordered so that \(|\mu^{[1]}|>|\mu^{[2]}|>\cdots>|\mu^{[n]}|\), there exists an orthogonal matrix \(Q_{\star}\), referred to as a Schur matrix, such that \[Q_{\star}^{T}BQ_{\star}=\Upsilon,\] where \(\Upsilon\) is upper triangular with main diagonal given by \(\mu^{[1]},\ldots,\mu^{[n]}\). The Schur matrix is unique up to a factor of \(\pm 1\) multiplying each column.
The columns of \(Q_{\star}\) (denoted by \(q_{\star}^{[k]}\)) are called _Schur vectors_, and it follows that the subspace spanned by \(\{q_{\star}^{[1]},q_{\star}^{[2]},\ldots,q_{\star}^{[k]}\}\) is identical to the subspace spanned by the eigenvectors of \(B\) that correspond to the eigenvalues \(\mu^{[1]},\mu^{[2]},\ldots,\mu^{[k]}\).

Orthogonal iteration may be regarded as a technique for computing an approximate Schur decomposition. Given \(B\in\mathbb{R}^{n\times n}\), orthogonal iteration proceeds as follows, with \(Q_{0}=I\).

**Orthogonal Iteration**

for \(j=0,1,\ldots\)
\(\quad BQ_{j}=:Q_{j+1}R_{j+1}\)  (QR factorization)
end

Under the mild assumption that for each \(1\leq k\leq n-1\) no vector contained in \(\mathrm{span}\{q_{\star}^{[1]},q_{\star}^{[2]},\ldots,q_{\star}^{[k]}\}\) is orthogonal to the space spanned by the first \(k\) columns of the identity matrix, this iteration converges linearly, in the manner outlined in Lemma 3.2 below. We let \(q_{j}^{[k]}\) denote the \(k\)th column of \(Q_{j}\), and let \(\mu_{j}^{[k]}\) denote \(\big{(}Q_{j}^{T}BQ_{j}\big{)}_{kk}\). To be definite, we regard \(\|\cdot\|\) as the Euclidean norm. We also write \(\|v\pm w\|\) to mean \(\min\{\|v+w\|,\|v-w\|\}\).

**Lemma 3.2**: _With the assumptions and notation above, there exist constants \(C\) and \(D\) such that_ \[\|q_{j}^{[k]}\pm q_{\star}^{[k]}\|\leq C(r^{[k]})^{j}\quad\mathrm{and}\quad|\mu_{j}^{[k]}-\mu^{[k]}|\leq D(r^{[k]})^{j},\quad 1\leq k\leq n, \tag{3.1}\] _where_ \[r^{[1]} =|\mu^{[2]}/\mu^{[1]}|,\] \[r^{[k]} =\max(|\mu^{[k+1]}/\mu^{[k]}|,|\mu^{[k]}/\mu^{[k-1]}|),\quad 1<k<n,\] \[r^{[n]} =|\mu^{[n]}/\mu^{[n-1]}|.\]

Proof: This result is stated without proof in [11].
Convergence analysis for orthogonal iteration is usually performed in terms of subspaces: generally, the subspace spanned by the first \(k\) columns of \(Q_{j}\) converges to the subspace spanned by the first \(k\) columns of \(Q_{\star}\) at a linear rate determined by \(|\mu^{[k+1]}/\mu^{[k]}|\) [2, 6, 16]. By considering subspaces of dimensions \(k\) and \(k-1\), the result (3.1) follows. \(\blacksquare\)

### 3.2 Discrete QR Convergence Analysis

#### 3.2.1 Orthogonal Iteration Error

In this subsection and the next, we use \(\kappa_{i}\) to denote generic constants. Comparing the two algorithms, we see that the matrices \(Q_{j}\) in the discrete QR algorithm are precisely the matrices \(Q_{j}\) that arise when orthogonal iteration is applied to \(B=S(\Delta tA)\). Hence, we may appeal to the convergence theory in Lemma 3.2. However, it is vital to exploit the fact that \(\Delta t\) is a small parameter and \(S(z)\) approximates \(\exp(z)\). Using a second subscript to emphasize \(\Delta t\)-dependence, we let \(S(\Delta tA)=Q_{\star}\Upsilon_{\Delta t}Q_{\star}^{T}\) denote a Schur decomposition of \(S(\Delta tA)\) with \((\Upsilon_{\Delta t})_{kk}=\mu_{\Delta t}^{[k]}\). Similarly, we let \(Q_{j,\Delta t}\) have \(k\)th column \(q_{j,\Delta t}^{[k]}\) and write \(\mu_{j,\Delta t}^{[k]}\) for \(q_{j,\Delta t}^{[k]T}S(\Delta tA)q_{j,\Delta t}^{[k]}\). First we note that with the ordering (2.14) on the eigenvalues of \(A\), for sufficiently small \(\Delta t\) we have, from (2.15), \[\mu_{\Delta t}^{[1]}>\mu_{\Delta t}^{[2]}>\cdots>\mu_{\Delta t}^{[n]}>0. \tag{3.2}\] Following the proof of Lemma 3.2 for this parametrized matrix, we find that the Schur vector convergence bound holds with a constant independent of \(\Delta t\); that is, \[\|q_{j,\Delta t}^{[k]}\pm q_{\star}^{[k]}\|\leq\kappa_{1}(r_{\Delta t}^{[k]})^{j}, \tag{3.3}\] where \(r_{\Delta t}^{[k]}\) is defined as in Lemma 3.2 with each \(\mu^{[k]}\) replaced by \(\mu_{\Delta t}^{[k]}\).
We then have \[\begin{aligned}|\mu_{j,\Delta t}^{[k]}-\mu_{\Delta t}^{[k]}| &=|q_{j,\Delta t}^{[k]T}S(\Delta tA)q_{j,\Delta t}^{[k]}-q_{\star}^{[k]T}S(\Delta tA)q_{\star}^{[k]}|\\ &=|q_{j,\Delta t}^{[k]T}\left(I+(S(\Delta tA)-I)\right)q_{j,\Delta t}^{[k]}-q_{\star}^{[k]T}\left(I+(S(\Delta tA)-I)\right)q_{\star}^{[k]}|\\ &=|q_{j,\Delta t}^{[k]T}q_{j,\Delta t}^{[k]}-q_{\star}^{[k]T}q_{\star}^{[k]}+(q_{j,\Delta t}^{[k]}\pm q_{\star}^{[k]})^{T}(S(\Delta tA)-I)q_{j,\Delta t}^{[k]}-q_{\star}^{[k]T}(S(\Delta tA)-I)(q_{\star}^{[k]}\pm q_{j,\Delta t}^{[k]})|\\ &\leq 2\|q_{j,\Delta t}^{[k]}\pm q_{\star}^{[k]}\|\,\|S(\Delta tA)-I\|\\ &\leq\kappa_{2}\Delta t\,\|A\|\,\|q_{j,\Delta t}^{[k]}\pm q_{\star}^{[k]}\|\\ &\leq\kappa_{3}\Delta t\,(r_{\Delta t}^{[k]})^{j},\end{aligned} \tag{3.4}\] for sufficiently small \(\Delta t\), where we have used the property (2.15) and the bound (3.3). The inequality (3.4) shows that when orthogonal iteration is applied to a matrix of the form \(S(\Delta tA)=I+\Delta tA+\mathcal{O}\left(\Delta t^{2}\right)\), the "constant" in the eigenvalue convergence bound is \(\mathcal{O}\left(\Delta t\right)\). We note from (2.16) that the discrete QR algorithm does not use \(Q_{j,\Delta t}^{T}S(\Delta tA)Q_{j,\Delta t}\), but rather the shifted version \(R_{j}:=Q_{j,\Delta t}^{T}S(\Delta tA)Q_{j-1,\Delta t}\). However, it is readily shown that \(\|q_{j,\Delta t}^{[k]}-q_{j-1,\Delta t}^{[k]}\|\leq\kappa_{4}\Delta t(r_{\Delta t}^{[k]})^{j}\), and hence the bound (3.4) also implies \[|q_{j,\Delta t}^{[k]T}S(\Delta tA)q_{j-1,\Delta t}^{[k]}-q_{\star}^{[k]T}S(\Delta tA)q_{\star}^{[k]}|\leq\kappa_{5}\Delta t(r_{\Delta t}^{[k]})^{j}. \tag{3.5}\] In summary, (3.5) shows that the computed diagonal entries \((R_{j,\Delta t})_{kk}\) in (2.16) approximate the corresponding eigenvalues \(\mu_{\Delta t}^{[k]}\) of \(S(\Delta tA)\) according to \[(R_{j,\Delta t})_{kk}=\mu_{\Delta t}^{[k]}(1+\gamma_{j,\Delta t}^{[k]}),\quad\mathrm{where}\;|\gamma_{j,\Delta t}^{[k]}|\leq\kappa_{6}\Delta t(r_{\Delta t}^{[k]})^{j}. \tag{3.6}\]

#### 3.2.2 ODE Error

We now incorporate the ode solving error in order to obtain the overall error bound. Since \(A\) is diagonalizable, it is straightforward to show from (2.15) that the eigenvalue \(\mu_{\Delta t}^{[k]}\) of \(S(\Delta tA)\) is related to the eigenvalue \(\exp(\Delta t\lambda^{[k]})\) of \(\exp(\Delta tA)\) by \[\mu_{\Delta t}^{[k]}=\exp(\Delta t\lambda^{[k]})(1+\delta_{\Delta t}^{[k]}),\quad\mathrm{where}\;|\delta_{\Delta t}^{[k]}|\leq\kappa_{7}\Delta t^{p+1}. \tag{3.7}\] Now, using (3.6) and (3.7), the computed Lyapunov exponent \(\ell^{[k]}\) in (2.16) satisfies \[\begin{aligned}\ell^{[k]}&=\frac{1}{T}\log\prod_{j=1}^{N}\left(\exp(\Delta t\lambda^{[k]})(1+\delta_{\Delta t}^{[k]})(1+\gamma_{j,\Delta t}^{[k]})\right)\\ &=\lambda^{[k]}+\frac{1}{T}\left(\sum_{j=1}^{N}\log(1+\delta_{\Delta t}^{[k]})+\sum_{j=1}^{N}\log(1+\gamma_{j,\Delta t}^{[k]})\right).\end{aligned} \tag{3.8}\] We note from (3.6) and (3.7) that both \(|\gamma_{j,\Delta t}^{[k]}|\) and \(|\delta_{\Delta t}^{[k]}|\) can be made arbitrarily small by reducing \(\Delta t\).
Hence, for sufficiently small \(\Delta t\), \[|\log(1+\delta_{\Delta t}^{[k]})|\leq 2|\delta_{\Delta t}^{[k]}|\quad\text{and}\quad|\log(1+\gamma_{j,\Delta t}^{[k]})|\leq 2|\gamma_{j,\Delta t}^{[k]}|.\] In (3.8), using (3.6) and (3.7) and recalling that \(T=N\Delta t\), this gives \[|\ell^{[k]}-\lambda^{[k]}|\leq\kappa_{8}\left(\frac{1}{T}\sum_{j=1}^{N}|\gamma_{j,\Delta t}^{[k]}|+\frac{|\delta_{\Delta t}^{[k]}|}{\Delta t}\right)\leq\kappa_{9}\left(\frac{\Delta t}{T}\sum_{j=1}^{N}(r_{\Delta t}^{[k]})^{j}+\Delta t^{p}\right).\] Summing the geometric series gives \[|\ell^{[k]}-\lambda^{[k]}|\leq\kappa_{9}\left(\frac{\Delta t}{T}\frac{r_{\Delta t}^{[k]}(1-(r_{\Delta t}^{[k]})^{N})}{1-r_{\Delta t}^{[k]}}+\Delta t^{p}\right). \tag{3.9}\] Now, it follows from (2.15) that \[\frac{\mu_{\Delta t}^{[k+1]}}{\mu_{\Delta t}^{[k]}}=\exp\left(\Delta t(\lambda^{[k+1]}-\lambda^{[k]})\right)\left(1+\mathcal{O}\left(\Delta t^{p+1}\right)\right),\] and hence \[0<r_{\Delta t}^{[k]}\leq\exp\left(-\Delta t\epsilon^{[k]}\right)\left(1+\mathcal{O}\left(\Delta t^{p+1}\right)\right), \tag{3.10}\] where \[\epsilon^{[1]}:=\lambda^{[1]}-\lambda^{[2]}>0,\quad\epsilon^{[n]}:=\lambda^{[n-1]}-\lambda^{[n]}>0 \tag{3.11}\] and \[\epsilon^{[k]}:=\min\{\lambda^{[k]}-\lambda^{[k+1]},\lambda^{[k-1]}-\lambda^{[k]}\}>0,\quad\text{for }1<k<n. \tag{3.12}\] So, for small \(\Delta t\), \[0<r_{\Delta t}^{[k]}<\exp(-\Delta t\epsilon^{[k]}/2)\] and, using (2.17), \[0<\left(r_{\Delta t}^{[k]}\right)^{N}<\exp(-N\Delta t\epsilon^{[k]}/2)=\exp(-K\Delta t^{-\alpha}\epsilon^{[k]}/2)\to 0\ \text{ as }\Delta t\to 0. \tag{3.13}\] It also follows from (3.10) that \[0<r_{\Delta t}^{[k]}\leq 1+\kappa_{10}\Delta t\quad\text{and}\quad 1-r_{\Delta t}^{[k]}\geq\kappa_{11}\Delta t. \tag{3.14}\] Using (3.13) and (3.14) in (3.9) leads to the bound \[|\ell^{[k]}-\lambda^{[k]}|\leq\kappa_{12}\left(\frac{1}{N\Delta t}+\Delta t^{p}\right)\leq\kappa_{13}\left(\Delta t^{\alpha}+\Delta t^{p}\right), \tag{3.15}\] which establishes Theorem 2.1.

### 3.3 Continuous QR Convergence Analysis

#### 3.3.1 Convergence of \(q_{j}^{[k]}\) to \(q_{\star}^{[k]}\)

It follows from the theory of QR flows that any solution of the system (2.11) approaches a fixed point as \(t\to\infty\); see, for example, [16]. This fixed point must be a Schur matrix \(Q_{\star}\) of \(A\). Our analysis below is aimed at showing that the ode solver applied to (2.11) also asymptotes to \(Q_{\star}\). This is not a trivial task because, regarding (2.11) as an ode in \(\mathbb{R}^{n\times n}\), if the problem is linearized about \(Q(t)=Q_{\star}\) then no conclusion can be drawn about stability--eigenvalues of zero real part arise. Hence, a straightforward linearization argument cannot be applied. We also note that although the only _orthogonal_ fixed points of (2.11) correspond to Schur matrices of \(A\), there are many other non-orthogonal fixed points. For example, \(\sigma Q_{\star}\) for any \(\sigma\in\mathbb{R}\) is also a fixed point. It follows that a numerical method that does not preserve orthogonality may drift towards a non-orthogonal steady state. We have observed this behaviour in practice, and its consequences are illustrated in §4. The following lemma forms the main part of our convergence proof.
**Lemma 3.3**: _If a PRK or GLRK method is used to solve the ode (2.11), then for sufficiently small \(\Delta t\) the \(k\)th column of the numerical solution, \(q_{j}^{[k]}\), converges to a Schur vector \(q_{\star}^{[k]}\) linearly:_ \[\|q_{\star}^{[k]}-q_{j}^{[k]}\|\leq C(\widehat{r}_{\max}+D\Delta t^{p})^{j\Delta t}, \tag{3.16}\] _where \(C\) and \(D\) are constants, with_ \[\widehat{r}_{\max}=\max_{1\leq i\leq n}\widehat{r}^{[i]},\quad\widehat{r}^{[k]}=\exp(-\epsilon^{[k]})\] _and \(\epsilon^{[k]}\) is defined in (3.11) and (3.12)._

Proof: First we let \(\Psi(Q):=QH(Q)\), where \(H(Q)\) is defined in (2.10). Now, note that \(\Psi(Q)\) is locally Lipschitz, so given any bounded region \(\mathcal{B}\) there exists a constant \(L=L(\mathcal{B})\) such that \[\|\Psi(W)-\Psi(Q_{\star})\|\leq L\|W-Q_{\star}\|,\quad\forall\,W\in\mathcal{B},\;\text{with}\;W^{T}W=I.\] Since \(\Psi(Q_{\star})=0\), we have \[\|\Psi(W)\|\leq L\|W-Q_{\star}\|,\quad\forall\,W\in\mathcal{B},\;\text{with}\;W^{T}W=I. \tag{3.17}\] Also, we note that any Runge-Kutta method applied to \(\dot{Q}(t)=\Psi(Q(t))\) has a factor of \(\|\Psi(Q_{j})\|\) in its local error expression--this follows from classical order theory [9]. Hence, if we let \(Q_{j}(t)\) denote the local solution of (2.11) over \([t_{j},t_{j+1}]\), so that \(\dot{Q}_{j}(t)=\Psi(Q_{j}(t))\) and \(Q_{j}(t_{j})=Q_{j}\), then \[\|Q_{j+1}-Q_{j}(t_{j+1})\|\leq\kappa_{2}\Delta t^{p+1}\|\Psi(Q_{j})\|, \tag{3.18}\] for any GLRK method. In the case of PRK methods, projection can no more than double the local error, and "approximately projecting" onto the orthogonal QR factor increases the local error by at most a factor \(1+2\sqrt{2}\) asymptotically [7]. So (3.18) is valid for both GLRK and PRK methods. Now we know from [16] that for the exact flow of (2.11), the \(k\)th column of \(Q(t)\), which we denote \(q^{[k]}(t)\), converges linearly to a Schur vector \(q_{\star}^{[k]}\) at rate \(r^{[k]}\).
So we may choose a time \(\widehat{T}\) such that \(Q(\widehat{T})\in\mathcal{B}\), where \(\mathcal{B}\) is a ball containing \(Q_{\star}\) with the property that if \(\hat{Q}(0)\in\mathcal{B}\) and \(\hat{Q}(t)\) solves (2.11) then \[\|\hat{q}^{[k]}(t+\Delta t)-q_{\star}^{[k]}\|\leq(\widehat{r}^{[k]})^{\Delta t}\|\hat{q}^{[k]}(t)-q_{\star}^{[k]}\|,\quad\forall\,t\geq\widehat{T}. \tag{3.19}\] Now, since the numerical method is convergent over finite time intervals, the inequality (3.16) will hold for \(j\Delta t\leq\widehat{T}\) and \(\Delta t\) sufficiently small. For later times, we have, from (3.17) and (3.18), \[\|q_{j+1}^{[k]}-q_{j}^{[k]}(t_{j+1})\|\leq\kappa_{3}\Delta t^{p+1}\|Q_{j}-Q_{\star}\|. \tag{3.20}\] For ease of notation, let \[e_{j}^{[k]}:=\|q_{j}^{[k]}-q_{\star}^{[k]}\|. \tag{3.21}\] Then using (3.19) and (3.20), we obtain \[\begin{aligned}e_{j+1}^{[k]}&\leq\|q_{j+1}^{[k]}-q_{j}^{[k]}(t_{j+1})\|+\|q_{j}^{[k]}(t_{j+1})-q_{\star}^{[k]}\|\\ &\leq(\kappa_{3}\Delta t^{p+1}+(\widehat{r}_{\max})^{\Delta t})\max_{1\leq i\leq n}e_{j}^{[i]}.\end{aligned} \tag{3.22}\] For \(\Delta t\) sufficiently small, it can be shown that \[\kappa_{3}\Delta t^{p+1}+(\widehat{r}_{\max})^{\Delta t}\leq(\widehat{r}_{\max}+\kappa_{4}\Delta t^{p})^{\Delta t},\] where \(\kappa_{4}=8\kappa_{3}\widehat{r}_{\max}\).
Hence (3.22) gives \[\max_{1\leq i\leq n}e_{j+1}^{[i]}\leq(\widehat{r}_{\max}+\kappa_{4}\Delta t^{p})^{\Delta t}\max_{1\leq i\leq n}e_{j}^{[i]}.\] It follows that \[\max_{1\leq i\leq n}e_{j+1}^{[i]}\leq\kappa_{5}(\widehat{r}_{\max}+\kappa_{4}\Delta t^{p})^{\Delta t(j+1)}. \tag{3.23}\] \(\blacksquare\)

#### 3.3.2 Trapezoidal Rule Error

The error in the Lyapunov exponent approximation \(\ell^{[k]}\) in (2.19) satisfies \[\begin{aligned}|\lambda^{[k]}-\ell^{[k]}| &=\left|q_{\star}^{[k]T}Aq_{\star}^{[k]}-\frac{1}{T}\frac{\Delta t}{2}\sum_{j=1}^{N}\left[q_{j-1}^{[k]T}Aq_{j-1}^{[k]}+q_{j}^{[k]T}Aq_{j}^{[k]}\right]\right|\\ &=\frac{1}{2N}\left|\sum_{j=1}^{N}\left[(q_{\star}^{[k]T}-q_{j-1}^{[k]T})Aq_{\star}^{[k]}+q_{j-1}^{[k]T}A(q_{\star}^{[k]}-q_{j-1}^{[k]})\right]\right.\\ &\qquad\qquad\left.+\sum_{j=1}^{N}\left[(q_{\star}^{[k]T}-q_{j}^{[k]T})Aq_{\star}^{[k]}+q_{j}^{[k]T}A(q_{\star}^{[k]}-q_{j}^{[k]})\right]\right|\\ &\leq\frac{\|A\|}{N}\sum_{j=1}^{N}\left(e_{j-1}^{[k]}+e_{j}^{[k]}\right),\end{aligned}\] where we have used \(\|q_{j}^{[k]}\|=1\), since the numerical scheme preserves orthogonality.
Making use of Lemma 3.3, we obtain \[\begin{aligned}|\lambda^{[k]}-\ell^{[k]}|&\leq\frac{\|A\|}{N}\sum_{j=1}^{N}\left[\kappa_{5}(\widehat{r}_{\max}+\kappa_{6}\Delta t^{p})^{(j-1)\Delta t}+\kappa_{5}(\widehat{r}_{\max}+\kappa_{6}\Delta t^{p})^{j\Delta t}\right]\\ &\leq\frac{2\kappa_{5}\|A\|}{N}\sum_{j=1}^{N}(\widehat{r}_{\max}+\kappa_{6}\Delta t^{p})^{\Delta t(j-1)}\\ &=\frac{2\kappa_{5}\|A\|}{N}\left[\frac{1-(\widehat{r}_{\max}+\kappa_{6}\Delta t^{p})^{T}}{1-(\widehat{r}_{\max}+\kappa_{6}\Delta t^{p})^{\Delta t}}\right].\end{aligned}\] For sufficiently small \(\Delta t\), we have \[1-(\widehat{r}_{\max}+\kappa_{6}\Delta t^{p})^{T}\leq 1\quad\text{and}\quad 1-(\widehat{r}_{\max}+\kappa_{6}\Delta t^{p})^{\Delta t}\geq\kappa_{7}\Delta t.\] Therefore, \[|\lambda^{[k]}-\ell^{[k]}|\leq\frac{2\kappa_{5}\|A\|}{\kappa_{7}N\Delta t}=\frac{\kappa_{8}}{T},\] which establishes Theorem 2.2.

## 4 Numerical Tests

### 4.1 Real Distinct Eigenvalues

#### 4.1.1 Discrete QR Algorithm

In this subsection we illustrate Theorem 2.1, testing the three cases \(p>\alpha\), \(p=\alpha\) and \(p<\alpha\) for a \(4\times 4\) system. Given the Lyapunov exponents \(\{\lambda^{[k]}\}_{k=1}^{4}\), we produce the Jacobian matrix \(A\) by forming \(A=X\operatorname{diag}(\lambda^{[k]})X^{-1}\), where \(X\) is a random matrix. (More precisely, \(X\) is formed using rand('state',0) and X = rand(4,4) in Matlab [14].) We perform QR factorizations using the modified Gram-Schmidt method; see [6]. In Figures 1-4 we plot the error in each Lyapunov exponent approximation (2.16) against \(\Delta t\), on a log-log scale. The dashed line with '\(\diamond\)' markers in each picture has slope \(\min\{\alpha,p\}\), given by the convergence rate bound of Theorem 2.1. In Figure 1 we take Lyapunov exponents \(5\), \(2\), \(0\), and \(-1\). We use \(S(z)=1+z+z^{2}/2+z^{3}/6+z^{4}/24\) in (2.15), which corresponds to a \(4\)th order, \(4\) stage, explicit Runge-Kutta method, so \(p=4\). We set \(K=0.5\) and \(\alpha=1\).
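The experiment just described can be sketched in a few lines of NumPy. This is a simplified stand-in for the Matlab setup, not a reproduction of it: NumPy's seeded generator replaces rand('state',0), Householder QR replaces modified Gram-Schmidt, and a single fixed \(\Delta t\) with one long \(T\) is used rather than the coupled sweep \(T=K\Delta t^{-\alpha}\).

```python
import numpy as np

rng = np.random.default_rng(0)            # stand-in for Matlab's rand('state',0)
lam = np.array([5.0, 2.0, 0.0, -1.0])     # prescribed Lyapunov exponents
X = rng.random((4, 4))
A = X @ np.diag(lam) @ np.linalg.inv(X)   # A = X diag(lam) X^{-1}

def S(Z):
    # Stability function of a 4-stage, 4th order explicit Runge-Kutta
    # method (p = 4): the degree-4 truncation of exp(Z).
    return np.eye(len(Z)) + Z + Z @ Z / 2 + Z @ Z @ Z / 6 + Z @ Z @ Z @ Z / 24

# The paper couples T = K dt^{-alpha}; here we simply fix dt and one long T.
dt, T = 0.01, 500.0
N = int(T / dt)

B = S(dt * A)
Q, logsum = np.eye(4), np.zeros(4)
for _ in range(N):
    Q, R = np.linalg.qr(B @ Q)            # S(dt A) Q_j =: Q_{j+1} R_{j+1}
    s = np.sign(np.diag(R))               # fix signs so that diag(R) > 0
    Q, R = Q * s, s[:, None] * R
    logsum += np.log(np.diag(R))
ell = logsum / T                          # the approximation (2.16)
```

With \(T\) this large, `ell` should land close to the prescribed exponents, ordered by decreasing value; halving \(\Delta t\) while growing \(T\) as \(\Delta t^{-\alpha}\) reproduces the convergence study in the figure.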
For Figure 2 we use Lyapunov exponents \(1,-1,-5\), and \(-10\) and take \(S(z)=1+z+z^{2}/2\), which corresponds to a \(2\)nd order, \(2\) stage, explicit Runge-Kutta method, so \(p=2\). We set \(K=0.005\) and \(\alpha=2\). Figure 3 arises with Lyapunov exponents \(2,1,-2\), and \(-2.5\). In this case we use \(S(z)=1+z+z^{2}/2\), so \(p=2\), and set \(K=0.1\) and \(\alpha=2.5\). In Figure 4 we illustrate the use of an implicit ode timestepping method. We take \(S(z)=1/(1-z)\), which corresponds to the Backward Euler method [9], for which \(p=1\). We used Lyapunov exponents of \(3.5\), \(1\), \(-1\), and \(-20\), and set \(K=0.05\) and \(\alpha=1\). In these tests, we see that the convergence rate of \(\Delta t^{\min(\alpha,p)}\) arising in Theorem 2.1 is indeed an upper bound on the actual convergence rate, and it is generally sharp.

#### 4.1.2 Continuous QR Algorithm

We now test the convergence of the continuous QR algorithm in a similar manner to §4.1.1. In Figure 5 we use Lyapunov exponents \(3,0,-2,-3\). We take \(\Delta t=0.1\) and solve (2.11) using the classical 4th order Runge-Kutta method with "projection" onto the orthogonal QR factor using modified Gram-Schmidt. In Figure 6 we take Lyapunov exponents \(7,6,1,-1\). We set \(\Delta t=0.05\) and use the 1-stage 2nd order GLRK method to solve the ode (2.11). Figures 5 and 6 show that the bound in Theorem 2.2 is sharp--on a log-log scale the slope of each line is close to \(-1\). We include Figure 7 as an illustration of what may happen when a method that does not preserve orthogonality is used. In this case, we have taken Lyapunov exponents \(8,5,2,1\), \(\Delta t=0.04\), and used the classical 4th order Runge-Kutta method. It is clear that the algorithm is no longer converging to the true Lyapunov exponents. Closer inspection has shown that this non-convergence is caused by the ode solver approaching a steady state of (2.11) that is not orthogonal, and hence is not a Schur matrix.
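A minimal NumPy sketch of the continuous QR algorithm with a projected RK4 step follows, using the exponents \(3,0,-2,-3\) of Figure 5. Two simplifying assumptions are made for illustration: \(A\) is built as a random *orthogonal* similarity of a diagonal matrix (so it is symmetric and \(\|A\|_{2}=\max_{k}|\lambda^{[k]}|\), keeping the RK4 step well inside its stability region), and the projection uses NumPy's Householder QR rather than modified Gram-Schmidt.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = np.array([3.0, 0.0, -2.0, -3.0])         # prescribed Lyapunov exponents
V, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = V @ np.diag(lam) @ V.T                     # symmetric test Jacobian

def H(Q):
    # (2.10): strictly lower part of Q^T A Q below the diagonal,
    # its negative transpose above, zeros on the diagonal.
    W = Q.T @ A @ Q
    L = np.tril(W, -1)
    return L - L.T

def step(Q, dt):
    # One classical RK4 step for Q' = Q H(Q), followed by projection
    # onto the orthogonal QR factor.
    f = lambda Y: Y @ H(Y)
    k1 = f(Q); k2 = f(Q + dt/2*k1); k3 = f(Q + dt/2*k2); k4 = f(Q + dt*k3)
    Qn, R = np.linalg.qr(Q + dt/6*(k1 + 2*k2 + 2*k3 + k4))
    return Qn * np.sign(np.diag(R))

dt, N = 0.05, 20000                            # T = N dt = 1000
Q = np.eye(4)
d = [np.diag(Q.T @ A @ Q)]
for _ in range(N):
    Q = step(Q, dt)
    d.append(np.diag(Q.T @ A @ Q))
d = np.array(d)
ell = (d[:-1] + d[1:]).sum(axis=0) / (2 * N)   # trapezoidal rule (2.19)
```

Because the projection keeps every iterate orthogonal, the diagonal of \(Q_{j}^{T}AQ_{j}\) settles onto the exponents and the trapezoidal average `ell` approaches them at the \(\mathcal{O}(1/T)\) rate of Theorem 2.2.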
### 4.2 Complex Conjugate Eigenvalues

We now give some numerical results for the case of complex conjugate eigenvalues. In this case the Lyapunov exponents are the real parts of the eigenvalues. The next subsection reviews the behaviour of orthogonal iteration on a fixed matrix and then looks at the discrete QR algorithm. Subsection 4.2.2 deals with the continuous QR algorithm.

[Figure 1: Discrete QR algorithm: \(\lambda=\{5,2,0,-1\}\), \(p=4\), \(\alpha=1\).]

#### 4.2.1 Discrete QR Algorithm

If the orthogonal iteration process described in §3.1 is applied to a matrix \(B\) that has a complex conjugate pair of eigenvalues, then \(Q_{j}^{T}BQ_{j}\) converges to a block triangular form. The eigenvalues of the appropriate \(2\times 2\) block of \(Q_{j}^{T}BQ_{j}\) converge to the corresponding complex conjugate eigenvalue pair (although the \(2\times 2\) block itself will not have a fixed limit). For a fuller explanation of convergence of the QR algorithm in the complex case, see [1, 12, 17].

[Figure 3: Discrete QR algorithm: \(\lambda=\{2,1,-2,-2.5\}\), \(p=2\), \(\alpha=2.5\).]

It can be shown that the sum of the entries on the diagonal of \(Q_{j}^{T}BQ_{j}\) that correspond to the complex conjugate pair converges linearly to the sum of the real parts of the pair. This corresponds to the fact that the trace of a \(2\times 2\) block is equal to the sum of its eigenvalues. Therefore, we conclude that, when summed, the diagonal entries of \(Q_{j}^{T}BQ_{j}\) contain the real part eigenvalue information that relates to the Lyapunov exponents. The discrete QR algorithm for Lyapunov exponents, however, does not use \(Q_{j}^{T}BQ_{j}\) but the shifted version \(R_{j+1}=Q_{j+1}^{T}BQ_{j}\). The columns of \(Q_{j+1}\) that correspond to a complex eigenvalue pair are typically different from the corresponding columns of \(Q_{j}\); the space spanned by these columns is converging linearly but the columns themselves are not.
Thus, the diagonal entries of the \(2\times 2\) block of \(R_{j+1}\) may differ greatly from the corresponding entries of \(Q_{j+1}^{T}BQ_{j+1}\). Numerical experiments have shown that the two diagonal entries of \(R_{j+1}\), even when summed, may not reveal any information about the real parts of the eigenvalues of \(B\), and it is tempting to assert that the discrete QR algorithm will fail in the case of complex conjugate eigenvalues. To test this assertion, Figure 8 gives results for the full discrete QR algorithm using a matrix \(A\) with eigenvalues \(4,1-3i,1+3i,-2\) created as \(A=XDX^{-1}\), where \[D=\left[\begin{array}{cccc}4&&&\\ &1&3&\\ &-3&1&\\ &&&-2\end{array}\right]\] and \(X\) is a random matrix, as described in §4.1.1. We used \(S(z)=1+z+z^{2}/2\), so \(p=2\), with \(K=0.01\) and \(\alpha=2\). The figure shows the surprising result that the full discrete QR algorithm is convergent with the rate indicated by Theorem 2.1. So why is the discrete QR algorithm still convergent for complex eigenvalues? Above we were considering a 'shifted' version of orthogonal iteration applied to a fixed matrix, while the example in Figure 8 deals with a matrix parametrized by \(\Delta t\) and zooms in on the limit \(\Delta t\to 0\). A heuristic explanation for the success of the full discrete QR algorithm is provided by the observation that if \(A\) has a complex eigenvalue \(\lambda=a+ib\), then the corresponding eigenvalue of \(S(\Delta tA)\) looks like \(1+a\Delta t+ib\Delta t+\mathcal{O}\left(\Delta t^{2}\right)\). The modulus of this eigenvalue is \(1+a\Delta t+\mathcal{O}\left(\Delta t^{2}\right)\)--the imaginary part of \(\lambda\) has an \(\mathcal{O}\left(\Delta t^{2}\right)\) effect compared to the \(\mathcal{O}\left(\Delta t\right)\) effect of the real part. Hence, in the limit \(\Delta t\to 0\) we expect the real eigenvalue performance to be relevant.

[Figure 7: Continuous QR Algorithm: \(\lambda=\{8,5,2,1\}\), RK4, \(\Delta t=0.04\).]
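The heuristic can be checked directly. The NumPy sketch below (simplified as in the earlier sketches: a single fixed \(\Delta t\) rather than a sweep, Householder QR in place of modified Gram-Schmidt) runs the full discrete QR algorithm on a matrix with eigenvalues \(4,1\pm 3i,-2\). The extreme estimates approach \(4\) and \(-2\), and since \(\sum_{k}\log(R_{j})_{kk}=\log|\det S(\Delta tA)|\) at every step, the two middle estimates sum to \(2\cdot\mathrm{Re}(1\pm 3i)=2\) up to \(\mathcal{O}(\Delta t^{2})\).

```python
import numpy as np

rng = np.random.default_rng(0)
D = np.array([[4.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 3.0, 0.0],
              [0.0, -3.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, -2.0]])    # eigenvalues 4, 1 +/- 3i, -2
X = rng.random((4, 4))
A = X @ D @ np.linalg.inv(X)

def S(Z):
    # 2-stage, 2nd order explicit RK stability function (p = 2).
    return np.eye(len(Z)) + Z + Z @ Z / 2

dt, N = 0.02, 10000                       # T = N dt = 200
T = N * dt
B = S(dt * A)
Q, logsum = np.eye(4), np.zeros(4)
for _ in range(N):
    Q, R = np.linalg.qr(B @ Q)
    s = np.sign(np.diag(R))               # fix signs so that diag(R) > 0
    Q, R = Q * s, s[:, None] * R
    logsum += np.log(np.diag(R))
ell = logsum / T
# Lyapunov exponents are the real parts: 4, 1, 1, -2.
```

The two middle entries of `ell` oscillate with the rotation of the \(2\times 2\) block at any finite \(\Delta t\); their sum, and the outer entries, are the robust quantities.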
#### 4.2.2 Continuous QR Algorithm

If the Jacobian matrix \(A\) contains a pair of complex conjugate eigenvalues, then its (real) Schur form will be block upper triangular with \(2\times 2\) blocks, the eigenvalues of which correspond to each pair of complex eigenvalues. Despite the fact that the continuous QR algorithm uses only information about the diagonals, we observed that the algorithm converged in practice (as did the discrete QR algorithm discussed in the previous subsection). Figure 9 illustrates the behaviour.

[Figure 8: Discrete QR algorithm: \(\lambda=\{4,1\pm 3i,-2\}\), \(p=2\), \(\alpha=2\).]

## 5 Discussion

Our approach in this work was to analyse QR algorithms on a simple test problem, so that rigorous convergence rate bounds could be established. By choosing \(A(t)\) constant in (2.1) and making the assumption (2.14), the mathematical problem reduces to one of linear algebra--find the eigenvalues of \(A\)--although the corresponding analysis of the numerical algorithms requires results from both numerical linear algebra and ODEs. On this problem class the discrete QR algorithm is clearly not optimal. In particular, for each \(j\) in (2.16), \((R_{j})_{kk}\) is approximating the same quantity. Since the accuracy increases with \(j\), earlier values could be discarded. Indeed, the analysis in §3.2 can be used to show that taking the extreme case \(\ell^{[k]}=(\log(R_{N})_{kk})/\Delta t\) in (2.16) improves the error bound in Theorem 2.1 to \(C_{1}\Delta t^{p}\) (independent of \(\alpha>0\)). However, for general time-dependent \(A(t)\) it is clear from (2.7) that the averaging process inherent in (2.16) is necessary. Furthermore, in the case where \(A(t)\) is constant but has complex conjugate eigenvalues, the averaging in (2.16) may compensate for the fact that the algorithm looks only at diagonal elements (rather than \(2\times 2\) blocks).
By a similar argument, the continuous QR algorithm loses optimality on this problem class by timestepping to steady state rather than jumping there in a single step, but the timestepping provides the averaging process that is needed for more general problems. On a practical note, our analysis highlighted the need to deal simultaneously with the two limits \(\Delta t\to 0\) and \(T\to\infty\) when using the discrete QR algorithm. The relation (2.17) that we used to couple the two parameters may also be of use in more realistic simulations. In the general case where a convergence bound of the form (2.18) is not available, it would be possible to monitor convergence as \(\Delta t\) decreases, and hence adaptively refine the value of \(\alpha\) in order to balance the errors. There is much scope for further theoretical work in this area, including (a) fully analysing the case of complex conjugate eigenvalues and (b) extending the rigorous analysis to more general problem classes, such as the Floquet case [4, pages 412-413]. Given the importance of Lyapunov exponent computations in quantifying the dynamics of long-term simulations, it is clearly of interest to develop tools for analysing and comparing numerical methods, even on simple test problems.

**Acknowledgements**. We thank Pete Stewart for explaining to us how Lemma 3.2 follows from the traditional subspace convergence result, and thereby allowing us to shorten our original proof.

## References

* [1] P. G. Ciarlet, _Introduction to Numerical Linear Algebra and Optimisation_, Cambridge University Press, Cambridge, 1989.
* [2] J. W. Demmel, _Applied Numerical Linear Algebra_, SIAM, Philadelphia, 1997.
* [3] L. Dieci, R. D. Russell, and E. S. Van Vleck, _Unitary integrators and applications to continuous orthonormal techniques_, SIAM J. Numer. Anal., 31 (1994), pp. 261-281.
* [4] L. Dieci, R. D. Russell, and E. S. Van Vleck, _On the computation of Lyapunov exponents for continuous dynamical systems_, SIAM J. Numer. Anal., 34 (1997), pp. 402-423.
* [5] K. Geist, U.
Parlitz, and W. Lauterborn, _Comparison of different methods for computing Lyapunov exponents_, Prog. Theor. Phys., 83 (1990), pp. 875-893.
* [6] G. H. Golub and C. F. Van Loan, _Matrix Computations_, Johns Hopkins University Press, Baltimore and London, 3rd ed., 1996.
* [7] D. J. Higham, _Time-stepping and preserving orthonormality_, BIT, 37 (1997), pp. 24-36.
* [8] A. Iserles, H. Z. Munthe-Kaas, S. P. Norsett, and A. Zanna, _Lie group methods_, Acta Numerica, 9 (2000), pp. 215-365.
* [9] J. D. Lambert, _Numerical Methods for Ordinary Differential Systems_, John Wiley & Sons, Chichester, 1991.
* [10] G. W. Stewart, _Methods of simultaneous iteration for calculating eigenvectors of matrices_, in _Topics in Numerical Analysis II_, J. J. H. Miller, ed., Academic Press, New York, 1975, pp. 169-185.
* [11] G. W. Stewart, _Simultaneous iteration for computing invariant subspaces of non-Hermitian matrices_, Numer. Math., 25 (1976), pp. 123-136.
* [12] J. Stoer and R. Bulirsch, _Introduction to Numerical Analysis_, Springer-Verlag, New York, 2nd ed., 1992.
* [13] A. M. Stuart and A. R. Humphries, _Dynamical Systems and Numerical Analysis_, Cambridge University Press, Cambridge, 1996.
* [14] The MathWorks, Inc., _MATLAB User's Guide_, Natick, Massachusetts, 1992.
* [15] D. S. Watkins, _Understanding the QR algorithm_, SIAM Review, 24 (1982), pp. 427-440.
* [16] D. S. Watkins, _Isospectral flows_, SIAM Review, 26 (1984), pp. 379-391.
* [17] J. H. Wilkinson, _The Algebraic Eigenvalue Problem_, Oxford University Press, Oxford, 1965.
## Chapter 1 The emergence of chaos

Embedded in the mud, glistening green and gold and black, was a butterfly, very beautiful and very dead. It fell to the floor, an exquisite thing, a small thing that could upset balances and knock down a line of small dominoes and then big dominoes and then gigantic dominoes, all down the years across Time.

Ray Bradbury (1952)

### 1.1 Three hallmarks of mathematical chaos

The 'butterfly effect' has become a popular slogan of chaos. But is it really so surprising that minor details sometimes have major impacts? Sometimes the proverbial minor detail is taken to be the difference between a world with some butterfly and an alternative universe that is exactly like the first, except that the butterfly is absent; as a result of this small difference, the worlds soon come to differ dramatically from one another. The mathematical version of this concept is known as _sensitive dependence_. Chaotic systems not only exhibit sensitive dependence, but two other properties as well: they are _deterministic_, and they are _nonlinear_. In this chapter, we'll see what these words mean and how these concepts came into science. Chaos is important, in part, because it helps us to cope with unstable systems by improving our ability to describe, to understand, perhaps even to forecast them. Indeed, one of the myths of chaos we will debunk is that chaos makes forecasting a useless task. In an alternative but equally popular butterfly story, there is one world where a butterfly flaps its wings and another world where it does not. This small difference means a tornado appears in only one of these two worlds, linking chaos to uncertainty and prediction: in which world are we? Chaos is the name given to the mechanism which allows such rapid growth of uncertainty in our mathematical models. The image of chaos amplifying uncertainty and confounding forecasts will be a recurring theme throughout this Introduction.
### Whispers of chaos

Warnings of chaos are everywhere, even in the nursery. The warning that a kingdom could be lost for the want of a nail can be traced back to the 14th century; the following version of the familiar nursery rhyme was published in _Poor Richard's Almanack_ in 1758 by Benjamin Franklin:

For want of a nail the shoe was lost,
For want of a shoe the horse was lost,
and for want of a horse the rider was lost,
being overtaken and slain by the enemy,
all for the want of a horse-shoe nail.

We do not seek to explain the seed of instability with chaos, but rather to describe the growth of uncertainty _after_ the initial seed is sown: in this case, explaining how it came to be that the rider was lost because of a missing nail, not how the nail came to go missing. In fact, of course, there either was a nail or there was not. But Poor Richard tells us that if the nail hadn't been lost, then the kingdom wouldn't have been lost either. We will often explore the properties of chaotic systems by considering the impact of slightly different situations. The study of chaos is common in applied sciences like astronomy, meteorology, population biology, and economics. Sciences making accurate observations of the world along with quantitative predictions have provided the main players in the development of chaos since the time of Isaac Newton. According to Newton's Laws, the future of the solar system is completely determined by its current state. The 19th-century scientist Pierre Laplace elevated this determinism to a key place in science. A world is deterministic if its current state completely defines its future. In 1820, Laplace conjured up an entity now known as 'Laplace's demon'; in doing so, he linked determinism and the ability to predict in principle to the very notion of success in science:

We may regard the present state of the universe as the effect of its past and the cause of its future.
An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes. Note that Laplace had the foresight to give his demon three properties: exact knowledge of the Laws of Nature ('all the forces'), the ability to take a snapshot of the exact state of the universe ('all the positions'), and infinite computational resources ('an intellect vast enough to submit these data to analysis'). For Laplace's demon, chaos poses no barrier to prediction. Throughout this Introduction, we will consider the impact of removing one or more of these gifts. From the time of Newton until the close of the 19th century, most scientists were also meteorologists. Chaos and meteorology are closely linked by the meteorologists' interest in the role uncertainty plays in weather forecasts. Benjamin Franklin's interest in meteorology extended far beyond his famous experiment of flying a kite in a thunderstorm. He is credited with noting the general movement of the weather from west towards the east and testing this theory by writing letters from Philadelphia to cities further east. Although the letters took longer to arrive than the weather, these are arguably early weather forecasts. Laplace himself discovered the law describing the decrease of atmospheric pressure with height. He also made fundamental contributions to the theory of errors: when we make an observation, the measurement is never exact in a mathematical sense, so there is always some uncertainty as to the 'True' value. 
Scientists often say that any uncertainty in an observation is due to _noise_, without really defining exactly what the noise is, other than that which obscures our vision of whatever we are trying to measure, be it the length of a table, the number of rabbits in a garden, or the midday temperature. Noise gives rise to _observational uncertainty_; chaos helps us to understand how small uncertainties can become large uncertainties, once we have a model for the noise. Some of the insights gleaned from chaos lie in clarifying the role(s) noise plays in the dynamics of uncertainty in the quantitative sciences. Noise has become much more interesting, as the study of chaos forces us to look again at what we might mean by the concept of a 'True' value. Twenty years after Laplace's book on probability theory appeared, Edgar Allan Poe provided an early reference to what we would now call chaos in the atmosphere. He noted that merely moving our hands would affect the atmosphere all the way around the planet. Poe then went on to echo Laplace, stating that the mathematicians of the Earth could compute the progress of this hand-waving 'impulse' as it spread out and forever altered the state of the atmosphere. Of course, it is up to us whether or not we choose to wave our hands: free will offers another source of seeds that chaos might nurture. In 1831, between the publication of Laplace's science and Poe's fiction, Captain Robert Fitzroy took the young Charles Darwin on his voyage of discovery. The observations made on this voyage led Darwin to his theory of natural selection. Evolution and chaos have more in common than one might think. First, when it comes to language, both 'evolution' and 'chaos' are used simultaneously to refer both to phenomena to be explained and to the theories that are supposed to do the explaining. This often leads to confusion between the description and the object described (as in 'confusing the map with the territory').
Throughout this Introduction we will see that confusing our mathematical models with the reality they aim to describe muddles the discussion of both. Second, looking more deeply, it may be that some ecosystems evolve as if they were chaotic systems, as it may well be the case that small differences in the environment have immense impacts. And evolution has contributed to the discussion of chaos as well. This chapter's opening quote comes from Ray Bradbury's 'A Sound of Thunder', in which time-travelling big game hunters accidentally kill a butterfly, and find the future a different place when they return to it. The characters in the story imagine the impact of killing a mouse, its death cascading through generations of lost mice, foxes, and lions. Needless to say, someone does step off the Path, crushing to death a beautiful little green and black butterfly. We can only consider these 'what if' experiments within the fictions of mathematics or literature, since we have access to only one realization of reality. The origins of the term 'butterfly effect' are appropriately shrouded in mystery. Bradbury's 1952 story predates a series of scientific papers on chaos published in the early 1960s. The meteorologist Ed Lorenz once invoked sea gulls' wings as the agent of change, although the title of that seminar was not his own. And one of his early computer-generated pictures of a chaotic system does resemble a butterfly. But whatever the incarnation of the 'small difference', whether it be a missing horse-shoe nail, a butterfly, a sea gull, or most recently, a mosquito 'squished' by Homer Simpson, the idea that small differences can have huge effects is not new. Although silent regarding the origin of the small difference, chaos provides a description for its rapid amplification to kingdom-shattering proportions, and thus is closely tied to forecasting and predictability.
### The first weather forecasts

Like every ship's captain of the time, Fitzroy had a deep interest in the weather. He developed a barometer which was easier to use onboard ship, and it is hard to overestimate the value of a barometer to a captain lacking access to satellite images and radio reports. Major storms are associated with low atmospheric pressure; by providing a quantitative measurement of the pressure, and thus how fast it is changing, a barometer can give life-saving information on what is likely to be over the horizon. Later in life, Fitzroy became the first head of what would become the UK Meteorological Office and exploited the newly deployed telegraph to gather observations and issue summaries of the current state of the weather across Britain. The telegraph allowed weather information to outrun the weather itself for the first time. Working with Le Verrier of France, who had become famous for using Newton's Laws to discover a new planet, Fitzroy contributed to the first international efforts at real-time weather forecasting. These forecasts were severely criticized by Darwin's cousin, the statistician Francis Galton, who himself published the first weather chart in the _London Times_ in 1875, reproduced in Figure 1.

1. The first weather chart ever published in a newspaper. Prepared by Francis Galton, it appeared in the _London Times_ on 31 March 1875. The dotted lines indicate the gradations of barometric pressure, the variations of temperature are marked by figures, the state of the sea and sky by descriptive words, and the direction of the wind by arrows, barbed and feathered according to its force. \(\odot\) denotes calm.

If uncertainty due to errors of observation provides the seed that chaos nurtures, then understanding such uncertainty can help us better cope with chaos. Like Laplace, Galton was interested in the 'theory of errors' in the widest sense.
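Galton's approach to the theory of errors can be made concrete with his 'quincunx' board, which we meet properly in a moment: lead shot falls through rows of pins, bouncing left or right at random, and piles up in a bell-shaped heap. The following is a minimal simulation sketch; the numbers of rows and balls, and the seed, are arbitrary choices of ours, not Galton's.

```python
import random

def galton_board(rows=12, balls=10_000, seed=1):
    """Drop `balls` pieces of shot through `rows` of pins.

    At each pin a piece goes left or right with probability 1/2,
    so its final bin is simply the number of rightward bounces.
    """
    rng = random.Random(seed)
    bins = [0] * (rows + 1)
    for _ in range(balls):
        rightward = sum(rng.random() < 0.5 for _ in range(rows))
        bins[rightward] += 1
    return bins

counts = galton_board()
# The central bins collect far more shot than the outer ones,
# tracing out the familiar bell shape.
```

Note that each piece's path is a sequence of fifty-fifty decisions, not a single one-off fork: nearby pieces may stay together or diverge at every level.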
To illustrate the ubiquitous 'bell-shaped curve' which so often seems to reflect measurement errors, Galton created the 'quincunx', which is now called a Galton Board; the most common version is shown on the left side of Figure 2. By pouring lead shot into the quincunx, Galton simulated a random system in which each piece of shot has a 50:50 chance of going to either side of every 'nail' that it meets, giving rise to a bell-shaped distribution of lead. Note there is more here than the one-off flap of a butterfly wing: the paths of two nearby pieces of lead may stay together or diverge at each level. We shall return to Galton Boards in Chapter 9, but we will use random numbers from the bell-shaped curve as a model for noise many times before then. The bell-shape can be seen at the bottom of the Galton Board on the left of Figure 2, and we will find a smoother version towards the top of Figure 10. The study of chaos yields new insight into why weather forecasts remain unreliable after almost two centuries. Is it due to our missing minor details in today's weather which then have major impacts on tomorrow's weather? Or is it because our methods, while better than Fitzroy's, remain imperfect? Poe's early atmospheric incarnation of the butterfly effect is complete with the idea that science could, if perfect, predict everything physical. Yet the fact that sensitive dependence would make detailed forecasts of the weather difficult, and perhaps even limit the scope of physics, has been recognized within both science and fiction for some time. In 1874, the physicist James Clerk Maxwell noted that a sense of proportion tended to accompany success in a science:

This is only true when small variations in the initial circumstances produce only small variations in the final state of the system.
In a great many physical phenomena this condition is satisfied; but there are other cases in which a small initial variation may produce a very great change in the final state of the system, as when the displacement of the 'points' causes a railway train to run into another instead of keeping its proper course.

This example is again atypical of chaos in that it is 'one-off' sensitivity, but it does serve to distinguish sensitivity and uncertainty: this sensitivity is no threat as long as there is no uncertainty in the position of the points, or in which train is on which track. Consider pouring a glass of water near a ridge in the Rocky Mountains. On one side of this continental divide the water finds its way into the Colorado River and to the Pacific Ocean; on the other side, the Mississippi River and eventually the Atlantic Ocean. Moving the glass one way or the other illustrates sensitivity: a small change in the position of the glass means a particular molecule of water ends up in a different ocean. Our uncertainty in the position of the glass might restrict our ability to predict which ocean that molecule of water will end up in, but only _if_ that uncertainty crosses the line of the continental divide. Of course, _if_ we were really trying to do this, we would have to question whether any such mathematical line actually divided continents, as well as the other adventures the molecule of water might have which could prevent it reaching the ocean. Usually, chaos involves much more than a single one-off 'tipping point'; it tends to more closely resemble a water molecule that repeatedly evaporates and falls in a region where there are continental divides all over the place.

* _Nonlinearity_ is defined by what it is not (it is not linear). This kind of definition invites confusion: how would one go about defining a biology of non-elephants?
The basic idea to hold in mind now is that a nonlinear system will show a disproportionate response: the impact of adding a second straw to a camel's back could be much bigger (or much smaller) than the impact of the first straw. Linear systems always respond proportionately. Nonlinear systems need not, giving nonlinearity a critical role in the origin of sensitive dependence.

### The Burns' Day storm

But Mousie, thou art no thy lane,
In proving foresight may be vain:
The best-laid schemes o' mice an' men
Gang aft agley,
An' lea'e us nought but grief an' pain,
For promis'd joy!

Still thou art blest, compar'd wi' me!
The present only toucheth thee:
But och! I backward cast my e'e,
On prospects drear!
An' forward, tho' I canna see,
I guess an' fear!

Robert Burns, 'To A Mouse' (1785)

Burns' poem praises the mouse for its ability to live only in the present, not knowing the pain of unfulfilled expectations nor the dread of uncertainty in what is yet to pass. And Burns was writing in the 18th century, when mice and men laid their plans with little assistance from computing machines. While foresight may be pain, meteorologists struggle to foresee tomorrow's likely weather every day. Sometimes it works. In 1990, on the anniversary of Burns' birth, a major storm ripped through northern Europe, including the British Isles, causing significant property damage and loss of life. The centre of the storm passed over Burns' home town in Scotland, and it became known as the Burns' Day storm. A weather chart reflecting the storm at noon on 25 January is shown in the top panel of Figure 4 (page 14). Ninety-seven people died in northern Europe, about half of this number in Britain, making it the highest death toll of any storm in 40 years; about 3 million trees were blown down, and total insurance costs reached £2 billion. Yet the Burns' Day storm has not joined the rogues' gallery of famously failed forecasts: it was well forecast by the Met Office.
In contrast, the Great Storm of 1987 is famous for a BBC television meteorologist's broadcast the night before, telling people _not_ to worry about rumours from France that a hurricane was about to strike England. Both storms, in fact, managed gusts of over 100 miles per hour, and the Burns' Day storm caused much greater loss of life; yet 20 years after the event, the Great Storm of 1987 is much more often discussed, perhaps exactly because the Burns' Day storm _was_ well forecast. The story leading up to this forecast beautifully illustrates a different way that chaos in our models can impact our lives without invoking alternate worlds, some with and some without butterflies. In the early morning of 24 January 1990, two ships in the mid-Atlantic sent routine meteorological observations from positions that happened to straddle the centre of what would become the Burns' Day storm. The forecast models run with these observations give a fine forecast of the storm. Running the model again after the event showed that when these observations are omitted, the model predicts a weaker storm in the wrong place. Because the Burns' Day storm struck during the day, the failure to provide forewarning would have had a huge impact on loss of life, so here we have an example where a few observations, had they not been made, would have changed the forecast and hence the course of human events. Of course, an ocean weather ship is harder to misplace than a horse shoe nail. There is more to this story, and to see its relevance we need to look into how weather models 'work'. Operational weather forecasting is a remarkable phenomenon in and of itself. Every day, observations are taken in the most remote locations possible, and then communicated and shared among national meteorological offices around the globe. Many different nations use this data to run their computer models. 
Sometimes an observation is subject to plain old mistakes, like putting the temperature in the box for wind speed, or a typo, or a glitch in transmission. To keep these mistakes from corrupting the forecast, incoming observations are subject to quality control: observations that disagree with what the model is expecting (given its last forecast) can be rejected, especially if there are no independent, nearby observations to lend support to them. It is a well-laid plan. Of course, there are rarely any 'nearby' observations of any sort in the middle of the Atlantic, and the ship observations showed the development of a storm that the model had not predicted would be there, so the computer's automatic quality control program simply rejected these observations.

3. Headline from _The Times_ the day after the Burns' Day storm

4. A modern weather chart reflecting the Burns' Day storm as seen through a weather model (top) and a two-day-ahead forecast targeting the same time showing a fairly pleasant day (bottom)

Luckily, the computer was overruled. An intervention forecaster was on duty and realized that these observations were of great value. His job was to intervene when the computer did something obviously silly, as computers are prone to do. In this case, he tricked the computer into accepting the observations. Whether or not to take this action is a judgement call: there was no way to know at the time which action would yield a better forecast. The computer was 'tricked', the observation was used, the storm was forecast, and lives were saved. There are two take-home messages here: the first is that when our models are chaotic then small changes in our observations can have large impacts on the quality of our foresight.
An accountant looking to reduce costs by computing the typical benefit of one particular observation from any particular weather station is likely to vastly underestimate the value of a future report from one of those weather stations that falls at the right place at the right time, and similarly the value of the intervention forecaster, who often has to do nothing, literally. The second is that the Burns' Day forecast illustrates something a bit different from the butterfly effect. Mathematical models allow us to worry about what the real future will bring _not_ by considering possible worlds, of which there may be only one, but by contrasting different simulations of our model, of which there can be as many as we can afford. As Burns might appreciate, science gives us new ways to guess and new things to fear. The butterfly effect contrasts different worlds: one world with the nail and another world without that nail. The _Burns effect_ places the focus firmly on us and our attempts to make rational decisions in the real world given only collections of different simulations under various imperfect models. The failure to distinguish between reality and our models, between observations and mathematics, arguably between an empirical fact and scientific fiction, is the root of much confusion regarding chaos both by the public and among scientists. It was research into nonlinearity and chaos that clarified yet again how important this distinction remains. In Chapter 10, we will return to take a deeper look at how today's weather forecasters would have used insights from their understanding of chaos when making a forecast for this event. We have now touched on the three properties found in chaotic mathematical systems: chaotic systems are nonlinear, they are deterministic, and they are unstable in that they display sensitive dependence on initial conditions.
In the chapters that follow we will constrain them further, but our real interests lie not only in the mathematics of chaos, but also in what it can tell us about the real world.

### Chaos and the real world: predictability and a 21st-century demon

There is no greater error in science than to believe that just because some mathematical calculation has been completed, some aspect of Nature is certain.

Alfred North Whitehead (1953)

What implications does chaos hold for our everyday lives? Chaos impacts the ways and means of weather forecasting, which affect us directly through the weather, and indirectly through economic consequences both of the weather and of the forecasts themselves. Chaos also plays a role in questions of climate change and our ability to foresee the strength and impacts of global warming. While there are many other things that we forecast, weather and climate can be used to represent short-range forecasting and long-range modelling, respectively. 'When is the next solar eclipse?' would be a weather-like question in astronomy, while 'Is the solar system stable?' would be a climate-like question. In finance, when to buy 100 shares of a given stock is a weather-like question, while a climate-like question might address whether to invest in the stock market or real estate. Chaos has also had a major impact on the sciences, forcing a close re-examination of what scientists mean by the words 'error' and 'uncertainty' and how these meanings change when applied to our world and our models. As Whitehead noted, it is dangerous to interpret our mathematical models as if they somehow governed the real world. Arguably, the most interesting impacts of chaos are not really new, but the mathematical developments of the last 50 years have cast many old questions into a new light. For instance, what impact would uncertainty have on a 21st-century incarnation of Laplace's demon which could not escape observational noise?
Consider an intelligence that knew all the laws of nature precisely and had good, but imperfect, observations of an isolated chaotic system over an arbitrarily long time. Such an agent - even if sufficiently vast to subject all this data to computationally exact analysis - could not determine the current state of the system, and thus the present, as well as the future, would remain uncertain in her eyes. While our agent could not predict the future exactly, the future would hold no real surprises for her, as she could see what could and what could not happen, and would know the probability of any future event: the predictability of the world she could see. Uncertainty of the present will translate into well-quantified uncertainty in the future, _if_ her model is perfect. In his 1927 Gifford Lectures, Sir Arthur Eddington went to the heart of the problem of chaos: some things are trivial to predict, especially if they have to do with mathematics itself, while other things seem predictable, sometimes:

A total eclipse of the sun, visible in Cornwall, is prophesied for 11 August 1999... I might venture to predict that \(2+2\) will be equal to 4 even in 1999... The prediction of the weather this time next year... is not likely to ever become practicable... We should require extremely detailed knowledge of present conditions, since a small local deviation can exert an ever-expanding influence. We must examine the state of the sun... be forewarned of volcanic eruptions,..., a coal strike..., a lighted match idly thrown away...

Our best models of the solar system are chaotic, and our best models of the weather appear to be chaotic: yet why was Eddington confident in 1928 that the 1999 solar eclipse would occur? And equally confident that no weather forecast a year in advance would ever be accurate? In Chapter 10 we will see how modern weather forecasting techniques designed to better cope with chaos helped me to see that solar eclipse.
### When paradigms collide: chaos and controversy

One of the things that has made working in chaos interesting over the last 20 years has been the friction generated when different ways of looking at the world converge on the same set of observations. Chaos has given rise to a certain amount of controversy. The studies that gave birth to chaos have revolutionized not only the way professional weather forecasters forecast but even what a forecast consists of. These new ideas often run counter to traditional statistical modelling methods, and still produce both heat and light on how best to model the real world. This battle is broken into skirmishes by the nature of the field and our level of understanding in the particular system of which a question is asked, be it the population of voles in Scandinavia, a mathematical calculation to quantify chaos, the number of spots on the Sun's surface, the price of oil delivered next month, tomorrow's maximum temperature, or the date of the last ever solar eclipse. The skirmishes are interesting, but chaos offers deeper insights even when both sides are fighting for traditional advantage, say, the 'best' model. Here studies of chaos have redefined the high ground: today we are forced to seek new definitions for what constitutes the best model, or even a 'good' model. Arguably, we must give up the idea of approaching Truth, or at least define a wholly new way of measuring our distance from it. The study of chaos motivates us to establish utility without any hope of achieving perfection, and to give up many obvious home truths of forecasting, like the naive idea that a good forecast consists of a prediction that is close to the target. This did not appear naive before we understood the implications of chaos.
### La Tour's realistic vision of science in the real world

To close this chapter, we illustrate how chaos can force us to reconsider what constitutes a good model, and revise our beliefs as to what is ultimately responsible for our forecast failures. This impact is felt by scientists and mathematicians alike, but the reconsideration will vary depending on the individual's point of view and the empirical system under study. The situation is nicely personified in Figure 5, a French baroque painting by Georges de la Tour showing a card game from the 17th century. La Tour was arguably a realist with a sense of humour. He was fond of fortune telling and games of chance, especially those in which chance played a somewhat lesser role than the participants happened to believe. In theory, chaos can play exactly this role. We will interpret this painting to show a mathematician, a physicist, a statistician, and a philosopher engaged in an exercise of skill, dexterity, insight, and computational prowess; this is arguably a description of doing science, but the task at hand here is a game of poker. Exactly who is who in the painting will remain open, as we will return to these personifications of natural science throughout the book. The insights chaos yields vary with the perspective of the viewer, but a few observations are in order. The impeccably groomed young man on the right is engaged in careful calculations, no doubt a probability forecast of some nature; he is currently in possession of a handsome collection of gold coins on the table. The dealer plays a critical role: without her there is no game to be played; she provides the very language within which we communicate, yet she seems to be in nonverbal communication with the handmaiden. The role of the handmaiden is less clear; she is perhaps tangential, but then again the provision of wine will influence the game, and she herself may feature as a distraction.
The roguish character in ramshackle dress with bows untied is clearly concerned with the real world, not mere appearances in some model of it; his left hand is extracting one of several aces of diamonds from his belt, which he is about to introduce into the game. What then do the 'probabilities' calculated by the young man count for, if, in fact, he is not playing the game his mathematical model describes? And how deep is the insight of our rogue? His glance is directed to us, suggesting that he knows we can see his actions, perhaps even that he realizes that he is in a painting. The story of chaos is important because it enables us to see the world from the perspective of each of these players. Are we merely developing the mathematical language with which the game is played? Are we risking economic ruin by over-interpreting some potentially useful model while losing sight of the fact that it, like all models, is imperfect? Are we only observing the big picture, not entering the game directly but sometimes providing an interesting distraction? Or are we manipulating those things we can change, acknowledging the risks of model inadequacy, and perhaps even our own limitations, due to being within the system? To answer these questions we must first examine several of the many jargons of science in order to be able to see how chaos emerged from the noise of traditional linear statistics to vie for roles both in understanding and in predicting complicated real-world systems. Before the nonlinear dynamics of chaos were widely recognized within science, these questions fell primarily in the domain of the philosophers; today they reach out via our mathematical models to physical scientists and working forecasters, changing the statistics of decision support and even impacting politicians and policy makers.

## Chapter 2 Exponential growth, nonlinearity, common sense

One of the most pervasive myths about chaotic systems is that they are impossible to predict.
To expose the fallacy of this myth, we must understand how uncertainty in a forecast grows as we predict further and further into the future. In this chapter we investigate the origin and meaning of _exponential growth_, since on average a small uncertainty will grow exponentially fast in a chaotic system. There is a sense in which this phenomenon really does imply a 'faster' growth of uncertainty than that found in our traditional ideas of how error and uncertainty grow as we forecast further into the future. Nevertheless, chaos can be easy to predict, sometimes.

### 2.1 Chess, rice, and Leonardo's rabbits: exponential growth

An oft-told story about the origin of the game of chess illustrates nicely the speed of exponential growth. The story goes that a king of ancient Persia was so pleased when first presented with the game that he wanted to reward the game's creator, Sissa Ben Dahir. A chess board has 64 squares arranged in an 8 by 8 pattern; for his reward, Ben Dahir requested what seemed a quite modest amount of rice determined using the new chess board: one grain of rice was to be put on the first square of the board, two to be put on the second, four for the third, eight for the fourth, and so on, doubling the number on each square until the 64th was reached. A mathematician will often call any rule for generating one number from another a mathematical _map_, so we'll refer to this simple rule ('double the current value to generate the next value') as the _Rice Map_. Before working out just how much rice Ben Dahir has asked for, let us consider the case of linear growth where we have one grain on the first square, two on the second square, three on the third, and so on until we need 64 for the last square. In this case we have a total of \(64+63+62+\ldots+3+2+1\), or around 2,000 grains. Just for comparison, a 1 kilogram bag of rice contains a few tens of thousands of grains.
The Rice Map requires one grain for the first square, then two for the second, four for the third, then 8, 16, 32, 64, and 128 for the last square of the first row. On the third square of the second row, we pass 1,000 and before the end of the second row there is a square which exhausts our bag of rice. To fill the next square alone will require another entire bag, the following square two bags, and so on. Some square in the third row will require a volume of rice comparable to a small house, and we will have enough rice to fill the Royal Albert Hall well before the end of the fifth row. Finally, the 64th square alone will require billions and billions, or to be exact, \(2^{63}\) (= 9,223,372,036,854,775,808) grains, for a total of 18,446,744,073,709,551,615 grains. That is a non-trivial quantity of rice! It is something like the entire world's rice production over two millennia. Exponential growth quickly grows out of all proportion. By comparing the amount of rice on a given square in the case of linear growth with the amount of rice on the same square in the case of exponential growth, we quickly see that exponential is much faster than linear growth: on the fourth square we already have twice as many grains in the exponential case as in the linear case (8 grains in the exponential case, only 4 in the linear), and by the eighth square, at the end of the first row, the exponential case has 16 times more! Soon thereafter the numbers become astronomical. Of course, we hid the values of some _parameters_ in the example above: we could have made the linear growth faster by adding not one additional grain for each square, but instead, say, 1,000 additional grains. This parameter, the number of additional grains, defines the constant of proportionality between the number of the square and the number of grains on that square, and gives us the slope of the linear relationship between them.
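The arithmetic behind these totals is easy to check. Here is a minimal Python sketch (the variable names are ours, not part of the story):

```python
# Compare linear growth (one extra grain per square) with the Rice Map
# (doubling) across the 64 squares of the chess board.
linear_total = sum(range(1, 65))                        # 1 + 2 + ... + 64
rice_per_square = [2 ** (n - 1) for n in range(1, 65)]  # 1, 2, 4, 8, ...
rice_total = sum(rice_per_square)

print(linear_total)         # 2080 grains: well under a bag of rice
print(rice_per_square[-1])  # 2**63 grains on the 64th square alone
print(rice_total)           # 2**64 - 1 = 18,446,744,073,709,551,615
```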
There is also a parameter in the exponential case: on each step we increased the number of grains by a factor of two, but it could have been a factor of three, or a factor of one and a half. One of the surprising things about exponential growth is that _whatever_ the values of these parameters, there will come a time at which exponential growth surpasses _any_ linear growth, and will soon thereafter dwarf linear growth, no matter how fast the linear growth is. Our ultimate interest is not in rice on a chess board, but in the dynamics of uncertainty in time. Not just the growth of a population, but the growth of our uncertainty in a forecast of the future size of that population. In the forecasting context, there will come a time at which an exponentially growing uncertainty which is very small today will surpass a linearly growing uncertainty which is today much larger. And the same thing happens when contrasting exponential growth with growth proportional to the square of time, or to the cube of time, or to time raised to any power (in symbols: steady exponential growth will eventually surpass growth proportional to \(t^2\) or \(t^3\) or \(t^n\) for any value of n). It is for this reason among others that exponential growth is mathematically distinguished, and taken to provide a benchmark for defining chaos. It has also contributed to the widespread but fundamentally mistaken impression that chaotic systems are hopelessly unpredictable. Ben Dahir's chess board illustrates that there is a deep sense in which exponential growth is faster than linear growth. To place this in the context of forecasting, we move forward a few hundred years in time and a few hundred miles northwest, from Persia to Italy.
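This claim is simple to demonstrate numerically. The sketch below (our own illustration, with arbitrarily chosen parameters) finds the step at which a slow exponential, growing by a factor of only one and a half, first overtakes a cubic growth that has been given a thousandfold head start:

```python
def crossover_time(base=1.5, coeff=1000.0, power=3):
    """First step t at which base**t exceeds coeff * t**power."""
    t = 1
    while base ** t <= coeff * t ** power:
        t += 1
    return t

# Even multiplied by 1,000, cubic growth is eventually left behind.
t_star = crossover_time()
print(t_star)  # a few dozen steps, after which the exponential wins forever
```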
At the beginning of the 13th century, Leonardo of Pisa posed a question of population dynamics: given a newborn pair of rabbits in a large, lush, walled garden, how many pairs of rabbits will we have in one year if their nature is for each mature pair to breed and produce a new pair every month, and newborn rabbits mature in their second month? In the first month we have one juvenile pair. In the second month this pair matures and breeds to produce a new pair in the third month. So in the third month, we have one mature pair and one newborn pair. In the fourth month we once again have one newborn pair from the original pair of rabbits and now two mature pairs for a total of three pairs. In the fifth month, two new pairs are born (one from each mature pair), and we have three mature pairs for a total of five pairs. And so on. So what does this 'population dynamic' look like? In the first month we have one immature pair, in the second month we have one mature pair, in the third month we have one mature pair and a new immature pair, in the fourth month we have two mature pairs and one immature pair, in the fifth month we have three mature pairs and two immature. If we count up all the pairs each month, the numbers are 1, 1, 2, 3, 5, 8, 13, 21.... Leonardo noted that the next number in the series is always the sum of the previous two numbers (1 + 1 = 2, 2 + 1 = 3, 3 + 2 = 5,...) which makes sense, as the previous number is the number we had last month (in our model all rabbits survive no matter how many there are), and the penultimate number is the number of mature pairs (and thus the number of new pairs arriving this month). Now it gets a bit tedious to write 'and in the sixth month we have 8 pairs of rabbits', so scientists often use a short-hand X for the number of pairs of rabbits and \(X_6\) to denote the number of pairs in month six. And since the series 1, 1, 2, 3, 5, 8, ...
reflects how the population of rabbits evolves in time, this series and others like it are called _time series_. The Rabbit Map is defined by the rule: Add the previous value of X to the current value of X, and take the sum as the new value of X. The numbers in the series 1, 1, 2, 3, 5, 8, 13, 21, 34... are called Fibonacci numbers (Fibonacci was a nickname of Leonardo of Pisa), and they arise again and again in nature: in the structure of sunflowers, pine cones, and pineapples. They are of interest here because they illustrate exponential growth in time, almost. The crosses in Figure 6 are Fibonacci's points - the rabbit population as a function of time - while the solid line reflects two raised to the power \(\lambda t\), or in symbols \(2^{\lambda t}\), where t is the time in months and \(\lambda\) is our first exponent. Exponents which multiply time in the superscript are a useful way of quantifying uniform exponential growth. In this case, \(\lambda\) is equal to the logarithm of a number called the golden mean, a very special number which is discussed in the _Very Short Introduction to Mathematics_.

**6. The series of crosses showing the number of pairs of rabbits each month (Fibonacci numbers); the smooth curve they lie near is the related exponential growth**

The first thing to notice about Figure 6 is that the points lie close to the curve. The exponential curve is special in mathematics because it reflects a function whose increase is proportional to its current value. The larger it gets, the faster it grows. It makes sense that something like this function would describe the dynamics of Leonardo's rabbit population since the number of rabbits next month is more or less proportional to the number of rabbits this month. The second thing to notice about the figure is that the points do _not_ lie on the curve.
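How close the points sit to the curve can be checked directly. A sketch (function and variable names are ours): the exponential curve's growth rate is set by the golden mean, and the standard closed form for Fibonacci numbers says the nth number is close to the golden mean raised to the power n, divided by the square root of five.

```python
import math

def rabbit_map(months):
    """The Rabbit Map: each new value is the sum of the previous two."""
    series = [1, 1]
    while len(series) < months:
        series.append(series[-1] + series[-2])
    return series

fib = rabbit_map(12)                       # 1, 1, 2, 3, 5, 8, ..., 144
golden = (1 + math.sqrt(5)) / 2            # the golden mean
# Binet's formula: the nth Fibonacci number is close to golden**n / sqrt(5).
curve = [golden ** t / math.sqrt(5) for t in range(1, 13)]

# The points lie close to the curve, but never exactly on it.
gaps = [abs(f - c) for f, c in zip(fib, curve)]
print(max(gaps) < 0.5, min(gaps) > 0)      # True True
```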
The curve is a good _model_ for Fibonacci's Rabbit Map, but it is not perfect: at the end of each month the number of rabbits is always a whole number and, while the curve may be close to the correct whole number, it is not exactly equal to it. As the months go by and the population grows, the curve gets closer and closer to each Fibonacci number, but it never reaches them. This concept of getting closer and closer but never quite arriving is one that will come up again and again in this book. So how can Leonardo's rabbits help us to get a feel for the growth of forecast uncertainty? Like all observations, counting the number of rabbits in a garden is subject to error; as we saw in Chapter 1, observational uncertainties are said to be caused by noise. Imagine that Leonardo failed to notice a pair of mature rabbits also in the garden in the first month; in that case, the number of pairs actually in the garden would have been 2, 3, 5, 8, 13,... The error in the original forecast (1, 1, 2, 3, 5, 8...) would be the difference between the Truth and that forecast, namely: 1, 2, 3, 5, ... (again, the Fibonacci series). In month 12, this error has reached a very noticeable 233 pairs of rabbits! A small error in the initial number of rabbits results in a very large error in the forecast. In fact, the error is growing exponentially in time. This has many implications. Consider the impact of the exponential error growth on the uncertainty of our forecasts. Let us again contrast linear growth and exponential growth. Let's assume that, for a price, we can reduce the uncertainty in the initial observation that we use in generating our forecast. If the error growth is linear, and we reduce our initial uncertainty by a factor of ten, then we can forecast the system ten times longer before our uncertainty exceeds the same threshold. If we reduce the initial uncertainty by a factor of 1,000, then we can get forecasts of the same quality 1,000 times longer.
This is an advantage of linear models. Or, more accurately, this is an apparent advantage of studying only linear systems. By contrast, if the model is nonlinear and the uncertainty grows exponentially, then we may reduce our initial uncertainty by a factor of ten yet only be able to forecast twice as long with the same accuracy. In that case, _assuming_ the exponential growth in uncertainty is uniform in time, reducing the uncertainty by a factor of 1,000 will only increase our forecast range at the same accuracy by a factor of four: each factor of ten adds the same increment to the forecast range, rather than multiplying it. Now reducing the uncertainty in a measurement is rarely free (we have to hire someone else to count the rabbits a second time), and large reductions of uncertainty can be expensive, so when uncertainty grows exponentially fast, the cost sky-rockets. Attempting to achieve our forecast goals by reducing uncertainty in initial conditions can be tremendously expensive. Luckily, there is an alternative that allows us to accept the simple fact that we can never be certain that any observation has not been corrupted by noise. In the case of rabbits or grains of rice, it seems there really is a fact of the matter, a whole number that reflects the correct answer. If we reduce the uncertainty in this initial condition to zero then we can predict without error. But can we ever really be certain of the initial condition? Might there not be another bunny hiding in the noise? While our best guess is that there is one pair in the garden, there might be two, or three, or more (or perhaps zero). When we are uncertain of the initial condition, we can examine the diversity of forecasts under our model by making an ensemble of forecasts: one forecast started from each initial condition we think plausible. So one member of the ensemble will start with X equal to one, another ensemble member will start with X equals two, and so on.
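With uniform exponential growth this trade-off can be written down directly: the forecast range is the time it takes the initial uncertainty to grow to some tolerance, and shrinking the uncertainty adds to that range rather than multiplying it. A sketch, with made-up numbers chosen so that a tenfold reduction doubles the range:

```python
import math

def forecast_range(u0, tolerance=1.0, factor=2.0):
    """Time until an uncertainty u0, doubling each step, reaches the tolerance."""
    return math.log(tolerance / u0, factor)

base = forecast_range(0.1)          # initial uncertainty one-tenth of tolerance
better = forecast_range(0.1 / 10)   # ten times smaller: the range doubles
best = forecast_range(0.1 / 1000)   # a thousand times smaller: only 4x the range
print(better / base, best / base)
```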
How should we divide our limited resources between computing more ensemble members and making better observations of the current number of rabbits in the garden? In the Rabbit Map, differences between the forecasts of different members of the ensemble will grow exponentially fast, but with an ensemble forecast we can see just how different they are and use this as a measure of our uncertainty in the number of rabbits we expect at any given time. In addition, if we carefully count the number of rabbits after a few months, we can all but rule out some of the individual ensemble members. Each of these ensemble members was started from some estimate of the number of rabbits that were in the garden originally, so ruling an ensemble member out in effect gives us more information about the original number of rabbits. Of course, this information is only guaranteed to be accurate if our model is literally perfect, meaning, in this case, that our Rabbit Map captures the reproductive behaviour and longevity of our rabbits exactly. But if our model is perfect, then we can use future observations to learn about the past; this process is called _noise reduction_. If it turns out that our model is not perfect, then we may end up with incoherent results. But what if we were measuring something that is not a whole number, like temperature, or the position of a planet? And is temperature in an imperfect weather model exactly the same thing as temperature in the real world? It was these questions that initially interested our philosopher in chaos. First, though, we should consider the more pressing question of why rabbits have not taken over the world in the 9,000 months since 1202.
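An ensemble forecast for the garden can be sketched in a few lines (a toy of our own construction: each member starts from a different number of juvenile pairs and follows the Rabbit Map, and the later 'careful count' is an invented number chosen for the illustration):

```python
def rabbit_forecast(x0, months):
    """Rabbit Map forecast starting from x0 juvenile pairs."""
    prev, curr = 0, x0
    series = [curr]
    for _ in range(months - 1):
        prev, curr = curr, prev + curr
        series.append(curr)
    return series

# One ensemble member per plausible initial condition.
ensemble = {x0: rabbit_forecast(x0, 6) for x0 in (1, 2, 3)}
spread = [series[-1] for series in ensemble.values()]
print(spread)   # [8, 16, 24]: the spread measures our forecast uncertainty

# Suppose a careful count in month six finds 16 pairs: that all but rules
# out every member except the one started from two pairs, telling us in
# retrospect about the initial state.
survivors = [x0 for x0, series in ensemble.items() if series[-1] == 16]
print(survivors)   # [2]
```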
### Stretching, folding, and the growth of uncertainty

The study of chaos lends credence to the meteorological maxim that no forecast is complete without a useful estimate of forecast uncertainty: if we know our initial condition is uncertain then we are not only interested in the prediction _per se_, but equally in learning what the likely forecast error will be. Forecast error for any real system should not grow without limit; even if we start with a small error like one grain or one rabbit, the forecast error will not grow arbitrarily large (unless we have a very naive forecaster), but will saturate near some limiting value, as would the population itself.

**Exponential growth: an example from Miss Nagel's third grade class**

**A few months ago, I received an email written by an old friend of mine from elementary school. It contained another email that had originated from a third grader in North Carolina whose class was studying geography. It requested that everyone who read the email send a reply to the school stating where they lived, and the class would locate that place on a school globe. It also requested that each reader pass on the email to ten friends.**

**I did not forward the message to anyone, but I did write an email to Miss Nagel's class stating that I was in Oxford, England. I also suggested that they tell their mathematics teacher about their experiment and use it as an example to illustrate exponential growth: if they sent the message to ten people, and the next day each of them sent it to ten more people, that would be 100 on day three, 1,000 on day four, and more emails than there are email addresses within a week or so. In a real system, exponential growth cannot go on forever: eventually we run out of rice, or garden space, or new email addresses. It is often the resources that limit growth: even a lush garden provides only a finite amount of rabbit food. There are limits to growth which bound populations, if not our models of populations.**

**I never found out whether Miss Nagel's class learned their lesson in exponential growth. The only answer I ever received was an automated reply stating that the school's email in-box had exceeded its quota and had been closed.**

Our mathematician has a way to avoid ludicrously large forecast errors (other than naïveté), namely by making the initial uncertainty _infinitesimally_ small - smaller than any number you can think of, yet greater than zero. Such an uncertainty will stay infinitesimally small for all time, even if it grows exponentially fast. Physical factors, like the total amount of rabbit food in the garden or the amount of disk space on an email system, limit growth in practice. The limits are intuitive even if we do not know exactly what causes them: I think I have lost my keys in the car park; of course they might be several miles from there, but it is exceedingly unlikely that they are farther away than the moon. I do not need to understand or believe the laws of gravity to appreciate this. Similarly, weather forecasters are rarely more than 100 degrees off, even for a forecast one year in advance! Even inadequate models can usually be constrained so that their forecast errors are bounded. Whenever our model goes into never-never land (suggesting values where no data have ever gone before), then something is likely to give, unless something in our model has already broken. Often, as our uncertainty grows too large, it starts to fold back on itself. Imagine kneading dough, or a toffee machine continuously stretching and folding toffee.
An imaginary line of toffee connecting two very nearby grains of sugar will grow longer and longer as these two grains separate under the action of the machine, but before it becomes bigger than the machine itself, this line will be folded back into itself, forming a horrible tangle. The distance between the grains of sugar will stop growing, even as the string of toffee connecting them continues to grow longer and longer, becoming a more and more complicated tangle. The toffee machine gives us a way to envision limits to the growth of prediction error whenever our model is perfect. In this case, the error is the growing _distance_ between the True state and our best guess of that state: any exponential growth of error would correspond only to the rapid initial growth of the string of toffee. But if our forecasts are not going to zoom away towards infinity (the toffee must stay in the machine, only a finite number of rabbits will fit in the garden, and the like), then eventually the line connecting Truth and our forecast will be folded over on itself. There is simply nowhere else for it to grow into. In many ways, identifying the movement of a grain of sugar in the toffee machine with the evolution of the state of a chaotic system in three dimensions is a useful way to visualize chaotic motion. We want to require a sense of containment for chaos, since it is hardly surprising that it is difficult to predict things that are flying apart to infinity, but we do not want to impose so strict a condition as requiring a forecast to never exceed some limited value, no matter how big that value might be. As a compromise, we require the system to come back to the vicinity of its current state at some point in the future, and to do so again and again. It can take as long as it wants to come back, and we can define coming back to mean returning closer to the current point than we have ever seen it return before. If this happens, then the trajectory is said to be _recurrent_.
The toffee again provides an analogy: if the motion was chaotic and we wait long enough, our two grains of sugar will again come back close together, and each will pass close to where it was at the beginning of the experiment, assuming no one turns off the machine in the meantime.

## Chapter 3 Chaos in context: determinism, randomness, and noise

All linear systems resemble one another, each nonlinear system is nonlinear in its own way.

After Tolstoy's _Anna Karenina_

### Dynamical systems

Chaos is a property of dynamical systems. And a dynamical system is nothing more than a source of changing observations: Fibonacci's imaginary garden with its rabbits, the Earth's atmosphere as reflected by a thermometer at London's Heathrow airport, the economy as observed through the price of IBM stock, a computer program simulating the orbit of the moon and printing out the date and location of each future solar eclipse. There are at least three different kinds of dynamical systems. Chaos is most easily defined in _mathematical dynamical systems_. These systems consist of a rule: you put a number in and you get a new number out, which you put back in, to get yet a newer number out, which you put back in. And so on. This process is called _iteration_. The number of rabbits each month in Fibonacci's imaginary garden is a perfect example of a time series from this kind of system. A second type of dynamical system is found in the empirical world of the physicist, the biologist, or the stock market trader. Here, our sequence of observations consists of noisy measurements of reality, which are fundamentally different from the noise-free numbers of the Rabbit Map. In these _physical dynamical systems_ - the Earth's atmosphere and Scandinavia's vole population, for example - numbers represent the state, whereas in the Rabbit Map they _were_ the state.
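The 'put a number in, get a number out' recipe for a mathematical dynamical system is short enough to write out in full. A minimal sketch (the helper name `iterate` is ours):

```python
def iterate(rule, x0, steps):
    """Feed each output back in as the next input, recording the series."""
    series = [x0]
    for _ in range(steps):
        series.append(rule(series[-1]))
    return series

# The Rice Map of Chapter 2 is a one-number rule, so it fits directly:
first_row = iterate(lambda x: 2 * x, 1, 7)
print(first_row)   # [1, 2, 4, 8, 16, 32, 64, 128]
```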
To avoid needless confusion, it is useful to distinguish a third case when a digital computer performs the arithmetic specified by a mathematical dynamical system; we will call this a _computer simulation_ - computer programs that produce TV weather forecasts are a common example. It is important to remember that these are different _kinds_ of systems and that each is a different beast: our best equations for the weather differ from our best computer models based on those equations, and both of these systems differ from the real thing, the Earth's atmosphere itself. Confusingly, the numbers from each of our three types of systems are called time series, and we must constantly struggle to keep in mind the distinction between what these are time series of: a number of imaginary rabbits, the True temperature at the airport (if such a thing exists), a measurement representing that temperature, and a computer simulation of that temperature. The extent to which these differences are important depends on what we aim to do. Like la Tour's card players, scientists, mathematicians, statisticians, and philosophers each have different talents and aims. The physicist may aim to describe the observations with a mathematical model, perhaps testing the model by using it to predict future observations. Our physicist is willing to sacrifice mathematical tractability for physical relevance. Mathematicians like to prove things that are true for a wide range of systems, but they value proof so highly that they often do not care how widely they must restrict that range to have it; one should almost always be wary whenever a mathematician is heard to say '_almost every_'. Our physicist must be careful not to forget this and confuse mathematical utility with physical relevance; physical intuitions should not be biased by the properties of 'well-understood' systems designed only for their mathematical tractability.
Our statistician is interested in extracting interesting statistics from the time series of real observations and in studying the properties of dynamical systems that generate time series which look like the observations, always taking care to make as few assumptions as possible. Finally, our philosopher questions the relationships among the underlying physical system that we claim generated the observations, the observations themselves, and the mathematical models or statistical techniques that we created to analyse them. For example, she is interested in what we can know about the relationship between the temperature we measure and the true temperature (if such a thing exists), and in whether the limits on our knowledge are merely practical difficulties we might resolve or limits in principle that we can never overcome.

### Mathematical dynamical systems and attractors

We commonly find four different types of behaviour in time series. They can (i) grind to a halt and more or less repeat the same fixed number over and over again, (ii) bounce around in a closed loop like a broken record, periodically repeating the same pattern: exactly the same series of numbers over and over, (iii) move in a loop that has more than one period and so does not quite repeat exactly but comes close, like the moment of high tide drifting through the time of day, or (iv) forever jump about wildly, or perhaps even calmly, displaying no obvious pattern. The fourth type looks random, yet looks can be deceiving. Chaos can look random but it is not random. In fact, as we have learned to see better, chaos often does not even look all that random to us anymore. In the next few pages we will introduce several more maps, though perhaps without the rice or rabbits. We need these maps in order to generate interesting artefacts for our tour in search of the various types of behaviour just noted.
Some of these maps were generated by mathematicians for this very purpose, although our physicist might argue, with reason, that a given map was derived by simplifying physical laws. In truth, the maps are simple enough to have each come about in several different ways. Before we can produce a time series by iterating a map, we need some number to start with. This first number is called an _initial condition_, an initial _state_ that we define, discover, or arrange for our system to be. As in Chapter 2, we adopt the symbol X as shorthand for a state of our system. The collection of all possible states X is called the _state space_. For Fibonacci's imaginary rabbits, this would be the set of all whole numbers. Suppose our time series is from a model of the average number of insects per square mile at mid-summer each year. In that case, X is just a number and the state space, being the collection of all possible states, is then a line. It sometimes takes more than one number to define the state, and if so X will have more than one component. In predator-prey models, for instance, the populations of both are required and X has two components: it is a vector. When X is a vector containing both the number of voles (prey) and the number of weasels (predators) on the first of January each year, then the state space will be a two-dimensional surface - a plane - that contains all pairs of numbers. If X has three components (say, voles, weasels, and annual snowfall), then the state space is a three-dimensional space containing all triplets of numbers. Of course, there is no reason to stop at three components; although the pictures become more challenging to draw in higher dimensions, modern weather models have over 10,000,000 components. For a mathematical system, X can even be a continuous field, like the height of the surface of the ocean or the temperature at every point on the surface of the Earth. 
However, our observations of physical systems will never be more complicated than a vector, and since we will only measure a finite number of things, our observations will always be finite-dimensional vectors. For the time being, we will consider the case in which X is a simple number, such as one-half. Recalling that a mathematical map is just a rule that transforms one set of values into the next set of values, you can define the **Quadrupling Map** by the rule: Take four times X as the new X. Given an initial condition, like X equals one-half, this mathematical dynamical system produces a time series of values of X, in this case \(1/2\times 4=2\), \(2\times 4=8\), \(8\times 4=32\ldots\) and the time series is \(0.5\), \(2\), \(8\), \(32\), \(128\), \(512\), \(2048\ldots\) And so on. This series just gets bigger and bigger and, dynamically speaking, that is not so interesting. If a time series of X grows without limit like this one does, we call it _unbounded_. In order to get a dynamical system where X is bounded, we'll take a second example, the **Quartering Map**: Take X divided by four as the new X. Starting at X = \(1/2\) yields the time series \(1/8\), \(1/32\), \(1/128\),.... At first sight, this is not very exciting since X rapidly shrinks towards zero. But in fact, the Quartering Map has been carefully designed to illustrate special mathematical properties. The origin - the state X = 0 - is a _fixed point_: if we start there we will never leave, since zero divided by four is again zero. The origin is also our first _attractor_; under the Quartering Map the origin is the inevitable if unreachable destination: if we start with some other value of X, we never actually make it to the attractor, although we get close as the number of iterations increases without limit. How close? Arbitrarily close. As close as you like. _Infinitesimally_ close, meaning closer than any number you can name.
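Working out how many iterations bring the Quartering Map within any named distance of its attractor takes only a few lines (a sketch; the helper name is ours):

```python
def iterations_to_get_within(x0, distance):
    """Count Quartering Map steps until X is closer to zero than 'distance'."""
    x, count = x0, 0
    while abs(x) >= distance:
        x = x / 4          # the Quartering Map rule
        count += 1
    return count, x

steps, final_x = iterations_to_get_within(0.5, 1e-9)
print(steps)     # 15 iterations suffice for closer than one part in a billion
print(final_x)   # tiny, but still not exactly zero
```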
Name a number, any number, and we can work out how many iterations are required after which X will remain closer to zero than that number. Getting arbitrarily close to an attractor as time goes on while never quite reaching it is a common feature of many time series from nonlinear systems. The pendulum provides a physical analogue: each swing will be smaller than the last, an effect we blame on air resistance and friction. The analogue of the attractor in this case is the motionless pendulum hanging straight down. We will have more to say about attractors after we have added a few more dynamical systems to our menagerie. In the **Full Logistic Map**, time series from almost every X bounces around irregularly between zero and one forever: Take four times X times (1 - X) as the new X. The order of the numbers in a time series is important, whether the series reflects monthly values of Fibonacci's rabbits or iterations of the Full Logistic Map. Using the short-hand suggested in Chapter 2, we will write \(X_5\) for the fifth new value of \(X\), and \(X_0\) for the initial state (or observation), and in general \(X_i\) for the ith value. Whether we are iterating the map or taking observations, i is always an integer and is often called 'time'. In the Full Logistic Map with \(X_0\) equal to 0.5, \(X_1\) is equal to 1, \(X_2\) is 0, \(X_3\) is 0, \(X_4\) is 0, and \(X_i\) will be zero for all i greater than four as well. So the origin is again a fixed point. But under the Full Logistic Map small values of \(X\) grow (you can check this with a hand calculator), so \(X=0\) is unstable and the origin is not an attractor. A time series started near the origin is in fact unlikely to follow any of the first three options noted at the opening of this section; instead it will bounce about chaotically forever. Figure 7 shows a time series starting near \(X_0\) equals 0.876; it represents a chaotic time series from the Full Logistic Map. But look at it closely: does it really look completely unpredictable?
It looks like small values of \(X\) are followed by small values of \(X\), and that there is a tendency for the time series to linger whenever it is near three-quarters. Our physicist would look at this series and expect it to be predictable at least sometimes, while, after a few calculations, our statistician might even declare it random. Although we can see this structure, the most common statistical tests cannot.

### A menagerie of maps

The rule that defines a map can be stated either in words, or as an equation, or in a graph. Each panel of Figure 8 defines the rule graphically. To use the graph, find the current value of \(X\) on the horizontal axis, and then move directly upward until you hit the curve; the value of this point on the curve on the vertical axis is the new value of X. The Full Logistic Map is shown graphically in Figure 8 (b), while the Quartering Map is in panel (a). An easy way of using the graph to see if a fixed point is unstable is to look at the slope of the map at the fixed point: if the slope is steeper than 45 degrees (either up or down), then the fixed point is unstable.

**8. Graphical presentation of the (a) Quartering Map, (b) Full Logistic Map, (c) Shift Map, (d) Tent Map, (e) Tripling Tent Map, and (f) the Moran-Ricker Map**

In the Quartering Map the slope is less than one everywhere, while for the Full Logistic Map the slope near the origin is greater than one. Here small but non-zero values of \(X\) grow with each iteration but only as long as they stay sufficiently small (the slope near \(1/2\) is zero). As we will see below, for _almost every_ initial condition between zero and one, the time series displays true mathematical _chaos_. The Full Logistic Map is pretty simple; chaos is pretty common. To see if a mathematical system is _deterministic_ merely requires checking carefully whether carrying out the rule requires a random number.
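Both the collapse onto the fixed point from one-half and the 45-degree slope test are easy to reproduce numerically. A sketch (the helper names and the finite-difference slope estimate are our own devices, not the book's):

```python
def full_logistic(x):
    """The Full Logistic Map rule: new X = 4 * X * (1 - X)."""
    return 4 * x * (1 - x)

def orbit(x, steps):
    series = [x]
    for _ in range(steps):
        series.append(full_logistic(series[-1]))
    return series

print(orbit(0.5, 4))         # [0.5, 1.0, 0.0, 0.0, 0.0]: onto the fixed point
chaotic = orbit(0.876, 100)  # a generic start bounces around inside [0, 1]

def slope_at(map_rule, x, h=1e-7):
    """Centred finite-difference estimate of the map's slope at x."""
    return (map_rule(x + h) - map_rule(x - h)) / (2 * h)

print(slope_at(lambda v: v / 4, 0.0))  # 0.25: shallower than 45 degrees
print(slope_at(full_logistic, 0.0))    # about 4: the origin repels
```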
If not, then the dynamical system is deterministic: every time we put the same value of \(X\) in, we get the same new value of \(X\) out. If the rule requires (really requires) a random number, then the system is random, also called _stochastic_. With a stochastic system, even if we iterate _exactly_ the same initial condition we expect the details of the next value of \(X\), and thus the time series, to be different. Looking back at their definitions, we see that the three maps defined above are each deterministic; their future time series is completely determined by the initial condition, hence the name 'deterministic system'. Our philosopher would point out that just knowing \(X\) is not enough: we also need to know the mathematical system, and we have to have the power to do exact calculations with it. These were the three gifts Laplace ensured his demon possessed 200 years ago. Our first stochastic dynamical system is the **AC Map**: Divide \(X\) by four, then subtract one-half and add a random number \(R\) to get the new \(X\). The AC Map is a stochastic system since applying the rule requires access to a supply of random numbers. In fact, the rule above is incomplete, since it does not specify how to get \(R\). To complete the definition we must add something like: for \(R\) on each iteration, pick a number between zero and one in such a way that each number is equally likely to be chosen, which implies that \(R\) will be uniformly distributed between zero and one and that the probability of the next value of R falling in an interval of values is proportional to the width of that interval. What rule do we use to pick R? It could not be a deterministic rule, since then R would not be random. Arguably, there is no finite rule for generating values of R. This has nothing to do with needing uniform numbers between zero and one. We'd have the same problem if we wanted to generate random numbers which mimicked Galton's 'bell-shaped' distribution.
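A sketch of the AC Map, with the language's pseudo-random generator standing in for the genuinely random \(R\) the definition demands, which is exactly the compromise discussed above:

```python
import random

def ac_map(x, rng):
    # AC Map: divide X by four, subtract one-half, and add a random
    # number R drawn uniformly between zero and one.
    return x / 4.0 - 0.5 + rng.random()

# The same initial condition, but different random draws, yields
# different time series: the hallmark of a stochastic system.
x1 = x2 = 0.3
series1, series2 = [], []
rng1, rng2 = random.Random(1), random.Random(2)
for _ in range(5):
    x1, x2 = ac_map(x1, rng1), ac_map(x2, rng2)
    series1.append(x1)
    series2.append(x2)
print(series1 == series2)  # False
```

Contrast this with a deterministic map, where repeating the same initial condition always reproduces the same series.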
We will have to rely on our statistician to somehow get us the random numbers we need; hereafter we'll just state whether they have a uniform distribution or the bell-shaped distribution. In the AC Map, each value of R is used within the map, but there is another class of random maps - called Iterated Function Systems, or IFS for short - which appear to use the value of R not in a formula but to make a decision as to what to do. One example is the Middle Thirds IFS Map, which will come in handy later when we try to work out the properties of maps from the time series that they generate. The Middle Thirds IFS Map is: Take a random number R from a uniform distribution between zero and one. If R is less than a half, take X/3 as the new X. Otherwise take 1 - X/3 as the new X. So now we have a few mathematical systems, and we can easily tell if they are deterministic or stochastic. What about computer simulations? Digital computer simulations are always deterministic. And as we shall see in Chapter 7, the time series from a digital computer is either on an endless loop of values repeating itself periodically, over and over again, or it is on its way towards such a loop. The first part of a time series, in which no value repeats and the trajectory is still evolving towards a _periodic loop_ it has not yet reached, is called a _transient_. In mathematical circles, this word is something of an insult, since mathematicians prefer to work with long-lived things, not mere transients. While mathematicians avoid transients, physical scientists may never see anything else and, as it turns out, digital computers cannot maintain them. The digital computers that have proven critical in advancing our understanding of chaos cannot, ironically, display true mathematical chaos themselves. Neither can a digital computer generate random numbers.
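Sticking with that pseudo-random compromise, the Middle Thirds IFS Map can be sketched as follows; note that after one iteration every point lands in either the left third or the right third of the interval, and never again visits the middle third:

```python
import random

def middle_thirds_ifs(x, rng):
    # The random number R is used not in a formula but to choose a branch.
    if rng.random() < 0.5:
        return x / 3.0           # left branch: lands in [0, 1/3]
    return 1.0 - x / 3.0         # right branch: lands in [2/3, 1]

rng = random.Random(0)
x = 0.5
for _ in range(1000):
    x = middle_thirds_ifs(x, rng)
    assert x <= 1.0 / 3.0 or x >= 2.0 / 3.0  # the middle third is never visited
print("middle third avoided for 1000 iterations")
```

Repeatedly excluding middle thirds in this way is what generates the Cantor-like structure that makes this map useful later on.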
The so-called random number generators on digital computers and hand calculators are, in fact, only pseudo-random number generators; one of the earliest of these generators was even based on the Full Logistic Map! The difference between mathematical chaos and computer simulations, like that between random numbers and pseudo-random numbers, exemplifies the difference between our mathematical systems and our computer simulations. The maps in Figure 8 are not there by chance. Mathematicians often construct systems in such a way that it will be relatively simple for them to illustrate some mathematical point or allow the application of some specific manipulation - a word they sometimes use to obscure technical sleight of hand. The really complicated maps - including the ones used to guide spacecraft and the ones called 'climate models', and the even bigger ones used in numerical weather prediction - are clearly constructed by physicists, not mathematicians. But they all work the same way: a value of X goes in and a new value of X comes out. The mechanism is exactly the same as in the simple maps defined above, even if X might have over 10,000,000 components.

### Parameters and model structure

The rules that define the maps above each involve numbers other than the state, numbers like four and one-half. These numbers are called _parameters_. While X changes with time, parameters remain fixed. It is sometimes useful to contrast the properties of time series generated using different parameter values. So instead of defining the map with a particular parameter value, like 4, maps are usually defined using a symbol for the parameter, say \(a\). We can then contrast the behaviour of the map at \(a\) equals 4 with that at \(a=2\), or \(a=3.569945\), for example. Greek symbols are often used to clearly distinguish parameters from state variables.
Rewriting the Full Logistic Map with a parameter yields one of the most famous systems of nonlinear dynamics, the **Logistic Map**: Subtract \(X^{2}\) from \(X\), then multiply by \(a\) and take the result as the new \(X\). In physical models, parameters are used to represent things like the temperature at which water boils, or the mass of the Earth, or the speed of light, or even the speed with which ice 'falls' in the upper atmosphere. Statisticians often dismiss the distinction between the parameter and the state, while physicists tend to give parameters special status. Applied mathematicians, as it turns out, often force parameters towards the infinitely large or the infinitesimally small; it is easier, for example, to study the flow of air over an infinitely long wing. Once again, these different points of view each make sense in context. Do we require an exact solution to an approximate question, or an approximate answer to a particular question? In nonlinear systems, these can be very different things.

### Attractors

Recall the Quarter Map, noting that after one iteration every point between zero and one will be between zero and one-quarter. Since all the points between zero and one-quarter are also between zero and one, none of these points can ever escape to values greater than one or less than zero. Dynamical systems in which, on average, line segments (or in higher dimensions, areas or volumes) shrink are called _dissipative_. Whenever a dissipative map translates a volume of state space completely inside itself, we know immediately that an attractor exists without knowing what it looks like. Whenever \(a\) is less than four we can prove that the Logistic Map has an attractor by looking at what happens to all the points between zero and one. The largest new value of \(X\) we can get comes from iterating \(X\) equals one-half. (Can you see this in Figure 8?)
This largest value is \(a/4\), and as long as \(a\) is less than four this largest value is less than one. That means every point between zero and one iterates to a point between zero and \(a/4\) and is confined there forever. So the system must have an attractor. For small values of \(a\) the point \(X\) equals zero is the attractor, just like in the Quarter Map. But if \(a\) is greater than one, then any value of \(X\) near zero will move away and the attractor is elsewhere. This is an example of a non-constructive proof: we can prove that an attractor exists but, frustratingly, the proof does not tell us how to find it nor give any hint of its properties! Multiple time series of the Logistic Map for each of four different values of \(a\) are shown in Figure 9. In each panel, we start with 512 points taken at random between zero and one. At each step we move the entire ensemble of points forward in time. In the first step we see that all remain greater than zero, yet move away from \(X\) equals one never to return: we have an attractor. In (a) we see them all collapsing onto the period one loop; in (b) onto one of the two points in the period two loop; in (c) onto one of the four points of the period four loop. In (d), we can see that they are collapsing, but it is not clear what the period is. To make the dynamics more plainly visible, one member of our ensemble is chosen at random in the middle of the graph, and the points on its trajectory are joined by a line from that point forward. The period one loop (a) appears as a straight line, while (b) and (c) show the trajectories alternating between two or four points, respectively. At first (d) looks like a period four loop as well, but a closer look shows that there are many more than four options, and that while there is regularity in the order in which the bands of points are visited, no simple periodicity is visible.
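This confinement argument is easy to check numerically. A sketch, writing the Logistic Map as \(a(X - X^2)\): the largest image of any point in [0, 1] is \(a/4\), reached at \(X\) equals one-half, and an iterated ensemble never escapes that bound:

```python
def logistic(x, a):
    # Logistic Map: subtract X^2 from X, then multiply by a.
    return a * (x - x * x)

a = 3.7  # an illustrative value less than four
# The largest new value comes from X = 1/2 and equals a/4:
peak = max(logistic(i / 1000.0, a) for i in range(1001))
print(peak == a / 4.0)  # True

# Iterate an ensemble: after one step every point is trapped in [0, a/4].
pts = [i / 100.0 for i in range(101)]
for _ in range(50):
    pts = [logistic(x, a) for x in pts]
print(all(0.0 <= x <= a / 4.0 for x in pts))  # True
```

The check confirms the trapping region exists, but, just as the non-constructive proof warns, it says nothing about what the attractor inside it looks like.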
To get a different picture of the same phenomena, we can examine many different initial conditions and different values for \(a\) at the same time, as shown in Figure 13 (page 63). In this three-dimensional view, the initial states can be seen randomly scattered on the back left of the box. At each iteration, they move out towards you and the points collapse towards the pattern shown in the previous two figures. The iterated initial random states are shown after 0, 2, 8, 32, 128, and 512 iterations; it takes some time for the transients to die away, but the familiar patterns can be seen emerging as the states reach the front of the box.

### Tuning model parameters and structural stability

We can see now that a dynamical system has three components: the mathematical rule that defines how to get the next value, the parameter values, and the current state. We can, of course, change any of these things and see what happens, but it is useful to distinguish what type of change we are making. Similarly, we may have insight into the uncertainty in one of these components, and it is in our interest to avoid accounting for uncertainty in one component by falsely attributing it to another. Our physicist may be looking for the 'True' model, or only just a useful one. In practice there is an art of 'tuning' parameter values. And while nonlinearity requires us to reconsider how we find 'good parameter values', chaos will force us to re-evaluate what we mean by 'good'. A very small difference in the value of a parameter which has an unnoticeable impact on the quality of a short-term forecast can alter the shape of an attractor beyond recognition. Systems in which this happens are called _structurally unstable_. Weather forecasters need not worry about this, but climate modellers must, as Lorenz noted in the 1960s.
A great deal of confusion has arisen from the failure to distinguish between uncertainty in the current state, uncertainty in the value of a parameter, and uncertainty regarding the model structure itself.

**9. Each frame shows the evolution of 512 points, initially spread at random between zero and one, as they move forward under the Logistic Map. Each panel shows one of four different values of \(a\), showing the collapse towards (a) a fixed point, (b) a period two loop, (c) a period four loop, and (d) chaos. The solid line starting at time 32 shows the trajectory of one point, in order to make the path on each attractor visible**

Technically, chaos is a property of a dynamical system with fixed equations (structure) and specified parameter values, so the uncertainty that chaos acts on is only the uncertainty in the initial state. In practice, these distinctions become blurred and the situation is much more interesting, and confused.

### Statistical models of Sun spots

Chaos is only found in deterministic systems. But to understand its impact on science we need to view it against the background of traditional stochastic models developed over the past century. Whenever we see something repetitive in nature, periodic motion is one of the first hypotheses to be deployed. It can make you famous: Halley's comet, and the Wolf Sun spot number. In the end, the name often sticks even when we realize that the phenomenon is not really periodic. Wolf guessed that the Sun went through a cycle of about 11 years at a time when he had less than 20 years' data. Periodicity remains a useful concept even though it is impossible to prove a physical system is periodic regardless of how much data we take. So too are the concepts of determinism and chaos. The solar record showed correlations with weather, with economic activity, with human behaviour; even 100 years ago the 11-year cycle could be 'seen' in tree rings. How could we model the Sun spot cycle?
Models of a frictionless pendulum are perfectly periodic, while the solar cycle is not. In the 1920s, the Scottish statistician Udny Yule discovered a new model structure, realizing how to introduce randomness into the model and get more realistic-looking time series behaviour. He likened the observed time series of Sun spots to those from the model of a damped pendulum, a pendulum with friction which would have a free period of about 11 years. If this model pendulum were 'left alone in a quiet room', the resulting time series would slowly damp down to nothing. In order to motivate his introduction of random numbers to keep the mathematical model going, Yule extended the analogy with a physical pendulum: 'Unfortunately, boys with pea shooters get into the room, and pelt the pendulum from all sides at random.' The resulting models became a mainstay in the statistician's arsenal. A linear, stochastic mainstay. We will define the **Yule Map**: Take \(a\) times X plus a random value R to be the new value of X, where R is randomly drawn from the standard bell-shaped distribution. So how does this stochastic model differ from a chaotic model? There are two differences that immediately jump out at the mathematician: the first is that Yule's model is stochastic - the rule requires a random number generator - while a chaotic model of the Sun spots would be deterministic by definition. The second is that Yule's model is linear. This implies more than simply that we do not multiply components of the state together in the definition of the map; it also implies that one can combine solutions of the system and get other acceptable solutions, a property called _superposition_. This very useful property is not present in nonlinear systems. Yule developed a model similar to the Yule Map that behaved more like the time series of real Sun spots. Cycles in Yule's improved model differ slightly from one cycle to the next due to the random effects, the details of the pea shooters.
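A sketch of the Yule Map, with Python's Gaussian generator standing in for the bell-shaped distribution; it also shows the contraction typical of such damped linear models: two far-apart states pelted by identical pea-shooter forcing are drawn together, their separation shrinking by the factor \(a\) at every step (the value \(a = 0.9\) is illustrative):

```python
import random

def yule_map(x, a, r):
    # Yule Map: a times X plus a random value R drawn from the
    # bell-shaped (Gaussian) distribution.
    return a * x + r

a = 0.9                      # |a| < 1: a damped, stochastically forced system
rng = random.Random(42)
shots = [rng.gauss(0.0, 1.0) for _ in range(100)]  # the pea shooters

x, y = 10.0, -10.0           # two far-apart initial states...
for r in shots:              # ...experiencing the *same* random forcing
    x, y = yule_map(x, a, r), yule_map(y, a, r)
print(abs(x - y))            # ~20 * 0.9**100: the two states have converged
```

Because the map is linear, the difference between the two trajectories evolves independently of the forcing, shrinking geometrically no matter what the pea shooters do.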
In a chaotic model the state of the Sun differs from one cycle to the next. What about _predictability_? In any chaotic model, almost all nearby initial states will eventually diverge, while in each of Yule's models even far away initial states would converge, _if_ both experienced the same forcing from the pea shooters. This is an interesting and rather fundamental difference: similar states diverge under deterministic chaotic dynamics whereas they converge under linear stochastic dynamics. That does not necessarily make Yule's model easier to forecast, since we never know the details of the future random forcing, but it changes the way that uncertainty evolves in the system, as shown in Figure 10. Here an initially small uncertainty, or even an initially zero uncertainty, at the bottom grows wider and moves to the left with each iteration. Note that the uncertainty in the state seems to be approaching a bell-shaped distribution, and has more or less stabilized by the time it reaches the top of the graph. Once the uncertainty saturates at this final distribution, all predictability is lost; this final distribution is called the 'climate' of the model.

### Physical dynamical systems

There is no way of proving the correctness of the position of 'determinism' or 'indeterminism'. Only if science were complete or demonstrably impossible could we decide such questions. E. Mach (1905)

There is more to the world than mathematical models. Just about anything we want to measure in the real world, or even just think about observing, can be taken to have come from a physical dynamical system. It could be the position of the planets in the solar system, or the surface of a cup of coffee on a vibrating table, or the population of fish in a lake, or the number of grouse on an estate, or a coin being flipped. The time series we want to observe now is the state of the physical system: say, the position of our nine planets relative to the Sun, the number of fish or grouse.
As a short-hand, we will again denote the state of the system as X, while trying to remember that there is a fundamental difference between a model-state and the True state, if such a thing exists. It is unclear how these concepts stand in relation to each other; as we shall see in Chapter 11, some philosophers have argued that the discovery of chaos implies the real world must have special mathematical properties. Other philosophers, perhaps sometimes the same ones, have argued that the discovery of chaos implies mathematics does not describe the world. Such are philosophers. In any event, we never have access to the True state of a physical system, even if one exists. What we do have are observations, which we will call 'S' to distinguish them from the state of the system, X. What is the difference between X and S? The unsung hero of science: _noise_. Noise is the glue that bonds the experimentalists with the theorists on those occasions when they meet. Noise is also the grease that allows theories to slide easily over awkward facts. In the happy situation where we know the mathematical model which generated the observations and we also know of a _noise model_ for whatever generated whatever noise there was, then we are in the _Perfect Model Scenario_, or PMS. It is useful to distinguish a strong version of PMS where we know the parameter values exactly, from a weak version where we know only the mathematical forms and must estimate parameter values from the observations. As long as we are in either version of PMS, the noise is defined by the distance between X and S, and it makes sense to speak of noise as causing our uncertainty in the state, since we know a True state exists even if we do not know its value. Not much of this picture survives when we leave PMS. Even within PMS, noise takes on a new prominence once we acknowledge that the world is not linear. What about the concepts of deterministic and random, or even periodic? 
These refer to properties of our models; we can apply them to the real world only via (today's) best model. Are there really random physical dynamical systems? Despite the everyday use of coin flips and dice as sources of 'randomness', the typical answer in classical physics is: no, there is no randomness at all. With a complete set of laws it may (or may not) be too difficult for us to calculate the outcomes of coin flips, rolling dice, or spinning a roulette wheel: but that is a problem only in practice, not in principle: Laplace's demon would have no difficulty with such predictions. Quantum mechanics, however, is different. Within the traditional quantum mechanical theory, the half-life of a uranium atom is as natural and real a quantity as the mass of the uranium atom. The fact that classical coin tosses or roulette are not best modelled as random is irrelevant, given the quantum mechanical claim for randomness and objective probabilities. Claims for - or against - the existence of objective probabilities require interpreting physical systems in terms of our models of those systems. As always. Some future theory may revoke this randomness in favour of determinism, but we are on the scene only for a vanishingly small interval. It is relatively safe to say that some of our best models of reality will still admit random elements as you read these words.

### Observations and noise

Over the last few decades, a huge number of scientific papers have been written about using a time series to distinguish deterministic systems from stochastic systems. This avalanche was initiated in the physics literature, and then spread into geophysics, economics, medicine, sociology, and beyond. Most of these papers were inspired by a beautiful theorem proven by the Dutch mathematician Floris Takens in 1981, to which we will return in Chapter 8. Why were all these papers written, given that we have a simple rule for determining if a mathematical system is deterministic or stochastic?
Why not just look at the rules of the system and see if it requires a random number generator? It is common to confuse the games mathematicians play with constraints placed on the work of the natural (and other) scientists. Real mathematicians like to play intellectual games, like pretending to forget the rules and then guessing if the system is deterministic or stochastic from looking only at the time series of the states of the system. Could they clearly identify any deterministic system given the time series from the infinitely remote past to the infinitely distant future? For fixed points and even periodic loops, this game is not challenging enough; to make it more interesting, consider a variation in which we do not know the exact states, but have access only to noisy observations, S, of each state X. The origin of S is commonly, if somewhat misleadingly, thought of as the addition of a random number to each true X. In that case, this _observational noise_ does not affect the future states of the system, only our observations of each state; it is a very different role from that played by the random numbers R in the stochastic systems, like the Yule Map where the value of R did impact the future since it changed the next value of X. To maintain this distinction, random influences that do influence X are called _dynamic noise_. As noted above, mathematicians can work within the Perfect Model Scenario (PMS). They start off knowing that the model which generated the time series has a certain kind of structure, and sometimes they assume they know the structure (weak PMS), sometimes even the values of the parameters as well (strong PMS). They generate a time series of X, and from this a time series of S.
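The distinction can be made concrete in a sketch; here the Full Logistic Map (assumed form \(4X(1-X)\)) plays the deterministic system, and observational noise contaminates each S without ever feeding back into X:

```python
import random

def step(x):
    # A deterministic system: the Full Logistic Map, 4 * X * (1 - X).
    return 4.0 * x * (1.0 - x)

rng = random.Random(0)
x, xs, ss = 0.3, [], []
for _ in range(10):
    x = step(x)                                 # the True state, noise-free
    xs.append(x)
    ss.append(x + 0.01 * rng.gauss(0.0, 1.0))   # observation: S = X + noise

# Rerun without any noise: the states X are identical, because the
# observational noise never influenced the dynamics.
x2, xs2 = 0.3, []
for _ in range(10):
    x2 = step(x2)
    xs2.append(x2)
print(xs == xs2)   # True
print(ss == xs)    # False: the observations differ from the states
```

Dynamic noise would be different: adding the random term to X itself before the next step, as in the Yule Map, would change every subsequent state.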
They then pretend to forget the values of X and see if they can work out what they were, or they pretend to forget the mathematical system and see if, given only S, they can identify the system along with its parameter values, or determine if the system is chaotic, or forecast the next value of X. At this point, it should be pretty easy to see where their game is going: our mathematicians are trying to simulate the situation that natural scientists can never escape from. The physicists, earth scientists, economists, and other scientists do _not_ know the rule, the full Laws of Nature, relevant to the physical systems of scientific study. And scientific observations are not perfect; they are invariably uncertain due to observational noise, but that is not the end of the story. It is a capital mistake to confuse real observations with those of these mathematical games. The natural scientist is forced to play a different game. While attempting to answer the same questions, the scientist is given only a time series of observations, S, some information regarding the statistics of the observational noise, and the _hope_ that some mathematical map exists. Physicists can never be sure if such a structure exists or not; they cannot even be certain if the model state variable X really has any physical meaning. If X is the number of rabbits in a real garden, it is hard to imagine that X does not exist; it is just some whole number. But what about model variables like wind speed or temperature? Are there real numbers that correspond to those components of our state vector? And if not, where between rabbits and wind speed does the correspondence break down? Our philosopher is very interested in such questions, and we all should be. LeVerrier, the Frenchman who worked with Fitzroy to set up the first weather warning system, died famous for discovering two planets.
He used Newton's Laws to predict the location of Neptune based on 'irregularities' in the observed time series of Uranus's orbit, and that planet was duly observed. He also analysed 'irregularities' in the orbit of Mercury, and again told observers where to find another new planet. And they did: the new planet, named Vulcan, was very near the Sun and difficult to see, but it was observed for decades. We now know that there is no planet Vulcan; LeVerrier was misled because Mercury's orbit is poorly described by Newton's Laws (although it is rather better described by Einstein's). How frequently do we blame the mismatch between our models and our data on noise when the root cause is in fact model inadequacy? Most really interesting science is done at the edges, whether the scientists realize it or not. We are never sure if today's laws apply there or not. Modern-day climate science is a good example of hard work being done at the edge of our understanding. The study of chaos has clarified the importance of distinguishing two different issues: one being the effects of uncertainty in the state or the parameters, the other being the inadequacy of our mathematics itself. Mathematicians working within PMS can make progress by pretending that they are not, while scientists who pretend - or believe - that they are working within PMS when they are not can wreak havoc, especially if their models are naively taken as a basis for decision making. The simple fact is that we cannot apply the standards of mathematical proof to physical systems, but only to our mathematical models of physical systems. It is impossible to prove that a physical system is chaotic, or to prove it is periodic. Our physicist and mathematician must not forget that they sometimes use the same words to mean rather different things; when they do, they often run into some difficulty and considerable acrimony. Mach's comment above (page 53) suggests that this is not a new issue. 
## Chapter 4 Chaos in mathematical models

We would all be better off if more people realised that simple nonlinear systems do not necessarily possess simple dynamical properties. Lord May (1976)

This chapter consists of a very short survey of chaotic mathematical models from zoology to astronomy. Like any cultural invasion, the arrival of nonlinear deterministic models with sensitive dependence was sometimes embraced, and sometimes not. It has been most uniformly welcomed in physics where, as we shall see, the experimental verification of its prophecies has been nothing short of astounding. In other fields, including population biology, the very relevance of chaos is still questioned. Yet it was population biologists who proposed some of the first chaotic models a decade before the models of astronomers and meteorologists came on the scene. Renewed interest in this work was stimulated in 1976 by an influential and accessible review article in the journal _Nature_. We begin with the basic insights noted in that article.

### The darling bugs of May

In 1976, Lord May provided a high-profile review of chaotic dynamics in _Nature_ that surveyed the main features of deterministic nonlinear systems. Noting that many interesting questions remained unresolved, he argued that this new perspective provided not just theoretical but practical and pedagogical value as well, and that it suggested everything from new metaphors for describing systems to new quantities to observe and new parameter values to estimate. Some of the simplest population dynamics are those of breeding populations when one generation does not overlap with the next. Insects that have one generation per year, for example, might be described by discrete time maps. In this case \(X_{i}\) would represent the population, or population density, in the \(i\)th year, so our time series would have one value per year, and the map is the rule that determines the size of next year's population given this year's.
A parameter \(a\) represents the density of resources. In the 1950s, Moran and Ricker independently suggested the map shown in Figure 8(f) (page 40). Looking at this graph, we can see that when the current value of X is small, the next value of X is larger: small populations grow. Yet if X gets too big, then the next value of X is small, and when the current value is very large, the next value is very small: large populations exhaust the resources available to each individual, and so successful reproduction is reduced. Irregularly fluctuating populations have long been observed, and researchers have long argued over their origin. Time series of Canadian lynx and both Scandinavian and Japanese voles are, along with the Sun spot series, some of the most analysed data sets in all of statistics. The idea that very simple nonlinear models can display such irregular fluctuations suggested a new potential mechanism for real population fluctuations, a mechanism that was in conflict with the idea that 'natural' populations should maintain either a steady level or a regular periodic cycle. The idea that these random-_looking_ fluctuations need not be induced by some outside force like the weather, but could be inherent to the natural population dynamics, had the potential to radically alter attempts to understand and manage populations. While noting that 'replacing a population's interactions with its biological and physical environment by passive parameters may do great violence to the reality', May provided a survey of interesting behaviours in the Logistic Map. The article ends with 'an evangelical plea for the introduction of these difference equations into elementary mathematics courses, so that students' intuition may be enriched by seeing the wild things that simple nonlinear equations can do'. That was three decades ago.
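A sketch of this behaviour, assuming the standard textbook form of the Moran-Ricker Map, in which the new \(X\) is \(aXe^{-X}\) (the text does not spell out the formula, so treat both the form and the parameter value here as illustrative):

```python
import math

def moran_ricker(x, a):
    # Moran-Ricker Map (standard form, an assumption here): a * X * exp(-X)
    return a * x * math.exp(-x)

a = 20.0                       # a stands in for the density of resources
print(moran_ricker(0.01, a))   # ~0.198: small populations grow...
print(moran_ricker(10.0, a))   # ~0.009: ...very large populations crash
```

The exponential term captures the resource-exhaustion story: growth at low density, collapse at high density.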
We will consider a few of these wild things below, but note that the mathematicians' focus on the Logistic Map is not meant to suggest that this map itself in any sense 'governs' the various physical and biological systems. One thing that distinguishes nonlinear dynamics from traditional analysis is that the former tends to focus more on the behaviour of systems rather than on the details of any one initial state under particular equations with specific parameter values: a focus on geometry rather than statistics. Similar dynamics can be more important than 'good' statistics. And it turns out that the Logistic Map and the Moran-Ricker Map are very similar in this way, even though they look very different in Figure 8 (page 40). The details may well matter, of course; the enduring role of the Logistic Map itself may be pedagogical, helping to exorcize the historical belief that complicated dynamics requires either very complicated models or randomness.

### Universality: prophesying routes to chaos

The Logistic Map gives rise to amazingly rich varieties of behaviour. The famous bifurcation diagram of Figure 11 summarizes the behaviour of the map at many different values of its parameter in one figure. The horizontal axis is \(a\) and the dots in any vertical slice indicate states which fall near the attractor for that value of \(a\). Here \(a\) reflects some parameter of the system: if X is the number of fish in the lake, then \(a\) is the amount of food in the lake; if X is the time between drips of the faucet, then \(a\) is the rate of water leaking through the tap; if X is the motion of rolls in fluid convection, then \(a\) is the heat delivered to the bottom of the pan. In models of very different things, the behaviour is the same. For small \(a\) (on the left) we have a fixed point attractor.
The location of the fixed point increases as \(a\) increases, until \(a\) reaches a value of three, where the fixed point becomes unstable and we observe iterations which alternate between two points: a period two loop. As \(a\) continues to increase, we get a period four loop, then period eight, then 16, then 32. And so on. Bifurcating over, and over again. Since the period of the loop always increases by a factor of two, these are called _period doubling bifurcations_. While the old loops are no longer seen, they do not cease to exist. They are still there, but have become unstable. This is what happened to the origin in the Logistic Map when \(a\) is greater than one: X only stays at zero if it is exactly equal to zero, while small non-zero values grow at each iteration. Similarly, points near an unstable periodic loop move away from it, and so we no longer see them clearly when iterating the map. There is a regularity hidden in Figure 11. Take any three consecutive values of \(a\) at which the period doubles, subtract the first from the second, and then divide that number by the difference between the second and the third. The resulting ratios converge to the Feigenbaum number, \(\sim\)4.6692016091. Mitch Feigenbaum discovered these relationships, working with a hand calculator in Los Alamos in the late 1970s, and the ratio is now known by his name. Others also found it independently; having the insight to do this calculation was stunning in each case.

**11. Period doubling behaviour in the Logistic Map as \(a\) increases from 2.8 to \(\sim\)3.5; the first three doublings are marked**

Since the Feigenbaum number is greater than one, values of \(a\) at which bifurcations occur get closer and closer together, and we have an infinite number of bifurcations before reaching a value of \(a\) near \(3.5699456718\). Figure 12 indicates what happens for larger values of \(a\). This sea of points is largely chaotic.
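The ratio recipe just described can be tried directly. The period-doubling values below are standard numerical estimates for the Logistic Map, quoted here to six figures and assumed rather than derived:

```python
# Parameter values at which the period doubles: 1->2, 2->4, 4->8, 8->16, 16->32
# (standard numerical estimates for the Logistic Map, an assumption of this sketch).
doublings = [3.000000, 3.449490, 3.544090, 3.564407, 3.568759]

gaps = [b - a for a, b in zip(doublings, doublings[1:])]
ratios = [g1 / g2 for g1, g2 in zip(gaps, gaps[1:])]
print(ratios)   # successive ratios approach Feigenbaum's ~4.6692
```

Even with only five bifurcation values, the last ratio already agrees with the Feigenbaum number to about three figures.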
But note the windows of periodic behaviour, for instance the period three window where \(a\) takes on the value of one plus the square root of eight (that is, about \(3.828\)). This is a stable period three loop; can you identify windows corresponding to period five? Seven? Figure 13 puts the figures of the Logistic Map in context. Randomly chosen values for \(a\) and \(\mathrm{X_{0}}\) form a cloud of points on the t equals zero slice of this three-dimensional figure. Iterating the Logistic Map forward from these values, the transients fall away, and the attractors at each value of \(a\) slowly come into view, so that after \(512\) iterations the last time slice resembles Figure 12.

**12. A variety of behaviours in the Logistic Map as \(a\) increases from a period four loop at \(a=3.5\) to chaos at \(a=4\). Note the replicated period doubling cascades at the right side of each periodic window**

**13. Three-dimensional diagram showing the collapse of initially random values of \(\mathbf{X}_{0}\) and \(\mathbf{a}\) at the left rear side of the box falling toward their various attractors as the number of iterations increases. Note the similarity of the points near the right forward side with those in Figures 11 and 12**

It would be asking too much to expect something as simple as the Logistic Map to tell us anything about the behaviour of liquid helium. But it does. Not only does the onset of complicated behaviour show a qualitative indication of period doubling, the actual quantitative values of the Feigenbaum numbers computed from many experiments agree remarkably well with those computed from the Logistic Map. Many physical systems seem to display this 'period doubling route to chaos': hydrodynamics (water, mercury, and liquid helium), lasers, electronics (diodes, transistors), and chemical reactions (BZ reaction). One can often estimate the Feigenbaum number to two digits' accuracy in experiments.
This is one of the most astounding results reported in this Introduction to chaos: how could it be that simple calculations with the Logistic Map can give us information that is relevant to all these physical systems? The mathematician's fascination with this diagram arises not only from its beauty but also from the fact that we would get a similar picture for the Moran-Ricker Map and many other systems that at first glance appear quite different from the Logistic Map. A technical argument shows that the period doubling is common in 'one-hump' maps where the hump _looks like_ a parabola. In a very real and relevant sense, almost all nonlinear maps look like this very close to their maximum value, so properties like period doubling are called 'universal', although not _all_ maps have them. More impressive than these mathematical facts is the empirical fact that a wide variety of physical systems display unexpected behaviour that, as far as we can see, reflects this mathematical structure. Is this not then a strong argument for the mathematics to govern, not merely describe, Nature? To address this question, we might consider whether the Feigenbaum number is more akin to a constant of geometry, like \(\pi\), or a physical constant like the speed of light, c. The geometry of disks, cans, and balls is well described using \(\pi\), but \(\pi\) hardly governs the relationship between real lengths, areas, and volumes in the same way that the values of physical constants govern the nature of things within our laws of nature.

### The origin of the mathematical term 'chaos'

In 1964, the Russian mathematician A. N. Sharkovski proved a remarkable theorem about the behaviours of many 'one-hump' maps: namely that discovering a periodic loop indicated that others, potentially lots of others, existed.
Discovering that a period 16 loop existed for a particular value of the parameter implied there were loops of period eight and of four and of two and of one at that value; while finding a loop of period three meant that there was a loop of every possible period! It is another non-constructive proof; it does not tell us where those loops are, but nevertheless it is a pretty neat result. Eleven years after Sharkovski, Li and Yorke published their enormously influential paper with the wonderful title 'Period Three Implies Chaos'. The name 'chaos' stuck.

### Higher-dimensional mathematical systems

Most of our model states so far have consisted of just one component. The vole and weasel model is an exception, since the state consisted of two numbers: one reflecting the population of voles, the other the population of weasels. In this case, the state is a vector. Mathematicians call the number of components in the state the _dimension_ of the system, since plotting the state vectors would require a state space of that dimension. As we move to higher dimensions, the systems are often not maps but _flows_: a map is a function that takes one value of X and returns the next value of X, while a flow provides the velocity of X for any point in the state space. Think of a parsnip floating under the surface of the sea; it is carried along by the current and will trace out the flow of the sea. The three-dimensional path of the parsnip in the sea is analogous to a path traced out by X in the state space, and both are sometimes called _trajectories_. If instead of a parsnip, we follow the path of an infinitesimal parcel of the fluid itself, we often find these paths to be recurrent with sensitive dependence. The equations are deterministic and these fluid parcels are said to display 'Lagrangian chaos'. Laboratory experiments with fluids often display beautiful patterns which reflect the chaotic dynamics observed in our models of fluid flow.
Without examining the differential equations that define these velocity fields, we will next touch on several classic chaotic systems.

### Dissipative chaos

In 1963, Ed Lorenz published what became a classic paper on the predictability of chaotic systems. He considered a vastly simplified set of three equations, based on the dynamics of a fluid near the onset of convection, which is now called the _Lorenz System_. One can picture the three components of the state in terms of convective rolls in a layer of fluid between two flat plates when the lower plate is heated. When there is no convection, the fluid is motionless and the temperature in the fluid decreases uniformly from the warmer plate at the bottom to the cooler plate at the top. The state X of the Lorenz model consisted of three values {x,y,z}, where x reflected the speed of the rotating fluid, y quantified the temperature difference between rising and sinking fluid, and z measured the deviation from the linear temperature profile. An attractor from this system is shown in Figure 14; by chance, it looks something like a butterfly. The different shading on the attractor indicates variations in the time it takes an infinitesimal uncertainty to double. We return to discuss the meaning of these shades in Chapter 6, but note the variations with location. The evolution of uncertainty in the Lorenz system is shown in Figure 15; this looks a bit more complicated than the corresponding figure for the Yule Map in Figure 10 (page 52). Figure 15 shows the kind of forecast our 21st-century demon could make for this system: an initial small uncertainty at the bottom of the panel grows wider, then narrower, then wider, then narrower...eventually splitting into two parts and beginning to fade away. But depending on the decisions we are trying to make, there may still be useful information in this pattern even at the time reflected at the top of the panel. On this occasion the uncertainty has not stabilized by the time it reaches the top of the graph.

**14. Three-dimensional plots of (above) the Lorenz attractor and (below) the Moore-Spiegel attractor. The shading indicates variations in uncertainty doubling time at each point**

**15. The probability forecast our 21st-century demon would make for the 1963 Lorenz System. Contrast the way uncertainty evolves in this chaotic system with the relatively simple growth of uncertainty under the Yule Map shown in Figure 10 on page 52**

In 1965, mathematical astronomers Moore and Spiegel considered a simple model of a parcel of gas in the atmosphere of a star. The state space is again three-dimensional, and the three components of X are simply the height, velocity, and acceleration of the parcel. The dynamics are interesting because we have two competing forces: a thermal force that tends to destabilize the parcel and a magnetic force that tends to bring it back to its starting point, much like a spring would. As the parcel rises, it finds itself at a different temperature than the surrounding fluid and this feeds back on its velocity and its temperature, but at the same time the star's magnetic field acts as a spring to pull the parcel back towards its original location. Motion caused by two competing forces often gives rise to chaos. The Moore-Spiegel attractor is also shown in Figure 14. Chaos experiments have always pushed computers to their limits, and sometimes slightly beyond those limits. In the 1970s, the astronomer Michael Henon wanted to make a detailed study of chaotic attractors. For a given amount of computer power there is a direct trade-off between the complexity of the system and the duration of the time series one can afford to compute. Henon wanted a system with properties similar to Lorenz's 1963 system that would be cheaper to iterate on his computer. The result was a two-dimensional system, where the state X consisted of the pair of values {x,y}.
The Henon Map is defined by the rules: the new value of x\({}_{i+1}\) is equal to one minus \(\alpha\) times x\({}_{i}^{2}\) plus y\({}_{i}\); the new value of y\({}_{i+1}\) is equal to \(\beta\) times x\({}_{i}\). Panel (b) of Figure 16 shows the attractor when \(\alpha\) is 1.4 and \(\beta\) is 0.3; panel (a) shows a slice of the Moore-Spiegel attractor made by combining snapshots of the system whenever z was zero and growing. This type of figure is called a _Poincare section_ and illustrates how slices of a flow are much like maps.

**16. Two-dimensional plots of (a) a slice of the Moore-Spiegel attractor at z = 0; and (b) the Henon attractor where \(\alpha\) is 1.4 and \(\beta\) is 0.3. Note the similar structure with many leaves in each case**

### Delay equations, epidemics, and medical diagnostics

Another interesting family of models is delay equations. Here both the current state and some state in the past (the 'delayed state') play a direct role in the dynamics. These models are common for biological systems, and can provide insight into oscillatory diseases like leukaemia. In the blood supply, the number of cells available tomorrow depends upon the number available today, and also the number of new cells that mature today; the delay comes from the gap in time between when these new cells are requested and when they mature: the number of cells maturing today depends on the number of blood cells at some point in the past. There are many other diseases with this kind of oscillatory dynamics, and the study of chaos in delay equations is extremely interesting and productive. We leave the discussion of mathematical models for a paragraph to note that medical research is another area where insights from our mathematical models are deployed for use in real systems. Research by Mike Mackey at McGill University and others on delay equations has even led to a cure for at least one oscillatory disease.
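The blood-cell story above is often written down as the Mackey-Glass delay equation. The sketch below uses a crude Euler discretization with commonly quoted illustrative parameter values; both the discretization step and the parameters are assumptions for illustration, not the clinical models mentioned in the text:

```python
def mackey_glass(steps=4000, dt=0.1, tau=17.0, beta=0.2, gamma=0.1, n=10):
    """Euler steps of dx/dt = beta*x(t-tau)/(1 + x(t-tau)**n) - gamma*x(t).
    The delayed value x(t-tau) stands for the cells requested one
    maturation time in the past."""
    lag = int(tau / dt)
    xs = [1.2] * (lag + 1)            # constant history before t = 0
    for _ in range(steps):
        delayed = xs[-lag - 1]        # the state one delay in the past
        x = xs[-1]
        xs.append(x + dt * (beta * delayed / (1.0 + delayed ** n) - gamma * x))
    return xs

series = mackey_glass()
print(min(series), max(series))   # a sustained, irregular oscillation
```

Even though the state at any instant is a single number, the delay means the system's effective state is a whole stretch of history, which is how such a simple-looking equation can oscillate irregularly.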
The study of nonlinear dynamics has also led to insights in the evolution of diseases that oscillate in a population, not an individual; our models can be contrasted with reality in the study of measles, where one can profitably consider the dynamics in time and in space. The analysis of chaotic time series has also led to the development of insightful ways to view complicated medical time series, including those from the brain (EEG) and heart (ECG). This is not to suggest that these medical phenomena of the real world are chaotic, or even best described with chaotic models; methods of analysis developed for chaos may prove of value in practice regardless of the nature of the underlying dynamics of the real systems that generate the signals analysed.

### Hamiltonian chaos

If volumes in state space do not shrink in time there can be no attractors. In 1964, Henon and Heiles published a paper showing chaotic dynamics in a four-dimensional model of the motion of a star within a galaxy. Systems in which volumes in state space do not shrink, including those of Newtonian celestial mechanics commonly used to predict eclipses, and which trace the future of the solar system and spacecraft within it, are called _Hamiltonian_. Figure 17 is a slice from the Henon-Heiles system, which is Hamiltonian. Note the intricate interweaving of empty islands in a sea of chaotic trajectories. Initial states started within these islands may fall onto almost closed loops (tori); alternatively they may follow chaotic trajectories confined within an island chain. In both cases, the order in which the islands in the chain are visited is predictable, although exactly where on each island might be unpredictable; in any case, things are only unpredictable on small length scales.

**17. A two-dimensional slice of the Henon-Heiles system.
Note the simultaneous loops, and a chaotic sea with many (empty) islands**

### Exploiting the insights of chaos

In the three-year period between 1963 and 1965, three independent papers appeared (by Lorenz, by Moore and Spiegel, and by Henon and Heiles), each using digital computers to introduce what would be called 'chaotic dynamics'. In Japan, chaos had been observed in analogue computer experiments by Yoshisuke Ueda, and Russian mathematicians were advancing upon a groundwork laid down by over a century of international mathematics. Almost 50 years later, we are still finding new ways to exploit these insights. What limits the predictability of future solar eclipses? Is it uncertainty in our knowledge of the planetary orbits due to the limited accuracy of our current measurements? Or future variations in the length of the day which alter the point on the surface of the Earth under the eclipse? Or the failure of Newton's equations due to effects (better) described by general relativity? We can see that the Moon is slowly moving away from the Earth, and assuming that this continues, it will eventually appear too small to block the entire Sun. In that case, there will be a last total eclipse of the Sun. Can we forecast when that event will occur and, weather permitting, where we should be on the surface of the Earth in order to see it? We do not know the answer to that question. Nor do we know, for certain, if the solar system is stable. Newton was well aware of the difficulties nonlinearities posed for determining the ultimate stability of only three celestial bodies, and suggested that ensuring the stability of the solar system was a task for God. By understanding the kinds of chaotic orbits that Hamiltonian systems admit, we have learned many things about the ultimate stability of the solar system. Our best guess, currently, is that our solar system is stable, probably.
Insights like these come from understanding the geometry in state space rather than attempting detailed calculations based upon observations. Can we safely draw insights from the mathematical behaviour of low-dimensional systems? They suggest new phenomena to look for in experiments, like period doubling, or suggest new constants to estimate in Nature, like the Feigenbaum number. These simple systems also provide test beds for our forecast methods; this is a bit more dangerous. Are the phenomena of low-dimensional chaotic systems the same phenomena that we observe in more complicated models? Are they so common that they occur _even in_ simple low-dimensional systems like Lorenz 1963 or the Moore-Spiegel system? Or are these phenomena due to the simplicity of these examples: do they occur _only in_ simple mathematical systems? The same _even in or only in_ question applies to techniques developed to forecast or control chaotic systems, which are tested in low-dimensional systems: do these things happen _even in or only in_ low-dimensional systems? The most robust answer so far is that difficulties we identify in low-dimensional systems rarely go away in higher-dimensional systems, while successful solutions to these difficulties which work in low-dimensional systems often fail to work in higher-dimensional systems. Recognizing the danger of generalizing from three-dimensional systems, Lorenz moved on to a 28-dimensional system about 50 years ago; he is still creating new systems today, some in two dimensions and others in 200 dimensions. Chaos and nonlinearity impact many fields; perhaps the deepest insight to be drawn here is that complicated-looking solutions are sometimes acceptable and need not be due to external dynamic noise.
This does not imply that, in any particular case, they are not due to external noise, nor does it lessen the practical value of stochastic statistical modelling, which has almost a century of experience and statistical good practice behind it. It does suggest the value in developing tests for which methods to use in a given application, and consistency tests for all modelling approaches. Our models should be as uninhibited as possible, but not more so. The lasting impact of these simple systems may be in their pedagogical value; young people can be exposed to the rich behaviours of these simple systems early in their education. By requiring internal consistency, mathematics constrains our flights of fancy in drawing metaphors, not so much as to bring them in line with physical reality, but often opening new doors.

## Chapter 5 Fractals, strange attractors, and dimension(s)

Big fleas have little fleas upon their backs to bite 'em. And little fleas have lesser fleas, and so ad infinitum. A. de Morgan (1872)

No introduction to chaos would be complete without touching upon _fractals_. This is neither because chaos implies fractals nor because fractals require chaos, but simply because in dissipative chaos real mathematical fractals appear as if from nowhere. It is just as important to distinguish mathematical fractals from physical fractals as it is to distinguish what we mean by chaos in mathematical systems from what we mean by chaos in physical systems. Despite decades of discussion, there is no single generally accepted definition of a fractal in either case, although you can usually recognize one when you see it. The notion is bound up in self-similarity: as we zoom in on the boundary of clouds, countries, or coastlines, we see patterns similar to those seen at the larger-length scales again and again. The same thing happens with the set of points in Figure 18.
Here the set is composed of five clusters of points; if we enlarge any one of these clusters, we find the enlargement looks similar to the entire set itself. If this similarity is exact - if the zoom is equivalent to the original set - then the set is called _strictly self-similar_. If only statistical properties of interest are repeated, then the set is called _statistically self-similar_. Deciding exactly what counts as a 'statistical property of interest' opens one of the discussions that has prevented agreement on a general definition. Disentangling these interesting details deserves its own _Very Short Introduction to Fractals_; we will content ourselves with some examples. In the late 19th century, fractals were widely discussed by mathematicians including Georg Cantor, although the famous Middle Thirds set that bears his name was first found by an Oxford mathematician named Henry Smith. Fractal entities were often disavowed by their mathematical parents as monstrous curves in the 100 years that followed, just as L. F. Richardson was beginning to quantify the fractal nature of various physical fractals. Both physical and mathematical fractals were more warmly embraced by astronomers, meteorologists, and social scientists. One of the first fractals to bridge the divide - and blur the distinction - between a mathematical space and real-world space appeared about 100 years ago in an attempt to resolve Olbers' paradox.

### A fractal solution to Olbers' paradox

In 1823, the German astronomer Heinrich Olbers encapsulated a centuries-old concern of astronomers in the concise question: 'Why is the night sky dark?' If the universe were infinitely large and more or less uniformly filled with stars, then there would be a balance between the number of stars at a given distance and the light we get from each one of them.
This delicate balance implies that the night sky should be uniformly bright; it would even be difficult to see the Sun against a similarly bright day-time sky. But the night sky is dark. That is Olbers' paradox. Johannes Kepler used this as an argument for a finite number of stars in 1610. Edgar Allan Poe was the first to suggest an argument still in vogue today: that the night sky was dark because there had not been enough time for light from far-away stars to reach the Earth, yet. Writing in 1907, Fournier d'Albe proposed an elegant alternative, suggesting that the distribution of matter in the universe was uniform but in a fractal manner. Fournier illustrated his proposal with the figure reproduced in Figure 18. This set is called the Fournier Universe. It is strictly self-similar: blowing up one of the small cubes by a factor of nine yields an exact duplicate of the original set. Each small cube contains the totality of the whole. The Fournier Universe illustrates a way out of Olbers' paradox: the line Fournier placed in Figure 18 indicates one of many directions in which no other 'star' will ever be found. Fournier did not stop

**18. The Fournier Universe, showing the self-similar structure, as published by Fournier himself in 1907**

at the infinitely large, but also suggested that this cascade actually extended to the infinitely small; he interpreted atoms as micro-verses, which were in turn made of yet smaller particles, and suggested macro-verses in which our galaxies would play the role of atoms. In this way, he proposed one of the few physical fractals with no inner cut-off and no outer cut-off: a cascade that went from the infinitely large to the infinitesimally small in a manner reminiscent of the last frames of the film _Men in Black_.

### Fractals in physics

Big whorls have little whirls, which feed on their velocity. And little whirls have lesser whirls, and so on to viscosity. L. F.
Richardson

Clouds, mountains, and coastlines are common examples of natural fractals: statistically self-similar objects that exist in real space. Interest in generating fractal irregularity is not new: Newton himself recorded an early recipe, noting that when beer is poured into milk and 'the mixture let stand till dry, the surface of the curdled substance will appear as rugged and mountainous as the earth at any place'. Unlike Newton's curdled substance, the fractals of chaos are mathematical objects found in state spaces; they are true fractals as opposed to their physical counterparts. What is the difference? Well, for one thing, a physical fractal only displays the properties of a fractal at certain length scales and not at others. Consider the edge of a cloud: as you look more and more closely, going to smaller and smaller length scales, you'll reach a point at which the boundary is no more; the cloud vanishes into the helter-skelter rush of molecules and there is no boundary to measure. Similarly, a cloud is not self-similar on length scales comparable with the size of the Earth. For physical fractals, fractal concepts break down as we look too closely; these physical cut-offs make it easy to identify old Hollywood special effects using model ships in wave tanks: we can sense the cut-off is at the wrong length scale relative to the 'ships'. Today, film makers in Hollywood and in Wellington have learned enough mathematics to generate computer counterfeits that hide the cut-off better. The Japanese artist Hokusai respected this cut-off in his famous 'Great Wave' print of the 1830s. Physicists have also known this for some time: de Morgan's poem allowed its cascade of fleas to continue _ad infinitum_, while the cascade of whorls in L. F. Richardson's version faces a limit due to viscosity, the term for friction within fluids. Richardson was expert in the theory and observation of turbulence.
He once threw parsnips off one end of Cape Cod canal at regular intervals, using the time of their arrival at a bridge on the other end of the canal to quantify how the fluid dispersed as it moved downstream. He also computed (by hand!) the first numerical weather forecast, during the First World War. A Quaker who left the Met Office in the First World War to become an ambulance driver in France, Richardson later became interested in measuring the length of the border between nations in order to test his theory that this influenced the likelihood of their going to war. He identified an odd effect when measuring the same border on different maps: the border between Spain and Portugal was much longer when measured on the map of Portugal than it was when measured on the map of Spain! Measuring coastlines of island nations like Britain, he found that the length of the coastline increased as the span of the callipers he walked along the coast to measure it decreased, and he also noted an unexpected relationship between the area of an island and its perimeter as both vary when measured on different scales. Richardson demonstrated that these variations with length scale followed a very regular pattern which could be summarized by a single number for a particular boundary: an exponent that related the length of a curve to the length scale used to measure it. Following fundamental work by Mandelbrot, this number is called the _fractal dimension_ of the boundary. Richardson developed a variety of methods to estimate the fractal dimension of physical fractals. The area-perimeter method quantifies how the area and perimeter both change under higher and higher resolution. For one particular object, such as a single cloud, this relationship also yields the fractal dimension of its border.
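Richardson's calliper method can be sketched on a mathematical 'coastline' whose answer is known in advance. Below, a Koch curve stands in for the coast (an assumption for illustration; its dimension is known to be log 4/log 3, about 1.26), and walking callipers of different openings along it recovers roughly that exponent:

```python
import math

def koch(p, q, depth, out):
    """Replace segment p->q with the four-segment Koch motif, recursively."""
    if depth == 0:
        out.append(q)
        return
    (x0, y0), (x1, y1) = p, q
    dx, dy = (x1 - x0) / 3.0, (y1 - y0) / 3.0
    a = (x0 + dx, y0 + dy)                       # one third of the way along
    b = (x1 - dx, y1 - dy)                       # two thirds of the way along
    mid = ((x0 + x1) / 2.0, (y0 + y1) / 2.0)
    peak = (mid[0] - dy * math.sqrt(3) / 2.0,    # apex of the triangular bump
            mid[1] + dx * math.sqrt(3) / 2.0)
    for s, t in ((p, a), (a, peak), (peak, b), (b, q)):
        koch(s, t, depth - 1, out)

points = [(0.0, 0.0)]
koch((0.0, 0.0), (1.0, 0.0), 6, points)

def calliper_steps(pts, d):
    """Walk callipers of opening d along the curve, counting the steps taken."""
    steps, anchor = 0, pts[0]
    for p in pts[1:]:
        if math.hypot(p[0] - anchor[0], p[1] - anchor[1]) >= d:
            steps += 1
            anchor = p
    return steps

# Richardson's regular pattern: steps(d) grows like d**(-D) as d shrinks.
ds = [0.02, 0.04, 0.08, 0.16]
xs = [math.log(1.0 / d) for d in ds]
ys = [math.log(calliper_steps(points, d)) for d in ds]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
D = sum((u - mx) * (v - my) for u, v in zip(xs, ys)) / sum((u - mx) ** 2 for u in xs)
print(D)   # roughly log 4 / log 3, about 1.26
```

The fitted exponent wobbles a little with the choice of calliper openings, which is exactly the sort of wrinkle Richardson had to contend with on real maps.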
When we look at many _different_ clouds at the _same_ resolution, as in a photograph from space, a similar relationship between areas and perimeters emerges; we do not understand why this alternative area-perimeter relation seems to hold for collections of different-sized clouds, given that clouds are famous for not all looking the same.

### Fractals in state space

We next construct a rather artificial mathematical system designed to dispel one of the most resilient and misleading myths of chaos: that finding a fractal set in state space indicates deterministic dynamics. The Tripling Tent Map is: if X is less than a half, then take 3X as the new value of X; otherwise take 3 minus 3X as the new value of X. Almost every initial state between zero and one flies far away from the origin; we will ignore these and focus on the infinite number of initial conditions which remain forever between zero and one. (We ignore the apparent paradox due to the loose use of 'infinity' here, but note Newton's warning that 'the principle that all infinities are equal is a precarious one'.) The Tripling Tent Map is chaotic: it is clearly deterministic, the trajectories of interest are recurrent, and the separation between infinitesimally close points increases by a factor of three on each iteration, which implies sensitive dependence. A time series from the Tripling Tent Map, along with one from the stochastic Middle Thirds IFS Map, are shown in Figure 19. Visually, we see hints that the chaotic map is easier to forecast: small values of \(\mathbf{X}\) are _always_ followed by small values of \(\mathbf{X}\).

**19. A time series from (a) the stochastic Middle Thirds IFS Map and (b) the deterministic Tripling Tent Map. The lower insets show a summary of all the points visited: approximations to the Middle Thirds Cantor set in each case**

The two small insets at the bottom of Figure 19 each show a set of points visited by a long trajectory from one of the systems; they look very similar, and in fact both reflect points from the Middle Thirds Cantor set. The two dynamical systems each visit the same fractal set, so we can never distinguish the deterministic system from the stochastic system if we only look at the dimension of the set of points each system visits; but is it any surprise that to understand the dynamics we have to examine how the system moves about, not only where it has been? This simple counter-example slays the myth noted above; while chaotic systems may often move on fractal sets, detecting a fractal set indicates neither determinism nor chaotic dynamics. Finding fractals in carefully crafted mathematical maps is not so surprising, as mathematicians are clever enough to design maps which create fractals. One of the neatest things about dissipative chaos is that fractals appear without the benefit of intelligent design. The Henon Map is the classic example. Mathematically speaking, it represents an entire class of interesting models; there is nothing particularly 'fractal-looking' in its definition, as there is in the Middle Thirds IFS Map. Figure 20 shows a series of zooms in which, as if by magic, self-similar structures spring out. Surely this is one of the most amazing things about nonlinear dynamical systems. There is no hint of artificial design in the Henon Map, and fractal structure appears commonplace in the attractors of dissipative chaotic systems. It is not required for chaos, nor vice versa, but it is common.
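Where might such zooms be centred? A natural choice is a fixed point of the Henon Map, a state satisfying x = 1 − αx² + y and y = βx simultaneously. A minimal sketch, assuming the usual parameter values α = 1.4 and β = 0.3:

```python
import math

alpha, beta = 1.4, 0.3

# Substituting y = beta*x into x = 1 - alpha*x**2 + y gives the quadratic
# alpha*x**2 + (1 - beta)*x - 1 = 0; take the root that lies on the attractor.
x = (-(1 - beta) + math.sqrt((1 - beta) ** 2 + 4 * alpha)) / (2 * alpha)
y = beta * x
print(x, y)   # a fixed point sitting on the attractor

# One iteration of the map returns the same point...
x2, y2 = 1 - alpha * x * x + y, beta * x
assert abs(x2 - x) < 1e-12 and abs(y2 - y) < 1e-12

# ...yet the point is unstable: the Jacobian [[-2*alpha*x, 1], [beta, 0]]
# has an eigenvalue larger than one in size, so nearby states move away.
tr, det = -2 * alpha * x, -beta
lam = (tr - math.sqrt(tr * tr - 4 * det)) / 2
print(abs(lam))   # greater than one
```

The local behaviour of the map at this point sets the magnification factor that makes repeated zooms line up so strikingly.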
Like all magic, we can understand how the trick works, at least after the fact: we have chosen to zoom in about a fixed point of the Henon Map, and looking at the properties of the map very, very close to this point reveals how much to zoom in order to make its self-similarity so striking. The details of the repeated structure, a thick line and two thinner lines, depend on what happens far away from this point. But if the Henon Map is really chaotic and the computer trajectory used to make these pictures is realistic, then we have a fractal attractor naturally. The traditional theory of turbulence in state space reflected Richardson's poem: it was thought that more and more periodic modes would be excited, and tracing the linear sum of all those oscillations would require a very high-dimensional state space. So most physicists were expecting the attractors of turbulence to be high-dimensional doughnuts, or mathematically speaking, tori. In the early 1970s, David Ruelle and Floris Takens were looking for alternatives to smooth high-dimensional tori and ran into lower-dimensional fractal attractors; they found the fractal attractors 'strange'. Today, the word 'strange' is used to describe the geometry of the attractor, specifically the fact that it is a fractal, while the word 'chaos' is used to describe the dynamics of the system. It is a useful distinction. The precise origin of the phrase '_strange attractor_' has been lost, but the term has proven an inspiring and appropriate label for these objects of mathematical physics. Since Hamiltonian systems have no attractors at all, they have no strange attractors. Nevertheless, chaotic time series from Hamiltonian systems often develop intricate patterns with stark inhomogeneity and hints of self-similarity, called _strange accumulators_, which persist for as long as we run our computers. Their ultimate fate remains unknown.
### Fractal dimensions

Counting the number of components in the state vector tells us the dimension of the state space. But how would we estimate the dimension of a set of points if those points do not define a boundary: the points that form a strange attractor, for example? One approach, reminiscent of the area-perimeter relation, is to completely cover the set with boxes of a given size, and see how the number of boxes required increases as the size of the individual boxes gets smaller. Another approach considers how the number of points changes, on average, as you look inside a ball centred on a random point and decrease the radius of the ball. To avoid complications that arise near the edge of an attractor, our mathematician will consider only balls with a vanishingly small radius, r. We find familiar-looking results: near a random point on a line the number of points is proportional to \(r^{1}\), about a point in a plane it is proportional to \(\pi r^{2}\), and about a point from the set which defines a solid cube, it is proportional to \(\frac{4}{3}\pi r^{3}\). In each case, the exponent of r reflects the dimension of the set: one if the set forms a line, two if a plane, three if a solid. This method can be applied to fractal sets, although fractals tend to have holes, called lacunae, on all scales. While dealing with these logarithmic wrinkles is non-trivial, we can compute the dimension of strictly self-similar sets exactly, and immediately notice that the dimension of a fractal is often not a whole number. For the Fournier Universe, the dimension is \(\sim\)0.7325 (it equals log 5/log 9), while the Middle Thirds Cantor set has dimension \(\sim\)0.6309 (it equals log 2/log 3); in each case, the dimension is a fraction bigger than zero yet less than one. Mandelbrot took the 'fract' in 'fraction' as the root of the word 'fractal'. What is the dimension of the Henon attractor?
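The box-counting idea can be sketched in a few lines for the Middle Thirds Cantor set. This illustration is mine, not the book's: it uses exact integer arithmetic to cover the level-k construction with boxes of width \(3^{-k}\), and the resulting log-log slope recovers log 2/log 3.

```python
from math import log

def cantor_endpoints(level):
    """Left endpoints of the level-k intervals of the Middle Thirds
    Cantor set, measured in units of 3**-level (exact integers)."""
    points = [0]
    for _ in range(level):
        # each interval splits into two: keep the left and right thirds
        points = [3 * p for p in points] + [3 * p + 2 for p in points]
    return points

def box_count_dimension(level):
    """Cover the level-k construction with boxes of width 3**-level;
    each surviving interval occupies exactly one box (2**level in all),
    so log(count)/log(1/width) gives log 2 / log 3."""
    n_boxes = len(set(cantor_endpoints(level)))
    return log(n_boxes) / log(3 ** level)

print(box_count_dimension(10))  # ~0.6309, i.e. log 2 / log 3
```

Because the set is strictly self-similar, the count is exactly \(2^k\) boxes at scale \(3^{-k}\), and the estimate is already exact at every level.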
Our best estimate is ~1.26, but while we know there is an attractor, we do not know for certain whether or not, in the long run, this attractor is merely a long periodic loop. In maps, every periodic loop consists of only a finite number of points and so has dimension zero. To see this, just consider balls with a radius r smaller than the distance between the closest pair of points on the loop; the number of points in each ball is constant (and equal to one), which we can write as proportional to \(r^{0}\), and so each has dimension zero. In Chapter 7, we shall see why it is hard to prove what happens in the long run using a computer simulation. First, we will take a closer look at the challenges to quantifying the dynamics of uncertainty even when we know the mathematical system perfectly. For real-world systems, we only have noisy observations, and the problem is harder still.

## Chapter 6 Quantifying the dynamics of uncertainty

Chaos exposes our prejudices when we examine the dynamics of uncertainty. Despite the hype regarding unpredictability, we shall see that the quantities used to establish chaos place no restriction whatsoever on the accuracy of today's forecast: chaos does not imply that prediction is hopeless. We can see why the link between chaos and predictability has been so badly overstated by looking at the history of the statistics used to measure uncertainty. Additional statistics are available today. Once scientists touch on uncertainty and predictability, they are honour-bound to clarify the relevance of their forecasts and the statistics used to quantify their uncertainty. The older man looking out of la Tour's painting may have provided the younger man with accurate tables of probabilities for every hand from a deck of 52 cards, but he knows those probabilities do not reflect the game being played. Likewise, our 21st-century demon can quantify the dynamics of uncertainty quite accurately, given her perfect model, but we know we do not have a perfect model.
Given only a collection of imperfect models, how might we relate the diversity of their behaviours to our uncertainty about the future state of the real world?

### The decay of certainty: information without correlation

When it comes to predicting what a system will do next, data on the recent state of the system often provide more information than data on some long past state of the system. In the 1920s, Yule wanted to quantify the extent to which data on this year's Sun spots provide more information about the number of spots that will appear next year than ten-year-old data do. Such a statistic would also allow him to quantitatively compare properties of the original data with those of time series generated by models. He invented what is now called the auto-correlation function (or ACF), which measures the linear correlation between states k iterations apart. When k is zero the ACF is one, since any number is perfectly correlated with itself. If the time series reflects a periodic cycle, the ACF decreases from one as k increases, and then returns to equal one whenever k is an exact multiple of the period. Given data from a linear stochastic system the ACF is of great value, but as we will soon see, it is of less use when faced with observations from a nonlinear system. Nevertheless, some statisticians went so far as to define determinism as linear correlation; many are still reeling from this misstep. It is well known that correlation does not imply causation; the study of chaos has made it clear that causation does not imply (linear) correlation either. The correlation between consecutive states of the Full Logistic Map is zero despite the fact that the next state is completely determined by the current state. In fact, its ACF is zero for every separation in time. How then are we to detect relationships in nonlinear systems, much less quantify predictability, if a mainstay of a century of statistical analysis is blind to such visible relationships?
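The claim about the Full Logistic Map is easy to check numerically. The sketch below is mine (the starting point and trajectory length are arbitrary choices): it iterates \(x \to 4x(1-x)\), a fully deterministic rule, and estimates the ACF at a few separations.

```python
import numpy as np

# Iterate the Full Logistic Map x -> 4x(1-x): fully deterministic.
xs = []
x = 0.1234
for _ in range(100_000):
    xs.append(x)
    x = 4.0 * x * (1.0 - x)
xs = np.array(xs)

def acf(series, k):
    """Linear correlation between states k iterations apart."""
    return np.corrcoef(series[:-k], series[k:])[0, 1]

for k in (1, 2, 5):
    print(k, round(float(acf(xs, k)), 3))  # each near zero
```

Despite the next state being completely determined by the current one, the estimated linear correlations all sit within sampling noise of zero.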
To answer this question, we first introduce base two.

### Bits and pieces of information

Computers tend to record numbers in binary notation: rather than use the ten symbols (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9) we learn in school, they use only the first two (0 and 1). Instead of 1000, 100, and 10 representing \(10^{3}\), \(10^{2}\), and \(10^{1}\), in binary these symbols represent \(2^{3}\), \(2^{2}\), and \(2^{1}\), that is, eight, four, and two. The symbol 11 in base two represents \(2^{1} + 2^{0}\), i.e. three, while 0.10 represents \(2^{-1}\) (one-half) and 0.001 represents \(2^{-3}\) (one-eighth). Hence the joke that there are ten kinds of mathematicians in the world: those who understand binary notation and those who do not. Just as multiplying by ten (10) is easy in base ten, multiplying by two (10 in base two) is easy in base two: just shift all the bits to the left, so that 1.0100101011 becomes 10.100101011; that is where the Shift Map gets its name. Similarly dividing by two: it is just a shift to the right. A computer usually uses a fixed number of bits for each number, and does not waste valuable memory space storing the 'decimal' point. This makes dividing a bit curious: on a computer, dividing 0010100101100 by two yields 0001010010110; but then dividing 0010100101101 by two yields the same result! Multiplying 000101001010110 by two yields 00101001010110Q, where Q is a new bit the computer has to make up. So it is for every shift left: a new bit is required in the empty place on the far right. In dividing by two, a zero correctly appears in the empty place on the far left, but any bits that are shifted out of the right side of this window are lost forever into the bit bucket. This introduces an annoying feature: if we take a number, divide by two, and then multiply by two, we may not get back to the original number we started with.
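The bit bucket is easy to demonstrate with Python integers. This sketch is mine: the 13-bit window width is chosen simply to match the examples above, and `halve` and `double` are hypothetical helper names.

```python
WIDTH = 13                # pretend the computer stores 13-bit numbers
MASK = (1 << WIDTH) - 1   # the fixed-width window

def halve(n):
    """Divide by two: shift right; the rightmost bit falls into the bit bucket."""
    return n >> 1

def double(n):
    """Multiply by two: shift left; a new bit (here a 0) is made up on the
    far right, and any bit pushed past the window's left edge is dropped."""
    return (n << 1) & MASK

a = 0b0010100101100       # even
b = 0b0010100101101       # odd: differs from a only in the last bit
assert halve(a) == halve(b)       # both divisions give the same result
assert double(halve(b)) != b      # halve-then-double loses the odd bit
assert double(halve(a)) == a      # ...but is harmless for even numbers
```

Dividing the two neighbouring numbers gives identical results, and doubling afterwards cannot resurrect the lost bit.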
The discussion thus far leads to differing visions of the growth and decay of uncertainty - or creation of information - in our various kinds of mathematical dynamical systems: random systems, chaotic mathematical systems, and computerized versions of chaotic mathematical systems. The evolution of the state of a system is often visualized as a tape passing through a black box. What happens inside the box depends on what kind of dynamical system we are watching. As the tape exits the box we see the bits written on it; the question of whether the tape is blank when it enters the back of the box, or if it already has the bits written on it, leads to spirited discussions in ivory tower coffee rooms. What are the options? If the dynamics are random, then the tape comes into the box blank and leaves with a randomly determined bit stamped on it. In this case, any pattern we believe we see in the bits as the tape ticks constantly forward is a mirage. If the dynamical system is deterministic, the bits are already printed on the tape (and unlike us, Laplace's demon is in a position to see all of them already); we cannot see them clearly until they pass through the box, but they are already there. Creating all those bits of information is something like a miracle either way, and it seems to come down to personal preference whether you prefer one big miracle or a regular stream of small ones: in a deterministic system the picture corresponds to creating an infinite number of bits all at once: the irrational number which is the initial state; in the random system, it looks as if new bits are created at each iteration. In practice, it certainly seems that we do have some control over how accurately we measure something, suggesting that the tape is pre-printed. There is nothing in the definition of a chaotic system that prevents the tape from running backwards for a while.
When this happens, prediction gets simple for a while: since we have seen the tape back up, we already know the next bits that will come out when it runs forward again. When we try to cast this image into the form of a computational system, we run into difficulty. The tape cannot really be blank before it comes into the box: the computer has to 'make up' those new bits with some deterministic rule when it left-shifts, so they are effectively already printed on the tape before it enters the box. More interesting is what happens in a region where the tape backs up, since the computer cannot 'remember' any bits it loses on a right-shift. For constant slope maps we are always shifting left or always shifting right, so the tape never backs up. The computer simulation is still a deterministic system, although the variety of tapes it can produce is much less rich than the tapes of the deterministic mathematical map it is simulating. If the map being simulated has regions of shrinking uncertainty, then there is a transient period during which the tape backs up and the computer cannot know which bits were written on it; when the tape runs forward again the computer uses its internal rule to make up new bits, and we may find a 0 and a 1 overprinted on the tape as it comes out of the box a second time! We discuss other weird things that happen in computer simulations of chaotic mathematical systems in Chapter 7.

### Statistics for predicting predictability

One of the insights of chaos is to focus on information content. In linear systems variance reflects information content. Information content is more subtle in nonlinear systems, where size is not the only indicator of importance. How else might we measure information? Consider the points on a circle in the X,Y plane with a radius equal to one, and pick an angle at random. Knowing the value of X tells us a great deal about the value of Y - it tells us that Y is one of two values.
Likewise, if we do not know all of the bits needed to completely represent X, the more bits of X we learn, the more bits of Y we know. Although we will never be able to decide between two alternative locations of Y, our uncertainty regarding the two possible locations shrinks as we measure X more and more accurately. Not surprisingly, X and Y have a linear correlation of zero in this case. Other statistical measures have been developed to quantify just how much knowing one value tells you about the other. _Mutual Information_, for instance, reflects how many bits of Y you learn, on average, when you learn another bit of X. For the circle, if you know the first five bits of X, you know four of the first five bits of Y; if you know 20 bits of X, you know 19 of Y; and if you know all the bits of X, you know all but one of the bits of Y. Without that missing bit, we can't tell which of two possible values of Y is the actual value of Y. And unfortunately, from the linear-thinking point of view, the bit you are missing is the value of the 'largest' bit in Y. Nevertheless, it is more than a bit misleading to interpret the fact that the correlation is zero to mean you learn nothing about Y upon learning the value of X. What does Mutual Information tell us about the dynamics of the Logistic Map? Mutual Information will reflect the fact that knowing one value of X exactly gives us complete information on future values of X, while given a finite-precision measurement of X, it reflects how much we know, on average, about a future measurement of X. In the presence of observational noise we would tend to know less about future values of X the further they fall in the future, since the corresponding bits of the current value of X will be obscured by the noise. So Mutual Information tends to decay as the separation in time increases, while the linear correlation coefficient is zero for all separations (except zero).
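The circle example can be illustrated numerically. This is my own rough sketch, using a simple binned estimator of Mutual Information (the bin count and sample size are arbitrary choices, and binned estimators are known to be biased), but it makes the contrast plain: zero linear correlation alongside clearly positive Mutual Information.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 100_000)
x, y = np.cos(theta), np.sin(theta)   # random points on the unit circle

# Linear correlation between X and Y is (essentially) zero...
print(round(float(np.corrcoef(x, y)[0, 1]), 3))

# ...yet a binned estimate of Mutual Information is clearly positive.
def mutual_information_bits(x, y, bins=32):
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                     # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)           # marginal of X
    py = pxy.sum(axis=0, keepdims=True)           # marginal of Y
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

print(round(mutual_information_bits(x, y), 2))    # several bits
```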
Mutual Information is one useful tool; the development of custom-made statistics to use in particular applications is a growth industry within nonlinear dynamics. It is important to know exactly what these new statistics are telling us, and it is equally important to accept that there is more to say than traditional statistics can tell us. Our model of the noise gives us an idea of our current uncertainty, so one measure of predictability would be the time we expect that uncertainty to double. We must avoid the trap of linear thinking that suggests the quadrupling time will be twice the doubling time: in a nonlinear system it need not be. Since we do not know which time will be of interest (the doubling-time, tripling-time, quadrupling-time, or...), we will simply refer to the q-tupling time near a particular initial condition. The distribution of these q-tupling times is relevant to predictability: it directly reflects the time we expect our uncertainty in each particular forecast to take to pass through a given threshold of interest to us. The average uncertainty doubling time gives the same information averaged over forecasts from this model. It is convenient to have a single number, but this average may not apply to any initial state at all. The average uncertainty doubling time is a useful statistic of predictability. But the definition of mathematical chaos is not made in relation to doubling (or any q-tupling) time statistics, but rather in relation to _Lyapunov exponents_, which we define below. This is one reason that chaos and predictability are not as closely related as they are commonly thought to be. The average doubling time gives a more practical indication of predictability than the leading Lyapunov exponent, but it lacks a major impractical advantage which mathematicians value highly and which, as we shall see, Lyapunov exponents do possess. Chaos is defined in the long run. Uniform exponential growth of uncertainty is found only in the simplest chaotic systems.
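How widely doubling times can vary from one initial state to another is easy to see in a toy calculation. This sketch is mine (the Full Logistic Map, the tiny initial offset, and the threshold are all arbitrary choices): it follows two nearby trajectories and records the first iteration at which their separation has at least doubled.

```python
# Distribution of uncertainty doubling times for the Full Logistic Map
# x -> 4x(1-x): follow two nearby trajectories and record the first
# iteration at which their separation has at least doubled.
def doubling_time(x0, eps=1e-9, max_iter=50):
    a, b = x0, x0 + eps
    for n in range(1, max_iter + 1):
        a, b = 4 * a * (1 - a), 4 * b * (1 - b)
        if abs(a - b) >= 2 * eps:
            return n
    return None

times = [doubling_time(0.01 + 0.98 * i / 500) for i in range(500)]
times = [t for t in times if t is not None]
print(min(times), max(times), sum(times) / len(times))
```

Near the steep ends of the parabola the uncertainty doubles in a single iteration; near one-half it first shrinks, and doubling takes several iterations — exactly the state-dependence the average hides.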
Indeed, uniform growth is rare amongst chaotic systems, which usually display only _effective-exponential growth_, or equivalently _exponential-on-average_ growth. The average is taken in the limit of an infinite number of iterations. The number we use to quantify this growth is called the _Lyapunov Exponent_. If the growth is a pure exponential, not just exponential-on-average, then we can quantify it as two raised to the power \(\lambda t\), that is \(2^{\lambda t}\), where t is time and \(\lambda\) is the Lyapunov exponent. The Lyapunov exponent has units of bits per iteration, and a positive exponent gives the number of bits by which our uncertainty has grown, _on average_, after each iteration. A system has as many Lyapunov exponents as there are directions in its state space, which is the same as the number of components that make up the state. For convenience they are listed in decreasing order, and the first Lyapunov exponent, the largest one, is often called the _leading Lyapunov exponent_. In the sixties, the Russian mathematician Oseledec established that Lyapunov exponents exist for a wide variety of systems and proved that in many systems _almost all_ initial conditions would share the same Lyapunov exponents. While Lyapunov exponents are defined by following the nonlinear trajectory of a system in state space, they only reflect the growth of uncertainty infinitesimally close to that nonlinear reference trajectory, and as long as our uncertainty is infinitesimal it can hardly damage our forecasts. Inasmuch as computing Lyapunov exponents requires averaging over infinite durations and restricts attention to infinitesimal uncertainties, adopting these exponents in the technical definition of mathematical chaos places this burden on identifying a system as chaotic.
The advantage here is that these same properties make the Lyapunov exponent a robust reflection of the underlying dynamical system; we can take the state space and stretch it, fold it, twist it, and apply any smooth deformation, and the Lyapunov exponents do not change. Mathematicians prize that kind of consistency, and so Lyapunov exponents define whether or not a system has sensitive dependence. If the leading Lyapunov exponent is positive, then we have _exponential-on-average_ growth of infinitesimal uncertainties, and a positive Lyapunov exponent is taken to be a necessary condition for chaos. Nevertheless, the same properties that give Lyapunov exponents their robustness make them rather difficult to measure in mathematical systems, and perhaps impossible to measure in physical dynamical systems. Ideally that should help us remain clear on the difference between mathematical maps and physical systems. While there is no alternative with the mathematically appealing robustness of Lyapunov exponents, there are more relevant quantities for quantifying predictability. Knowing the average time it took a train to travel from Oxford to central London last week is more likely to provide insight into how long it will take today, than would dividing the distance between Oxford and London by the average speed of all trains which ever ran in England. Lyapunov exponents give us an average speed, while doubling times give us average times. By their very nature, Lyapunov exponents are far removed from any particular forecast. Look at the menagerie of maps in Figure 8 (page 40): how would we calculate their Lyapunov exponents or doubling times? We wish to quantify the stretching (or shrinking) that goes on near a reference trajectory, but if our map is nonlinear then the amount of stretching will depend on how far we are from the reference trajectory. Requiring the uncertainty to remain infinitesimally close to the trajectory circumvents this potential difficulty. 
For one-dimensional systems we can then legitimately look at the slope of the map at each point. We are interested in how uncertainty magnifies with time. To combine magnifications we have to multiply the individual magnifications together. If my credit card bill doubles one day and then triples the next, the total increase is six times what I started with, not five. This means that to compute the average magnification per iteration we must take a _geometric average_. Suppose the uncertainty increases by a factor of three in the first iteration, then by two, then by four, then by one third, and then by four: overall that is a factor of \(32\) over these five iterations, so on average the increase is by a factor of two per iteration, since the fifth root of \(32\) is two, that is: \(2\times 2\times 2\times 2\times 2=32\). We are not interested in the arithmetic average: \(32\) divided by \(5\) is \(6.4\), and our uncertainty _never_ grew by that factor on any one day. Also note that although the average growth is by a factor of two per day, the actual factors were \(3\), \(2\), \(4\), \(\frac{1}{3}\), and \(4\): the growth was not uniform, and on one day the uncertainty actually shrank. If we can bet on the quality of our forecasts in a chaotic system, and if we can bet different amounts on different days, then there may be times when we are _much_ more confident in the future. Another myth bites the dust: chaos does not imply prediction is hopeless. In fact, if you can bet against someone who firmly believes that predicting chaos is uniformly hopeless, you are in a position to educate them. The fact that some of the simplest cases (and most common examples) of chaos have constant slopes has led to the overgeneralization that chaos is uniformly unpredictable. Looking back at the six chaotic systems in Figure 8 (page 40), we notice that in four of them (Shift Map, Tent Map, Quarter Map, and Tripling Tent Map), the magnitude of the slope is always the same.
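The arithmetic above can be checked in a few lines, using the factors from the text:

```python
from math import prod

factors = [3, 2, 4, 1/3, 4]          # daily magnifications of uncertainty
total = prod(factors)                # 32: overall growth after five days
geometric = total ** (1 / len(factors))   # fifth root of 32
arithmetic = total / len(factors)         # the misleading alternative

print(round(total, 9), round(geometric, 9), round(arithmetic, 9))
```

The geometric average comes out as a factor of two per day, while dividing the total by five gives 6.4, a factor by which the uncertainty never grew on any single day.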
On the other hand, in the Logistic Map and the Moran-Ricker Map, the slope varies a great deal for different values of X. Since a slope with absolute value less than one indicates shrinking uncertainty, the Logistic Map shows strong growth of uncertainty at values of X near zero or near one, and shrinking of uncertainty for values of X near one-half! Likewise, the Moran-Ricker Map shows strong growth of uncertainty near zero and at values near one, where the magnitude of the slope is also large, but shrinking at intermediate and high values of X, where the slope is near zero. How might we determine an average that extends into the infinite future? Like many mathematical difficulties, the easiest way to solve this one is to cheat. One reason that the Shift Map and the Tent Map are so popular in nonlinear dynamics is that while the trajectories are chaotic, the magnification of uncertainty is the same at each state. For the Shift Map, every infinitesimal uncertainty increases by a factor of two on each iteration. So the apparently intractable task of taking an average as time goes to infinity becomes trivial: if the uncertainty grows by a factor of two on every iteration, then it grows by a factor of two on average, and the Shift Map has a Lyapunov exponent of one bit per iteration. Computing the Lyapunov exponent of the Tent Map is almost as easy: the magnification is either a factor of two or a factor of minus two, depending on which half of the 'tent' we are in. The minus sign does not affect the size of the magnification: it merely indicates that the orientation has flipped from left to right, and we can safely ignore this. Again we have one bit per iteration. The same trick works for the Tripling Tent Map, but it has a larger slope of three, and a Lyapunov exponent of ~1.58 bits per iteration (the exact value is \(\log_{2}(3)\)). Why do we keep taking logarithms instead of just talking about 'magnifying factors' (Lyapunov numbers)? And why base 2 logarithms?
This is a personal choice, usually justified by its connection to binary arithmetic, its use in computers, a preference for saying 'one bit per iteration' over saying 'about 0.693147 nats per iteration', and the fact that multiplying by two is relatively easy for humans. The graph of the Full Logistic Map reveals a parabola, so the magnification at different states varies, and our trick of taking the average of a constant appears to fail. How might we take the limit into the infinite future? Our physicist would simply fire up a computer and compute finite-time Lyapunov exponents for many different states. Specifically, he would compute the geometric average magnification over two iterations for different values of X, then the distribution corresponding to three iterations, then four iterations, and so on. If this distribution converges towards a single value, then he might be willing to count this as an estimate of the Lyapunov exponent, as long as the computer is not run so long as to be unreliable. As it turns out, this distribution converges faster than the Law of Large Numbers would suggest. Our physicist is happy with this estimated value, which turns out to be near one bit per iteration. Our mathematician, of course, would not dream of making such an extrapolation. She sees no analogy between a finite number of digital computations, each of which is inexact, and an exact calculation extended into the infinite future. From her point of view, the value of the Lyapunov exponent at most values of \(a\) remains unknown, even today. But the Full Logistic Map is special, and demonstrates the second trick of mathematicians: substituting \(\sin^{2}\theta\) for X in the rule that defines the Full Logistic Map, and using some identities from trigonometry, she can show that the Full Logistic Map _is_ the Shift Map.
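The physicist's computation can be sketched as follows: average the base-2 logarithm of the local magnification, \(|f'(x)| = |4 - 8x|\) for the Full Logistic Map, along a long trajectory. The trajectory length and starting point below are my arbitrary choices; this is exactly the kind of finite, inexact computation the mathematician distrusts.

```python
from math import log2

# Finite-time Lyapunov exponent of the Full Logistic Map x -> 4x(1-x):
# geometric-average the local stretching factor |f'(x)| = |4 - 8x|
# along a trajectory, i.e. arithmetic-average its base-2 logarithm.
def finite_time_exponent(x0, n_iter):
    x, total = x0, 0.0
    for _ in range(n_iter):
        total += log2(abs(4.0 - 8.0 * x))
        x = 4.0 * x * (1.0 - x)
    return total / n_iter

print(finite_time_exponent(0.1234, 100_000))  # close to 1 bit per iteration
```

The estimate settles near one bit per iteration, the value the conjugacy with the Shift Map confirms exactly.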
Since the Lyapunov exponents do not change under this kind of mathematical manipulation, she can prove that the Lyapunov exponent really is equal to one bit per iteration, and explain the violation of the Law of Large Numbers in a footnote.

### Lyapunov exponents in higher dimensions

If the model state has more than one component, then uncertainty in one of its components can contribute to future uncertainty in other components. This brings in a whole new set of mathematical issues, since the order in which you multiply things together becomes important. We will initially avoid these complications by considering examples where the uncertainties in different components do not mix, but we must be careful not to forget that these are very special cases! The state space of the _Baker's Map_ has two components, x and y, as shown in Figure 21. It maps a two-dimensional square back into itself exactly with the rule:

If x is less than one-half: multiply x by 2 to get the new value of x, and divide y by 2 to get the new y.

Otherwise: multiply x by 2 and subtract one to get the new value of x, and divide y by 2 and add one-half to get the new y.

In the Baker's Map, any uncertainty in the horizontal (x) component of our state will double on each iteration, while uncertainties in the vertical (y) component are cut in half. Since this is true on every step, it is also true on average. The average uncertainty doubling time is one iteration, and the Baker's Map has one Lyapunov exponent equal to one bit per iteration, and one exponent equal to minus one bit per iteration.

**21. Schematic showing how points in the square evolve forward under one iteration of (left) Baker's Map and (right) a Baker's Apprentice Map**

The positive Lyapunov exponent corresponds to growing uncertainty, while the negative one corresponds to shrinking uncertainty.
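The rule above transcribes directly into code (this sketch is mine). Because the local stretching factor is 2 in x and 1/2 in y on either branch, the two exponents follow immediately, with no infinite average needed:

```python
from math import log2

def bakers_map(x, y):
    """One iteration of the Baker's Map on the unit square."""
    if x < 0.5:
        return 2.0 * x, y / 2.0
    return 2.0 * x - 1.0, y / 2.0 + 0.5

# Whichever branch is taken, an infinitesimal x-uncertainty is multiplied
# by 2 and a y-uncertainty by 1/2, so the exponents are exactly +1 and -1.
n, lam_x, lam_y = 1000, 0.0, 0.0
x, y = 0.1234, 0.4321
for _ in range(n):
    lam_x += log2(2.0)       # |d(new x)/dx| = 2 on either branch
    lam_y += log2(0.5)       # |d(new y)/dy| = 1/2 on either branch
    x, y = bakers_map(x, y)
    assert 0.0 <= x < 1.0 and 0.0 <= y < 1.0  # stays in the square

print(lam_x / n, lam_y / n)  # 1.0 -1.0
```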
For every state, there is a direction associated with each of these exponents; in this very special case, these directions are the same for all states, and thus they never mix uncertainties in x with uncertainties in y. The Baker's Map itself was carefully crafted to avoid the difficulties caused by uncertainty in one component contributing to uncertainty in another component. In _almost all_ two-dimensional maps, of course, such uncertainties do mix, so usually we cannot compute the Lyapunov exponents exactly at all! We can see why one might think predicting chaos is hopeless from the left panels of Figure 22, which show the evolution of a mouse-shaped ensemble over several iterations of the map. But remember that this map is a very special case: our hypothetical baker is very skilled in kneading, and can uniformly stretch the dough by a factor of two in the horizontal so that it shrinks by a factor of two in the vertical, before returning the lot back into the unit square. It is useful to contrast the Baker's Map with various members of the family of Baker's Apprentice Maps. Our hypothetical apprentices are each less uniform, stretching a small portion of the dough on the right side of the square a great deal, while hardly stretching the majority of the dough to the left at all, as shown in Figure 21. Luckily, all members of the Apprentice family are skilled enough not to mix the uncertainty in one component into another, so we can compute doubling times and Lyapunov exponents of any member. As it turns out, every Apprentice Map has a leading Lyapunov exponent greater than that of the Baker's Map. So _if_ we adopt the leading Lyapunov exponent as our measure of chaos, then the Apprentice Maps are each 'more chaotic' than the Baker's Map. This conclusion might cause some unease, when considered in light of Figure 22, which shows, side by side, the evolution of an ensemble of points under the Baker's Map and also under Apprentice number four.

**22. A mouse-like ensemble of initial states (top) and four frames, showing in parallel the evolution of this ensemble under both the Baker's Map (left) and the fourth Baker's Apprentice Map (right)**

The average doubling time of an Apprentice Map can be much greater than that of the Baker's Map, even though its Lyapunov exponent is also greater than that of the Baker's Map. This is true for an entire family of Apprentice Maps, and we can find an Apprentice Map with an average doubling time larger than any number one cares to name. Perhaps we should reconsider the connection between chaos and predictability?

### Positive Lyapunov exponents with shrinking uncertainties

As long as our uncertainty is smaller than the smallest number we can think of, it can hardly pose any practical limit on our forecasts, and as soon as that uncertainty grows to be measurable, its evolution need no longer be reflected by Lyapunov exponents in any way whatsoever. Even in the infinitesimal case, the Baker's Apprentice Maps show that Lyapunov exponents are misleading indicators of predictability, since the amount the uncertainty grows can vary with the state the system is in. And it gets better: in the classic system of Lorenz 1963 we can prove that there are regions of the state space in which all uncertainties _decrease_ for a while. Given a choice as to when to bet on a forecast, betting when entering such a region will improve your odds of winning. Predicting chaotic systems is far from hopeless, and betting against someone who naively believes it is hopeless might even prove profitable. We end this discussion of Lyapunov exponents with one more word of caution. While a direction in which uncertainty neither grows nor shrinks implies a zero Lyapunov exponent, the converse is not true: a Lyapunov exponent of zero does not imply a direction of no growth!
Remember the discussion of the exponential that accompanied Fibonacci's rabbits: even growth as fast as the square of time is slower than exponential and will result in a zero Lyapunov exponent. This is one reason why mathematicians are so pedantic about really taking limits all the way out to the infinite future: if we consider a long but finite period of time, then _any_ magnification at all would suggest a positive Lyapunov exponent - exponential, linear, or even slower than linear growth will yield a magnification greater than one over any finite period, and the logarithm of any number greater than one is positive. Computing the statistics of chaos will prove tricky.

### Understanding the dynamics of relevant uncertainties

As we noted above, an infinitesimal uncertainty cannot cause us much difficulty in forecasting; once it becomes measurable, the details of its exact size and where the state is in the state space come into play. To date, mathematicians have found no elegant method for tracking these small but noticeable uncertainties, which are, of course, most relevant to real-world forecasting. The best we can do is to take a sample of initial states, called an ensemble, make this ensemble consistent both with the dynamics of our model and with the noise in our observations, and then see how the ensemble disperses in the future. For our 21st-century demon that is enough: given her perfect model of the system and of the noise, her noisy observations of previous states reaching into the distant past, and her access to infinite computer power, her ensemble will accurately reflect the probability of future events. If a quarter of her ensemble members indicate rain tomorrow, then there really is a 25% chance of rain tomorrow, given the noisy observations available to her. Decreasing the noise increases her ability to determine what is more likely to happen. Chaos is no real barrier to her.
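The demon's procedure can be caricatured in a few lines. This toy sketch is mine, not the book's: the 'truth' follows the Full Logistic Map, the observation carries bell-shaped noise, and an ensemble of states consistent with that noise is propagated forward to give a probability forecast. The noise level, ensemble size, and lead time are all arbitrary choices.

```python
import numpy as np

# A toy ensemble forecast: the 'truth' follows the Full Logistic Map,
# but we see only a noisy observation, so we propagate an ensemble of
# present states consistent with that observation and the noise model.
rng = np.random.default_rng(1)
noise = 0.01
truth = 0.3
obs = truth + rng.normal(0.0, noise)

# ensemble: candidate present states consistent with obs and the noise
ensemble = np.clip(obs + rng.normal(0.0, noise, 1000), 0.0, 1.0)

for _ in range(3):                         # forecast three iterations ahead
    truth = 4.0 * truth * (1.0 - truth)
    ensemble = 4.0 * ensemble * (1.0 - ensemble)

p_event = float(np.mean(ensemble > 0.5))   # forecast probability of X > 0.5
print(p_event)
```

The fraction of ensemble members in which the event occurs plays the role of the demon's '25% chance of rain'; with a perfect model and a correct noise model, that fraction is exactly the probability she should quote.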
She is uncertain of the present, but can accurately map that uncertainty into the future: who could ask for anything more? Our models, however, are not perfect and our computational resources are limited: in Chapter 9 we contrast the inadequacy with which we must deal with the uncertainty which she can accommodate. The nonlinear zoo contains more than mere chaos. It need not be the case that the smaller the uncertainty, the more tame its behaviour. There are worse things than chaos: it could be the case that the smaller the uncertainty, the faster it grows, leading to an explosion of infinitesimal uncertainties to finite proportions after only a finite period. This is not as outlandish as it might sound: it remains an open question whether or not the basic equations of fluid dynamics display this worse-than-chaos behaviour - one of those few mathematical questions with a one million dollar reward attached to it!

## Chapter 7 Real numbers, real observations, and computers

The mathematician very carefully defines irrational numbers. The physicist never meets any such numbers... The mathematician shudders at uncertainty and tries to ignore experimental errors. Leon Brillouin (1964)

In this chapter we examine the relation between the numbers in our mathematical models, the numbers we observe when taking measurements in the world, and the numbers used inside a digital computer. The study of chaos has helped to clarify the importance of distinguishing these three sorts of number. What do we mean by different kinds of number? Whole numbers are integers; measurements of things like 'the number of rabbits in my garden' come naturally as integers, and computers can do perfect mathematics with integers as long as they do not get too big. But what about things like 'the length of this table', or 'the temperature at Heathrow Airport'?
It seems these need not be integers, and it is natural to think of them as being represented by real numbers, numbers which can have an infinitely long string of digits to the right of the decimal point or bits to the right of the binary point. The debate over whether or not these real numbers exist in the real world dates back into antiquity. One thing that is clear is that when we 'take data' we only 'keep' integer values. If we measure 'the length of this table' and write it down as 1.370, the measurement does not appear to be an integer at first sight, but we can transform it into an integer by multiplying by 1000; anytime we are only able to measure a quantity like length or temperature to finite precision - which is always the case in practice - our measurement can be represented using an integer. And in fact our measurements are almost always recorded in this way today, since we tend to record and manipulate them using digital computers, which _always_ store numbers as integers. This suggests something of a disconnect between our physical notion of length and our measurements of length, and there is a similar break between our mathematical models, which consider real numbers, and their computerized counterparts, which only allow integers. Of course a real physicist would never say that the length of the table was 1.370; she would say something like the length was 1.370 \(\pm\) 0.005, with the aim of quantifying her uncertainty due to noise. Implicit in this is a model of the noise. Random numbers from the bell-shaped curve are without doubt the most common noise model. One learns to include things like '\(\pm\) 0.005' in order to pass science classes in school; it is usually seen as an annoyance but what does it really mean? What is it that our measurements are measuring? Is there a precise number that corresponds to the True length of the table or the True temperature at the airport, but just obscured by noise and truncated when we record it?
Or is it a fiction, and the belief that there should be some precise number just a creation of our science? The study of chaos has clarified the role of uncertainty and noise in evaluating our theories by suggesting new ways to see if such True values might exist. For the moment we will assume the Truth is out there and that we just cannot see it clearly. ### Nothing really matters So what is an observation exactly? Remember our first time series, which consisted of monthly numbers of rabbits in Fibonacci's mythical garden. In that case, we knew the total number of rabbits in the garden. But in most studies of population dynamics we do not have such complete information. Suppose for instance that we are studying a population of voles in Finland. We put out traps, check them each day, release the captives, and keep a daily time series of the number of voles captured. This number is somehow related to the number of voles per square kilometre in Finland, but how exactly? Suppose we observe zero voles in our trap today. What does this 'zero' mean? That there are no voles in this forest? That there are no voles in Scandinavia? That voles are extinct? Zero in our trap could mean any or none of these things and thus illustrates two distinct kinds of uncertainty we must cope with when relating our measurements to our models. The first is simple observational noise: an example would be to miscount the number of voles in the trap, or to find the trap full, leaving open the possibility that more voles might have been counted on that day if a larger trap had been used. The second is called _representation error_: our models consider the population density per square kilometre, but we are measuring the number of voles in a trap, so our measurement does not represent the variable our models use. Is this a shortcoming of the model or the measurement? If we put the wrong number into our model we can expect to get the wrong number out: garbage in, garbage out. 
But it seems that our models are asking for one _kind_ of number, while our observations are offering a noisy version of another kind of number. In the case of weather forecasting where our target variables - temperature, pressure, humidity - are thought to be real numbers, we cannot expect our observations to reflect the true values exactly. This suggests that we might look for models with dynamics which are _consistent_ with our observations, rather than taking our observations and our model states to be more-or-less the same thing and trying to measure the distance between some future state of our model and the corresponding target observation. The goal of forecasting linear systems is to minimize this distance: the forecast error. When forecasting nonlinear systems it becomes important to distinguish the various things bound up in this quantity, including uncertainties in observation, truncation in measurement, and the difference between our mathematical models, our computer simulations of them, and whatever it was that actually generated the data. We first consider what happens when we try to put dynamics into a digital computer. ### Computers and chaos Recall that our three requirements for mathematical chaos were determinism, sensitive dependence, and recurrence. Computer models are deterministic to a fault. Sensitive dependence reflects the dynamics of infinitesimals, but on any given digital computer there is a limit to how close two numbers can be, beyond which the computer sees no difference at all and will treat them as if they were the same number. No infinitesimals, no mathematical chaos. 
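The point that a computer has a limit to how close two numbers can be is easy to demonstrate. The snippet below is a standard illustration (the particular numbers are our own choice): two mathematically distinct values that a computer's floating-point arithmetic treats as identical.

```python
import sys

# Two mathematically distinct numbers the computer cannot tell apart:
a = 1.0
b = 1.0 + 1.0e-17   # closer to 1.0 than the spacing of doubles near 1.0
print(a == b)        # True: the difference is below machine precision

# The gap between 1.0 and the next representable double:
print(sys.float_info.epsilon)   # about 2.2e-16
```

Any perturbation smaller than this gap simply vanishes: there are no infinitesimals inside the machine.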
A second reason that computers cannot display chaos arises from the fact that there is only a finite amount of memory in any digital computer: each computer has a limited number of bits and thus only a limited number of different internal states, so eventually the computer must return to a state it has already been in, after which, being deterministic, the computer will simply run in circles, repeating its previous behaviour over and over forever. This fate cannot be avoided, unless some human or other external force interferes with the natural dynamic of the digital computer itself. A simple card trick illustrates the point nicely. What does this imply for computer simulations of the Logistic Map? In the mathematical version of the map, the time series from iterating almost any X between zero and one will never contain the same value of X twice, no matter how many iterations we consider. As the number of iterations increases, the smallest value of X observed so far will slowly get closer and closer to zero, never actually reaching zero. For the computer simulation of the Logistic Map there are only about \(2^{60}\) (about a million million million) different values of X between zero and one, so the time series from the computer must eventually include two values of X which are exactly the same, becoming stuck in an endless loop. After this happens, the smallest value of X will never decrease again, and any computation along this loop, whether it be the average value of X or the Lyapunov exponent of the map, will reflect the characteristics of the particular loop, not the mathematical map. The computer trajectory has become _digitally periodic_, regardless of what the mathematical system would have done. And so it is for all digital computers. Computers cannot do chaos.
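We can watch digital periodicity happen by deliberately making the computer's number system coarse. In this sketch of our own (the starting value and the number of digits kept are arbitrary choices), each iterate of the Logistic Map is rounded to four decimal digits, so there are at most 10,001 possible values and a repeat is guaranteed by the pigeonhole principle.

```python
def rounded_logistic_orbit(x0, digits=4, a=4.0):
    """Iterate the Logistic Map, rounding each value to a fixed number
    of decimal digits, until a value repeats; return (transient, period)."""
    seen = {}            # value -> step at which it first appeared
    x, step = round(x0, digits), 0
    while x not in seen:
        seen[x] = step
        x = round(a * x * (1.0 - x), digits)
        step += 1
    return seen[x], step - seen[x]

transient, period = rounded_logistic_orbit(0.3141, digits=4)
print(transient, period)
```

The same thing happens with 52-bit arithmetic; it just takes longer, and the loop we end up on says more about the rounding than about the mathematical map.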
There may be more than one digitally periodic loop: shuffle a deck of cards and place some of them in a large circle so that the first card follows the last card dealt. Determining which loop each card ends up in yields a list of all the loops. Which is larger: the number of cards that are actually on loops or those on transients? Shuffle the cards and repeat the experiment to see how the number of loops and their lengths change with the number of cards dealt.

With the particular cards on the table, everyone will hit the jack of hearts; no one will hit the ace of spades unless they start there. To see this, try starting with each value. If you pick one, you land on the six, then the four, then the jack; while picking two hits the five, the four, and the jack; picking three lands on the three, the ace, the four, and the jack; picking four, the two, the ace, the four, and the jack; picking five, the six and the jack; picking six, the ace, the four, the jack; picking seven, the four and the jack; picking eight, the ace, the two, and the jack. All values lead to the jack. Place the cards in a circle and we have a finite state machine where every starting point must lead to a periodic loop, but there may be more than one loop. By projecting the cards on a screen, you can use this demonstration with a large audience. Take a number yourself, and deal out cards until you think everyone has converged. Then ask people to raise their hands if they are on, in this case, the jack of hearts. There is a wonderful look of surprise on the faces of the audience when they realize that they are all on the same card. They will converge faster if you restrict the deck to cards with small values. If you are willing to stack the deck to get more rapid convergence, what order would you place the cards?
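The card circle is a finite state machine, and the experiment can be simulated. The card values below are made up for illustration (they are not the arrangement in the book's figure); the point is only that from a finite set of states every start must fall onto a periodic loop, and here every start happens to land on the same one.

```python
def trace_to_loop(start, cards):
    """Follow the rule 'step forward by the value of the card you are on'
    around a circle of cards; return the set of positions on the final loop."""
    n, visited, pos, step = len(cards), {}, start, 0
    while pos not in visited:
        visited[pos] = step
        pos = (pos + cards[pos]) % n
        step += 1
    return frozenset(p for p, s in visited.items() if s >= visited[pos])

# A hypothetical circle of ten cards (values invented for this sketch):
cards = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
loops = {trace_to_loop(s, cards) for s in range(len(cards))}
print(loops)   # every start falls onto the same six-card loop
```

Shuffling and redealing (changing `cards`) changes how many loops there are and how long the transients run, just as the text suggests trying by hand.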
In the same way, artificially changing the number of bits a computer uses for each value of X turns it into a mathematical microscope for examining the digitally fine structure of the map, using the computer dynamics to examine the length scales where there would be far too many boxes to count them all. ### Shadows of reality Reality is that which, when you stop believing in it, doesn't go away. P. K. Dick Our philosopher and our physicist find these results disturbing. If our computers cannot reflect our mathematical models, how might we decide if our mathematical models reflect reality? If our computers cannot realize a mathematical system as simple as the Logistic Map, how can we evaluate the theory behind our much more complicated weather and climate models? Or contrast our mathematical models with reality? The issue of model inadequacy is deeper than that of uncertainty in the initial condition. One test of model inadequacy is to take the observations we already have and ask if our model can generate a time series that stays close to these observations. If the model were perfect there would be at least one initial state that shadowed any length of observations we might take, where by _shadowing_ we mean that the difference(s) between the model time series and the observed time series is consistent with our model for the noise. This gives our model for the noise a much higher status than it has ever had in the past. Can we still expect shadows when our models are not perfect? No, not in the long run, if our model is chaotic: we can prove that no shadowing trajectory exists. Noise will not go away, even when we stop believing in it. In imperfect chaotic models, we cannot get the noise to allow a coherent account of the difference between our models and the observations. Model error and observational noise are inextricably mixed together. 
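Within the Perfect Model Scenario, the existence of a shadowing trajectory can be seen directly. The sketch below is our own construction (the map, a bounded uniform noise model, and the parameters are assumptions): when the model that generated the data is also the model we forecast with, the true initial state yields a trajectory whose mismatch with the observations is, by construction, consistent with the noise model.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic(x, a=4.0):
    return a * x * (1.0 - x)

# A perfect-model illustration: the truth evolves under the Logistic Map,
# and we observe it through bounded (uniform) noise of size sigma.
sigma, steps = 0.01, 50
truth = [0.2]
for _ in range(steps - 1):
    truth.append(logistic(truth[-1]))
obs = np.array(truth) + rng.uniform(-sigma, sigma, size=steps)

# The true initial state shadows the observations: its trajectory stays
# within the noise bound of every observation, by construction.
traj = np.array(truth)
print(np.max(np.abs(traj - obs)) <= sigma)   # True
```

With an imperfect chaotic model, the text's claim is that no initial state passes this test in the long run: the residual mismatch can no longer be told as a coherent story about noise.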
And if observations, model states, and real numbers really are different kinds of number - like apples and orangutans - what did we think we were doing when we tried to subtract one from another? To pursue that question, we must first learn more about the statistics of chaos.

## Chapter 8 Sorry, wrong number: statistics and chaos

I have no data yet, and it is a capital mistake to theorise before one has data. (Holmes to Watson in _A Scandal in Bohemia_, A. C. Doyle)

Chaos poses new challenges to statistical estimation, but these need to be seen in the context of the challenges statisticians have been dealing with for centuries. When analysing time series from our models themselves, there is much to be gleaned from statistical insight and basic rules of statistical good practice. But our physicist faces an 'apples and oranges' problem when contrasting chaotic models with observations of the real world, and this casts the role of statistics in a less familiar context. The study of chaotic systems has clarified just how murky the situation is. There is even disagreement as to how to estimate the current state of a system given noisy observations, which threatens to stop us from making a forecast before we even get started. Progress here would yield fruit on issues as disparate as our ability to foresee tomorrow's weather and our ability to influence climate change 50 years from now.

### The statistics of limits and the limits of statistics

Consider estimating some particular statistic, say the average height of all human beings. There may be some disagreement over the definition of the population of 'all human beings' (those alive on 1 January 2000? those alive today? all those who have ever lived?...), but that need not distract us yet. Given the height of every member of this population a well-defined value exists; we just do not know what it is. The average height taken over a sample of human beings is called the sample-average.
All statisticians will agree on this value, even if they disagree about the relationship of this number to the desired average over the entire population. (Well, almost all statisticians will agree.) The same cannot be said for sample-Lyapunov exponents. It is not clear that sample-exponents of chaos can be uniquely defined in any sensible way. There are several reasons for this. First, computing the statistics of chaos, like fractal dimensions and Lyapunov exponents, requires taking limits to vanishingly small lengths and over infinitely long durations. These limits can never be taken based on observations. Second, the study of chaos has provided new ways of making models from data without specifying exactly how to build them. The fact that different statisticians with the same data set may arrive at rather different _sample-statistics_ makes the statistics of chaos rather different from the sample-mean.

### Chaos changes what counts as 'good'

Many models contain 'free' parameters, meaning parameters which, unlike the speed of light or the freezing point of water, we do not already know with good accuracy. What then is the best value to give the parameter in our model? And if the purpose of the model is to make forecasts, why would we use a value from the lab or from some fundamental theory, if some other parameter value provided better forecasts? Modelling chaotic systems has even forced us to re-evaluate, arguably to redefine, 'better'. In the weak version of the Perfect Model Scenario, our model has the same mathematical structure as the system which generated the data, but we do not know the True parameter values. Say we know that the data was generated by the Logistic Map, without knowing the value of \(a\). In this case, there is a pretty well-defined 'best': the parameter value that generated the data.
Given a perfect noise model for the observational uncertainty, how do we extract the _best_ parameter values for use tomorrow given noisy observations from the past? If the model is linear, then several centuries of experience and theory suggest the best parameters are those whose predictions fall closest to their targets. We have to be careful not to over-tune our model if we want to use it on new observations, but this issue is well known to our statistician. As long as the model is linear and the observational noise is from the bell-shaped distribution, then we have the intuitively appealing aim of minimizing the distance between the forecast and the target. Distance is defined in the usual least squares sense: based on adding up the squares of the differences in each component of the state. As the data set grows, the parameter values we estimate will get closer and closer to those that generated the data - assuming of course that our linear model really did generate the data. And if our model is nonlinear? In the nonlinear case our centuries of intuition prove a distraction if not an impediment. The least squares approach can even steer us away from the correct parameter values. It is hard to overstate the negative impact that failure to react to this simple fact has had on scientific modelling. There have been many warnings that things might go wrong, but given the lack of any clear and present danger - and their ease of use - such methods were regularly (mis)applied in nonlinear systems. Predicting chaos has made this danger clear: suppose we have noisy observations from the Logistic Map with (unknown to us) \(a=4\); even with an infinite data set, the least squares approach yields a value for \(a\) which is too small. This is not a question of too little data or too little computer power: methods developed for linear systems give the wrong answer when applied to nonlinear questions.
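This downward bias is easy to reproduce numerically. The sketch below is our own (the noise level, sample size, and one-step least squares recipe are assumptions for illustration, not the author's exact setup): we generate Logistic Map data with \(a=4\), observe it through noise, and fit \(a\) by least squares on one-step predictions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generate a trajectory of the Logistic Map with the True value a = 4,
# then observe it through additive noise.
a_true, n, sigma = 4.0, 10_000, 0.05
x = np.empty(n)
x[0] = 0.2
for t in range(n - 1):
    x[t + 1] = a_true * x[t] * (1.0 - x[t])
s = x + rng.normal(0.0, sigma, size=n)   # noisy observations

# One-step least squares: choose a to minimise
#   sum_t ( s[t+1] - a * s[t] * (1 - s[t]) )^2,
# which has the closed-form solution below.
g = s[:-1] * (1.0 - s[:-1])
a_ls = np.sum(s[1:] * g) / np.sum(g * g)
print(a_ls)   # noticeably below 4, and more data does not repair it
```

The culprit is that the noise sits inside the nonlinear term as well as outside it, so the squared-error recipe is answering a subtly different question from the one we asked.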
The mainstay of statistics simply does not hold when estimating the parameters of nonlinear models. This is a situation where ignoring the mathematical details and hoping for the best leads to disaster in practice: the mathematical justification for least squares assumes bell-shaped distributions for the uncertainty both on the initial state and on the forecasts. In linear models, a bell-shaped distribution for the uncertainty in the initial condition results in a bell-shaped distribution for the uncertainty in the forecasts. In nonlinear models this is not the case. This effect is almost as important as it is neglected. Even today, we lack a coherent, deployable rule for parameter estimation in nonlinear models. It was the study of chaos that made this fact painfully obvious. Recently Kevin Judd, an applied mathematician at the University of Western Australia, has argued that not only the principle of least squares but the idea of maximum likelihood given the observations is also an unreliable guide in nonlinear systems. That does not imply that the problem is unsolvable: our 21st-century demon can estimate \(a\) very accurately, but she will not be using least squares. She will be working with shadows. Modern statistics is rising to the challenge of nonlinear estimation, at least in cases where the mathematical structure of our models is correct.

### Lies, damn lies, and dimension estimates

A young student once had the intention,
to quantify fractal dimension.
But data points are not free,
and, needing 42-to-the-D,
she settled for visual inspection.

(after James Theiler)

While Mark Twain would probably have liked fractals, he would have without doubt hated dimension estimates. In 1983, Peter Grassberger and Itamar Procaccia published a paper entitled 'Measuring the Strangeness of Strange Attractors', which has now been cited by thousands of other scientific papers. Most papers gather only a handful of citations.
It would be interesting to use these citations and examine how ideas from the study of chaos spread between disciplines, from physics and applied mathematics through every scientific genre. The paper provides an engagingly simple procedure for estimating, from a time series, the number of components the state of a good model for a chaotic system would require. The procedure came complete with many well-signposted pitfalls. Nevertheless, many if not most applications to real data probably lie in one or the other of those pits. The mathematical robustness of the dimension is what makes capturing it such a prize: you can take an object and stretch it, fold it, roll it up in a ball, even slice it into a myriad of pieces and stick the pieces back together any old way, and you will not alter its dimension. It is this resilience that effectively requires huge data sets to have a fighting chance at meaningful results. Regrettably, the procedure tended towards false positives, and finding chaos by measuring a low dimension was fashionable. An unfortunate combination. Interest in identifying low-dimensional dynamics and chaos was triggered by a mathematical theorem, which suggested one might be able to predict chaos without even knowing what the equations were. ### Takens' Theorem and embedology Time series analysis was re-landscaped in the eighties as ideas from physicists in California led by Packard and Farmer were given a mathematical foundation by the Dutch mathematician Takens; with that basis new methods to analyse and forecast from a time series appeared apace. Takens' Theorem tells us that if we take observations of a deterministic system which evolves in a state space of dimension d, then under some very loose constraints there will be a nearly identical dynamical model in the delay space defined by _almost every_ single measurement function (observation). 
Suppose the state of the original system has three components \(a\), \(b\), and \(c\); the theorem says that one can build a model of the entire system from a time series of observations of any one of these three components; this is illustrated with real observations in Figure 24; taking just one measurement, say of \(a\), and making a vector whose components are values of \(a\) in the present and in the past, results in a _delay-reconstruction_ state space in which a model equivalent to the original system can be found. When this works, it is called a delay _embedding_. The 'almost every' restrictions are required to avoid picking a particularly bad period of time between the observations. By analogy: if you observed the weather only at noon, then you would have no inkling of what happened at night. Takens' Theorem recasts the prediction problem from one of extrapolation in time to one of interpolation in state space. The traditional statistician sits at the end of his data stream, trying to forecast into an unknown future, while Takens' Theorem places our physicist in a delay-embedding state space, interpolating between previous observations. These insights impact more than just data-based models; complicated high-dimensional simulation models evolving on a lower-dimensional attractor might also be modelled by much lower-dimensional, data-based models. In principle, we could integrate the equations in this lower-dimensional space also, but in practice we set up our models as physical simulations in high-dimensional spaces; we can sometimes prove that lower-dimensional dynamics emerge, but we have no clue how to set up the equations in the relevant lower-dimensional spaces. Comparing Figure 24 with Figure 14 makes it clear that the observations of the circuit 'look like' the Moore-Spiegel attractor, but how deep is this similarity, really?

**24. An illustration suggesting Takens' Theorem might be relevant to data from Machete's electric circuit, carefully designed to produce time series which resemble those of the Moore-Spiegel System. Delay reconstruction of one measurement in the lower panel bears some resemblance to the distribution in the upper panel, which plots the values of three different simultaneous measurements. Contrast these with the lower panel of Figure 14 on page 14.**

Every physical system is different. Often when we have little data and less understanding, statistical models provide a valuable starting point for forecasting. As we learn more, and gather more data, simulation models often show behaviour 'similar' to the time series of observations, and as our models get more complicated this similarity often becomes more quantitative. On the rare occasions like this circuit when we have a vast duration of observations, it seems our data-based models - including those suggested by Takens' Theorem - often provide the best quantitative match. It is almost as if our simulation models are modelling some perfect circuit, or planet, while our data-based models more closely reflect the circuit on the table. In each case, we have only similarity; whether we use statistical models, simulation models, or delay-reconstruction models, the sense in which the physical system is described by any model equations is unclear. This is repeatedly seen in physical systems for which our best models are chaotic; we would like to make them empirically adequate, but are not always sure how to improve them. And with systems like the Earth's atmosphere, we cannot wait to take the required duration of observation. The study of chaos suggests a synthesis of these three approaches to modelling, but none has yet been achieved. There are several common misinterpretations of Takens' Theorem. One is that if you have a number of simultaneous observations you _should_ use only one of them; Takens allows us to use them all!
A second is to forget that Takens' Theorem only tells us that _if_ we have low-dimensional deterministic dynamics _then_ many of its properties are preserved in a delay-reconstruction. We must be careful not to reverse the if-then argument and assume that seeing certain properties in a delay-reconstruction necessarily implies chaos, since we rarely if ever know the True mathematical structure of the system we are observing. Takens' Theorem tells us that _almost any_ measurement will work. This is a case where the 'almost any' in our mathematician's function space corresponds to 'not a single one' in the laboratories of the real world. Truncation to a finite number of bits violates an assumption of the theorem. There is also the issue of observational noise in our measurements. To some extent these are merely technical complaints; a delay reconstruction model may still exist and our statistician and physicist can rise to the challenge of approximating it given realistic constraints on the data stream. Another problem is more difficult to overcome: the duration of our observations needs to exceed the typical recurrence time. It may well be that the required duration is not only longer than our current data set, it may be longer than the lifetime of the system itself. This is a fundamental constraint with philosophical implications. How long would it take before we would expect to see two days with weather observations so similar we could not tell them apart? That is, two days for which the difference between the corresponding states of the Earth's atmosphere was within the observational uncertainty? About \(10^{30}\) years. This can hardly be considered a technical constraint: on that time scale the Sun will have expanded into a red giant and vaporized the Earth, and the Universe may even have collapsed in the Big Crunch.
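The interpolation-in-state-space picture behind delay reconstruction can be sketched concretely. This example is ours, not the book's (the Logistic Map, two delays, and a crude nearest-neighbour 'analog' forecast are all assumptions): we build delay vectors from a single scalar series and forecast the last state by borrowing the future of its closest previous neighbour.

```python
import numpy as np

def logistic(x, a=4.0):
    return a * x * (1.0 - x)

series = [0.2]
for _ in range(2000):
    series.append(logistic(series[-1]))
s = np.array(series)

# Delay vectors (s[t], s[t-1]) and the value one step later:
vectors = np.column_stack([s[1:-1], s[:-2]])
targets = s[2:]

# Forecast the final state by looking up its nearest neighbour among
# the earlier delay vectors and borrowing that neighbour's future.
query, past = vectors[-1], vectors[:-1]
nearest = np.argmin(((past - query) ** 2).sum(axis=1))
forecast = targets[nearest]
print(abs(forecast - targets[-1]))   # small: interpolation, not extrapolation
```

The method works here precisely because the data set is long enough for close recurrences to exist; this is the same recurrence-time requirement that the atmosphere so spectacularly fails to meet.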
We will leave our philosopher to ponder the implications held by a theorem that requires that the duration of the observations exceed the lifetime of the system. In other systems, like a series of games of roulette, the time between observations of similar states may be much less. The search for dimensions from data streams is slowly being replaced by attempts to build models from data streams. It has been conjectured that it almost always takes less data to build a good model than it does to obtain an accurate dimension estimate. This is another indication that it may prove more profitable to pay attention to the dynamics rather than estimate statistics. In any event, the excitement of constructing these new data-based models brought many physicists into what had been largely the preserve of the statistician. A quarter of a century down the line, one major impact of Takens' Theorem was to meld the statisticians' approach to modelling dynamical systems with that of the physicists. Things are still evolving and a true synthesis of these two may yet emerge.

### Surrogate data

The difficulty of getting to grips with statistical estimation in nonlinear systems has stimulated new statistical tests of significance using '_surrogate data_'. Scientists use surrogate data in a systematic attempt to break their favourite theories and nullify their most cherished results. While not every test that fails to kill a conclusion makes it stronger, learning the limitations of a result is always a good thing. Surrogate data tests aim to generate time series which look like the observed data but come from a known dynamical system. The key is that this system is known _not_ to have the property one is hoping to detect: can we root out results that look promising but in fact are not (called false positives) by applying the same analysis to the observed data and then to many surrogate data sets?
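The text does not specify how such surrogates are built; one standard recipe (an assumption here, not the author's prescription) is phase randomization: keep the amplitude spectrum of the data, and hence its linear correlations, but scramble the Fourier phases, destroying any nonlinear structure.

```python
import numpy as np

rng = np.random.default_rng(2)

def phase_randomized_surrogate(data, rng):
    """Keep the amplitude spectrum of the data (hence its linear
    correlations) but scramble the Fourier phases."""
    spec = np.fft.rfft(data)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=spec.size)
    phases[0] = 0.0                    # keep the mean component real
    if data.size % 2 == 0:
        phases[-1] = 0.0               # keep the Nyquist component real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=data.size)

data = rng.normal(size=512).cumsum()   # a toy time series
surrogate = phase_randomized_surrogate(data, rng)

# Same amplitude spectrum, different series:
print(np.allclose(np.abs(np.fft.rfft(surrogate)),
                  np.abs(np.fft.rfft(data))))   # True
```

Repeating the final two lines a thousand times, with fresh phases each time, yields the population of surrogate series against which a statistic estimated from the real data can be judged.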
We know at the start that the surrogate data can show only false positives, so if the observed data set is not easily distinguished from the surrogates, then the analysis holds few practical implications. What does that mean in practice? Well suppose we are hoping to 'detect chaos' and our estimated Lyapunov exponent turns out to be 0.5: is that value significantly greater than zero? If so then we have evidence for one of the conditions for chaos. Of course, 0.5 is greater than zero. The question we want to answer is: are random fluctuations in an estimated exponent likely to be as big as 0.5 in a system (i) which produced similar-looking time series, and (ii) whose true exponent really was not greater than zero? We can generate a surrogate time series, and estimate the exponent from this surrogate series. In fact, we can generate 1,000 different surrogate series, and get 1,000 different exponents. We might then take comfort in our result if almost all of 1,000 estimates from the surrogate series are much less than the value of 0.5, but if the analysis of surrogate data often yields exponents greater than 0.5, then it is hard to argue that the analysis of the real data provided evidence for a Lyapunov exponent greater than zero. ### Applied statistics In a pinch, of course, one can drive a screw with a hammer. Statistical tools designed for the analysis of chaotic systems can provide a new and useful way of looking at observations from systems that are not chaotic. Just because the data do not come from a chaotic system does not mean that such a statistical analysis does not contain valuable information. The analysis of many time series, especially in the medical, ecological, and social sciences, may fall into this category and can provide useful information, information not available from traditional statistical analysis. 
Statistical good practice protects against being misled by wishful thinking, and the insight obtained can prove of value in application, regardless of whether or not it establishes the chaotic credentials of the data stream. Data Assimilation is the name given to translating a collection of noisy observations into an ensemble of initial model-states. Within PMS there is a True state that we can approximate, and given the noise model there is a perfect ensemble which, though available only to our 21st-century demon, we can still aim to approximate. But in all real forecasting tasks, we are trying to predict real physical systems using mathematical systems or computer simulations. The perfect model assumption is never justified and almost always false: what is the goal of data assimilation in this case? It is not simply that we get the 'wrong number' when estimating the state of our model that corresponds to reality, but that there is no 'right number' to identify. Model inadequacy appears to take even probability forecasts beyond our reach. Attempts to forecast chaotic systems with imperfect models are leading to new ways of exploring how to exploit the diversity of behaviour our imperfect models display. Progress requires we never blur the distinction between our mathematical models, our computer simulations, and the real world that provides our observations. We turn to prediction in the next chapter.

## Chapter 9 Predictability: does chaos constrain our forecasts?

On two occasions I have been asked [by members of Parliament], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

Charles Babbage

We are always putting the wrong numbers into our machines; the study of chaos has refocused interest in determining whether or not any 'right numbers' exist.
Prediction allows us to examine the connection between our models and the real world in two somewhat different ways. We may test our model's ability to predict the behaviour of the system in the short term, as in weather forecasting. Alternatively, we may employ our models when deciding how to alter the system itself; here we are attempting to steer the future towards some desirable, or less undesirable, behaviour, as when using climate models for deciding policy. Chaos poses no prediction problems for Laplace's demon: given exact initial conditions, a perfect model and the power to make exact calculations, it can trace a chaotic system forward in time as accurately as it can a periodic system. Our 21st-century demon has a perfect model and can make exact calculations, but is restricted to uncertain observations, even if they extend at regular intervals into the indefinite past. As it turns out, she cannot use these historical observations to identify the current state. She does, however, have access to a complete representation of her uncertainty in the state given the observations that were made; some would call this an objective probability distribution for the state, but we need not go there. These facts hold a number of implications: even with a perfect model of a deterministic system, she cannot do better than make probability forecasts. We cannot aspire to do better, and this implies that we will have to adopt probabilistic evaluation of our deterministic models. But all of these demons exist within the Perfect Model Scenario; we must abandon the mathematical fictions of perfect models and irrational numbers if we wish to offer honest forecasts of the real world. To fail to make it clear that we have done so would be to peddle snake oil.

### Forecasting chaos

And be these juggling fiends no more believ'd,
That palter with us in a double sense;
That keep the word of promise to our ear,
And break it to our hope.

_Macbeth_ (Act V)

Those who venture to predict have long been criticized even when their forecasts prove accurate, in a technical sense. Shakespeare's play _Macbeth_ focuses on predictions which, while accurate in some technical sense, do not provide effective decision support. When Macbeth confronts the witches asking them what it is that they do, they reply 'a deed without a name'. A few hundred years later, Captain Fitzroy coined the term 'forecast'. There is always the possibility that a forecast may be internally consistent from the modellers' perspective while actively misdirecting the forecast user's expectations. There lies the root of Macbeth's complaint against the witches: they repeatedly offer welcome tidings of what would seem to be a path to a prosperous future. Each forecast proves undeniably accurate, but there is little prosperity. Can modern forecasters who interpret uncertainty within their mathematical models as though it reflected real-world probabilities of future events hope to avoid the charge of speaking in a _double-sense_? Are they guilty of Macbeth's accusation in carefully wording their probability forecasts, knowing full well we will allow the excuse of chaos to distract us from entirely different goings on?

### From accuracy to accountability

We can hardly blame our forecasters for failing to provide a clear picture of where we are going to end up if we cannot give them a clear picture of where we are. We can, however, expect our models to tell us how accurately we need to know the initial condition in order to ensure that the forecast errors stay below some target level. The question of whether or not we can reduce the noise to that level is, hopefully, independent of our model's ability to forecast given a sufficiently accurate initial state.
Ideally, a model will be able to shadow: there will be some initial state we can iterate so that the resulting time series remains close to the time series of observations. We have to wait until after we have the observations to see if a shadow exists, and 'close' must be defined by the properties of the observational noise. But if there is _no_ initial state that shadows, then the model is fundamentally inadequate. Alternatively, if there is one shadowing trajectory there will be many. The collection of current states whose pre-histories have shadowed so far can be considered indistinguishable: if the True state is in there we cannot identify it. Nor can we know which of them will continue to shadow when iterated forward to form a forecast, but we could take some comfort from knowing the typical shadowing times of forecasts started from one of these indistinguishable states. It is fairly easy to see that we are headed towards ensemble forecasts based upon candidates who have shadowed the observations up to the present. Realizing that even a perfect model couldn't yield a perfect forecast given an imperfect initial condition, in the 1960s, the philosopher Karl Popper defined an _accountable model_ as one that could quantify a bound on how small the initial uncertainty must be in order to guarantee a specific desired limit on the forecast error. Determining this bound on the initial uncertainty is significantly more difficult for nonlinear systems than it is for linear systems, but we can generalize the notion of accountability and use it to evaluate whether or not our ensemble forecasts reasonably reflect probability distributions. Our ensembles will always have a finite number of members, and so any probability forecast we construct from them will suffer from this finite resolution: if we have 1,000 members then we might hope to see most events with a 1% chance of happening, but we know we are likely to miss events with only a 0.001% chance of happening.
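The finite-resolution point can be made concrete with a back-of-envelope calculation, under the simplifying assumption (made here purely for illustration) that each ensemble member samples the event independently:

```python
def chance_seen(p_event, n_members=1000):
    """Chance that at least one of n independent ensemble members
    samples an event of probability p_event."""
    return 1 - (1 - p_event) ** n_members

print(chance_seen(0.01))      # a 1% event: almost certainly represented (about 0.99996)
print(chance_seen(0.00001))   # a 0.001% event: almost certainly missed (about 0.01)
```

With 1,000 members, an event of probability 1% is expected to appear in about ten of them, while an event of probability 0.001% will, ninety-nine times out of a hundred, appear in none at all.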
We will call an ensemble prediction system _accountable_ if it tells us how big the ensemble has to be in order to capture events with a given probability. Accountability must be evaluated statistically over many forecasts, but this is something our statistician knows how to do quite well. Our 21st-century demon can make accountable forecasts: she will not know the future, but it will hold no surprises for her. There will be no unforeseeable events, and unusual events will happen with their expected frequency.

### Model inadequacy

With her perfect model, our 21st-century demon can compute probabilities that are useful as such. Why can't we? There are statisticians who argue we can, including perhaps a reviewer of this book, who form one component of a wider group of statisticians who call themselves Bayesians. Most Bayesians quite reasonably insist on using the concepts of probability correctly, but there is a small but vocal cult among them that confuses the diversity seen in our models with uncertainty in the real world. Just as it is a mistake to use the concepts of probability incorrectly, it is an error to apply them where they do not belong. Let's consider an example derived from Galton's Board. Look back at Figure 2 on page 2. You can buy modern incarnations of the image on the left on the internet; just Google 'quincunx'. The machine corresponding to the image on the right is more difficult to obtain. Modern statisticians have even questioned whether Galton actually built that one. Although Galton describes experiments with this version, they have been called 'thought experiments', since even modern efforts to build a device to reproduce the expected theoretical results have found it 'exceedingly difficult to make one that will accomplish the task in a satisfactory manner'. It is not uncommon for a theorist to blame the apparatus when an experiment fails to match his theory.
Perhaps this is merely an indication that our mathematical models are just different from the physical systems they aim to reflect? To clarify the differences between our models and reality, we will consider experiments on the Not A Galton (NAG) Board shown in Figure 25.

### The NAG Board: an example of pandemonium

The NAG Board is 'Not A Galton Board'. It was originally constructed for a meeting to celebrate the 150th year of the Royal Meteorological Society, of which Galton was a member. The NAG Board has an array of nails distributed in a manner reminiscent of those in a Galton Board, but the nails are spaced further apart and imperfectly hammered. Note the small white pin at the top of the board, just to the left of half way. Rather than using a bucket of lead shot, golf balls are allowed through the NAG Board one at a time, each starting in exactly the same position, or as exactly as a golf ball can be placed under the white pin by hand. The golf balls do make a pleasant sound, but they do not make binary decisions at each nail; in fact, they occasionally move horizontally past several nails before falling to the next level. Like the Galton Board and Roulette, the dynamics of the NAG Board are not recurrent: the dynamics of each ball are transient, and so these systems do not display chaos. Spiegel suggested this behaviour be called _pandemonium_. Unlike the Galton Board, the distribution of golf balls at the bottom of the NAG Board does not reflect the bell-shaped distribution; nevertheless, we can use an ensemble of golf balls to gain a useful probabilistic estimate of where the next golf ball is likely to fall. But reality is not a golf ball. Reality is a red rubber ball. And it is dropped only once. Laplace's demon would allow no discussion of what else might have happened: nothing else could have happened. The analogy here is to take the red rubber ball as the Earth's atmosphere and the golf balls as our model ensemble members.
We can invest in as many members as we choose. But what does our distribution of golf balls tell us about the single passage of the red rubber ball? Surely the diversity of behaviour we observe between golf balls tells us something useful? If nothing else, it gives us a lower bound on our uncertainty beyond which we know we cannot be confident; but it can never provide a bound in which we can be absolutely confident, even in probabilistic terms. By close analogy, examining the diversity of our models can be very useful, even if there is no probability forecast in sight. The red ball is much like a golf ball: it has a diameter slightly larger than, but roughly the same as, a golf ball's, and it has, somewhat more roughly, a similar elasticity. But the red ball which is reality can do things that a golf ball simply cannot do: some unexpected, some not; some relevant to our forecast, some not; some known, some not. In the NAG Board, the golf ball is a good model of reality, a useful model of reality; and an imperfect model of reality. How are we to interpret this distribution of golf balls? No one knows. Welcome to frontline statistical research. And it gets better. We could always interpret the distribution of golf balls as a probability forecast conditioned on the assumption that reality is a golf ball. Would it not be a double-sense to proffer probability forecasts one knew were conditioned on an imperfect model as if they reflected the likelihood of future events, regardless of what small print appeared under the forecast? Our ensembles are not restricted to using only golf balls. We might obtain green rubber balls of a slightly smaller diameter and repeat the experiment. If we get a distribution of green balls similar to our distribution of golf balls, we might take courage - or better, take hope - that the inadequacies of our model might not play such a large role in the forecast we are interested in.
Alternatively, our two models may share some systematic deficiency of which we are not aware … yet. But what if the distributions of golf balls and green balls are significantly different? Then we cannot sensibly rely on either. How might quantifying the diversity of our models with these multi-model ensembles allow us to construct a probabilistic forecast for the one passage of reality? When we look at seasonal weather forecasts, using the best models in the world, the distribution from each model tends to cluster together, each in a different way. How are we to provide decision support in this case, or a forecast? What should be our aim? Indeed, how exactly can we take aim at any goal given only empirically inadequate models? If we naively interpret the diversity of an ensemble of models as a probability, we will be repeatedly misled; we know at the start that our models are imperfect, so any discussion of 'subjective probability' is a red herring: we do not believe in (any of) our models in the first place! The bottom line is rather obvious: if our models were perfect and we had the resources of Laplace's demon, we would know the future; while if our models were perfect and we had the resources of our 21st-century demon, then chaos would restrict us to probability forecasts, even if we knew the Laws of Nature were deterministic. In case the True Laws of Nature are stochastic, we can envision a statistician's demon, which will again offer accountable probability forecasts with or without exact knowledge of the current state of the universe. But is the belief in the existence of mathematically precise Laws of Nature, whether deterministic or stochastic, any less wishful thinking than the hope that we will come across any of our various demons offering forecasts in the woods? In any event, it seems we do not currently know the relevant equations for simple physical systems, or for complicated ones.
The study of chaos suggests that the difficulty lies not with uncertainty in the number to 'put in' but with the lack of an empirically adequate model to put anything into: chaos we might cope with, but it is model inadequacy, not chaos, that limits predictability. A model may undeniably be the best in the world, but that says nothing about whether or not it is empirically relevant, much less useful in practice, or even safe. Forecasters who couch predictions they expect to be fundamentally flawed with sleight-of-hand phrases such as 'assume the model is perfect' or 'best available information' may be technically speaking the truth, but if those models cannot shadow the past then it is not clear what 'uncertainty in the initial state' might mean. Those who blame chaos for the shortcomings of probability forecasts they devised under the assumption their models were perfect, models they knew to be inadequate, palter to us in a double-sense.

## Chapter 10 Applied chaos: can we see through our models?

All theorems are true,
All models are wrong.
All data are inaccurate.
What are we to do?

Scientists often underestimate the debt they owe real-time forecasters who, day after day, stand up and present their vision of the future. Prominent among them are weather forecasters and economists, while professional gamblers risk more than their image when they go to work. As do futures traders. The study of chaos has initiated a rethink of modelling and clarified the restrictions on what we can see through our models. The implications differ, of course, for mathematical systems where we know there is a target to take aim at, and physical systems where what we aim for may well not exist.

### Modelling from the ground up: data-based models

We will consider four types of data-based models. The simplest are _persistence models_, which assume that things will stay as they are now.
A simple dynamic variation on this theme is provided by _advection models_, which assume the persistence of velocities: here, a storm moving to the east would be forecast to continue moving to the east at the same speed. Fitzroy and LeVerrier employed this approach in the 1800s, exploiting telegraph signals which could race ahead of an oncoming storm. The third type are _analogue models_. Lorenz's classic 1963 paper ends with the sentence: 'In the case of the real atmosphere, if all other methods fail, we can wait for an analogue.' An analogue model requires a library of past observations from which a previous state similar to the current state is identified; the known evolution of this historical analogue provides the forecast. The quality of this method depends on how well we observe the state and whether or not our library contains sufficiently good analogues. When forecasting a recurrent system, obtaining a good analogue is just a question of whether or not the library's collection is large enough given our aims and the noise level. In practice, building the library may require more than just patience: how might we proceed if the expected time required to observe recurrence is longer than the lifetime of the system itself? Traditional statistics has long exploited these three approaches within the context of forecasting from historical statistics. Takens' Theorem suggests that for chaotic systems we can do better than any of them. Suppose we wish to forecast what the state of the atmosphere will be tomorrow from a library. The situation is shown schematically in Figure 26. The analogue approach is to take the state in the library which is nearest to today's atmospheric state, and report whatever it did the next day as our forecast for tomorrow. Takens' Theorem suggests taking a collection of nearby analogues and interpolating between their outcomes to form our forecasts.
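The difference between the two approaches can be sketched with a deliberately toy example: a 'library' of past (today, tomorrow) pairs generated by the noise-free Logistic Map. The map, the library size, and the use of a plain average over the k nearest analogues are all illustrative assumptions, not the methods used in operational forecasting.

```python
def analogue_forecast(library, current, k=1):
    """Average the outcomes of the k nearest past analogues to the
    current state; k=1 is the classic single-analogue forecast."""
    ranked = sorted(library, key=lambda pair: abs(pair[0] - current))
    return sum(next_state for _, next_state in ranked[:k]) / k

# Build a library of (today, tomorrow) pairs from the Logistic Map
f = lambda x: 4 * x * (1 - x)
xs = [0.3]
for _ in range(500):
    xs.append(f(xs[-1]))
library = list(zip(xs[:-1], xs[1:]))

today = 0.413                                      # a state not in the library
single = analogue_forecast(library, today, k=1)    # best single analogue
local = analogue_forecast(library, today, k=5)     # Takens-style local average
truth = f(today)                                   # 'tomorrow' in this toy world
```

With a library this dense, both forecasts land close to the truth; the interesting questions arise when the library is sparse and the observations noisy.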
These data-based _delay reconstruction models_ can prove useful without being perfect: they need only outperform - or merely complement - the other options available to us. Analogue approaches remain popular in seasonal weather forecasting, while roulette suggests a data-based modelling success story. It is easy to put money on a winner in roulette: all you have to do is bet one dollar on each number and you'll have a winner every time. You'll lose money, of course, since your winner will pay $36, while you'll have to bet on more than 36 numbers. 'Play them all' strategies lose money on each and every game; casinos worked this out some time ago. Making a profit requires more than placing a winning bet every time: it requires a probabilistic forecast that is better than the house's odds. Luckily, that can be achieved short of the harsh requirements of empirical adequacy or mathematical accountability. The fact that bets can be placed after the ball is in play makes roulette particularly interesting to physicists and the odd statistician. Suppose you record whenever the ball passes, say, the zero with the big toe on your left foot, and whenever zero passes a fixed point on the table with the big toe on your right foot; how often could a computer in the heel of your cowboy boot correctly predict which quarter of the roulette wheel the ball would land on? Predicting the correct quarter of the wheel half of the time would turn the odds in your favour: when you were right you'd win back about four times your stake, leaving a profit of three times your gamble, and you'd lose it all about half the time; so on average, you'd make a profit of about the same size as the stake you put at risk each game. While the world will never know how many times people have tried this, we can put a lower bound of once: the story is nicely told by Thomas Bass in 'The Newtonian Casino'.

### Simulation models

What if the most similar analogues did not provide a sufficiently detailed forecast?
One alternative is to learn enough physics to build a model of the system from 'first principles'. Such models have proven seductively useful across the sciences, yet we must remember to come back from model-land and evaluate our forecasts against real observations. We may well have access to the best model in the world, but whether or not that model is of any value in making decisions is an independent question. Figure 27 is a schematic reflecting the state space of a UK Met. Office Climate model. The state space of a numerical weather prediction (NWP) model falls along similar lines, but weather models are not run for as long as climate models, and so one often simplifies them by assuming things that change slowly, such as the oceans, sea ice or land use, are effectively constant. While the schematic makes models look more elaborate than the simple maps of previous chapters, once transferred onto a digital computer, the iteration of a weather model is not any more complex really, just more complicated. The atmosphere, along with the ocean, and the first few metres of the Earth's crust in some models, is effectively divided up into boxes; model variables - temperature, pressure, humidity, wind speed, and so on - are defined by one number in each box. In as much as it contains an entry for every variable in every grid-box, the model state can be rather large: some have over 10,000,000 components. Updating the state of the model is a straightforward if tedious process: one just applies the rule for each and every component, and iterates over and over again. This is what Richardson did by hand, taking years to forecast one day ahead. The fact that the calculations focus on components from 'nearby' cells gave Richardson the idea that a room full of computers arranged as shown in Figure 28 could in fact compute the weather faster than it happened. When Richardson was writing in the 1920s, his computers were human beings.
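The component-by-component update can be illustrated with a deliberately tiny, made-up example: a single variable on a one-dimensional ring of grid boxes, each box nudged toward the mean of its neighbours. Nothing here resembles a real NWP rule; the point is only the shape of the computation: one local rule, applied to every box, over and over.

```python
def step(grid, alpha=0.1):
    """One update of a toy 'weather' field on a ring of grid boxes:
    each box relaxes toward the mean of its two neighbours."""
    n = len(grid)
    return [(1 - alpha) * grid[i]
            + alpha * 0.5 * (grid[i - 1] + grid[(i + 1) % n])
            for i in range(n)]

field = [0.0] * 16
field[8] = 1.0                # a single warm anomaly in one box
for _ in range(100):          # apply the same local rule, over and over
    field = step(field)
# the anomaly has spread to neighbouring boxes; its total is conserved
```

Scaling the same loop up to millions of components, dozens of variables, and three dimensions is what makes the task tedious for a human Richardson and natural for a parallel machine: each box needs only its neighbours.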
Today's multiprocessor digital supercomputers use more or less the same scheme. NWP models are among the most complicated computer codes ever written and often produce remarkably realistic-looking simulations. Like all models, however, they are imperfect representations of the real-world system they target, and the observations we use to initialize them are noisy. How are we to use such valuable simulations in managing our affairs? Can we at least get an idea of how much we should rely on today's forecast for next weekend?

**28. A realization of Richardson's dream, in which human computers work in massively parallel style to calculate the weather before it happens. Note the director in the central platform is shining a light on northern Florida, presumably indicating that those computers are slowing down the project (or perhaps the weather there is just particularly tricky to compute?)**

### Ensemble weather prediction systems

Latest EPS giving northern France an edge over Cornwall. Do you have a travel agent who can advise on ferry bookings? Tim

Email dated 5 August 1999

In 1992 operational weather forecasting centres on both sides of the Atlantic took a great step forward: they stopped trying to say exactly what the weather would be next weekend. For decades, they had run their computer simulations once a day. As computers grew faster, the models had grown more and more complicated, limited only by the need to get the forecast out well before the weather arrived. This 'best guess' mode of operation ended in 1992: instead of running the most complicated computer simulation once and then watching as reality did something different, a slightly less complex model was run a few dozen times. Each member of this ensemble was initialized from a slightly different state. The forecasters then watched the ensemble of simulations spread out from each other as they evolved in time towards next weekend, and used this information to quantify the _reliability_ of the forecast for each day.
This is an Ensemble Prediction System (EPS). By making an _ensemble forecast_ we can examine alternatives consistent with our current knowledge of the atmosphere and our models. This provides significant advantages for informed decision support. In 1928, Sir Arthur Eddington predicted a solar eclipse 'visible over Cornwall' for 11 August 1999. I wanted to see this eclipse. So did Tim Palmer, Head of the Probability Forecast Division at the European Centre for Medium-range Weather Forecasts (ECMWF) in Reading, England. As the eclipse approached, it seemed Cornwall might be overcast. The email from Tim quoted at the beginning of this section was sent six days before the eclipse: we examined the ensemble for the 11th, noting that the number of ensemble members suggesting clear sky over France exceeded the corresponding number for Cornwall; the same thing happened on the 9th, and we left England for France by ferry. There we saw the eclipse, thanks to playing the odds suggested by the EPS, and to a last minute dash for better visibility made possible by Tim's driving skills on tiny French farm roads in a right-hand-drive car; not to mention his solar eclipse black-out glasses. The study of chaos in our model suggests that our uncertainty in the current state of the atmosphere makes it impossible to say for certain, even only a week in advance, where the eclipse will be visible and where it will be obscured by clouds. By running an ensemble forecast with the aim of tracking this uncertainty, the EPS provided effective decision support nevertheless: we saw the eclipse. We did not have to assume anything about the perfection of the model, and there were no probability distributions in sight. Since the EPS did not become operational until 1992, no ensemble forecast was generated at the time of the Burns' Day storm of January 1990. ECMWF has kindly generated a retrospective ensemble forecast using the data available two days before the Burns' Day storm struck.
Figure 4 (on page 14) shows the storm as seen within a modern weather model - called the _analysis_ - along with a two-day-ahead forecast using only data from before the time of the critical ship observations discussed in Chapter 1. Note that there is no storm in the forecast. Twelve other ensemble members, also from two days before the storm, are shown in Figure 29; some have storms, some not. The second ensemble member in the top row looks remarkably like the analysis; the member two rows below it has what looks like a super-storm, while other members suggest a normal British winter's day. As the critical ship observations were made after this EPS forecast, this ensemble would have already provided an indication that a storm was likely, and significantly reduced the pressure on the intervention forecaster. At longer lead times, the ensemble from three days before Burns' Day has members with storms over Scotland, and there is even one member from the four-day-ahead ensemble forecast with a major storm in the vicinity. The ensemble provides early warning.

**29. An ensemble of forecasts from the ECMWF weather model, two days in advance of the Burns' Day storm: some show storms, some do not. Unlike the single 'best guess' forecast shown in Figure 4 on page 14, here we have some forewarning of the storm**

At all lead times, we must cope with the Burns effect: our collection of ECMWF weather 'golf balls' shows the diversity of our model's behaviour to aid us when we 'guess and fear', without actually quantifying the uncertainty in our real-world future. In fact, we could widen this diversity: given enough computer power, if we questioned the reliability of certain observations, we might run some ensemble members with those observations while omitting them from others. We will never see another situation quite like the Burns' Day storm of 1990.
We might decide where to take future observations designed to maximize the chance of distinguishing which of our ensemble members were most realistic: those with a storm in the future or those without? Rather than wasting too much energy trying to determine the 'best' model, we might learn that ensemble members from different models were of more value than one simulation of an extremely expensive super-model. But we should not forget the lessons of the NAG Board: our ensembles reveal the diversity of our models, not the probability of future events. We can examine ensembles over initial conditions, parameter values, even mathematical model structures, but it seems only our 21st-century demon can make probability forecasts which are useful as such. Luckily, an EPS can inform and add value without providing probabilities that we would use as such for decision making. Just after Christmas in 1999, another major storm swept across Europe. Called T1 in France and Lothar in Germany, this storm destroyed 3,000 trees in Versailles alone and set new record high insurance claims in Europe. Forty-two hours before the storm, ECMWF ran its usual 51-member EPS forecast. Fourteen members of the 51-member ensemble had storms. It is tempting to forget these are but golf balls on a NAG Board, and interpret this as saying that there was about a 28% probability of a major storm. Even though that temptation should be resisted, we have here another EPS forecast with great utility. Running a more realistic, more complicated model once might have shown a storm, or might have shown no storm: why take the chance of not seeing the storm when an EPS might quantify that chance? Ensemble forecasting is clearly a sensible idea, but how exactly should we distribute limited resources between using a more expensive model and making a larger ensemble? This active research question remains open.
In the meantime, the ECMWF EPS regularly provides a glimpse of alternative future scenarios seen through our models with significant added value. How to communicate this information in the ensemble without showing the public dozens of weather maps also remains an open question. In New Zealand, where severe weather is rather common, the Meteorological Service regularly makes useful probabilistic statements on their website - statements like 'two chances in five'. This adds significant value to the description of a likely event. Of course, meteorologists often display a severe weather fetish, while energy companies are happy to exploit the significant economic value in extracting useful information from more mundane weather, every day. And those in other sectors with operational weather risk are beginning to follow suit.

### Chaos and climate change

Climate is what you expect. Weather is what you get.

Robert Heinlein, _Time Enough for Love_ (1974)

Climate modelling differs fundamentally from weather forecasting. Think of the weather in the first week in January a year from now. It will be mid-summer in Australia and mid-winter in the northern hemisphere. That alone gives us a good idea of the range of temperatures to expect: this collection of expectations is climate - ideally reflecting the relative probability of every conceivable weather pattern. If we believe in physical determinism, then the weather next January is already preordained; even so, our concept of the climate collection is relevant, as our current models are not able to distinguish that preordained future. The ideal ensemble weather forecast would trace the growth of any initial uncertainty in the state of the atmosphere until it became indistinguishable from the corresponding climate distribution. Given imperfect models, of course, this doesn't ever quite happen, as our ensemble of model simulations evolves towards the attractor of the model, not that of the real world, if such a thing exists.
Even with a perfect model, and ignoring the impacts of human free will noted by Eddington, accurate probability forecasts based on the current conditions of the Earth would be prevented by influences just now leaving the Sun, or those due to arrive from beyond the solar system, of which we cannot know today, even in principle. Climate modelling also differs from weather forecasting in that it often contains a 'what if' component. Altering the amount of carbon dioxide (CO2) and other greenhouse gases in the atmosphere is analogous to changing the parameter \(a\) in the Logistic Map; as we change parameter values, the attractor itself changes. In other words, while weather forecasters try to interpret the implications a distribution of golf balls holds for the single drop of a red rubber ball in the NAG Board of Figure 25 (page 128), climate modellers add the complication of asking what would happen if the nails were moved about. Looking at just one run of a climate model carries the same dangers as looking at just one forecast for Burns' Day in 1990, although the repercussions of such naive over-confidence would be much greater in the climate case. No computing centre in the world has the power to run large ensembles of climate models. Nevertheless, such experiments are made possible by harnessing the background processing power of PCs in homes spread about the globe (see _www.climateprediction.net_). Thousands of simulations have revealed that a surprisingly large diversity exists within one state-of-the-art climate model, suggesting that our uncertainty in the future of real-world climate is at least as large. These results contribute to improving current models. They do not, however, provide evidence that the current generation of climate models can realistically resolve questions of regional detail; such detail, when available, will be of great value in decision support.
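The Logistic Map analogy is easy to make concrete. A minimal Python sketch (mine, not the book's; the function name, starting point, and iteration counts are illustrative choices) iterates \(x \mapsto a x(1-x)\) at two parameter values and samples the long-run states: moving \(a\) changes the attractor itself, from a simple periodic cycle to chaos.

```python
# Sketch (not from the book): how the Logistic Map's attractor changes
# as the parameter a is moved - the analogue of moving the nails about
# in the NAG Board.

def attractor_sample(a, x0=0.2, transient=1000, keep=50):
    """Iterate x -> a*x*(1-x), discard a transient, return the
    distinct later states (rounded so a converged cycle collapses)."""
    x = x0
    for _ in range(transient):
        x = a * x * (1.0 - x)
    states = []
    for _ in range(keep):
        x = a * x * (1.0 - x)
        states.append(round(x, 6))
    return sorted(set(states))

# At a = 3.2 the attractor is a period-2 cycle: only two values recur.
print(len(attractor_sample(3.2)))   # -> 2
# At a = 4.0 the dynamics are chaotic: the sampled states do not repeat.
print(len(attractor_sample(4.0)))
```

The point of the analogy: a climate modeller asking about doubled CO2 is asking about the attractor at a *different* parameter value, not about a different initial condition on the same attractor.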
A frank appraisal of the limitations of today's climate models casts little doubt upon the wide consensus that significant warming has been seen in the data of the recent past. How wide is the current diversity among our models? This depends, of course, on which model variables you examine. In terms of planet-wide average temperature, there is a consistent picture of warming; a goodly number of ensemble members show a great deal more warming than was previously considered likely. In terms of regional details, there are vast variations between ensemble members. It is hard to judge the utility of estimated precipitation for decision support, even for monthly rainfall over the whole of Europe. How might one distinguish what are merely the best currently available forecasts from forecasts that actually contain useful information for decision makers in the climate context? In reality, carbon dioxide levels and other factors are constantly changing; weather and climate merge into a single realization of a one-off transient experiment. Weather forecasters often see themselves as trying to extract useful information from the ensemble before it spreads out across the 'weather attractor'; climate modellers must address difficult questions about how the structure of that attractor would change if, say, the amount of carbon dioxide in the atmosphere were doubled and then held constant. Lorenz was already doing active research here in the 1960s, warning that issues of structural stability and long transients complicate climate forecasts, and illustrating the effects in systems not much more complicated than the maps we defined in Chapter 3. Given that our weather models are imperfect, their ensembles do not actually evolve towards realistic climate distributions. And given that the properties of the Earth's climate system are constantly changing, it makes little sense to talk about some constantly changing, unobservable 'realistic climate distribution' in the first place.
Could any such thing exist outside of model-land? That said, coming to understand chaos and nonlinear dynamics has improved both the experimental design in and the practice of climate studies, allowing more insightful decision support for policy makers. Perhaps most importantly, it has clarified that difficult decisions will have to be made under uncertainty. Neither the fact that this uncertainty is not tightly constrained, nor the fact that it can only be quantified with imperfect models, provides an excuse for inaction. All difficult policy decisions are made in the context of the Burns effect.

### Chaos in commerce: new opportunities in Phynance

When a large number of people are playing a game with clear rules but unknown dynamics, it is hard to distinguish those who win with skill from those who win by chance. This is a fundamental problem both in judging hedge-fund managers and in improving weather models, since traditional scores can actually penalize skilful probabilistic play. The Prediction Company, or PredCo, was founded on the premise that there must be a better way to predict the economic markets than the linear statistical methods that dominated quantitative finance two decades ago. PredCo set out upon a different path, blazed by Doyne Farmer and Norm Packard along with some of the brightest young nonlinear dynamicists of the day, who gave up post-docs for stock options. If there was chaos in the markets, perhaps others were being fooled by randomness? Sadly, confidentiality agreements still cloud even PredCo's early days, but the continued profitability of the company indicates that whatever it is doing, it is doing it well. PredCo is one example of a general move towards Phynance, bringing well-trained mathematical physicists in to look at forecast problems in finance, traditionally the statistician's preserve. Is the stock market chaotic? Current evidence suggests our best models of the markets are fundamentally stochastic, so the answer is 'no'.
But neither are they linear. To take one example, the study of chaos has contributed to fascinating developments at the interface of weather and economics: many markets are profoundly affected by weather, and some are even affected by weather forecasts. Many analysts so fear being fooled by randomness that they are religiously committed to fairly simple, purely stochastic models, and ignore the obvious fact that some ensemble weather forecasts contain useful information. For energy companies, information on the uncertainty of weather information is being used daily to avoid 'chasing the forecast': buying high, then selling low, then buying high the same cubic metre of natural gas yet again as the weather forecast for next Friday's temperature jumps down, then up, then down again, taking the expected electricity demand for next Friday along with it at each jump. That fact has put speculators in hot pursuit of methods to forecast the next forecast. The study of chaos leads to efficiency beyond short-term profit; Phynance is making significant contributions to the improved distribution of perishable goods with weather-related demand, to ship, train, and truck transport, and to demand forecasting in general. Better probabilistic forecasts of chaotic fluctuations in wind and rain significantly increase our ability to use renewable energy, reducing the need to keep fossil fuel generators running on 'standby', except on days of truly low predictability.

### Retreating towards a simpler reality

Physical systems inspired the study of chaotic dynamical systems, and we now understand how our 21st-century incarnation of Laplace's demon could generate accountable probability forecasts of chaotic systems with her perfect model. Whether purely data-based or derived from today's 'Laws of Nature', the models we have to hand are imperfect. We must contend both with observational uncertainty and with model inadequacy.
Interpreting an ensemble forecast of the real world as if it were a perfect model probability forecast of a mathematical system is to make the most naive of forecasting blunders. Can we find a single real-world system in which chaos places the ultimate limit on our forecasts? The Earth's atmosphere/ocean system is a tough forecasting nut to crack; physicists avoid a complete retreat to mathematical models by examining simpler physical systems on which to break their forecasting procedures and theories of predictability. We will track the course of this retreat from the Earth's atmosphere to the last ditch, and then examine what lies there in some detail. Lorenz cited the laboratory 'dish pan' experiments of Raymond Hide in support of chaotic interpretations of his computer simulations in the early 1960s. Offspring of those experiments are still rotating in the Physics Department of Oxford University, where Peter Read provides the raw material for their data-based reconstructions. Thus far, probabilistic forecasts of these fluid systems remain very imperfect. Experimentalists around the globe have taken valuable data both from fluid systems and from mechanical systems, motivated by the chaotic nature of the corresponding physical models. Real pendulums tend to heat up, changing the 'fixed' parameters of simulation models while the system leaves the regions of state space on which data-based models were trained. Even dice wear down, a bit, on each throw. Such is the nature of the real world. Physical systems providing large volumes of data, low observational noise levels, and physically stationary conditions might prove more amenable to the tools of modern nonlinear data analysis. Ecosystems are right out. Fast, clean, and accurately instrumented lasers have proven rich sources, but we do not have accountable forecast models either here or when studying the dynamics of more exotic fluids like helium.
At the last ditch we find electronic circuits: arguably simple analogue computers. A manuscript reporting successful ensemble forecasts of these systems is likely to be rejected by professional referees for having taken too simple a system. So much greater the insight, then, when we _fail_ to generate accountable forecasts for these simplest of real-world systems. Figure 30 shows ensemble forecasts of observed voltages in a circuit constructed to mimic the Moore-Spiegel system. Forecasts from two different models are shown. In each panel, the dark solid line shows the target observations generated by the circuit, while each light line is an ensemble member; the forecasts start at time equals zero; the ensemble was formed using only observations taken before that time. The top two panels show results from Model One, while the bottom two show results from Model Two. Look at the two panels on the left, which show simultaneous forecasts from each model. Every member of the Model One ensemble runs away from reality without warning just before time 100, as shown in the upper panel; the Model Two ensemble in the lower panel manages to spread out at about the correct time (or is it a bit early?), and the diversity of this ensemble looks useful all the way to the end of the forecast. In this case, we could not know in advance which model would prove correct, but we can see where they began to diverge strongly from each other. In the panels on the right, both models fail at about the same time, in about the same way. In each case, it appears that the forecasts provide insight into the likely future observations, but that the point in the future when this insight fails is not well reflected by either ensemble system. How can we best interpret this diversity in terms of a forecast? Analysis of many forecasts from different initial conditions shows that, interpreted as probability forecasts, these ensembles are not accountable.
This seems to be a general result when using arguably chaotic mathematical models to forecast real-world systems. I know of no exceptions. Luckily, utility does not require extracting useful probability estimates.

**Figure 30. Ensemble forecasts of the Machete's Moore-Spiegel Circuit. The dark line shows the observations; the light lines are the ensemble members; the forecast starts at time zero. The two panels on the left show ensemble forecasts for the same data but made by two different models; note that the ensemble in the lower panel manages to catch the circuit even when the model in the upper panel loses it near time 100. Forecasts from a second initial condition by these same two models are shown in the two panels on the right, where the ensembles under both models fail at about the same time.**

### Odds: do we really have to take our models so seriously?

In academic mathematics, odds and probabilities are more or less identical. In the real world this is not the case. If we add up the probability of every possible event, then the sum of the probabilities should be one. For any particular set of odds-on, we can then define the _implied probability_ of an event from the odds on that event. If the sum of the implied probabilities is equal to one, then this set of odds is _probabilistic odds_. Outside mathematics lectures, probabilistic odds are rather hard to find in the real world. The related notion of 'fair odds', where the odds are fixed and one is given the option to take either side of a bet, suggests a similar sort of ivory tower 'wishful thinking'; implied probabilities from odds-against do not complement those from odds-on. The confusion at the heart of both conceptions comes largely from blurring the distinction between mathematical systems and the real-world systems they model. At the racetrack or in a casino, the implied probabilities sum to more than one. A European roulette wheel yields \(37/36\), while an American wheel yields \(38/36\).
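The roulette arithmetic can be sketched in a few lines of Python (an illustration of the definitions above; the function name is mine). For an event offered at decimal odds \(d\), the implied probability is \(1/d\); a single-number bet pays 35-to-1, i.e. decimal odds of 36, on each pocket.

```python
# Minimal sketch: implied probabilities from decimal odds, and why a
# roulette wheel's implied probabilities sum to more than one.

def implied_probabilities(decimal_odds):
    """Implied probability of each event is one over its decimal odds."""
    return [1.0 / d for d in decimal_odds]

european = [36.0] * 37   # 37 pockets (0-36), each paying decimal odds of 36
american = [36.0] * 38   # 38 pockets (0 and 00 both present)

print(sum(implied_probabilities(european)))  # 37/36, about 1.028
print(sum(implied_probabilities(american)))  # 38/36, about 1.056
```

The excess over one is the house's margin; the suggestion in the text is that a forecaster might use the same excess, honestly declared, to signal model inadequacy.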
In a casino this excess ensures profit; scientifically, we might exploit this same excess to communicate information about model inadequacy. Model inadequacy can steer us away from probability forecasts in a manner not dissimilar to the way in which uncertainty in the initial condition steers us away from the principle of least squares in nonlinear models. Theory for incorporating probability forecast systems into decision support by maximizing expected utility - or some other reflection of the user's goal - is well developed. A 'probability forecast' which would not be used as such in this setting should perhaps not be called a probability forecast at all. A theory for incorporating forecast systems which provide odds rather than probabilities for decision support could, no doubt, be constructed. Judd has already provided several worked examples. It appears that accepting the inadequacy of our own models, while being ignorant of the inadequacy of the models to which the competition has access, requires we aim for something short of fair odds. If an odds prediction system can cover its losses - breaking even when evaluated over all comers while covering its running costs - then we can say it generates _sustainable odds_. Sustainable odds then provide decision support which does not result (has not yet resulted) in catastrophe, nor instilled the desire to invest more in improving those odds in order either to gain greater market share or to cover running expenses. Ensembles over all the alternatives one can think of to sample might lead to sustainable odds, allowing the diversity within multi-model ensembles to estimate the impact of model inadequacy. The extent to which the sum of our implied probabilities exceeds one provides a manner to quantify model inadequacy. One wonders whether, as we come to understand some real-world system better and better, we can ever expect the implied probabilities of our odds forecasts to sum to one for _any_ physical system.
Moving to forecast systems which provide odds rather than probabilities releases our real-world decision support from unnatural constraints due to probabilities, which may be well-defined only in our mathematical systems. It is an awkward but inescapable fact that sustainable odds will depend both on the quality of your model and on that of the opposition. Decision making would be easy if accountable probability forecasts were on offer, but when model diversity cannot be translated into (a decision-relevant) probability, we have no access to probability forecasts. Pursuing risk management as if we did, for the sake of simplicity, is foolhardy. And while odds might prove useful in hourly or daily decision making, what are we to do in the climate change scenario, where it appears we have only one high-impact event and no truly similar test cases to learn from? We have reached the coal face of real-world scientific forecasting. The old seam of probability is growing thin and it is unclear exactly which direction we should dig in next. If chaotic dynamical systems have not provided us with a new shovel, they have at least given us a canary.

## Chapter 11 Philosophy in chaos

You don't have to believe everything you compute. Is there really anything new in chaos? There is an old joke about three baseball umpires discussing the facts of life within the game. The first umpire says 'I calls 'em as I see 'em.' The second umpire says 'I calls 'em as they are.' Finally, the third says 'They ain't, until I calls 'em.' The study of chaos tends to force us towards the philosophical position of the third umpire.

### Complications of chaos

Do the quantities we forecast exist only within the forecast models we construct? If so, then how might we contrast them with our observations? A forecast lies in the state space of our model and, while the corresponding observation is not in that state space, are these two 'subtractable'?
This is a mathematical version of the 'apples and oranges' problem: are the model state and the observation similar enough that we can meaningfully subtract one from the other to define a distance, to then call a forecast error? Or are they not? And if not, then how might we proceed? Evaluation of chaotic models has exposed a second fundamental complication that arises even in perfect nonlinear models with unknown parameter values: how do we determine the best values? If the model is linear, then we have several centuries of experience and theory which convincingly establish that the best values in practice are those that yield the closest agreement on the target data, where closest is defined in a least squares sense (smallest distance between the model and the target observations); likelihood is a useful thing to maximize. If our model is not linear, then our centuries of intuition often prove a distraction, if not an impediment to progress. Taking least squares is no longer optimal, and the very idea of 'accuracy' has to be rethought. This simple fact is as important as it is neglected. This problem is easily illustrated in the Logistic Map: given the correct mathematical formula and all the details of the noise model - random numbers with a bell-shaped distribution - using least squares to estimate \(a\) leads to systematic errors. This is not a question of too few data or insufficient computer power, it is the method that fails. We can compute the optimal least squares solution: its value for \(a\) is too small at all noise levels. This principled approach just does not apply to nonlinear models because the theorems behind the principle of least squares repeatedly assume bell-shaped distributions. The shape of these distributions is preserved by linear models, but _nonlinear models distort the bell-shape_, making least squares inappropriate. In practice, this 'wishful linear thinking' systematically underestimates the true parameter value at every noise level.
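The systematic error is easy to reproduce numerically. The following Python sketch is my own illustration, not the book's experiment: the trajectory length, noise level, and random seed are arbitrary choices. It generates a Logistic Map trajectory with \(a = 4\), adds bell-shaped observational noise, and fits \(a\) by one-step least squares; because the regressor itself is noisy, the estimate comes out below the true value, as the text claims.

```python
# Sketch (illustrative, not the book's own experiment): least-squares
# estimation of the Logistic Map parameter a from noisy observations
# is biased low, even with the correct model structure in hand.
import random

random.seed(1)
a_true, sigma, n = 4.0, 0.05, 5000

# Noise-free internal trajectory of x -> a*x*(1-x) ...
x = 0.3
traj = []
for _ in range(n + 1):
    traj.append(x)
    x = a_true * x * (1.0 - x)
# ... observed through additive bell-shaped (Gaussian) noise.
obs = [xi + random.gauss(0.0, sigma) for xi in traj]

# One-step least squares: minimize sum (s[t+1] - a*s[t]*(1-s[t]))^2,
# which has the closed form a = sum(s' * g) / sum(g^2) with g = s(1-s).
g = [s * (1.0 - s) for s in obs[:-1]]
a_hat = sum(sp * gi for sp, gi in zip(obs[1:], g)) / sum(gi * gi for gi in g)
print(a_hat)  # systematically below a_true = 4.0
```

No amount of extra data fixes this: the bias comes from noise entering the nonlinear regressor \(g\), not from sampling error.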
Recent (mis)interpretations of climate models have floundered due to similarly wishful linear thinking. Our 21st-century demon will be able to estimate \(a\) very accurately, but she will not be using least squares to do so! (She will be looking for shadows.) Philosophers have also wondered whether fractal intricacy might establish the existence of real numbers in nature, proving that irrational numbers exist even if we can only see a few of the leading bits. Strange attractors offer nothing to support such arguments that cannot be obtained from linear dynamical systems. On the other hand, chaos offers a new way to use both models and our observations to define variables in remarkable detail - if our models are good enough - via states along the shadow from an empirically adequate nonlinear model. If our model shadows the observations for an extended time, then all the shadowing states will fall into a very narrow range of values, providing a way to define values for observables like temperature to a precision beyond that at which our usual concept of temperature breaks down. We will never get to an irrational number, but an empirically adequate model could supply a definition of arbitrary accuracy, using the observations while placing the model into a role not unlike that of the third umpire. That said, the traditional connection between temperature and our measurements of it via a noise model remains safe until useful shadowing trajectories are shown to exist. Another philosophical quandary arises in terms of how to define the 'best' forecast in practice. Probabilistic forecasts provide a distribution as each forecast, while the target observation we verify against will always be a single event: when the forecast distribution differs from one forecast to the next, we have yet another 'apples and oranges' problem and can never evaluate even one of our forecast distributions as a distribution.
The success of our models tends to lull us towards the happy thought that mathematical laws govern the real-world systems of interest to us. Linear models formed a happy family. The wrong linear model can be close to the right linear model, and seen to be so, in a sense that does not apply to nonlinear models. It is not easy to see that an imperfect nonlinear model is 'close to' the right model given only observations: we can see that it allows long shadows, but if the two models have different attractors - and we know that the attractors of very similar mathematical models can be very different - then we do _not_ know how to make ensembles that produce accountable probability forecasts. We must reconsider how our nonlinear models might approach Truth, in the case that Truth can be encapsulated in some 'right' model. We have no scientific reason to believe that such a perfect model exists. Our philosopher might turn from muddy issues raised on the quest for Truth and contemplate the implications of there being nothing more than collections of imperfect models. What advice might she offer our physicist? If new computer power allows the generation of ensembles over everything we can think of (initial conditions, parameter values, models, compilers, computer architecture, and so on), how do we interpret the distributions that come out scientifically? Or expose the folly of hiding from these issues behind a single simulation from a particularly complicated ultra-high-resolution model? Lastly, note that when working with the wrong model, we may ask the wrong question. Who is who in la Tour's card game? The question assumes a model in which each player can be only a mathematician or a physicist or a statistician or a philosopher, and that there must be a representative of each discipline at the table. Perhaps this assumption is false. As real-world scientists, can each of our players take on every role? ### The burden of proof: what is chaotic, really? 
If we stay with mathematical standards of proof, then very few systems can be proven to be chaotic. The definition of mathematical chaos can only be applied to mathematical systems, so we cannot begin to prove a physical system is chaotic, or periodic for that matter. Nevertheless, it is useful to describe physical systems as periodic or chaotic as long as we do not confuse the mathematical models with the systems we use them to describe. When we have the model in hand, we can see whether it is deterministic or stochastic, but even after knowing it to be deterministic, proving it to be chaotic is non-trivial. Calculating Lyapunov exponents is a difficult task, and there are very few systems for which we can do this analytically. It took almost 40 years to establish a mathematical proof that the dynamics of the 1963 Lorenz System were chaotic, so the question regarding more complicated equations like those used for the weather is likely to remain open for quite some time. We cannot hope to defend a claim that a physical system is chaotic unless we discard the mathematicians' burden of proof, and with it the most common meaning of chaos. Nevertheless, if our best models of a physical system appear to be chaotic, if they are deterministic, appear to be recurrent, and suggest sensitive dependence by exhibiting the rapid growth of small uncertainties, then these facts provide a working definition of what it means for a physical system to be chaotic. We may one day find a better description of that physical system which does not have these properties, but that is the way of all science. In this sense, the weather is chaotic while the economy is not. Does this imply that if we were to add a so-called random number generator to our weather model we no longer believe real weather is chaotic? Not at all, as long as we only wish to employ a random number generator for engineering reasons, like accounting for defects in the finite computerized model.
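One of the very few systems where the answer is known in closed form is the Logistic Map with \(a = 4\), whose Lyapunov exponent is exactly \(\log 2\). A short Python sketch (mine, not from the book; the guard thresholds, starting point, and iteration counts are arbitrary choices) recovers it numerically by averaging \(\log|a(1 - 2x)|\), the log of the map's derivative, along a trajectory:

```python
# Sketch: numerically estimating the Lyapunov exponent of the Logistic
# Map x -> a*x*(1-x) by averaging log|f'(x)| = log|a*(1 - 2x)| along a
# single long trajectory. For a = 4 the exact answer is log(2).
import math

def lyapunov(a, x0=0.3, n=100_000, transient=100):
    """Average the log of the local stretching rate along a trajectory."""
    x = x0
    for _ in range(transient):
        x = a * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        # Guard against the trajectory landing exactly on x = 0.5,
        # where the derivative vanishes (a floating-point hazard).
        if abs(1.0 - 2.0 * x) < 1e-12:
            x += 1e-9
        total += math.log(abs(a * (1.0 - 2.0 * x)))
        x = a * x * (1.0 - x)
        if x <= 0.0 or x >= 1.0:
            x = 0.3  # re-inject if rounding kicks us off [0, 1]; a
                     # pragmatic engineering fix, not part of the theory
    return total / n

# Exact value for a = 4 is log 2, about 0.6931.
print(lyapunov(4.0))
```

Note how much engineering even this toy case needs; for the partial differential equations of a weather model, nothing analytic is available and even careful numerics is delicate.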
In a similar vein, the fact we cannot employ a true random number generator in our computer models does not imply we must consider the stock market deterministic. The study of chaos has laid bare the importance of distinguishing between our best models and the best way to construct computer simulations of those models. If our model structure is imperfect, our best models of a deterministic system might well turn out to be stochastic! Perhaps the most interesting question of all to come out of chaotic forecasting is the open question of a fourth modelling paradigm: we see our best model fail to shadow, we suspect that there is no way to fix this model, either within the deterministic modelling scheme of our physicist, or within the standard stochastic schemes of our statistician. Can further study of mathematical chaos suggest a synthesis that will give us access to models that can at least shadow physical systems? ### Shadows, chaos, and the future Our eyes once opened, we may pass on to a yet newer outlook of the world, but we can never go back to the old outlook. A. Eddington (1927) Mathematics is the ultimate science fiction. While mathematicians can happily limit their activities to domains where all their assumptions hold ('almost always'), physicists and statisticians must deal with the external world through the data to hand and the theories to mind. We must keep this difference in mind if we are going to use words like 'chaos' when speaking with mathematicians and scientists; a chaotic mathematical system is simply a different beast than a physical system we call chaotic. Mathematics proves; science struggles merely to describe. Failure to recognize this distinction has injected needless acrimony into the discussion. 
Neither side is 'winning' this argument, and as the previous generation slowly leaves the field, it is interesting to observe some members of the next generation adopt an ensemble approach: neither selecting nor merging but literally adopting multiple models _as a model_ and using them in unison. Rather than playing as adversaries in a contest, can our physicist, mathematician, statistician, and philosopher work as a team? The study of chaos helps us to see more clearly which questions make sense and which are truly nonsensical: the study of chaotic dynamics has forced us to accept that some of our goals are unreachable given the awkward properties of nonlinear systems. And given that our best models of the world are nonlinear - models for the weather, the economy, epidemics, the brain, the Moore-Spiegel circuit, even the Earth's climate system - this insight has implications beyond science, extending to decision support and policy making. Ideally, the insights of chaos and nonlinear dynamics will come to the aid of the climate modeller, who, when asked to answer a question she knows to be meaningless, is empowered to explain the current limits to our knowledge and communicate the available information. Even if model imperfections imply that there is no policy-relevant probability forecast, a better understanding of the underlying physical process has aided decision makers for ages. All difficult decisions are made under uncertainty; understanding chaos has helped us to provide better decision support. Significant economic progress has already been made in the energy sector, where the profitability of using information-rich weather ensembles has led to daily use of uncertainty information from trading floors of the markets to the control rooms of national electricity grids. Prophecy is difficult; it is never clear which context science will adopt next, but the fact that chaos has changed the goal posts may well be its most enduring impact on science.
This message needs to be introduced earlier in education; the role of uncertainty and the rich variety of behaviour that mathematically simple systems reveal is still largely unappreciated. Observational uncertainty is inextricably melded with model error, forcing us to re-evaluate what counts as a good model. Our old goal of minimizing least squares has been shown to mislead, but should we replace it with a search for shadows, for a model with good-looking behaviour, or for the ability to make more accountable probability forecasts? From our new vantage point, we can see more clearly which questions make sense, calling forth challenges to the foundational assumptions of mathematical physics and to applications of probability theory. Are our modelling failures due to our inability to select the correct answer from among the available options, or is there no suitable option on offer? How do we interpret simulations from models which are not empirically adequate? Regardless of our personal beliefs on the existence of Truth, chaos has forced us to rethink what it means to approximate Nature. The study of chaos has provided new tools: delay reconstructions that may yield consistent models even when we do not know the 'underlying equations', new statistics with which to describe dynamical systems quantitatively, new methods of forecasting uncertainty, and shadows that bridge the gaps between our models, our observations, and our noise. It has moved the focus from correlation to information, from accuracy to accountability, from artificially minimizing arguably irrelevant error to increasing utility. It rekindles debate on the status of objective probability: can we ever construct an operationally useful probability forecast, or are we forced to develop novel _ad hoc_ methods for using probabilistic information without probability forecasts? Are we quantifying our uncertainty in the future of the real world or exploring the diversity in our models?
Science seeks its own inadequacy; coping with constant uncertainty in science is not a weakness but a strength. Chaos has provided much new cloth for our study of the world, without providing any perfect models or ultimate solutions. Science is a patchwork, and some of the seams admit draughts. Early in the film _The Matrix_, Morpheus echoes the words of Eddington that open this last section: This is your last chance. After this, there is no going back. You take the blue pill and the story ends. You wake up in your bed and you believe whatever you want to believe. You take the red pill and you stay in Wonderland and I show you how deep the rabbit hole goes. Remember that all I am offering is the truth. Nothing more. Chaos is the red pill.

## Glossary

**butterfly effect**: An expression that encapsulates the idea that small differences in the present can result in large differences in the future.

**chaos** (**C**): A computer program that aspires to represent a chaotic mathematical system. In practice, all digital computerized dynamical systems are on or evolving towards a periodic loop.

**chaos** (**M**): A mathematical dynamical system which (a) is deterministic, (b) is recurrent, and (c) has sensitive dependence on initial state.

**chaos** (**P**): A physical system that we currently believe would be best modelled by a chaotic mathematical system.

**chaotic attractor**: An attractor on which the dynamics are chaotic.
A chaotic attractor may have a _fractal_ geometry or it may not; so there are _strange_ chaotic attractors and chaotic attractors that are not strange.

**conservative dynamical system**: A dynamical system in which a volume of _state space_ does not shrink as it is iterated forward. These systems cannot have _attractors_.

**delay reconstruction**: A _model state space_ constructed by taking time-delayed values of the same variable in place of observations of additional state variables.

**deterministic dynamics**: A dynamical system that can be iterated without recourse to a random number generator, whose initial state defines all future states under iteration.

**dissipative dynamical system**: A dynamical system for which, on average, a volume of _state space_ shrinks when iterated forward under the system. While the volume will tend to zero, it need not shrink to a point and may approach a quite complicated _attractor_.

**doubling time**: The time it takes an initial uncertainty to increase by a factor of two. The average doubling time is a measure of predictability.

**effectively exponential growth**: Growth in time which, when averaged into the infinite future, will appear to be exponential-on-average, but which may grow rather slowly, or even shrink, for long periods of time.

**ensemble forecast**: A forecast made by iterating a number of different initial states forward (perhaps with different parameter values, or even different models), thereby revealing the diversity of our model(s) and providing a lower bound on the likely impact of uncertainty in model-based forecasts.

**exponential growth**: Growth where the rate of increase in X is proportional to the value of X, so that as X gets larger, it grows even faster.

**fixed point**: A state of a dynamical system which stays put; a stationary point whose future value under the system is its current value.

**flow**: A dynamical system in which time is continuous.
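To make the idea of delay reconstruction concrete, here is a minimal sketch in Python. It builds model states from time-delayed values of a single observed variable, as the definition above describes; the function name `delay_vectors` and its parameters are illustrative choices, not from the text.

```python
# Sketch of delay reconstruction: assemble model states from
# time-delayed values of one observed variable, in place of
# observations of additional state variables.

def delay_vectors(series, dim, lag):
    """Return delay vectors (x[i], x[i+lag], ..., x[i+(dim-1)*lag])."""
    last = len(series) - (dim - 1) * lag
    return [tuple(series[i + j * lag] for j in range(dim))
            for i in range(last)]

# Suppose we can observe only one variable; here, a logistic-map series.
x, series = 0.3, []
for _ in range(12):
    series.append(x)
    x = 3.9 * x * (1.0 - x)

states = delay_vectors(series, dim=3, lag=1)
# Each reconstructed state packs three successive observations.
```

Each element of `states` is a point in a three-dimensional model state space built entirely from one time series.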
**fractal**: A self-similar collection of points or an object that is self-similar in an interesting way (more interesting than, say, a smooth line or plane). Usually, one requires a fractal set to have zero volume in the space in which it lives, as a line in two dimensions has no area, or a surface in three dimensions has no volume.

**geometric average**: The result of multiplying N numbers together and then taking the Nth root of the product.

**indistinguishable state**: One member of the collection of points which, given an observational _noise_ model, you would not expect to be able to rule out as having generated the observations actually generated by some target trajectory X. This collection is called the set of indistinguishable states of X and has nothing to do with any particular set of observations.

**infinitesimal**: A quantity smaller than any number you can name, but strictly greater than zero.

**iterate**: To apply the rule defining a dynamical _map_ once, moving the state forward one step.

**linear dynamical system**: A dynamical system in which sums of solutions are also solutions; more generally, one that allows superposition of solutions. (For technical reasons, we do not wish to say 'involves only linear rules'.)

**Lyapunov exponent**: A measure of the average speed with which _infinitesimally_ close states separate. It is called an exponent, since it is the logarithm of the average rate, which makes it easy to distinguish exponential-on-average growth (greater than zero) from exponential-on-average shrinking (negative). Note that slower-than-exponential growth, slower-than-exponential shrinking, and no growth at all are all combined into one value (zero).

**Lyapunov time**: One divided by the _Lyapunov exponent_; this number has little to do with the predictability of anything except in the most simplistic chaotic systems.
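The Lyapunov exponent and the geometric average meet in a standard computation, sketched below for the logistic map at r = 4: averaging the logarithm of the local stretching factor along an orbit is exactly taking the log of the geometric-average stretching rate. This is a textbook estimate, not a method prescribed by this glossary; the variable names and the choice of initial state are illustrative.

```python
import math

# Sketch: estimate the Lyapunov exponent of the logistic map
# x -> r*x*(1-x) at r = 4 by averaging log|f'(x)| along one orbit,
# i.e. the log of the geometric-average stretching per iteration.
# The exact value for r = 4 is known to be ln 2 ≈ 0.693.

r, x = 4.0, 0.2
for _ in range(1000):              # discard a transient
    x = r * x * (1.0 - x)

n, total = 50_000, 0.0
for _ in range(n):
    # |f'(x)| = |r(1 - 2x)|; tiny guard avoids log(0) if x ever hits 0.5
    total += math.log(abs(r * (1.0 - 2.0 * x)) + 1e-300)
    x = r * x * (1.0 - x)

lyapunov = total / n               # exponent per iteration, in natural log
```

A positive value signals exponential-on-average separation; the estimate here should land close to ln 2, though any finite orbit gives only a sample statistic, not the true value.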
**map**: A rule that determines a new state from the current state; in this kind of mathematical dynamical system, time takes on only discrete (integer) values, so the series of values of X are labelled as X_i, where i is often called 'time'.

**model**: A mathematical dynamical system of interest either due to its own dynamics or the fact that its dynamics are reminiscent of those of a physical system.

**noise** (**measurement**): Observational uncertainty; the idea that there is a 'True' value we are trying to measure, and repeated attempts provide numbers that are close to it but not exact. Noise is what we blame for the inaccuracy of our measurements.

**noise** (**dynamic**): Anything that interferes with the system, changing its future behaviour from that of the deterministic part of the model.

**noise model**: A mathematical model of noise used in the attempt to account for whatever is cast as real noise.

**non-constructive proof**: A mathematical proof that establishes that something exists without telling us how to find it.

**nonlinear**: Everything that is not linear.

**observational uncertainty**: Measurement error; uncertainties due to the inexactness of any observation of the state of the system.

**pandemonium**: _Transient dynamics_ that display characteristics suggestive of chaos, but only over a finite duration of time (and so not recurrent).

**parameters**: Quantities in our models that represent and define certain characteristics of the system modelled; parameters are generally held fixed as the model state evolves.

**Perfect Model Scenario** (**PMS**): A useful mathematical sleight-of-hand in which we use the model in hand to generate the data, and then pretend to forget that we have done so and analyse the 'data' using our model and tools. More generally, perhaps, any situation in which we have a perfect model of the mathematical structure of the system we are studying.
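A map, its iteration, and a fixed point can all be seen in a few lines of Python. The sketch below uses the logistic map with the parameter held fixed at r = 2.5, where x* = 1 - 1/r = 0.6 is a stable fixed point; the function name and the particular initial state are illustrative.

```python
# Sketch of a map: a rule giving the next state from the current one,
# with 'time' taking only integer values. Here the parameter r is held
# fixed while the state evolves. At r = 2.5 the logistic map has a
# stable fixed point x* = 1 - 1/r = 0.6.

def logistic_map(x, r=2.5):
    return r * x * (1.0 - x)

x = 0.2                      # X_0, the initial state
orbit = [x]
for i in range(50):          # each pass iterates once: X_{i+1} = f(X_i)
    x = logistic_map(x)
    orbit.append(x)

# After a short transient, the iterates settle onto the fixed point,
# whose future value under the map is its current value.
```

The orbit converges because the map's slope at x* has magnitude less than one, so each iteration halves the remaining distance to the fixed point.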
**periodic loop**: A series of states in a deterministic system which closes upon itself: the first state following from the last, which will repeat over and over forever. A periodic orbit or limit cycle.

**Poincare section**: The cross-section of a _flow_, recording the value of all variables when one variable happens to take on a particular value. Developed by Poincare to allow him to turn a flow into a _map_.

**predictability** (**M**): The property that allows construction of a useful forecast distribution that differs from random draws from the final (climatological) distribution; for systems with attractors, this implies a forecast better than picking points blindly from the attractor.

**predictability** (**P**): The property that allows current information to yield useful information about the future state of a system.

**prediction**: A statement about the future state of a system.

**probabilistic**: Everything that is not unequivocal; statements that admit uncertainty.

**random dynamics**: Dynamics such that the future state is not determined by the current state. Also called stochastic dynamics.

**recurrent trajectory**: A trajectory which will eventually return very close to its current state.

**sample-statistic** (**S**): A statistic (for example: the mean, the variance, the average _doubling time_, or the largest _Lyapunov exponent_) that is estimated from a sample of data. The phrase is used to avoid confusion with the true value of the statistic.

**sensitive dependence** (**P**): The rapid, exponential-on-average, separation of nearby states with time.

**shadowing** (**M**): A relationship between two perfectly known models with slightly different dynamics, where one can prove that one of the models will have some trajectory that stays near a given trajectory of the other model.
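Sensitive dependence is easy to witness numerically. The sketch below follows two states of the logistic map at r = 4 that begin a hair's breadth apart; the initial states and the perturbation size are illustrative choices.

```python
# Sketch of sensitive dependence: two nearby states of the logistic
# map at r = 4 separate exponentially-on-average until the gap between
# them is of order one and the two futures bear no resemblance.

r = 4.0
x, y = 0.1, 0.1 + 1e-10      # two almost indistinguishable initial states
gaps = []
for _ in range(100):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)
    gaps.append(abs(x - y))

# The tiny initial difference roughly doubles each iteration on
# average, so within a few dozen steps it has grown to order one.
```

The separation is only exponential on average: on some iterations the gap shrinks, yet over the run it grows until it is limited only by the size of the attractor.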
**shadowing** (**P**): A dynamical system is said to 'shadow' a set of observations when it can produce a trajectory that might well have given rise to those observations, given the expected observational _noise_; a shadow is a trajectory that is consistent with both the noise model and the observations.

**state**: A point in _state space_ that completely defines the current condition of the system.

**state space**: The space in which each point completely specifies the state, or condition, of a dynamical system.

**stochastic dynamics**: See _random dynamics_.

**strange attractor**: An _attractor_ with _fractal_ structure. A strange attractor may be chaotic or non-chaotic.

**time series** (**M, P, S**): A series of observations taken to represent the evolution of a system over time; the location of the nine planets, the number of sunspots, and the population of mice are examples. Also, the output of a mathematical model. Also (**S**): confusingly, the model itself.

**transient dynamics**: Ephemeral behaviour, as in one game of roulette, or one ball in either the Galton Board or the NAG Board, since eventually the ball stops. See _pandemonium_.
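A shadowing test against observations can be sketched directly from the definition: a candidate trajectory shadows a set of observations if every point is consistent with the noise model. Below, observations are manufactured from a known trajectory plus bounded noise, so that trajectory is guaranteed to shadow them; the function name, the uniform noise model, and the bound eps are illustrative assumptions, not from the text.

```python
import random

# Sketch: does a candidate trajectory 'shadow' a set of observations,
# given a bounded observational-noise model? Here we manufacture the
# observations from a known logistic-map trajectory plus noise.

random.seed(1)
r, x, eps = 4.0, 0.3, 0.01
trajectory, observations = [], []
for _ in range(50):
    trajectory.append(x)
    observations.append(x + random.uniform(-eps, eps))  # noisy measurement
    x = r * x * (1.0 - x)

def shadows(candidate, obs, noise_bound):
    """A candidate shadows obs if it is consistent with the noise model,
    i.e. it stays within noise_bound of every observation."""
    return all(abs(c - o) <= noise_bound for c, o in zip(candidate, obs))
```

The true trajectory passes this test by construction, while a trajectory displaced by more than the noise bound fails it; with real data, of course, no candidate is guaranteed to exist.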