Lyapunov Exponents in Haskell: Part 2

November 6, 2012

In the previous article, we looked at some of the details of a scheme for calculating the Lyapunov exponents of a system of differential equations. In this post, we’ll see how to implement this scheme in Haskell. The code we describe is available as a Gist here.

Automatic differentiation

Given a dynamical system

$$\frac{dx}{dt} = F(t, x(t)),$$

one thing we need to do is calculate the gradient of $F$ for given values of $t$ and $x$. We can do this most easily using automatic differentiation, a method that allows us to calculate derivatives of functions to machine precision without using finite differences. There are a number of Haskell libraries that provide automatic differentiation facilities, but we’ll use Edward Kmett’s ad package. The only thing we need is the jacobian function from the Numeric.AD module. Using this, we calculate the right hand side of our “basic equation”, $Q^T \nabla F \, Q$, using the function
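The listing itself isn't reproduced above, but a sketch of what such a function might look like follows. This is a reconstruction, not the Gist code: the hmatrix calls (tr, <>, fromLists) assume a reasonably recent hmatrix API, and rot and evalRot are the helpers described next.

```haskell
{-# LANGUAGE RankNTypes #-}

import Numeric.AD (auto, jacobian)
import Numeric.LinearAlgebra

-- Right hand side of the "basic equation": Q^T (grad F) Q, with grad F
-- computed by automatic differentiation at the current (t, x).
rhs :: (forall a. Floating a => a -> [a] -> [a])  -- F defining the system
    -> Int                                        -- system dimension n
    -> Double -> [Double] -> [Double]             -- t, x and the theta_k
    -> Matrix Double
rhs f n t x thetas = tr q <> gradF <> q
  where
    gradF = fromLists (jacobian (\xs -> f (auto t) xs) x)  -- grad F, exact
    q     = fromLists (evalRot (rot n) thetas)             -- numeric Q
```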

Here, the function rot calculates a simplified symbolic representation of the $n$-dimensional rotation matrix $Q$ and evalRot evaluates this symbolic representation for given numeric values of the angle variables.

The beauty of the automatic differentiation approach is that we are able to provide the functions defining the right hand side of the dynamical system we want to analyse as normal Haskell functions. The only slight downside is that we need to use rank-2 types to be able to express the type of the input functions correctly: the automatic differentiation approach works by evaluating these functions at a different type than the Double -> [Double] -> [Double] type we would use for straightforward numerical integration of the dynamical system, and we need to carry that generality in the type of the first parameter of the rhs function.

The State type is a simple record that carries around the state of the underlying dynamical system along our reference trajectory, plus the $\lambda_i(t)$ and $\theta_k(t)$:
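The record definition is elided here; a plausible reconstruction (the field names are guesses) is:

```haskell
import Numeric.LinearAlgebra (Vector)

-- State along the reference trajectory: the system state x(t), the
-- running integrals lambda_i(t) and the rotation angles theta_k(t).
data State = State { stX      :: Vector Double   -- n entries
                   , stLambda :: Vector Double   -- n entries
                   , stTheta  :: Vector Double   -- n(n-1)/2 entries
                   }
```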

There is a little bit of back-and-forth between list and Matrix representations to accommodate the requirements of the ad package, but we mostly try to work in terms of the hmatrix Vector and Matrix types, at least as long as we’re dealing with numeric values.

Representing rotation matrices

Much of what we need to do is tied up with how we represent the $Q$ matrix and how we simplify the $Q^T \dot{Q}$ expression that appears on the left hand side of our basic equation. We use a simplified symbolic representation of these matrices to allow us to simplify them once and for all before we start using them at every step of our numerical integration. The algebraic forms of $Q$ and $Q^T \dot{Q}$ are identical for all systems of the same size, and $Q^T \dot{Q}$ rapidly becomes quite complicated for larger $n$, so it’s worth dealing with this complexity up front.

Basic expression types

Entries in $Q$ and $Q^T \dot{Q}$ in general consist of sums of terms involving an integer coefficient and products of sines and cosines of the $\theta_k$ and (for $Q^T \dot{Q}$) a single time derivative of one of the $\theta_k$. We represent these expressions using some simple types. (We define Show instances for all of these for debugging, but I won’t show the code for those here.) First, simple factors, which are either $\cos \theta_k$ (C k), $\sin \theta_k$ (S k) or $\dot{\theta}_k$ (D k):
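A definition matching this description (the hand-written Show instances are omitted, as noted):

```haskell
-- A simple factor: cos theta_k (C k), sin theta_k (S k) or the time
-- derivative d(theta_k)/dt (D k).
data Factor = C Int | S Int | D Int deriving (Eq, Ord)
```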

Then, terms are products of factors, with an integer coefficient:
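Something like:

```haskell
-- A term: an integer coefficient times a product of factors, with the
-- factor list kept in canonical sorted order.
data Term = Term Int [Factor] deriving Eq
```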

Finally, expressions are sums of terms:
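For instance:

```haskell
-- An expression: a sum of terms.
newtype Expr = Expr [Term] deriving Eq
```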

We define multiplication of terms in the obvious kind of way (maintaining factors in a canonical sorted order):
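A sketch (termMult is an assumed name for this operation):

```haskell
import Data.List (sort)

-- Multiply terms by multiplying coefficients and merging the factor
-- lists, re-sorting to keep the canonical order.
termMult :: Term -> Term -> Term
termMult (Term c1 fs1) (Term c2 fs2) = Term (c1 * c2) (sort (fs1 ++ fs2))
```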

and then we have a Num instance for expressions which, apart from expression simplification, is pretty self-explanatory:
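It might look like this (simplify is the simplification function described below):

```haskell
instance Num Expr where
  Expr ts1 + Expr ts2 = simplify (Expr (ts1 ++ ts2))
  Expr ts1 * Expr ts2 = simplify (Expr [termMult t1 t2 | t1 <- ts1, t2 <- ts2])
  negate (Expr ts)    = Expr [Term (negate c) fs | Term c fs <- ts]
  fromInteger 0       = Expr []
  fromInteger i       = Expr [Term (fromInteger i) []]
  abs    = error "abs: not needed for Expr"
  signum = error "signum: not needed for Expr"
```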

And we have a few small helper functions for creating elementary expressions:
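For example (the helper names are assumptions):

```haskell
-- Elementary expressions: constants, cos theta_k, sin theta_k and
-- d(theta_k)/dt.
zeroE, oneE :: Expr
zeroE = Expr []
oneE  = Expr [Term 1 []]

cosE, sinE, dotE :: Int -> Expr
cosE k = Expr [Term 1 [C k]]
sinE k = Expr [Term 1 [S k]]
dotE k = Expr [Term 1 [D k]]
```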

Simplification of expressions

When we construct the $Q^T \dot{Q}$ matrix below, the expressions involving the $\theta_k$ become quite complicated, and it makes sense to simplify them as far as possible. Because of the structure of the entries in this matrix, we can restrict the simplifications performed to removal of zero terms, merging of terms involving identical factors, and (the most complex) simplification of trigonometric sums using the identity $\cos^2 \theta + \sin^2 \theta = 1$. Simplification is repeated, using these three steps, until no further changes can be made:
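In outline, it might look like this (dropZeros, mergeTerms and trigSimp are assumed names for the three steps, trigSimp being the trigonometric reduction discussed next):

```haskell
import Data.List (sortBy)
import Data.Ord (comparing)

-- Iterate the three simplification steps to a fixed point.
simplify :: Expr -> Expr
simplify e = let e' = trigSimp (mergeTerms (dropZeros e))
             in if e' == e then e else simplify e'

-- Remove terms with a zero coefficient.
dropZeros :: Expr -> Expr
dropZeros (Expr ts) = Expr [t | t@(Term c _) <- ts, c /= 0]

-- Merge terms with identical factor lists by adding their coefficients.
mergeTerms :: Expr -> Expr
mergeTerms (Expr ts) = Expr (go (sortBy (comparing factors) ts))
  where
    factors (Term _ fs) = fs
    go (Term c1 fs1 : Term c2 fs2 : rest)
      | fs1 == fs2 = go (Term (c1 + c2) fs1 : rest)
    go (t : rest)  = t : go rest
    go []          = []
```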

The most complex simplification step is the reduction of pairs of terms, one involving $\cos^2 \theta_k$ and the other $\sin^2 \theta_k$. We only do one simplification step here, since simplify repeats simplification steps until we reach a fixed point. The approach is pretty brute force, simply searching for a pair of matching terms (i.e. two terms with identical factors except that one term has a $\cos^2 \theta_k$ factor and the other a $\sin^2 \theta_k$ factor) and merging them.
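One way to write that single step (trigSimp and without2 are assumed names; the Gist's implementation may differ in detail):

```haskell
import Data.List (delete)
import Data.Maybe (listToMaybe)

-- One trigonometric reduction step: find two terms with equal
-- coefficients whose factors agree except that one has cos^2 theta_k
-- where the other has sin^2 theta_k, and merge them using
-- cos^2 + sin^2 = 1.
trigSimp :: Expr -> Expr
trigSimp (Expr ts) = maybe (Expr ts) Expr (step ts)
  where
    step (t : rest) = case partner t rest of
      Just (merged, rest') -> Just (merged : rest')
      Nothing              -> (t :) <$> step rest
    step [] = Nothing

    -- Look among the remaining terms for a sin^2 partner to a cos^2 term.
    partner t@(Term _ fs) rest =
      listToMaybe [ (base, delete u rest)
                  | k <- [k' | C k' <- fs]
                  , Just base <- [without2 (C k) t]
                  , u <- rest
                  , without2 (S k) u == Just base ]

    -- Remove two copies of a factor from a term, if present.
    without2 f (Term c fs)
      | length (filter (== f) fs) >= 2 = Just (Term c (delete f (delete f fs)))
      | otherwise                      = Nothing
```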

Other expression manipulations

As well as the operations implemented by the Num instance for Expr, we also need to be able to differentiate expressions, which we do symbolically using linearity and the product (Leibniz) rule:
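A sketch of such a differentiation function (diff is an assumed name; only cos and sin factors ever need differentiating, since the entries of $Q$ contain no time derivatives):

```haskell
-- Symbolic time derivative: linearity over the terms of an expression,
-- and the product rule within each term.
diff :: Expr -> Expr
diff (Expr ts) = sum (map diffTerm ts)
  where
    diffTerm (Term c fs) =
      sum [ Expr [Term c (pre ++ post)] * dFactor f
          | i <- [0 .. length fs - 1]
          , let (pre, f : post) = splitAt i fs ]
    -- Derivatives of the basic factors.
    dFactor (C k) = Expr [Term (-1) [S k, D k]]  -- (cos)' = -sin . theta'
    dFactor (S k) = Expr [Term 1 [C k, D k]]     -- (sin)' =  cos . theta'
    dFactor (D k) = error "diff: second derivatives never arise"
```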

and we also need to be able to find the numerical value of an expression given values for the $\theta_k$:
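For instance (evalExpr is an assumed name; it is only ever applied to expressions free of $\dot{\theta}_k$ factors):

```haskell
-- Numerical value of an expression for given theta_k (indexed from 1).
evalExpr :: [Double] -> Expr -> Double
evalExpr thetas (Expr ts) = sum (map evalTerm ts)
  where
    evalTerm (Term c fs) = fromIntegral c * product (map evalFactor fs)
    evalFactor (C k) = cos (thetas !! (k - 1))
    evalFactor (S k) = sin (thetas !! (k - 1))
    evalFactor (D k) = error "evalExpr: theta-dot factor"
```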

Expression matrices

All the definitions above deal with individual expressions, but most of what we want to do involves matrices of expressions. We use a simple list representation for these matrices since none of them will be very large. We call this type Mat to avoid conflicts with hmatrix types. We use this type only for our symbolic manipulations to build the $T(\theta)$ matrix used to calculate the $\dot{\theta}_k$.

We define a Functor instance for Mat and a few basic matrix operations: transpose, dot products, and matrix multiplication (represented by the operator %*%):
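These definitions might look like the following (mTranspose and dot are assumed names, kept distinct from hmatrix's own functions):

```haskell
import Data.List (transpose)

-- Small list-of-rows matrix type for symbolic work, kept distinct from
-- the hmatrix numeric types.
newtype Mat a = Mat [[a]] deriving Eq

instance Functor Mat where
  fmap f (Mat rows) = Mat (map (map f) rows)

mTranspose :: Mat a -> Mat a
mTranspose (Mat rows) = Mat (transpose rows)

dot :: Num a => [a] -> [a] -> a
dot xs ys = sum (zipWith (*) xs ys)

-- Matrix multiplication in terms of row/column dot products.
(%*%) :: Num a => Mat a -> Mat a -> Mat a
Mat rs %*% b = Mat [[r `dot` c | c <- cs] | r <- rs]
  where Mat cs = mTranspose b
```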

Rotation matrices

Given the above definitions, representing elementary rotation matrices (Givens rotations) is simple. The arguments to the elemRot function are the system dimension n and a tuple ((i,j),k) giving the indices of the rotation, i.e. saying which $\theta_{ij} = \theta_k$ is being used, and hence which two axes are involved in the 2-dimensional elementary rotation.
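A sketch, with the sign convention chosen to reproduce the $Q^T \dot{Q}$ outputs shown later (the Gist may use a different but equivalent convention):

```haskell
-- Givens rotation through theta_k in the (i, j) plane, as an n x n
-- symbolic matrix.
elemRot :: Int -> ((Int, Int), Int) -> Mat Expr
elemRot n ((i, j), k) = Mat [[entry r c | c <- [1 .. n]] | r <- [1 .. n]]
  where
    entry r c
      | (r, c) == (i, i) || (r, c) == (j, j) = cosE k
      | (r, c) == (i, j)                     = sinE k
      | (r, c) == (j, i)                     = negate (sinE k)
      | r == c                               = oneE
      | otherwise                            = zeroE
```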

The full rotation matrix for dimension n is then generated by
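a function along these lines (the ordered product follows the idxs labelling described next):

```haskell
import Data.MemoTrie (memo)

-- Full rotation matrix: the ordered product of all n(n-1)/2 elementary
-- rotations, memoised on n since it is reused at every integration step.
rot :: Int -> Mat Expr
rot = memo rot'
  where rot' n = foldl1 (%*%) (map (elemRot n) (idxs n))
```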

We memoize the result of this function (using the MemoTrie package, but any other memoizing approach would work) to prevent recalculation of these potentially complicated values. We need an ordering of the matrices, which we generate using the idxs function. Throughout the code, we need to refer to elements of the lower-triangular part of matrices (all the matrices we deal with are either symmetric or skew-symmetric, so we deal with the diagonal and the lower triangular elements separately). For an $n \times n$ matrix, there are $n(n-1)/2$ elements in the lower-triangular part, which we label with integers $1 \dots n(n-1)/2$ in the order given here:
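Running down successive columns of the lower triangle, this ordering can be written as:

```haskell
-- Label lower-triangular index pairs (i, j), i > j, with k = 1, 2, ...,
-- column by column: (2,1), (3,1), ..., (n,1), (3,2), ...
idxs :: Int -> [((Int, Int), Int)]
idxs n = zip [(i, j) | j <- [1 .. n - 1], i <- [j + 1 .. n]] [1 ..]
```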

Given a matrix of expressions, we also provide a convenience function to convert from expression form to numeric values by passing in a vector of angles:
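Roughly:

```haskell
-- Evaluate a symbolic matrix at given angle values, giving a plain
-- list-of-rows numeric matrix.
evalRot :: Mat Expr -> [Double] -> [[Double]]
evalRot m thetas = rows
  where Mat rows = fmap (evalExpr thetas) m
```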

Dealing with the left hand side

Given the above matrix and expression types, we can easily calculate a symbolic representation of $Q^T \dot{Q}$:
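In sketch form (again memoised on the dimension, like rot):

```haskell
import Data.MemoTrie (memo)

-- Symbolic Q^T Qdot, simplified entry by entry.
qtqdot :: Int -> Mat Expr
qtqdot = memo qtqdot'
  where
    qtqdot' n = fmap simplify (mTranspose q %*% fmap diff q)
      where q = rot n
```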

We can test this in GHCi:

Prelude> :load lyapunov.hs
[1 of 1] Compiling LyapunovExponents ( lyapunov.hs, interpreted )
Ok, modules loaded: LyapunovExponents.
*LyapunovExponents> qtqdot 2
[0,-D1]
[D1,0]

showing that for a 2-dimensional system,

$$Q^T \dot{Q} = \begin{pmatrix} 0 & -\dot{\theta}_1 \\ \dot{\theta}_1 & 0 \end{pmatrix}$$

*LyapunovExponents> qtqdot 3
[0,-C2 C3 D1 - S3 D2,C2 S3 D1 - C3 D2]
[C2 C3 D1 + S3 D2,0,-S2 D1 - D3]
[-C2 S3 D1 + C3 D2,S2 D1 + D3,0]

so that for a 3-dimensional system,

$$Q^T \dot{Q} = \begin{pmatrix} 0 & -\cos \theta_2 \cos \theta_3 \dot{\theta}_1 - \sin \theta_3 \dot{\theta}_2 & \cos \theta_2 \sin \theta_3 \dot{\theta}_1 - \cos \theta_3 \dot{\theta}_2 \\ \cos \theta_2 \cos \theta_3 \dot{\theta}_1 + \sin \theta_3 \dot{\theta}_2 & 0 & -\sin \theta_2 \dot{\theta}_1 - \dot{\theta}_3 \\ -\cos \theta_2 \sin \theta_3 \dot{\theta}_1 + \cos \theta_3 \dot{\theta}_2 & \sin \theta_2 \dot{\theta}_1 + \dot{\theta}_3 & 0 \end{pmatrix}$$

and finally, just to demonstrate the increasing complexity of $Q^T \dot{Q}$ as $n$ gets larger:

*LyapunovExponents> qtqdot 4
[0,-S5 D3 - C3 C5 S4 D2 - C2 C3 C4 C5 D1,-C3 C4 C6 D2 + C2 C3 C6 S4 D1 - C5 S6 D3
    + C3 S4 S5 S6 D2 + C2 C3 C4 S5 S6 D1,C3 C4 S6 D2 - C2 C3 S4 S6 D1 - C5 C6 D3
    + C3 C6 S4 S5 D2 + C2 C3 C4 C6 S5 D1]
[S5 D3 + C3 C5 S4 D2 + C2 C3 C4 C5 D1,0,-C5 C6 D4 - S6 D5 - C5 C6 S2 D1 - C5 C6 S5 D6
    - S3 S4 S6 D2 - C2 C4 S3 S6 D1 + C4 C6 S3 S5 D2 - C2 C6 S3 S4 S5 D1 + C5 C6 S5 D6,
    C5 S6 D4 - C6 D5 + C5 S2 S6 D1 + C5 S5 S6 D6 - C6 S3 S4 D2 - C2 C4 C6 S3 D1
    - C4 S3 S5 S6 D2 + C2 S3 S4 S5 S6 D1 - C5 S5 S6 D6]
[C3 C4 C6 D2 - C2 C3 C6 S4 D1 + C5 S6 D3 - C3 S4 S5 S6 D2 - C2 C3 C4 S5 S6 D1,
    C5 C6 D4 + C5 C6 S2 D1 + S3 S4 S6 D2 + S6 D5 + C2 C4 S3 S6 D1 - C4 C6 S3 S5 D2
    + C2 C6 S3 S4 S5 D1,-C6 S6 D6 + C5 S5 S6 S6 D5 + C6 S6 D6 - C5 S5 S6 S6 D5,
    -S5 D4 - D6 - S2 S5 D1 - C4 C5 S3 D2 + C2 C5 S3 S4 D1]
[-C3 C4 S6 D2 + C2 C3 S4 S6 D1 + C5 C6 D3 - C3 C6 S4 S5 D2 - C2 C3 C4 C6 S5 D1,
    -C5 S6 D4 - C5 S2 S6 D1 + C6 S3 S4 D2 + C6 D5 + C2 C4 C6 S3 D1 + C4 S3 S5 S6 D2
    - C2 S3 S4 S5 S6 D1,S5 D4 + S2 S5 D1 + D6 + C4 C5 S3 D2 - C2 C5 S3 S4 D1,
    C6 S6 D6 + C5 C6 C6 S5 D5 - C6 S6 D6 - C5 C6 C6 S5 D5]

In order to extract explicit values for the $\dot{\theta}_k$, we need to construct the matrix $T(\theta)$, such that $T(\theta) \dot{\theta} = L$, where $L$ is a vector of the lower triangular entries of $Q^T \nabla F \, Q$ in the order defined by the idxs function. The $T$ matrix is calculated by the following (memoized) function:
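A sketch of such a function, reading off the coefficient of each $\dot{\theta}_c$ (i.e. each D c factor) from the lower-triangular entries of the symbolic $Q^T \dot{Q}$ (coeffOf is an assumed name):

```haskell
import Data.MemoTrie (memo)

-- Build T(theta): row r is lower-triangular entry number r of Q^T Qdot,
-- and column c holds the coefficient of the factor D c in that entry.
tMatrix :: Int -> Mat Expr
tMatrix = memo tMatrix'
  where
    tMatrix' n = Mat [[coeffOf k e | k <- [1 .. m]] | e <- lowerTri (qtqdot n)]
      where
        m = n * (n - 1) `div` 2
        lowerTri (Mat rows) = [rows !! (i - 1) !! (j - 1) | ((i, j), _) <- idxs n]
        -- Coefficient of D k: keep terms containing it, with D k removed.
        coeffOf k (Expr ts) =
          Expr [Term c (filter (/= D k) fs) | Term c fs <- ts, D k `elem` fs]
```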

and we find:

*LyapunovExponents> tMatrix 2
[1]
*LyapunovExponents> tMatrix 3
[C2 C3,S3,0]
[-C2 S3,C3,0]
[S2,0,1]
*LyapunovExponents> tMatrix 4
[C2 C3 C4 C5,C3 C5 S4,S5,0,0,0]
[-C2 C3 C6 S4 - C2 C3 C4 S5 S6,C3 C4 C6 - C3 S4 S5 S6,C5 S6,0,0,0]
[C5 C6 S2 + C2 C4 S3 S6 + C2 C6 S3 S4 S5,S3 S4 S6 - C4 C6 S3 S5,0,C5 C6,S6,0]
[C2 C3 S4 S6 - C2 C3 C4 C6 S5,-C3 C4 S6 - C3 C6 S4 S5,C5 C6,0,0,0]
[-C5 S2 S6 + C2 C4 C6 S3 - C2 S3 S4 S5 S6,C6 S3 S4 + C4 S3 S5 S6,0,-C5 S6,C6,0]
[S2 S5 - C2 C5 S3 S4,C4 C5 S3,0,S5,0,1]

For the code as presented here, compiled with optimisation on using GHC, calculating the $10 \times 10$ $T(\theta)$ for a 5-dimensional system takes a bit more than 2 minutes using a single core on my machine. There is reason to believe that it might still be practical to use this method for larger systems though – I’ve made no attempt at all to make my symbolic computations efficient, and it might be possible to develop some sort of recurrence relation directly relating $T(\theta)$ for an $(n+1)$-dimensional system to $T(\theta)$ for an $n$-dimensional system, which would make the computation of $T$ quicker. (We can also build reduced systems for evaluating a subset of the Lyapunov spectrum: more on this in a later article, if I can get it to work!)

Putting things together

Given the above elements, we can now write a function that computes the full right hand side derivative for numerical integration of the state of our dynamical system supplemented with the $\lambda_i$ and $\theta_k$. We calculate $\dot{x}$ for the main system state using the function f passed in, then calculate the right hand side of our “basic equation”, $Q^T \nabla F \, Q$, using the rhs function. The $\dot{\lambda}_i$ are the diagonal entries of this matrix. Finally, extracting the lower triangular entries from $Q^T \nabla F \, Q$ and building the $T(\theta)$ matrix allows us to determine the $\dot{\theta}_k$.
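Putting that recipe into code might look like this (fullRhs is an assumed name; the flat state layout [x, lambda, theta] is an assumption, and hmatrix's <\> is used to solve the linear system for the angle derivatives):

```haskell
{-# LANGUAGE RankNTypes #-}

import Numeric.LinearAlgebra

-- Derivative of the flattened extended state [x, lambda, theta].
fullRhs :: (forall a. Floating a => a -> [a] -> [a])
        -> Int -> Double -> [Double] -> [Double]
fullRhs f n t s = xdot ++ lamdot ++ thdot
  where
    (x, rest) = splitAt n s
    thetas    = drop n rest            -- skip the lambda_i
    xdot      = f t x                  -- the underlying system
    basic     = rhs f n t x thetas     -- Q^T (grad F) Q
    lamdot    = toList (takeDiag basic)
    -- Solve T(theta) . thetadot = L for the angle derivatives.
    tm        = fromLists (evalRot (tMatrix n) thetas)
    lower     = fromList [basic `atIndex` (i - 1, j - 1) | ((i, j), _) <- idxs n]
    thdot     = toList (tm <\> lower)
```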

We then need a couple of helper functions to help interface with the ODE solvers in the Numeric.GSL.ODE package:
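These helpers might amount to no more than setting up and flattening the extended state (stateToList is an assumed name; initialState is the function referred to below):

```haskell
import Numeric.LinearAlgebra

-- Set up the extended initial state (lambdas and thetas start at zero)
-- and flatten it for the solver.
initialState :: [Double] -> State
initialState x0 = State (fromList x0) (konst 0 n) (konst 0 m)
  where n = length x0
        m = n * (n - 1) `div` 2

stateToList :: State -> [Double]
stateToList (State xv lv tv) = toList xv ++ toList lv ++ toList tv
```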

and we’re ready to go with the main public interface to the code:
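A sketch of that interface, assuming the fullRhs and stateToList helpers above (dividing the integrated $\Lambda_i(t)$ by $t$ to get the running exponent estimates, which are undefined in the first row if the times start at zero):

```haskell
{-# LANGUAGE RankNTypes #-}

import Numeric.GSL.ODE (odeSolve)
import Numeric.LinearAlgebra

-- Integrate the extended system, returning the times and a matrix whose
-- columns are t, the x_i, the Lyapunov exponent estimates and the theta_k.
lyapunov :: (forall a. Floating a => a -> [a] -> [a])
         -> State -> Vector Double -> (Vector Double, Matrix Double)
lyapunov f s0 ts = (ts, fromBlocks [[asColumn ts, xs, lams, ths]])
  where
    n    = size (stX s0)
    cols = toColumns (odeSolve (fullRhs f n) (stateToList s0) ts)
    (xcols, rest)     = splitAt n cols
    (lamcols, thcols) = splitAt n rest
    xs   = fromColumns xcols
    -- Running exponent estimates Lambda_i(t) / t.
    lams = fromColumns (map (/ ts) lamcols)
    ths  = fromColumns thcols
```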

Called with a function f, an initial state for the full system (produced by the initialState function) and a vector of time values, this produces as output a matrix with the time values in the first column, the values of the variables of the ODE system in the next $n$ columns, estimates for the Lyapunov exponents of the system in the next $n$ columns and the values of the angles $\theta_k$ in the last $n(n-1)/2$ columns.

Two examples

The driven van der Pol oscillator

Let’s first look at the same system as we treated by hand in the previous post:

$$\dot{z}_1 = z_2$$

$$\dot{z}_2 = -d (1 - z_1^2) z_2 - z_1 + b \cos \omega t$$

where $d$, $b$ and $\omega$ are real constants. As before, we will put $d = -5$, $b = 5$ and $\omega = 2.466$. In Haskell:
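The definition is an ordinary Haskell function, kept polymorphic so it can be used at the general type required by rhs:

```haskell
-- Driven van der Pol oscillator: z1' = z2,
-- z2' = -d (1 - z1^2) z2 - z1 + b cos(omega t).
vdp :: Floating a => a -> [a] -> [a]
vdp t [z1, z2] = [z2, -d * (1 - z1 * z1) * z2 - z1 + b * cos (omega * t)]
  where d = -5; b = 5; omega = 2.466
vdp _ _ = error "vdp: two-dimensional system"
```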

One would then hope that we could type something like the following in GHCi to see an estimate for the first Lyapunov exponent of the system:

GHCi, version 7.4.2: http://www.haskell.org/ghc/  :? for help
...> :load VDP.hs Numeric/LyapunovExponents.hs
...> :m + Numeric.LyapunovExponents Numeric.LinearAlgebra
...> let ic = initialState [1,1]
...> let ts = linspace 1001 (0,100) :: Vector Double
...> let soln = lyapunov vdp ic ts
...> (snd soln) @@> (1001,4)
Bus error

Oops. This may work on a 32-bit system (I’ve not tried it), but on a 64-bit system, you will get bitten by this bug. It’s likely to be fixed in GHC 7.8.1, but it’s kind of a nasty thing to have to deal with, so it’s no surprise that it’s been punted a few times.

It’s only a problem in GHCi though, not the GHC compiler, so we can avoid the problem by compiling a little program like this:
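Such a program might be no more than the following (the module layout follows the GHCi session above; the output file name is made up, and the results would then be plotted from that file):

```haskell
module Main where

import Numeric.LinearAlgebra
import Numeric.LyapunovExponents   -- the module developed in this post
import VDP (vdp)                   -- the van der Pol definition above

main :: IO ()
main = do
  let ic = initialState [1, 1]
      ts = linspace 1001 (0, 100) :: Vector Double
      (_, soln) = lyapunov vdp ic ts
  -- Dump t, x, Lyapunov exponent estimates and angles for plotting.
  writeFile "vdp-lyapunov.dat" $
    unlines (map (unwords . map show) (toLists soln))
```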

Compiling and running this works fine, and here are graphs of the Lyapunov exponent estimates:

First LE of van der Pol oscillator

Second LE of van der Pol oscillator

These results are identical to those from the manual calculation in the previous post ($\lambda_1 \approx 0.092 \pm 0.005$, $\lambda_2 \approx -6.828 \pm 0.013$), values which agree with other calculations in the literature [1].

The Lorenz system

Next, let’s look at the Lorenz system

$$\frac{dx}{dt} = \sigma (y - x)$$

$$\frac{dy}{dt} = x (\rho - z) - y$$

$$\frac{dz}{dt} = x y - \beta z$$

with the “traditional” parameter choices $\sigma = 10$, $\rho = 28$ and $\beta = 8/3$. The Haskell representation of this system is:
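Again, just an ordinary polymorphic function:

```haskell
-- Lorenz system with sigma = 10, rho = 28, beta = 8/3; the time
-- argument is unused since the system is autonomous.
lorenz :: Floating a => a -> [a] -> [a]
lorenz _ [x, y, z] = [ sigma * (y - x)
                     , x * (rho - z) - y
                     , x * y - beta * z ]
  where sigma = 10; rho = 28; beta = 8 / 3
lorenz _ _ = error "lorenz: three-dimensional system"
```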

and, as before, we use a little freestanding program to integrate the equations governing the Lyapunov exponents to get around the GHCi bug:

Here are the results:

First LE of Lorenz system

Second LE of Lorenz system

Third LE of Lorenz system

The estimates of the Lyapunov exponents are $\lambda_1 \approx 0.897 \pm 0.003$, $\lambda_2 \approx 0.000 \pm 0.001$ and $\lambda_3 \approx -14.560 \pm 0.003$, results that again are consistent with other calculation methods.

Some comments

The code presented here is no more than a proof of concept:

  • The symbolic calculations are not done in a particularly efficient way, and we might hope, in particular, to do a much better job of calculating the entries in the $Q^T \dot{Q}$ matrix that we need for determining the $\dot{\theta}_k$ – at least by memoizing these calculations, they only get done once;

  • We have no way of calculating only a subset of the Lyapunov spectrum, which should be possible using this method, and is the only practical way to proceed for larger systems;

  • We have glossed over some important technical issues, including the question of whether the $T(\theta)$ matrix is always nonsingular – in general, this is a complicated question and there are more sophisticated schemes that exist to ensure that our approach remains well-founded: for now, we just trust to luck;

  • We need to calculate the Jacobian $\nabla F$ in full, which is impractical for larger systems, meaning that we might want to try to use some sort of matrix-free method (this is tricky, but is worth it for larger systems: even for the small systems considered here, a large fraction of the compute time goes into evaluations of $\nabla F$).

Given all those caveats, the code works (although it’s slow), and was pretty straightforward to write in Haskell. Haskell really is nice for symbolic calculations, and the ability to implement things like automatic differentiation directly in the language itself is a real plus.


  1. For example, S. Habib and R. D. Ryne, Phys. Rev. Lett. 74, 70 (1995).