
In this comment I wrote:

...default SciPy integrator, which I'm assuming only uses symplectic methods.

in which I was referring to SciPy's odeint, which uses either a "non-stiff (Adams) method" or a "stiff (BDF) method". According to the source:

def odeint(func, y0, t, args=(), Dfun=None, col_deriv=0, full_output=0,
           ml=None, mu=None, rtol=None, atol=None, tcrit=None, h0=0.0,
           hmax=0.0, hmin=0.0, ixpr=0, mxstep=0, mxhnil=0, mxordn=12,
           mxords=5, printmessg=0):
    """
    Integrate a system of ordinary differential equations.

    Solve a system of ordinary differential equations using lsoda from the
    FORTRAN library odepack.

    Solves the initial value problem for stiff or non-stiff systems
    of first order ode-s::
        dy/dt = func(y, t0, ...)
    where y can be a vector.
    """

Here is an example where I propagate a satellite's orbit around the Earth for three months, just to show that it precesses as expected.
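For a self-contained flavor of that kind of run, here is a minimal two-body sketch with odeint (the GM value and the circular orbit are my own assumptions; unlike the linked example, this omits the J2 perturbation, so it will not show the precession):

```python
import numpy as np
from scipy.integrate import odeint

GM = 398600.4418  # Earth's gravitational parameter, km^3/s^2 (assumed value)

def deriv(state, t):
    """state = [x, y, z, vx, vy, vz]; two-body point-mass gravity only."""
    r, v = state[:3], state[3:]
    a = -GM * r / np.linalg.norm(r) ** 3
    return np.concatenate([v, a])

# circular orbit at ~400 km altitude, propagated for one day
r0 = np.array([6778.0, 0.0, 0.0])                # km
v0 = np.array([0.0, np.sqrt(GM / 6778.0), 0.0])  # km/s, circular speed
t = np.linspace(0.0, 86400.0, 2001)              # s
sol = odeint(deriv, np.concatenate([r0, v0]), t)

radius = np.linalg.norm(sol[:, :3], axis=1)
print(radius.min(), radius.max())  # the orbit radius stays close to 6778 km
```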

I believe that non-symplectic integrators have the undesirable property that they tend not to conserve energy (or other conserved quantities), which makes them a poor fit for orbital mechanics, for example. But I'm not exactly sure what it is that makes a symplectic integrator symplectic.

Is it possible to explain what the property is (that makes a symplectic integrator symplectic) in a straightforward and (fairly) easy to understand but not inaccurate way? I'm asking from the point of view of how the integrator functions internally, rather than how it performs in testing.

And is my suspicion correct that odeint does use only symplectic integrators?

  • As a strong rule of thumb, you should only hope a black box integrator is symplectic if it requires you to separate out position and momentum equations.
    – origimbo
    Commented Mar 26, 2018 at 10:44
    @origimbo Thanks. These do, and it looks like odeint is a Python wrapper for fairly old, established, and well-referenced source codes (edited question; references ODEPACK and LSODA), although I certainly admit to using it in black-box mode. My linked example shows that the 6D state vector consists of three positions and three velocities.
    – uhoh
    Commented Mar 26, 2018 at 11:07
  • The ODE integrators in ODEPACK and LSODA are not symplectic integrators.
    Commented Mar 26, 2018 at 14:01
  • Here's a worked example comparing two very simple solvers, Euler and symplectic Euler: idontgetoutmuch.wordpress.com/2013/08/06/…
    Commented Mar 26, 2018 at 16:02
  • The book by Hairer, Nørsett, and Wanner gives a good explanation of symplectic methods. Look at Figure 16.1 in particular, and the figures here.
    – J. M.
    Commented Mar 27, 2018 at 12:24

2 Answers


Let me start off with corrections. No, odeint doesn't have any symplectic integrators. No, symplectic integration doesn't mean conservation of energy.

What does symplectic mean and when should you use it?

First of all, what does symplectic mean? Symplectic means that the solution exists on a symplectic manifold: a manifold equipped with a closed, non-degenerate 2-form. The details of symplectic manifolds probably sound like mathematical nonsense, so the gist of it is that there is a direct pairing between two sets of variables on such a manifold. The reason why this is important for physics is that the solutions of Hamilton's equations naturally reside on a symplectic manifold in phase space, with the natural splitting being the position and momentum components. For the true Hamiltonian solution, that phase-space path has constant energy.

A symplectic integrator is an integrator whose numerical solution resides on a symplectic manifold. Because of discretization error, when it solves a Hamiltonian system it doesn't get exactly the correct trajectory on the manifold. Instead, the trajectory it follows is perturbed by $\mathcal{O}(\Delta t^n)$ from the true trajectory, for a method of order $n$. Over time there is then only a linear drift in the numerical error of this trajectory. Normal integrators tend to have a quadratic (or worse) drift, and offer no good global guarantees about the phase-space path (only local ones).
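The difference is easy to see even with first-order methods. Here is a small sketch (my own, not from the notebooks referenced in this answer) comparing explicit Euler with its symplectic variant on the harmonic oscillator $H = (p^2 + q^2)/2$:

```python
import numpy as np

dt, n = 0.05, 2000  # step size and number of steps

# harmonic oscillator: H = (p^2 + q^2)/2, so dq/dt = p, dp/dt = -q

# explicit Euler: both updates use the old state
q, p = 1.0, 0.0
for _ in range(n):
    q, p = q + dt * p, p - dt * q
e_euler = 0.5 * (q**2 + p**2)

# symplectic Euler: update p first, then use the NEW p to update q
q, p = 1.0, 0.0
for _ in range(n):
    p = p - dt * q
    q = q + dt * p
e_symp = 0.5 * (q**2 + p**2)

# explicit Euler's energy grows without bound; symplectic Euler's
# oscillates within O(dt) of the true value 0.5 forever
print(e_euler, e_symp)
```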

What this tends to mean is that symplectic integrators capture the long-time patterns better than normal integrators, because of this lack of drift and this near-guarantee of periodicity. This notebook displays those properties well on the Kepler problem. The first image shows what I'm talking about with the periodic nature of the solution.

[figure: phase-space orbit and energy variation for the symplectic solution of the Kepler problem]

This was solved using the 6th order symplectic integrator from Kahan and Li in DifferentialEquations.jl. You can see that the energy isn't exactly conserved, but its variation depends on how far the perturbed solution manifold is from the true manifold. Since the numerical solution itself resides on a symplectic manifold, it tends to be almost exactly periodic (with some linear numerical drift that you can see), making it behave very nicely for long-term integration. If you do the same with RK4, you can get a disaster:

[figure: the same problem solved with RK4; the numerical orbit is not periodic and drifts over time]

You can see that the issue is that there's no true periodicity in the numerical solution, and therefore over time it tends to drift.

This highlights the true reason to choose symplectic integrators: symplectic integrators are good on long-time integrations on problems that have the symplectic property (Hamiltonian systems). So let's walk through a few things. Note that you don't always need symplectic integrators even on a symplectic problem. For this case, an adaptive 5th order Runge-Kutta method can do fine. Here's Tsit5:

[figure: the Kepler problem solved with the adaptive 5th order method Tsit5; the phase-space orbit looks closed, but the energy slowly drifts]

Notice two things. One, it gets good enough accuracy that you cannot see the actual drift in the phase space plot. However, on the right side you can see that there is an energy drift, and so on a long enough integration this method will not do as well as a method with the periodic properties. But that raises the question: how does it fare efficiency-wise versus just integrating extremely accurately? Well, this is a bit less certain. In SciMLBenchmarks.jl you can find some benchmarks investigating this question. For example, this notebook looks at the energy error vs runtime on a Hamiltonian system from a quadrupole boson model and shows that if you want really high accuracy, then even for quite long integration times it's more efficient to just use a high order Runge-Kutta (RK) or Runge-Kutta-Nyström (RKN) method. This makes sense because to satisfy the symplectic property the integrators give up some efficiency, and pretty much have to use a fixed time step (there is some research making headway on adaptive symplectic integrators, but it's not very far along).

In addition, notice from both of these notebooks that you can also take a standard method and project it back to the solution manifold each step (or every few steps). This is what the examples using the DifferentialEquations.jl ManifoldProjection callback are doing. This guarantees the conservation laws are upheld, but at the added cost of solving an implicit system each step. You can also use a fully-implicit ODE solver or singular mass matrices to add on conservation equations, but the end result is that these methods are more computationally costly as a tradeoff.
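As a hedged illustration of the projection idea (this is not the ManifoldProjection API, just the concept in plain Python): take an ordinary RK4 step, then pull the state back onto the constant-energy set. For a quadratic invariant like the harmonic oscillator's, the projection reduces to a rescaling; in general it is the small implicit (Newton) solve mentioned above.

```python
import numpy as np

# harmonic oscillator H = (q^2 + p^2)/2, written as y' = f(y), y = [q, p]
def f(y):
    return np.array([y[1], -y[0]])

def rk4_step(y, dt):
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

y = np.array([1.0, 0.0])
H0 = 0.5 * np.dot(y, y)
dt = 0.1
for _ in range(10_000):
    y = rk4_step(y, dt)
    # projection: rescale back onto the circle q^2 + p^2 = 2*H0
    y *= np.sqrt(2.0 * H0) / np.linalg.norm(y)

print(0.5 * np.dot(y, y))  # equals H0 = 0.5 up to rounding
```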

So to summarize, the class of problems where you want to reach for a symplectic integrator are those with a solution on a symplectic manifold (Hamiltonian systems), where you don't want to invest the computational resources for a very exact (tolerance <1e-12) solution and don't need exact energy/etc. conservation. This highlights that it's all about long-term integration properties, so you shouldn't just flock to them willy-nilly like some of the literature suggests. But they are still a very important tool in many fields, like astrophysics, where you have long time integrations that you need to solve sufficiently fast without requiring absurd accuracy.

Where do I find symplectic integrators? What kind of symplectic integrators exist?

There are generally two classes of symplectic integrators. There are the symplectic Runge-Kutta integrators (which are the ones shown in the above examples) and there are implicit Runge-Kutta methods which have the symplectic property. As @origimbo mentions, the symplectic Runge-Kutta integrators require that you provide them with a partitioned structure so they can handle the position and momentum parts separately. However, counter to the comment, the implicit Runge-Kutta methods are symplectic without requiring this, but instead require solving a nonlinear system. This isn't too bad because if the system is non-stiff this nonlinear system can be solved with functional iteration or Anderson acceleration, but the symplectic RK methods should still probably be preferred for efficiency (it's a general rule that the more information you provide to an integrator, the more efficient it is).
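For a concrete feel of the implicit route, here is a small sketch of the implicit midpoint method on the harmonic oscillator (my own toy example, not from any of the libraries above). The midpoint step $y_{n+1} = y_n + \Delta t\, f\big(\tfrac{y_n + y_{n+1}}{2}\big)$ is implicit in general, but for a linear system $f(y) = Ay$ it collapses to a single linear solve:

```python
import numpy as np

# harmonic oscillator as a linear system y' = A y, y = [q, p]
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
dt = 0.1

# implicit midpoint: (I - dt/2 A) y_{n+1} = (I + dt/2 A) y_n,
# so the one-step map is a single constant matrix
I2 = np.eye(2)
step = np.linalg.solve(I2 - 0.5 * dt * A, I2 + 0.5 * dt * A)

y = np.array([1.0, 0.0])
for _ in range(100_000):
    y = step @ y

# the map is symplectic (det(step) == 1), and for this linear problem it
# even conserves the energy H = (q^2 + p^2)/2 up to rounding error
print(np.linalg.det(step), 0.5 * np.dot(y, y))
```

Note that this nonlinear-solve-collapses-to-linear-solve trick only works because the system is linear; in the non-stiff nonlinear case you would use functional iteration, as described above.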

That said, odeint does not have methods from either of these families, so it is not a good choice if you're looking for symplectic integrators. In Fortran, Hairer's site has a small set you can use. Mathematica has a few built in. The GSL ODE solvers have implicit RK Gaussian point integrators which IIRC are symplectic, but that's about the only reason to use the GSL methods.

But the most comprehensive set of symplectic integrators can be found in DifferentialEquations.jl in Julia (recall this was used for the notebooks above). The list of available symplectic Runge-Kutta methods is found on this page and you'll notice that the implicit midpoint method is also symplectic (the implicit Runge-Kutta Trapezoid method is considered "almost symplectic" because it's reversible). Not only does it have the largest set of methods, but it's also open-source (you can see the code and its tests in a high-level language) and has a lot of benchmarks. A good introductory notebook for using it to solve physical problems is this tutorial notebook. But of course it's recommended you get started with the package through the first ODE tutorial.

In general, you can find a detailed analysis of numerical differential equation suites in this blog post. It's quite detailed, but since it has to cover a lot of topics it treats each in less depth than this answer, so feel free to ask for it to be expanded in any way.

  • With this answer I seem to have hit the Stack Exchange Jackpot! This is the perfect answer for me, as I understand some of it immediately and the parts that I do not leave me anxious to read further. I really appreciate the time you've taken to thoroughly source this answer as well as include other helpful and instructive links.
    – uhoh
    Commented Mar 26, 2018 at 15:38
  • Before going into mathematical details, we could roughly say that symplectic means conservation of volume, couldn't we?
    – Miguel
    Commented Mar 26, 2018 at 18:18
  • FTR, the reason the adaptive 5th order Runge-Kutta performs so much better than RK4 here is not that it has higher order but that it chooses more suitable step sizes. The reason RK4 performs so badly is mainly that the step size is inappropriately high at the perigee; the same solver with half the step size would give a much better solution. (Just, it would waste lots of time resolving the orbit finely around the apogee, where this isn't necessary at all.)
    Commented Mar 27, 2018 at 9:02
  • Excellent exposition. As a side question: the OP starts with a reference to Python -- are there recommended Python tutorials/packages along the lines of the linked Julia examples?
    Commented Sep 6, 2018 at 20:36
  • The only Python package I know of for these kinds of integrators is diffeqpy, where it's not documented on the README but you can access all of these same methods and re-write this in Python using that package.
    Commented Sep 6, 2018 at 23:57

To complement Chris Rackauckas's answer, and to state some of the mathematical nonsense as well as some stuff you almost certainly know: a dynamical system is Hamiltonian if there is a description with coordinates $\mathbf{p}$ and $\mathbf{q}$ and a function $\mathcal{H}(\mathbf{p},\mathbf{q})$ such that $$\frac{d\mathbf{q}}{dt}=+\frac{\partial \mathcal{H}}{\partial\mathbf{p}}$$ and $$\frac{d\mathbf{p}}{dt}=-\frac{\partial \mathcal{H}}{\partial\mathbf{q}}.$$ This motion conserves the value of $\mathcal{H}$ along trajectories, but it also has an additional property, namely that if we define a mapping $$ \mathbf{p}(t),\mathbf{q}(t)= \phi_t(\mathbf{p}(t_0),\mathbf{q}(t_0))$$ then this mapping conserves the two-form $d\mathbf{p}\wedge d\mathbf{q}$. For a problem in which $p$ and $q$ are one-dimensional, you can think of this as saying that the area inside closed curves in phase space is conserved. This ensures all kinds of nice stability properties, since "balls" of trajectories have to stay "close" to each other.
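That area-conservation statement can be checked numerically. Below is a small sketch (using the pendulum Hamiltonian $\mathcal{H} = p^2/2 - \cos q$ as an assumed example) showing that one step of the symplectic Euler method has Jacobian determinant exactly 1, i.e. it preserves $d\mathbf{p}\wedge d\mathbf{q}$:

```python
import numpy as np

def step(q, p, dt=0.1):
    """One symplectic Euler step for the pendulum H = p^2/2 - cos(q)."""
    p = p - dt * np.sin(q)  # dp/dt = -dH/dq
    q = q + dt * p          # dq/dt = +dH/dp, using the updated p
    return q, p

# Jacobian of the map (q, p) -> (q', p') by central finite differences
q0, p0, h = 0.7, 0.3, 1e-6
dq_dq = (step(q0 + h, p0)[0] - step(q0 - h, p0)[0]) / (2 * h)
dq_dp = (step(q0, p0 + h)[0] - step(q0, p0 - h)[0]) / (2 * h)
dp_dq = (step(q0 + h, p0)[1] - step(q0 - h, p0)[1]) / (2 * h)
dp_dp = (step(q0, p0 + h)[1] - step(q0, p0 - h)[1]) / (2 * h)

det = dq_dq * dp_dp - dq_dp * dp_dq
print(det)  # ≈ 1.0: the step preserves area in phase space
```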

In terms of numerics, a symplectic integrator acts in the same way, also conserving this area/two form. In turn this means that there is a conserved "numerical Hamiltonian" (which may not be [read 'is not'] the same as the exact one). Note that stability is not the same as accuracy, so that most of the advantages of symplectic methods come when integrating for very long times (e.g. your method may rapidly place a satellite on the wrong side of the Earth, while never allowing it to decay into it).

  • Thank you for this! I will now use words above my pay-grade. n-balls of trajectories are more at risk when they are near bifurcations such as those in 3-body simulations, cf. Doedel et al., Int. J. Bifurcation and Chaos, v. 17, no. 8 (2007) 2625–2677. How did I do? Also ieec.cat/hosted/web-libpoint/papers/…
    – uhoh
    Commented Mar 26, 2018 at 15:52
  • Unless the reader is aware of the mathematical details, the mention of stability is misleading, since conservation of volume does not mean that individual trajectories remain close.
    – Miguel
    Commented Mar 26, 2018 at 18:20
  • @Miguel I think this is one of those situations where the reader who doesn't follow the mathematical details is damned either way, but in terms of the usual numericist's troika of accuracy, stability and computational efficiency, I'd argue that stressing the stability benefits is useful. I'm happy to accept suggestions for a rewrite if you can think of something better which isn't wilfully inaccurate.
    – origimbo
    Commented Mar 26, 2018 at 18:50
    @origimbo What about this? The mental image is a 1cm$^2$ sheet of jelly whose particles are subject to the flow: individual trajectories can be completely different, but the particles still occupy 1cm$^2$, so providing a notion of stability in the long term.
    – Miguel
    Commented Mar 26, 2018 at 20:24
  • @Miguel: But the blob of particles is allowed to split into two or more parts. Its total volume just has to stay constant.
    Commented Mar 27, 2018 at 4:24
