
I am creating a C++ physics simulation where I need to move a rigid body through a force field.

Problem: the simulation does not conserve energy.
Question: abstractly, how is conservation of energy handled in modern-day physics simulations?

My specific instance:
Simulation parameters:
- Time-independent force field F = <-x, 0, 0> (a spring force field, which should produce oscillation)
- One rigid body with one point mass at (1, 0, 0).

This should result in oscillation of the body from (1, 0, 0) to (-1, 0, 0) and back.

It works well for the first couple of seconds, but then the body gradually gains energy without bound: the maximum position grows to x = 1.1, then x = 1.3, then x = 1.7, and so on.

I am pretty sure this is what is happening, but I do not know how to fix the mathematical model of my system: since I am updating the body using discrete time steps, whenever the body is near but below x = 1.0 (say at (0.99, 0, 0)), its position X gets updated rightward a little past 1.0, and the body permanently gains a little bit of energy. This process repeats over and over, and the body continually gains energy.
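
To make the suspected energy gain concrete: in the 1-D case with unit mass, if both updates in a step use the values from the start of the step (forward Euler, X += V * dt followed by V += -X * dt), then

\begin{aligned} x_{n+1}^2 + v_{n+1}^2 &= (x_n + v_n\,dt)^2 + (v_n - x_n\,dt)^2 = (1 + dt^2)(x_n^2 + v_n^2), \end{aligned}

so the total energy gets multiplied by $(1 + dt^2)$ on every step, no matter where the body is.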

This is presumably a problem with how I am solving the equation of motion using discrete time steps.
How can I go about simulating this in order to conserve energy, and keep the simulation accurate, even with weird force fields?


Side notes
Equation of motion:

updateBody(dt):
    X += V * dt
    P += F * dt
    L += T * dt
    Q += 0.5 * (quaternion(re: 0, im: W) * Q) * dt
    Q = normalize(Q)
  • X : position of center of mass (vector)
  • P : momentum (vector)
  • L : angular momentum (vector)
  • Q : orientation (quaternion)
  • V : velocity of center of mass (vector)
  • W : angular velocity (vector)
  • dt: the timestep to update with
  • F : total force (vector) = sum of forces acting on all point masses of rigid body
  • T : total torque (vector) = sum of torques acting on all point masses of rigid body

Event loop:

while(true):
    t = getTime()
    dt = t - t'

    updateBody(dt)
    render()

    t' = t
  • I had one idea for fixing this: noting the initial kinetic and potential energy of the object and adjusting the kinetic energy at each timestep to match the complement of the exact potential energy at that location. But this would imply that every force field has a computable potential function, which may not be true. And this would be increasingly hard with a more complex force field that I do not know at compile time. Commented Dec 31, 2019 at 5:43
  • You're probably encountering the instability of forward Euler time integration. The simplest fix, extremely popular in computer graphics, is to switch to a symplectic integrator like semi-implicit Euler.
    – user3883
    Commented Dec 31, 2019 at 7:46
  • How do you compute the energy? For a point-mass, your force field $(-x,y,z)$ should have the potential energy $\frac12(x^2-y^2-z^2)$. Note that it is an oscillator only in $x$ direction. For an oscillation in all directions you would need a force field $-(x,y,z)=(-x,-y,-z)$. Commented Dec 31, 2019 at 14:45
  • @Rahul This was my exact problem. I changed the code to semi-implicit Euler and it worked fantastically. Thanks so much! Commented Dec 31, 2019 at 19:53
  • Symplectic methods preserve all linear and quadratic invariants. This means that for a method of order $p$ they preserve, to order $O(h^{p+1})$ or better, a quantity which itself is an $O(h^p)$ modification of the physical quantity. (This pattern breaks down where the field is singular; passing close to a singular state can change the constants of motion dramatically in their numerical versions.) Commented Dec 31, 2019 at 20:25

1 Answer


There are a few ways to conserve energy during ODE integration.

Method 1: Symplectic Integration

The cheapest way is to use a symplectic integrator. A symplectic integrator solves the ODE on a symplectic manifold if the problem comes from one, so if the system comes from a Hamiltonian system, the numerical solution follows the trajectory of some perturbed Hamiltonian. Some people incorrectly think this means the solution will conserve energy; rather, it means the solution stays on a symplectic path that is "close" to the original and does not drift much over time, so the energy drift is much better contained than with other ODE solvers. This SO question and answer is a high-level introduction to the idea, so consult that for more information.
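
The simplest symplectic method is the semi-implicit (symplectic) Euler scheme mentioned in the comments. As a minimal sketch (not DifferentialEquations.jl code) applied to the asker's 1-D oscillator x'' = -x with unit mass, the only change from forward Euler is the order of the updates:

# Semi-implicit Euler: update the velocity first, then move the position
# using the *new* velocity. This single reordering makes the step symplectic.
function semi_implicit_euler_step(x, v, dt)
    v += -x * dt   # velocity update with the force at the current position
    x += v * dt    # position update with the already-updated velocity
    return x, v
end

With this ordering the discrete energy stays bounded (it oscillates slightly around the true value) instead of growing on every step.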

In Julia's DifferentialEquations.jl, it just amounts to defining the ODE as a DynamicalODEProblem and solving it with a symplectic integrator. Here's a version using an 8th-order symplectic integrator:

using DifferentialEquations, Plots
# Hénon-Heiles system, split into velocity and acceleration parts
function HH_velocity!(du,v,u,p,t)
  dx,dy = v
  du[1] = dx
  du[2] = dy
end
function HH_acceleration!(dv,v,u,p,t)
  x,y  = u
  dv[1] = -x - 2x*y
  dv[2] = y^2 - y - x^2
end
initial_positions = [0.0,0.1]
initial_velocities = [0.5,0.0]
tspan = (0.0,100.0)  # integration time span
prob = DynamicalODEProblem(HH_acceleration!,HH_velocity!,initial_velocities,initial_positions,tspan)
sol2 = solve(prob, KahanLi8(), dt=1/10);
plot(sol2, vars=(3,4), title = "The orbit of the Hénon-Heiles system", xaxis = "x", yaxis = "y", leg=false)

[Plot: the orbit of the Hénon-Heiles system]

Note that if you have a second order ODE, there is a helper function that does the velocity part for you, so you can equivalently write:

prob = SecondOrderODEProblem(HH_acceleration!,initial_velocities,initial_positions,tspan)
sol2 = solve(prob, KahanLi8(), dt=1/10);

For more details on defining Hamiltonian/symplectic systems, consult the documentation on dynamical ODE problems.

Method 2: Use post-step projection

After each step you can project back to the constraint manifold (for example, the constant-energy surface). By the triangle inequality you can show that the order of the method is preserved, i.e. a 5th order method with a projection to the manifold after each step is still 5th order accurate. This is done in Julia's DifferentialEquations.jl with the ManifoldProjection callback from the callback library, like:

using DifferentialEquations, Plots
u0 = ones(2)
function f(du,u,p,t)
  du[1] = u[2]
  du[2] = -u[1]
end
prob = ODEProblem(f,u0,(0.0,10_000.0))
function g(resid,u,p,t)
  resid[1] = u[2]^2 + u[1]^2 - 2   # conserved quantity: u[1]^2 + u[2]^2 = 2 for this initial condition
  resid[2] = 0
end
cb = ManifoldProjection(g)
sol1 = solve(prob,Tsit5())
sol2 = solve(prob,Tsit5(),callback=cb)
plot(sol1,vars=(1,2),title="Long time solve of harmonic oscillator",label="No Projection")
plot!(sol2,vars=(1,2),label="Projection")

[Plot: long-time phase-space trajectories with and without the ManifoldProjection callback]

That's not a really thick line, that's the numerical solution drifting outward really really slowly!
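
To quantify that drift, one can check the invariant directly on the two solution objects defined above (a quick sketch):

# Deviation of the conserved quantity u1^2 + u2^2 from its initial value 2 at the final time
drift_no_projection = sol1[end][1]^2 + sol1[end][2]^2 - 2
drift_projection    = sol2[end][1]^2 + sol2[end][2]^2 - 2

Without the projection the deviation slowly grows over the long time span; with it, the deviation stays at roughly the callback's nonlinear-solver tolerance.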

Method 3: Solving a DAE

The third method is to solve a DAE. A DAE is essentially an ODE with constraints. For example, take the Robertson chemical reaction ODE:

using DifferentialEquations, Plots
function rober(du,u,p,t)
  y₁,y₂,y₃ = u
  k₁,k₂,k₃ = p
  du[1] = -k₁*y₁+k₃*y₂*y₃
  du[2] =  k₁*y₁-k₂*y₂^2-k₃*y₂*y₃
  du[3] =  k₂*y₂^2
  nothing
end
prob = ODEProblem(rober,[1.0,0.0,0.0],(0.0,1e5),(0.04,3e7,1e4))
sol = solve(prob)
plot(sol,tspan=(1e-2,1e5),xscale=:log10)

[Plot: solution of the Robertson system on a log-scaled time axis]

Instead of solving 3 ODEs:

\begin{aligned} \frac{dy_1}{dt} &= -0.04 y_1 + 10^4 y_2 y_3 \\ \frac{dy_2}{dt} &= 0.04 y_1 - 10^4 y_2 y_3 - 3\cdot 10^7 y_2^2 \\ \frac{dy_3}{dt} &= 3\cdot 10^7 y_2^2 \\ \end{aligned}

we can solve 2 ODEs and a conservation equation, since in this case we know that $y_1 + y_2 + y_3 = 1$ in this reaction system. Thus we can solve the DAE defined by:

\begin{aligned} \frac{dy_1}{dt} &= -0.04 y_1 + 10^4 y_2 y_3 \\ \frac{dy_2}{dt} &= 0.04 y_1 - 10^4 y_2 y_3 - 3\cdot 10^7 y_2^2 \\ 1 &= y_1 + y_2 + y_3 \\ \end{aligned}

There are two ways to do this. One way is to use a singular mass matrix, i.e. solve $Mu'=f(u,p,t)$. If you make the last row of the mass matrix all zeros, you can use the third equation to write down the conservation equation. This is shown in the DifferentialEquations.jl documentation like:

using DifferentialEquations
function rober(du,u,p,t)
  y₁,y₂,y₃ = u
  k₁,k₂,k₃ = p
  du[1] = -k₁*y₁+k₃*y₂*y₃
  du[2] =  k₁*y₁-k₂*y₂^2-k₃*y₂*y₃
  du[3] =  y₁ + y₂ + y₃ - 1  # algebraic constraint: total mass is conserved
  nothing
end
M = [1. 0  0
     0  1. 0
     0  0  0]
f = ODEFunction(rober,mass_matrix=M)
prob_mm = ODEProblem(f,[1.0,0.0,0.0],(0.0,1e5),(0.04,3e7,1e4))
sol = solve(prob_mm,Rodas5(),reltol=1e-8,abstol=1e-8)

Alternatively, you can define an ODE in its fully implicit form $f(\frac{du}{dt},u,p,t)=0$ and use this to encode the constraints of the DAE. This is shown in the DAE portion of the DifferentialEquations.jl documentation as:

function f(out,du,u,p,t)
  out[1] = - 0.04u[1]              + 1e4*u[2]*u[3] - du[1]
  out[2] = + 0.04u[1] - 3e7*u[2]^2 - 1e4*u[2]*u[3] - du[2]
  out[3] = u[1] + u[2] + u[3] - 1.0
end
u₀ = [1.0, 0, 0]
du₀ = [-0.04, 0.04, 0.0]
tspan = (0.0,100000.0)
using DifferentialEquations
differential_vars = [true,true,false]
prob = DAEProblem(f,du₀,u₀,tspan,differential_vars=differential_vars)
using Sundials
sol = solve(prob,IDA())

Notice the only new idea in this form is differential_vars, where we specify that variables 1 and 2 are given by differential equations, while variable 3 is given by an algebraic equation. Both of these forms give the same plot as the ODE in this case, so there's no use showing the plots, but this approach can be used when you see energy loss: encode the energy (or another invariant) as an algebraic equation that the solver must satisfy exactly.
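
For instance, one can check on the sol from the IDA solve above that the constraint holds at every saved step (a quick sketch):

# Maximum deviation of y1 + y2 + y3 from 1 across all saved solution points
maximum(abs.(sum.(sol.u) .- 1))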

Method 4: Very Very Accurate ODE solving

The last way is simple: just set abstol=1e-14, reltol=1e-14, and if the simulation is accurate enough, energy will be mostly conserved. Of course, this can get expensive.
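
As a sketch, reusing the harmonic-oscillator prob from the Method 2 example above (Vern9 is just one reasonable high-order choice of solver):

using DifferentialEquations
# High-accuracy solve: tight tolerances keep the energy drift negligible over the time span
sol = solve(prob, Vern9(), abstol=1e-14, reltol=1e-14)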

Which Method is Best?

That is very problem dependent. Usually using a DAE is much heavier because you have to use an implicit method, so I would recommend against that unless there are other aspects of the equation that require it to be a DAE. Sometimes decreasing the tolerance is all you need, so I would recommend trying that first. Then I would recommend trying the ManifoldProjection: it's actually quite efficient if the system is small or is solved at high accuracy (since in practice it only needs to project after every few steps, once the solution drifts beyond some tolerance), but its computational cost grows as $\mathcal{O}(n^3)$ where $n$ is the number of ODEs. So for very large systems it will be far more expensive than the actual ODE stepping, and it's not recommended for that case. There, high accuracy symplectic integrators usually get the job done, maybe with a ManifoldProjection on top that only fires every once in a while.

For some benchmarks, you may want to check out DiffEqBenchmarks.jl, which has comparisons between these approaches for high energy accuracy solving of Hamiltonian systems:

[Benchmark plots: System 1 and System 2]

This shows that for small enough systems solved at high accuracy, a high order RK method + ManifoldProjection will perform the best (notice that there are specialized high order RKs for dynamical ODEs as well, which perform even better than the solvers for the standard first-order ODE form on these specific equations!). Note that this shows the performance of optimized implementations, as indicated by the cross-language benchmarks, and the performance comparisons of less optimized implementations could differ.

We are always looking for more benchmarks, so feel free to donate a benchmark along these lines! Please get in contact with me if you want help in doing so!

