
There are $N$ particles with positions $x_i(t)$, velocities $v_i(t)$, and unit mass. There is a potential function $U_{i,j}(x_i, x_j)$ between each pair of particles, which is $0$ unless the particles are close enough. I want to update the simulation so that the total energy $\sum_{i,j} U_{i,j}(x_i, x_j) + \sum_i \frac{1}{2}v_i^2$ is conserved, using only information local to the particles. It's alright if the simulation is otherwise not very accurate, but I want to rule out any possibility of the energy growing exponentially (or decaying to $0$) no matter how many time steps are used or how coarse they are. Also, if a group of particles is disconnected from the rest, the energy within that group should be conserved.
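
To make the setup concrete, here is a minimal sketch of the quantity being tracked (the functional form of the cutoff potential and all names are illustrative assumptions, not from the question):

```python
import numpy as np

def pair_potential(r, cutoff=1.0):
    # Illustrative short-range potential: identically zero beyond the
    # cutoff, smooth and repulsive inside it.
    return (cutoff - r) ** 2 if r < cutoff else 0.0

def total_energy(x, v, cutoff=1.0):
    # Total energy as in the question: pairwise potential terms plus
    # kinetic energy with unit masses.
    n = len(x)
    potential = sum(pair_potential(np.linalg.norm(x[i] - x[j]), cutoff)
                    for i in range(n) for j in range(i + 1, n))
    kinetic = sum(0.5 * np.dot(vi, vi) for vi in v)
    return potential + kinetic
```

The goal is for `total_energy` to stay constant under the update rule, and to stay constant separately for any subset of particles with no pair within the cutoff of the rest.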

The question Conserving Energy in Physics Simulation with imperfect Numerical Solver gave some answers, but I don't think they would work here. I want conservation even after arbitrarily many time steps, and with coarse time steps; I think that rules out symplectic integration. I want isolated groups of particles to be independent of what's going on elsewhere; I think that rules out post-step projection.

I have in mind a message-passing scheme where particles and potential functions pass packets of energy, so that the energy of one particle can only go up if a neighbor's energy goes down by a corresponding amount, tweaking the particle velocities to ensure this. Also, in my half-thought-out scheme, potential functions can "store" a local energy surplus or deficit, and adjust future interactions to bring this surplus/deficit back toward $0$. But I'm having some difficulty working out the details, and it would be great if something like this already exists.
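
For what it's worth, the pairwise exchange might be sketched like this (entirely hypothetical names and logic; this is the half-formed scheme above, not an established method). Each pair keeps a ledger; a change in the pair's potential energy is paid for out of the two particles' kinetic energy, and whatever cannot be paid is carried forward:

```python
import math

def exchange_energy(v_i, v_j, delta_U, ledger):
    # Try to pay for a potential-energy change delta_U (plus any past
    # surplus/deficit in `ledger`) by rescaling the two particles'
    # speeds; the unpayable remainder stays in the pair's ledger.
    ke = 0.5 * (v_i * v_i + v_j * v_j)
    owed = delta_U + ledger
    if ke > owed and ke > 0:
        scale = math.sqrt((ke - owed) / ke)
        return v_i * scale, v_j * scale, 0.0  # ledger settled
    # Not enough kinetic energy (or none at all): take what there is
    # and carry the rest forward to future interactions of this pair.
    return 0.0, 0.0, owed - ke
```

The invariant is that kinetic energy minus the ledger changes by exactly $-\Delta U$ in every exchange, so energy is conserved pair-locally. A negative ledger is a surplus that later interactions can hand back; how to distribute it among the particles is exactly the open question raised in the comments.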

  • As far as I understand, symplectic integration ensures energy conservation no matter what the time step is (though there can be stability constraints, of course). More precisely, for a fixed time step it conserves an invariant of the discrete system. In your approach I am not quite sure how you would decide which particle should receive the "stored" energy deficit as compensation.
    – Laurent90
    Commented Nov 29, 2020 at 10:21
  • In my understanding, symplectic integration greatly improves energy conservation over methods like Runge-Kutta, but it's not perfect and there can still be some drift, particularly if the time steps are too coarse. The message-passing approach does have the kind of trouble you're describing, which seems solvable only in an ad hoc way. Maybe look at the estimated work each potential function has recently done on each particle, and let "stored" energy go to particles in proportion to that work.
    – causative
    Commented Nov 29, 2020 at 10:35
  • Could you just normalize the total energy of each particle so that you "re-distribute" the energy excess/deficit among all particles at each step? I see this as distributing numerical error, not "communication" between particles.
    – Charlie S
    Commented Nov 29, 2020 at 12:35
  • The problem with that is: say one particle suddenly moves into a very high part of a potential function due to the coarse time steps, and as a result goes shooting off very fast. We don't want every particle's speed to be very slightly reduced. Aside from the unintended slowing down of most of the system, we'd still be left with one particle that's way too fast. Energy should be conserved in local interactions, not just globally.
    – causative
    Commented Nov 29, 2020 at 18:04
  • What have you tried so far? You have a tough problem and could easily speculate forever, coming up with many reasons why any method would fail. Sometimes it's best to address the problems as they come up instead of anticipating failure. The linked post provides good suggestions.
    – Charlie S
    Commented Nov 30, 2020 at 20:00

1 Answer


Sorry to make this an answer, but I don't have enough reputation to comment and I feel the following clarification could be helpful:

While symplectic integrators do not exactly conserve energy, they do keep the total energy of the system bounded, independently of the number of time steps [1, p. 343 ff.]. Your maximum energy error will therefore be $\mathcal{O}(h)$ instead of $\mathcal{O}(nh)$, so no exponential growth. (Incidentally, your problem is an example in the book, Chapter 1.4.)
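
A quick self-contained illustration of that boundedness (harmonic oscillator; my example, not from the book): explicit Euler multiplies the energy by exactly $(1+h^2)$ every step, while velocity Verlet keeps it within $\mathcal{O}(h^2)$ of the initial value forever.

```python
def euler_step(q, p, h):
    # Explicit Euler on H = (p^2 + q^2)/2: not symplectic; one can check
    # that q^2 + p^2 grows by exactly the factor (1 + h^2) every step.
    return q + h * p, p - h * q

def verlet_step(q, p, h):
    # Velocity Verlet (symplectic, order 2) for the same oscillator.
    p_half = p - 0.5 * h * q
    q_new = q + h * p_half
    return q_new, p_half - 0.5 * h * q_new

def energy(q, p):
    return 0.5 * (p * p + q * q)

q_e, p_e = q_v, p_v = 1.0, 0.0
h = 0.1
for _ in range(10_000):
    q_e, p_e = euler_step(q_e, p_e, h)
    q_v, p_v = verlet_step(q_v, p_v, h)
# After 10,000 steps the Euler energy has blown up by many orders of
# magnitude, while the Verlet energy is still within about h^2/8 of 0.5.
```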

Another approach that comes to mind would be to formulate each time step as an optimization problem with energy conservation as an equality constraint, but this would add a huge computational effort.
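
In its simplest degenerate form (a single scalar multiplier; my sketch, not a full constrained solve) that optimization reduces to rescaling the post-step velocities so the constraint holds exactly:

```python
import math

def enforce_energy(v, potential_energy, target_energy):
    # After an ordinary (inexact) step, rescale all velocities by one
    # common factor so that U + KE equals the target energy exactly.
    # A real constrained optimization would also adjust positions and
    # handle the failure case below gracefully.
    ke = 0.5 * sum(vi * vi for vi in v)
    needed = target_energy - potential_energy
    if needed < 0 or ke == 0.0:
        raise ValueError("cannot restore energy by rescaling alone")
    scale = math.sqrt(needed / ke)
    return [vi * scale for vi in v]
```

Note that this global rescaling is essentially the post-step projection the questioner ruled out; a localized variant would apply the same idea per interacting pair rather than to the whole system.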

[1] Hairer, Ernst; Lubich, Christian; Wanner, Gerhard: Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations. Springer Series in Computational Mathematics 31, Springer, Berlin, 2006. ISBN 3-540-30663-3.

  • I'd add to this that symplectic integrators do exactly conserve the energy, but of a slightly perturbed Hamiltonian, and that you can calculate that Hamiltonian to arbitrary order in $\delta t$ from the Zassenhaus formula.
    Commented Nov 30, 2020 at 17:43
  • What if you have gas molecules bouncing around in a box with coarse time steps, and occasionally one jumps to right next to another one in an extremely steep part of the potential function, so they both get shoved apart very fast, not conserving energy? Can't that keep happening again and again with e.g. the velocity Verlet method ( en.wikipedia.org/wiki/Verlet_integration#Velocity_Verlet ), so energy increases without bound? I don't see how the velocity Verlet equations prevent that.
    – causative
    Commented Nov 30, 2020 at 19:28
  • I added a source to my answer, but Daniel basically said everything: imagine the approximate numerical solutions being exact solutions of some modified equations. For symplectic integrators, it can be shown that these modified equations are also Hamiltonian, meaning their Hamiltonian will be conserved. I'm not sure about the exact relation between this modified Hamiltonian and the energy of the initial system, but for me this has always worked ;) These modified equations depend on the step size, so it is probably a good idea to keep it constant (even if coarse). I changed that in my answer, too.
    – Yann
    Commented Dec 2, 2020 at 5:01
  • This being said, what you are afraid of is numerical instability, which can occur even if energy is conserved, and I don't see any numerical integrator that could avoid that, especially with large time steps. If refining the time steps is not an option, I'd give the optimization approach a try ;)
    – Yann
    Commented Dec 2, 2020 at 5:20
  • My concern isn't about numerical stability. The problem I described would happen even with infinite precision. It only requires that a particle can step into a potential function at a potential value much greater than its own kinetic energy, which is a result of the particle's speed and the coarseness of the time steps. My suspicion is that although the possible error in energy is bounded by a constant, this constant might be unacceptably high. I could try finer time steps, but that would cost speed, so I'm looking for an alternative :)
    – causative
    Commented Dec 2, 2020 at 9:24
