1.3. Implementation on the NBody Level
Bob: This gives us an individual time scale, an estimate of the amount
of time after which to expect significant changes, but on a per-particle
basis. For some particles this will result in a rather long time scale,
if they happen to move through the outskirts of an N-body system. Other
particles, close to the center, will wind up with a much shorter time scale.
In order to play it safe for all particles, we have no choice but to compute
the minimum of all these time scale estimates, and use that for the shared
time step for the whole system.
Alice: Yes, inefficient as it still is, and soon we'll do better with
individual time steps. But at least we're already doing a lot better with
shared time steps than we did with constant time steps.
Bob: I sure hope so, but we'd better test that, to make sure that is the
case. Here is how we can implement the global time step finder, on
the level of the N-body class:
def collision_time_scale
  time_scale = VERY_LARGE_NUMBER
  @body.each do |b|
    indiv_time_scale = b.collision_time_scale(@body)
    if time_scale > indiv_time_scale
      time_scale = indiv_time_scale
    end
  end
  time_scale
end
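(Just to show how this is meant to plug in, here is a sketch only: the option name dt_param and the exact shape of the driver loop are assumptions for illustration, not quotes from nbody_sh1.rb. The shared step would be recomputed at the top of every iteration.)
def evolve_sketch(c, t_end)                    # hypothetical driver, illustration only
  while @time < t_end
    @dt = c.dt_param * collision_time_scale    # shared, but no longer constant
    send(c.integration_method)                 # advance all bodies by @dt
    @time += @dt
    # ... checks against t_dia and t_out would follow here ...
  end
end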
In addition, we have to make a few more changes. For one thing, the
documentation string for the clop function has to be changed, and it
is a good idea to do that right away, before we forget what it was
exactly that we were doing.
Alice: You're really getting organized at your old age . . .
Bob: Just you wait and see how organized I'll be when I reach your age!
Alice: I know, I asked for it. I'll keep my mouth shut.
Bob: In addition, I'll take out the little trick I had added to make
sure that we didn't just miss a time boundary because of floating point
round-off error. In nbody_cst1.rb we had:
t_dia = @time + dt_dia - 0.5*dt
t_out = @time + dt_out - 0.5*dt
t_end = @time + dt_end - 0.5*dt
which in nbody_sh1.rb will have to revert to:
t_dia = @time + c.dt_dia
t_out = @time + c.dt_out
t_end = @time + c.dt_end
Alice: I see. In the constant time step case, you could afford to flag
a halting condition half a step before it really had to happen, since there
was no danger of stopping prematurely: the step size was fixed. With
variable time steps, you may wind up just barely before the time of an
output, and in that case you still have to take an extra step.
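(To make the round-off argument concrete, here is a toy calculation, not taken from either code:)
dt = 0.1
t = 0.0
10.times { t += dt }     # nominally t == 1.0
t                        # => 0.9999999999999999, a hair below 1.0 in IEEE doubles
# A loop testing "t < 1.0" would take one extra, unwanted step here, while
# testing "t < 1.0 - 0.5*dt" stops it; and with a fixed dt that margin can
# never make the loop stop a whole step too early.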
Hmmm. This brings us to the question of diagnostics. In the variable
time step scheme, you will always overshoot by a fraction of the last
time step, since it is infinitely unlikely that you will happen
to land exactly on the intended halting time t_end.
Bob: And by infinitely unlikely, you mean something like a chance of one
in the largest number that can be represented in double precision floating point.
Yes, I see what you mean. I guess we could make the last step smaller,
adjusting it in such a way that it would exactly hit the required ending
time.
Alice: But to be consistent, you would have to do the same thing for
any output time, for both particle output and diagnostics output.
In other words, you would have to shrink the time step each time you
are about to overshoot a time t_dia or t_out,
in addition to taking care not to pass t_end.
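(For concreteness, the kind of adjustment under discussion could look like the following sketch; this is not part of nbody_sh1.rb, and the helper name is made up.)
def clamp_step(dt, t_dia, t_out, t_end)        # hypothetical helper, sketch only
  t_next = [t_dia, t_out, t_end].min           # next scheduled time of any kind
  t_next - @time < dt ? t_next - @time : dt    # shrink dt so we land on it exactly
end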
Bob: We could, but I prefer not to worry about such niceties for now.
I'm eager to get to individual time steps. Once we're there, we can
clean up these kinds of details.
Alice: I agree, mostly because I see an extra complication lurking.
If we were to put the time steps in a Procrustean bed, cutting off whatever
sticks out beyond an output time, we would get the following unfortunate
side effect. If we ran a simulation from the same initial conditions,
but with different intervals for diagnostics output, say, the run with
the more frequent output would be forced to take slightly more time steps.
That in turn would slightly modify the orbits of all the particles,
by changing the numerical errors involved in integrating with finite
step sizes.
Bob: Aha, and since N-body systems are exponentially unstable, even a
very small difference will propagate quickly through the system, resulting
in a different kind of evolution. Indeed, it would be unfortunate if
a difference in diagnostics output frequency were to alter the orbits of
the particles in the system. I agree, better to leave the rough
edges in place and let the integration overshoot a bit. That way, the
integration errors will not depend on how often we ask for output.
1.4. Multiple Time Step Schemes
Alice: Those are all the changes that we need to make? It seems almost
too simple.
Bob: The real complications will occur once we move to individual time
steps. I can't wait! But let's be careful and first test our shared
time step implementation.
Alice: Wait, there is something that still bothers me. Are you sure that
all the many integration schemes that you have implemented will still work,
across the transition from constant to shared time steps?
Bob: Why not?
Alice: Well, who knows why yes or why not. I want to be sure.
Bob: As long as all particles still share the same step size, at any
given time, why would they care what will happen at the next step?
They will happily step forwards in lock step at the current step.
Alice: Even for Yoshida's algorithm?
Bob: I would think so. Let's have a look at Yoshida's fourth-order
integrator in nbody_sh1.rb:
def yo4
  d = [1.351207191959657, -1.702414383919315]
  old_dt = @dt
  @dt = old_dt * d[0]
  leapfrog
  @dt = old_dt * d[1]
  leapfrog
  @dt = old_dt * d[0]
  leapfrog
  @dt = old_dt
end
See, all particles step forward together, then backward together, and
then forward again, all jointly. Everything is nicely self-contained,
and really, this should work independently of any change in step size.
The current step doesn't care what the previous step did or what the
next step size will be.
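(One way to see this: the three substep coefficients sum to one, so the composite step always advances the system by exactly the current @dt, whatever its value happens to be. A quick check, not part of nbody_sh1.rb:)
d = [1.351207191959657, -1.702414383919315]
2*d[0] + d[1]      # => 1.0 up to round-off: the forward, backward, forward
                   #    substeps together cover exactly one step of size @dt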
Alice: Okay, Yoshida's algorithms are fine here, I agree. But what
about the multistep algorithms? Here is what we have for the case of
nbody_cst1.rb, for the simplest one, a second-order scheme:
def ms2
  if @nsteps == 0
    calc(" @prev_acc = acc(ba,eps) ")
    rk2
  else
    calc(" @old_acc = acc(ba,eps) ")
    calc(" @jdt = @old_acc - @prev_acc ")
    calc(" @pos += @vel*dt + @old_acc*0.5*dt*dt ")
    calc(" @vel += @old_acc*dt + @jdt*0.5*dt ")
    calc(" @prev_acc = @old_acc ")
  end
end
Bob: Aaaahhh, hmmmm, well, you were right after all not to trust my
jumping to conclusions! This algorithm does not carry over in any
direct way to shared time steps. Look at the definition of @jdt:
it implicitly assumes that the dt value used in the previous time step is
the same as the dt value for the current time step.
Alice: A perfectly correct assumption for the case of constant time steps,
which is what we have been working with until now. It never hurts to check
when you start relaxing conditions you have been relying on.
Bob: I admit it. Thanks! Well, we have two options. Either we can just
omit all multi-step schemes from nbody_sh1.rb, or we can try to
repair the situation. It shouldn't be too hard to re-derive the expressions
while taking into account the fact that each time step has a different size.
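(For the record, such a repair might look roughly like the sketch below, written out for a single body and ignoring the calc() string machinery; prev_dt, the size of the previous step, would have to be remembered from one step to the next. This is an illustration, not code from nbody_sh1.rb.)
def ms2_step(pos, vel, old_acc, prev_acc, dt, prev_dt)   # sketch only
  jerk = (old_acc - prev_acc) * (1.0/prev_dt)   # backward-difference jerk estimate
  pos += vel*dt + old_acc*0.5*dt*dt
  vel += old_acc*dt + jerk*0.5*dt*dt            # reduces to @jdt*0.5*dt when prev_dt == dt
  [pos, vel]
end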
Alice: In the spirit of getting on with things, I vote for just leaving
them out for now. As you said, by the time we get to individual time steps,
we can add these methods again, and clean up the whole code.