
1. Shared Time Steps

1.1. Estimating the Time Step Size

Alice: Well, what is next? So far we've been using a constant value for the time step. As long as we use softening, that is fine in most cases, at least if we can make sure that particles all move with similar velocities.

Bob: Yes, we could rely on the fact that we had a natural time scale T, given by the time needed for a particle with a typical velocity to move over a distance equal to the softening length. To keep our integration accurate, all we had to do was to make sure that the time step we used was a small fraction f of T.

Alice: But even in that case, we could have run into trouble. If any particle were to move with a velocity larger than the typical velocity by a factor of more than 1/f, such a particle could cross a distance of more than the softening length within one time step. Since the gravitational potential, and hence the gravitational force, could change significantly over such a distance, such a large time step would spell danger. Already for this reason, it would be better to use adaptive time steps, even if we were to use softening.

Bob: And for our main purpose, modeling dense stellar systems, we are not interested in softening. In our case, we are dealing with point particles, for all practical purposes. Only if stars approach each other within a distance comparable to the sum of their radii, do we have to take into account deviations from point-mass behavior. That deviation, if needed, will then be far more complex than adding a simple form of Plummer softening.

Alice: And if we are dealing with encounters between neutron stars and/or black holes, deviations from point-mass behavior will show up only at tiny distances, of order a hundred kilometers or so.

Bob: So we'll leave softening for those folks who want to study collisionless stellar dynamics, where by definition particle-particle interactions are not considered to be of interest. And yes, this means that we will have to introduce an adaptive way to determine the time step for each particle.

Alice: For starters, let us stick with a shared time step size, the same for all particles at any given time, but variable over time. What we need to do is determine an estimate for the time scale T over which we can expect significant changes to occur in the system. For the shared but variable time step we can then simply use the product of f and T, where f is a small constant that remains fixed during a run, and T is time dependent, computed on the fly.

Bob: The most natural guess would be the time scale for collisions to occur. Even if particles are moving head-on toward each other, it will take a time of order their relative distance divided by their relative velocity to hit each other. And if they move in different directions, the time scale for significant changes in their relative position will still be given by this ratio.

Alice: But what if the particles happen to move at roughly the same velocity, in both magnitude and direction? In that case your collision time scale estimate will give much too large a number; in fact, it will produce infinity if the relative velocity is exactly zero.

Bob: Good point. I guess we need to include another criterion as well. How about a free fall time scale? We can estimate the time it takes for two particles to meet each other, starting at rest. From dimensional analysis it is clear that this time scale must be something like the relative distance divided by the relative acceleration.

Alice: The square root of that quantity, you mean.

Bob: Yes, you should listen to what I mean, not to what I say!

Alice: If you say so; I mean, I'll try!
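
Then let me write down what I think you mean, as a scratch calculation for a single pair of point masses, in units with G = 1. The numbers below are made up purely for illustration; none of this is meant to go into our actual code:

   # Scratch calculation: the two pairwise time scale estimates
   # for a single pair of point masses, in units with G = 1.
   def dot(a, b)                    # inner product of two plain arrays
     a.zip(b).map { |x, y| x * y }.sum
   end

   r = [1.0, 0.0, 0.0]              # relative position (made-up numbers)
   v = [0.0, 0.5, 0.0]              # relative velocity
   m1 = m2 = 0.5                    # the two masses

   r2 = dot(r, r)                                  # |r|^2
   t_coll_sq = r2 / dot(v, v)                      # (|r|/|v|)^2 : [time]^2
   a = (m1 + m2) / r2                              # relative acceleration
   t_ff_sq = Math.sqrt(r2) / a                     # |r|/|a|     : [time]^2
   t_pair = Math.sqrt([t_coll_sq, t_ff_sq].min)    # the shorter of the two
   puts t_pair                                     # prints 1.0 for these numbers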

1.2. Implementation on the Body Level

Bob: There will be no ambiguity once we code it up. I'll copy the code nbody_cst1.rb into a new file nbody_sh1.rb. To start with, I'm going to get rid of all the references to softening. As we just discussed, there is no need to bother with that extra complication once we have an adaptive time step choice.

On the level of the Body class, I'll add the following method:

   def collision_time_scale(body_array)
     time_scale_sq = VERY_LARGE_NUMBER    # constant defined elsewhere in the code
     body_array.each do |b|
       unless b == self
         r = b.pos - @pos                 # relative position
         v = b.vel - @vel                 # relative velocity
         r2 = r*r
         v2 = v*v
         estimate_sq = r2 / v2            # [distance]^2/[velocity]^2 = [time]^2
         if time_scale_sq > estimate_sq
           time_scale_sq = estimate_sq
         end
         a = (@mass + b.mass)/r2          # relative acceleration, with G = 1
         estimate_sq = sqrt(r2)/a         # [distance]/[acceleration] = [time]^2
         if time_scale_sq > estimate_sq
           time_scale_sq = estimate_sq
         end
       end
     end
     sqrt(time_scale_sq)
   end

Alice: With the dimensional analysis arguments nicely commented in. Very nice. Let me follow what you just wrote. You've chosen to work with the square of the time scale, sensibly called time_scale_sq, and at the very end you return the square root of that quantity. It is initialized as a very large number, and I presume that for each body that is not the body we are dealing with, you lower the time scale estimate if that body would encounter our given body within a time shorter than the estimate so far.

As for the details: in the loop where you compare the body self with another body b, you first estimate the time it would take for the two particles to meet, if they were to move at roughly constant velocity. If that estimate leads to a shorter time scale than the one we've obtained so far, we replace the value of time_scale_sq by that of estimate_sq.

Similarly, when the two bodies would meet each other in a free fall time that is shorter than our time scale estimate so far, we make the same kind of replacement. And I see that the case that worried me, a relative velocity of exactly zero, takes care of itself: the floating point division r2 / v2 then simply yields infinity, so the collision criterion drops out and the free fall criterion takes over. Looks good!

1.3. Implementation on the NBody Level

Bob: This gives us an individual time scale, an estimate of the amount of time after which to expect significant changes, but on a per-particle basis. For some particles this will result in a rather long time scale, if they happen to move through the outskirts of an N-body system. Other particles, close to the center, will wind up with a much shorter time scale. In order to play it safe for all particles, we have no choice but to compute the minimum of all these time scale estimates, and use that for the shared time step for the whole system.

Alice: Yes, this is still inefficient, and soon we'll do better with individual time steps. But at least we're already doing a lot better with shared time steps than we did with constant time steps.

Bob: I sure hope so, but we'd better test that, to make sure that is the case. Here is how we can implement the global time step finder, on the level of the N-body class:

   def collision_time_scale
     time_scale = VERY_LARGE_NUMBER
     @body.each do |b|
       indiv_time_scale = b.collision_time_scale(@body)
       if time_scale > indiv_time_scale
         time_scale = indiv_time_scale
       end
     end
     time_scale
   end
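
The evolve loop will then recompute the shared step at the top of each iteration, by multiplying this time scale by the accuracy parameter, which plays the role of the small constant f we talked about. Schematically, leaving out the output and diagnostics checks, the idea is the following; the actual bookkeeping in nbody_sh1.rb may come out slightly differently:

     while @time < t_end
       @dt = dt_param * collision_time_scale   # shared step: f times the time scale
       send(integration_method)                # advance all bodies by @dt
       @time += @dt
       @nsteps += 1
       # checks against t_dia, t_out and t_end go here
     end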

In addition, we have to make a few more changes. For one thing, the documentation string for the clop function has to be changed, and it is a good idea to do that right away, before we forget what it was exactly that we were doing.

Alice: You're really getting organized in your old age . . .

Bob: Just you wait and see how organized I'll be when I reach your age!

Alice: I know, I asked for it. I'll keep my mouth shut.

Bob: In addition, I'll take out the little trick I had added to make sure that we didn't just miss a time boundary because of floating point round-off error. In nbody_cst1.rb we had:

     t_dia = @time + dt_dia - 0.5*dt
     t_out = @time + dt_out - 0.5*dt
     t_end = @time + dt_end - 0.5*dt

which in nbody_sh1.rb will have to revert to:

     t_dia = @time + c.dt_dia
     t_out = @time + c.dt_out
     t_end = @time + c.dt_end

Alice: I see. In the constant time step case, you could afford to flag a halting condition half a step before it really had to happen, since there was no danger that you would stop prematurely, given that the step size was fixed. In the case of variable time steps, you may wind up just barely before the time of an output, and in that case you still have to take an extra step.

Hmmm. This brings us to the question of diagnostics. In the variable time step scheme, you will always overshoot by a fraction of the last time step size, since it is infinitely unlikely that you happen to land exactly on the intended halting time t_end.

Bob: By infinitely unlikely, you mean something like a chance of one in the largest number that can be represented in double precision floating point. Yes, I see what you mean. I guess we could make the last step smaller, adjusting it in such a way that it would exactly hit the required ending time.
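
In code, that would be a one-line adjustment at the top of each step; something like the following sketch, which I'm not actually going to put in:

     @dt = t_end - @time if @time + @dt > t_end   # shrink the last step to land on t_end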

Alice: But to be consistent, you would have to do the same thing for any time of output, for both particle output and diagnostics output. In other words, you would have to shrink the time step each time you are about to overshoot past a time t_dia or t_out, in addition to taking care not to pass t_end.

Bob: We could, but I prefer not to worry about such niceties for now. I'm eager to get to individual time steps. Once we're there, we can clean up this type of detail.

Alice: I agree, mostly because I see an extra complication lurking. If we were to put the time steps in a Procrustean bed, cutting off whatever sticks out beyond an output time, this would have an unfortunate side effect. If we ran a simulation from the same initial conditions, but with different intervals for diagnostics output, say, we would force the system to take slightly more time steps in the case for which we required more frequent output. This in turn would slightly modify the orbits of all particles, by modifying the numerical errors involved in integrating with finite step sizes.

Bob: Aha, and since N-body systems are exponentially unstable, even a very small difference will propagate quickly through the system, resulting in a different type of evolution. Indeed, it would be unfortunate if a difference in diagnostics output frequency would alter the orbits of the particles in the system. I agree, better to leave the rough edges, and let particles overshoot a bit. In that way, the integration errors will not be affected.

1.4. Multiple Time Step Schemes

Alice: Those are all the changes that we need to make? It seems almost too simple.

Bob: The real complications will occur once we move to individual time steps. I can't wait! But let's be careful and first test our shared time step implementation.

Alice: Wait, there is something that still bothers me. Are you sure that all the many integration schemes that you have implemented will still work, across the transition from constant to shared time steps?

Bob: Why not?

Alice: Well, who knows why yes or why not. I want to be sure.

Bob: As long as all particles still share the same step size, at any given time, why would they care what will happen at the next step? They will happily step forwards in lock step at the current step.

Alice: Even for Yoshida's algorithm?

Bob: I would think so. Let's have a look at Yoshida's fourth-order integrator in nbody_sh1.rb:

   def yo4
     d = [1.351207191959657, -1.702414383919315]
     old_dt = @dt
     @dt = old_dt * d[0]        # first forward substep
     leapfrog
     @dt = old_dt * d[1]        # backward substep, since d[1] < 0
     leapfrog
     @dt = old_dt * d[0]        # second forward substep; 2*d[0] + d[1] = 1
     leapfrog
     @dt = old_dt               # restore the full step size
   end

See, all particles step forward together, then backward together, and then forward again, all jointly. Everything is nicely self-contained, and really, this should work independently of any change in step size. The current step doesn't care what the previous step did or what the next step size will be.

Alice: Okay, Yoshida's algorithms are fine here, I agree. But what about the multistep algorithms? Here is what we have for the case of nbody_cst1.rb, for the simplest one, a second-order scheme:

   def ms2
     if @nsteps == 0
       calc(" @prev_acc = acc(ba,eps) ")
       rk2
     else
       calc(" @old_acc = acc(ba,eps) ")
       calc(" @jdt = @old_acc - @prev_acc ")
       calc(" @pos += @vel*dt + @old_acc*0.5*dt*dt ")
       calc(" @vel += @old_acc*dt + @jdt*0.5*dt")
       calc(" @prev_acc = @old_acc ")
     end
   end

Bob: Aaaahhh, hmmmm, well, you were right, after all, not to trust me when I jump to conclusions! This algorithm does not carry over in any direct way to shared variable time steps. Look at the definition of @jdt. It implicitly assumes that the dt value used in the previous time step is the same as the dt value for the current time step.

Alice: A perfectly correct assumption for the case of constant time steps, which we have been working with so far. It never hurts to check, when you are relaxing conditions you had been relying on so far.

Bob: I admit it. Thanks! Well, we have two options: either we just omit all multistep schemes from nbody_sh1.rb, or we try to repair the situation. It shouldn't be too hard to re-derive the expressions while taking into account the fact that each time step has a different size.
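
If we went for the repair, the idea would be to remember the previous step size as well, and to turn the acceleration difference into an actual jerk estimate. Something like the following sketch, written in the style of the nbody_cst1.rb version above; I haven't tested it, and the extra bookkeeping needed for prev_dt is not shown:

   def ms2
     if @nsteps == 0
       calc(" @prev_acc = acc(ba,eps) ")
       rk2
     else
       calc(" @old_acc = acc(ba,eps) ")
       calc(" @jerk = (@old_acc - @prev_acc)/prev_dt ")   # finite difference jerk
       calc(" @pos += @vel*dt + @old_acc*0.5*dt*dt ")
       calc(" @vel += @old_acc*dt + @jerk*0.5*dt*dt ")
       calc(" @prev_acc = @old_acc ")
     end
   end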

Alice: In the spirit of getting on with things, I vote for just leaving them out for now. As you said, by the time we get to individual time steps, we can add these methods again, and clean up the whole code.