
**Alice**: Hi, Bob! You've been up to something again, I can tell.

**Bob**: I got hooked on exploring multi-step methods.

**Alice**: Don't tell me that you went to 6th order!

**Bob**: No, in the sense that I did not *just* go to 6th order.
I actually went to 6th *and* 8th order.

**Alice**: No! That can't be, with your dislike for combinatorics. You're
pulling my leg.

**Bob**: No, I did not do the combinatorics myself. Instead, I used a
symbolic manipulation package. When I saw you struggling with all those
substitutions when we tracked the fourth-order nature of the
Runge-Kutta integrator, I was already wondering whether we shouldn't
use a symbolic package. At that point I didn't bring it
up, since we were in the middle of trying to figure out how things
worked in the first place, and introducing yet another tool might have
been confusing.

Also, when we generalized my two-step method to a four-step method,
I could see the merit of working things through with pen and paper.
Certainly when we're going to show this to our students, we should
demand that they are at least *able* to do it with pen and paper. But
now that the procedure has become crystal clear, we are just faced
with solving a large number of linear equations with the same number
of unknowns, and that is really not something I want to do by hand;
nor would I learn much from doing so.

Incidentally, I also had a look at the literature, and I found that people
generally solve this type of problem using what they call divided
differences, really a form of Newton interpolation. In that case they do
not have to solve the system of equations that we set out to solve. Even
so, their approach is not that simple either when going to higher order.
Given the ease of using a symbolic manipulation program, I think that our
approach is just as good as the standard one.
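
For concreteness, here is a small sketch (an editorial aside, not part of the book's code) of the linear-system route Bob describes. For the jerk, we want coefficients `c_k` such that the combination of stored accelerations `a(t - k*dt)` reproduces `jerk*dt` through sixth order; Taylor expansion turns that into six linear equations, which exact rational arithmetic solves cleanly:

```ruby
# Hypothetical check, not part of the ACS code itself: re-derive the
# jerk coefficients of ms6 with exact rational arithmetic.
# We want c_k with sum_k c_k * a(t - k*dt) = jerk * dt + O(dt**6),
# which by Taylor expansion means sum_k c_k * (-k)**m / m! = delta(m,1)
# for m = 0..5.
n = 6
fact = [1]
(1...n).each { |m| fact << fact[-1] * m }
# Augmented matrix [V | e1]: V[m][k] = (-k)**m / m!, right-hand side
# picks out the first derivative (the jerk).
aug = (0...n).map do |m|
  row = (0...n).map { |k| Rational((-k)**m, fact[m]) }
  row << (m == 1 ? Rational(1) : Rational(0))
  row
end
# Gauss-Jordan elimination (a Vandermonde system, hence nonsingular).
(0...n).each do |col|
  piv = (col...n).find { |r| aug[r][col] != 0 }
  aug[col], aug[piv] = aug[piv], aug[col]
  pivval = aug[col][col]
  aug[col] = aug[col].map { |x| x / pivval }
  (0...n).each do |r|
    next if r == col
    f = aug[r][col]
    aug[r] = aug[r].zip(aug[col]).map { |x, y| x - f * y }
  end
end
c = (0...n).map { |r| aug[r][n] }
p c   # => [(137/60), (-5/1), (5/1), (-10/3), (5/4), (-1/5)]
```

Multiplied through by 60, these are exactly the 137, -300, 300, -200, 75, -12 over 60 that appear in the `jdt` line of `ms6`.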

Here is what I found, with the results for the coefficients directly plugged back into the code:

```ruby
def ms6(dt)
  if @nsteps == 0
    @ap5 = acc
    yo6(dt)
  elsif @nsteps == 1
    @ap4 = acc
    yo6(dt)
  elsif @nsteps == 2
    @ap3 = acc
    yo6(dt)
  elsif @nsteps == 3
    @ap2 = acc
    yo6(dt)
  elsif @nsteps == 4
    @ap1 = acc
    yo6(dt)
  else
    ap0 = acc
    jdt = (ap0*137 - @ap1*300 + @ap2*300 - @ap3*200 + @ap4*75 - @ap5*12)/60
    sdt2 = (ap0*45 - @ap1*154 + @ap2*214 - @ap3*156 + @ap4*61 - @ap5*10)/12
    cdt3 = (ap0*17 - @ap1*71 + @ap2*118 - @ap3*98 + @ap4*41 - @ap5*7)/4
    pdt4 = ap0*3 - @ap1*14 + @ap2*26 - @ap3*24 + @ap4*11 - @ap5*2
    xdt5 = ap0 - @ap1*5 + @ap2*10 - @ap3*10 + @ap4*5 - @ap5
    @pos += (vel+(ap0+(jdt+(sdt2+(cdt3+pdt4/6)/5)/4)/3)*dt/2)*dt
    @vel += (ap0+(jdt+(sdt2+(cdt3+(pdt4+xdt5/6)/5)/4)/3)/2)*dt
    @ap5 = @ap4
    @ap4 = @ap3
    @ap3 = @ap2
    @ap2 = @ap1
    @ap1 = ap0
  end
end
```
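
As a small side check (a sketch, not part of Bob's code): the nested divisions in the `@vel` update are just a Horner form of `ap0/1! + jdt/2! + sdt2/3! + cdt3/4! + pdt4/5! + xdt5/6!`. Feeding in unit values one at a time, with exact rationals, makes the factorial weights visible:

```ruby
# Sketch: confirm that the nested form used in the @vel update of ms6
# expands to ap0/1! + jdt/2! + sdt2/3! + cdt3/4! + pdt4/5! + xdt5/6!.
horner = lambda do |ap0, jdt, sdt2, cdt3, pdt4, xdt5|
  ap0 + (jdt + (sdt2 + (cdt3 + (pdt4 + xdt5/6r)/5)/4)/3)/2
end
# Set one argument to 1 at a time; the result is its Taylor weight.
units = (0..5).map do |i|
  args = [0r] * 6
  args[i] = 1r
  horner.call(*args)
end
p units   # => [(1/1), (1/2), (1/6), (1/24), (1/120), (1/720)]
```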

**Alice**: Again not so bad, and each time derivative still fits on one line.
I see a jerk, a snap, a crackle, and a pop. But what is that next line,
this `x` variable? It must be the derivative of the pop, given that you
multiply it by yet one higher power of the time step. What does it stand
for?

**Bob**: To be honest, I had no idea what to call it, so I just called it
`x`. The advantage was that I could then call the next two derivatives
`y` and `z`, for the eighth-order scheme.

**Alice**: Iron logic! And you used Yoshida's sixth-order integrator, to
rev up the engine, five times.

**Bob**: Yes, I am glad we had both `yo6` and `yo8` lying
around. They sure came in handy!

**Alice**: And you'd like to show me the eighth-order version as well,
wouldn't you?

**Bob**: I can't wait! Here it is, in file `ms8body.rb`.

```ruby
def ms8(dt)
  if @nsteps == 0
    @a7 = acc
    yo8(dt)
  elsif @nsteps == 1
    @a6 = acc
    yo8(dt)
  elsif @nsteps == 2
    @a5 = acc
    yo8(dt)
  elsif @nsteps == 3
    @a4 = acc
    yo8(dt)
  elsif @nsteps == 4
    @a3 = acc
    yo8(dt)
  elsif @nsteps == 5
    @a2 = acc
    yo8(dt)
  elsif @nsteps == 6
    @a1 = acc
    yo8(dt)
  else
    a0 = acc
    j = (a0*1089 - @a1*2940 + @a2*4410 - @a3*4900 + @a4*3675 -
         @a5*1764 + @a6*490 - @a7*60)/420
    s = (a0*938 - @a1*4014 + @a2*7911 - @a3*9490 + @a4*7380 -
         @a5*3618 + @a6*1019 - @a7*126)/180
    c = (a0*967 - @a1*5104 + @a2*11787 - @a3*15560 + @a4*12725 -
         @a5*6432 + @a6*1849 - @a7*232)/120
    p = (a0*56 - @a1*333 + @a2*852 - @a3*1219 + @a4*1056 -
         @a5*555 + @a6*164 - @a7*21)/6
    x = (a0*46 - @a1*295 + @a2*810 - @a3*1235 + @a4*1130 -
         @a5*621 + @a6*190 - @a7*25)/6
    y = a0*4 - @a1*27 + @a2*78 - @a3*125 + @a4*120 - @a5*69 + @a6*22 - @a7*3
    z = a0 - @a1*7 + @a2*21 - @a3*35 + @a4*35 - @a5*21 + @a6*7 - @a7
    @pos += (vel+(a0+(j+(s+(c+(p+(x+y/8)/7)/6)/5)/4)/3)*dt/2)*dt
    @vel += (a0+(j+(s+(c+(p+(x+(y+z/8)/7)/6)/5)/4)/3)/2)*dt
    @a7 = @a6
    @a6 = @a5
    @a5 = @a4
    @a4 = @a3
    @a3 = @a2
    @a2 = @a1
    @a1 = a0
  end
end
```

As you can see, this time I abbreviated the variables yet a little
more: instead of `@ap3`, for example, I wrote `@a3`,
since by now the meaning is clear, I hope.
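
One quick sanity check on the typed-in numbers (an editorial sketch, not from the book): the `z` line is simply the seventh backward difference of the stored accelerations, so its coefficients must be the alternating binomial coefficients of the expansion of (1 - 1)**7:

```ruby
# Sketch: the coefficients in the z line of ms8 should be
# (-1)**k * C(7, k), i.e. the seventh backward difference.
def binom(n, k)
  (1..k).reduce(1) { |prod, i| prod * (n - k + i) / i }   # exact integers
end
coeffs = (0..7).map { |k| (-1)**k * binom(7, k) }
p coeffs   # => [1, -7, 21, -35, 35, -21, 7, -1]
```

These match the `a0 - @a1*7 + @a2*21 - @a3*35 + @a4*35 - @a5*21 + @a6*7 - @a7` line exactly.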

**Alice**: Let's see whether your symbolic manipulator has done its job
correctly . . .

**Bob**: . . . and whether I have done my job correctly, typing in the
results. We'll start with our, by now almost canonical, set of values:

```
|gravity> ruby integrator_driver4g.rb < euler.in
dt = 0.01
dt_dia = 0.1
dt_out = 0.1
dt_end = 0.1
method = ms6
at time t = 0, after 0 steps :
  E_kin = 0.125 , E_pot = -1 , E_tot = -0.875
    E_tot - E_init = 0
  (E_tot - E_init) / E_init = -0
at time t = 0.1, after 10 steps :
  E_kin = 0.129 , E_pot = -1 , E_tot = -0.875
    E_tot - E_init = 3.38e-13
  (E_tot - E_init) / E_init = -3.86e-13
  1.0000000000000000e+00
  9.9499478008960474e-01  4.9916426216165405e-02
 -1.0020902859905861e-01  4.9748796006154566e-01
```

Hmm, almost too good. Let's be bold and try a much longer time step, to
get some more playing room, away from machine accuracy.

```
|gravity> ruby integrator_driver4h.rb < euler.in
dt = 0.05
dt_dia = 0.1
dt_out = 0.1
dt_end = 0.1
method = ms6
at time t = 0, after 0 steps :
  E_kin = 0.125 , E_pot = -1 , E_tot = -0.875
    E_tot - E_init = 0
  (E_tot - E_init) / E_init = -0
at time t = 0.1, after 2 steps :
  E_kin = 0.129 , E_pot = -1 , E_tot = -0.875
    E_tot - E_init = 1.53e-13
  (E_tot - E_init) / E_init = -1.75e-13
  1.0000000000000000e+00
  9.9499478009234266e-01  4.9916426209296906e-02
 -1.0020902857520513e-01  4.9748796006113827e-01
```

Hey, that is . . .

**Alice**: That is strange indeed. Independent of the order, the results
should at least get worse, certainly for such a short part of a Kepler
orbit. It's not like we just encountered a singularity or something.

**Bob**: It feels as if I have just violated the second law of thermodynamics
or something. How can that be?

**Alice**: As you say in cases like this, let's wiggle the wires, just to
see what happens. How about taking an even larger time step, by a factor
of two?

**Bob**: May as well. Here is the result:

```
|gravity> ruby integrator_driver4i.rb < euler.in
dt = 0.1
dt_dia = 0.1
dt_out = 0.1
dt_end = 0.1
method = ms6
at time t = 0, after 0 steps :
  E_kin = 0.125 , E_pot = -1 , E_tot = -0.875
    E_tot - E_init = 0
  (E_tot - E_init) / E_init = -0
at time t = 0.1, after 1 steps :
  E_kin = 0.129 , E_pot = -1 , E_tot = -0.875
    E_tot - E_init = 9.12e-12
  (E_tot - E_init) / E_init = -1.04e-11
  1.0000000000000000e+00
  9.9499478026806454e-01  4.9916425775239165e-02
 -1.0020902692758932e-01  4.9748796009965129e-01
```

**Bob**: If we had just done the last two runs, we would have been
very happy and content. How difficult it is to test a piece of software!
We know that there is something *seriously* wrong, because the first run
of the last three gave a result that was way off, and yet the last two
runs with much larger time steps seem to behave like a charm.

**Alice**: Like a charm indeed. Now there must be *something* different,
between the first run, and the last two.

**Bob**: But what can there be different? Same code, same procedure, same
Bob, same Alice.

**Alice**: Only larger time steps. And since you kept the total
integration time constant, fewer steps in total. In fact, the last run
had just one step, and the run before that only two steps. But how can
that make a diff . . . . Ah! You talked about revving up the engine . . .

**Bob**: . . . so if I am only doing *one* or *two* time steps, as I've
done in the last two runs . . .

**Alice**: . . . you're not testing Bob, you're testing Yoshida!

**Bob**: And clearly, Yoshida has done a very good job indeed. And in the
first of those three runs, where I took ten time steps, the real six-step
formula only kicked in after the five `yo6` startup steps, so only in the
last five steps. Okay! What a relief. No problems with the second law of
thermodynamics, after all!

**Alice**: So we have to take many more steps, far more than the five
startup steps needed to switch over from the `yo6` method to the
real `ms6` method. Let's shoot for a full hundred time steps:

```
|gravity> ruby integrator_driver4j.rb < euler.in
dt = 0.01
dt_dia = 1
dt_out = 1
dt_end = 1
method = ms6
at time t = 0, after 0 steps :
  E_kin = 0.125 , E_pot = -1 , E_tot = -0.875
    E_tot - E_init = 0
  (E_tot - E_init) / E_init = -0
at time t = 1, after 100 steps :
  E_kin = 0.867 , E_pot = -1.74 , E_tot = -0.875
    E_tot - E_init = 1.31e-08
  (E_tot - E_init) / E_init = -1.5e-08
  1.0000000000000000e+00
  4.3185799584762230e-01  3.7795822363439124e-01
 -1.3171720029068033e+00  5.0109728337030257e-03
```

```
|gravity> ruby integrator_driver4k.rb < euler.in
dt = 0.001
dt_dia = 1
dt_out = 1
dt_end = 1
method = ms6
at time t = 0, after 0 steps :
  E_kin = 0.125 , E_pot = -1 , E_tot = -0.875
    E_tot - E_init = 0
  (E_tot - E_init) / E_init = -0
at time t = 1, after 1000 steps :
  E_kin = 0.867 , E_pot = -1.74 , E_tot = -0.875
    E_tot - E_init = 2.02e-14
  (E_tot - E_init) / E_init = -2.31e-14
  1.0000000000000000e+00
  4.3185799595666452e-01  3.7795822148734887e-01
 -1.3171719961439259e+00  5.0109410148471960e-03
```

```
|gravity> ruby integrator_driver4l.rb < euler.in
dt = 0.002
dt_dia = 1
dt_out = 1
dt_end = 1
method = ms6
at time t = 0, after 0 steps :
  E_kin = 0.125 , E_pot = -1 , E_tot = -0.875
    E_tot - E_init = 0
  (E_tot - E_init) / E_init = -0
at time t = 1, after 500 steps :
  E_kin = 0.867 , E_pot = -1.74 , E_tot = -0.875
    E_tot - E_init = 1.36e-12
  (E_tot - E_init) / E_init = -1.55e-12
  1.0000000000000000e+00
  4.3185799595664653e-01  3.7795822148753511e-01
 -1.3171719961446775e+00  5.0109410176396871e-03
```

Great! Close to a factor of 64, and now really with the multi-step
integrator, not with a little startup help from Yoshida. We're talking
500 and 1000 steps here, so we're finally testing the six-step code itself.
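
The factor mentioned here can be checked directly (an editorial sketch, not from the book): halving the time step from 0.002 to 0.001 should shrink the energy error of a sixth-order scheme by about 2**6 = 64. Using the two errors reported in the runs above:

```ruby
# Sketch: order check for ms6 from the runs with dt = 0.002 and dt = 0.001.
err_large = 1.36e-12   # |E_tot - E_init| at dt = 0.002, 500 steps
err_small = 2.02e-14   # |E_tot - E_init| at dt = 0.001, 1000 steps
ratio = err_large / err_small
p ratio   # about 67, close to the 2**6 = 64 expected at sixth order
```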

**Bob**: Time to test the eighth-order implementation. Given that we have
now been warned against using too few time steps, let's start with one
hundred steps, as we did before:

```
|gravity> ruby integrator_driver4m.rb < euler.in
dt = 0.01
dt_dia = 1
dt_out = 1
dt_end = 1
method = ms8
at time t = 0, after 0 steps :
  E_kin = 0.125 , E_pot = -1 , E_tot = -0.875
    E_tot - E_init = 0
  (E_tot - E_init) / E_init = -0
at time t = 1, after 100 steps :
  E_kin = 0.867 , E_pot = -1.74 , E_tot = -0.875
    E_tot - E_init = 5.61e-10
  (E_tot - E_init) / E_init = -6.41e-10
  1.0000000000000000e+00
  4.3185799594296315e-01  3.7795822152601549e-01
 -1.3171719965318329e+00  5.0109417456880440e-03
```

```
|gravity> ruby integrator_driver4n.rb < euler.in
dt = 0.005
dt_dia = 1
dt_out = 1
dt_end = 1
method = ms8
at time t = 0, after 0 steps :
  E_kin = 0.125 , E_pot = -1 , E_tot = -0.875
    E_tot - E_init = 0
  (E_tot - E_init) / E_init = -0
at time t = 1, after 200 steps :
  E_kin = 0.867 , E_pot = -1.74 , E_tot = -0.875
    E_tot - E_init = 3.44e-12
  (E_tot - E_init) / E_init = -3.93e-12
  1.0000000000000000e+00
  4.3185799595658086e-01  3.7795822148755803e-01
 -1.3171719961463324e+00  5.0109410188389162e-03
```

```
|gravity> ruby integrator_driver4o.rb < euler.in
dt = 0.0025
dt_dia = 1
dt_out = 1
dt_end = 1
method = ms8
at time t = 0, after 0 steps :
  E_kin = 0.125 , E_pot = -1 , E_tot = -0.875
    E_tot - E_init = 0
  (E_tot - E_init) / E_init = -0
at time t = 1, after 400 steps :
  E_kin = 0.867 , E_pot = -1.74 , E_tot = -0.875
    E_tot - E_init = 1.45e-14
  (E_tot - E_init) / E_init = -1.66e-14
  1.0000000000000000e+00
  4.3185799595666458e-01  3.7795822148734654e-01
 -1.3171719961439252e+00  5.0109410148204553e-03
```

**Bob**: And close to machine accuracy as well.
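
The same kind of check works for `ms8` (again an editorial sketch, using the errors from the last two runs): with the time step halved, the base-2 logarithm of the error ratio estimates the observed order directly.

```ruby
# Sketch: estimate the observed order of ms8 from the runs with
# dt = 0.005 and dt = 0.0025.
err_large = 3.44e-12   # |E_tot - E_init| at dt = 0.005, 200 steps
err_small = 1.45e-14   # |E_tot - E_init| at dt = 0.0025, 400 steps
order = Math.log(err_large / err_small) / Math.log(2)
p order   # about 7.9, consistent with an eighth-order scheme
```

The slight shortfall from 8 is unsurprising, since the finer run is already brushing against machine accuracy.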

**Alice**: I think we can declare victory, once again. Congratulations,
Bob: your second implementation of an eighth-order integration scheme,
after your `yo8` implementation. And this time you found all
the coefficients yourself!

**Bob**: Or, to be honest, the symbolic manipulation package found them.
Well, I'm glad it all worked, especially after that scare with the
start-up mystery!

And of course, if we really wanted to go all out, I should implement full predictor-corrector versions for each of them. Well, perhaps some other day.
