# Thread: More precise orbital simulation program

1. Originally Posted by publius
And that bothers me why it blows up like that with the average, but not the new velocity. I'm going to do some ciperhin' on that............

It bothers me because we're modelling this as a constant acceleration over the time step.

dv/dt = a --> v = at + v0 = dx/dt --> x = 1/2 at^2 + v0t + x0;

Substitute at = v - v0 and we have x = 1/2 (v - v0)t + v0t + x0;
or

x = 1/2 (v + v0)t + x0

But, as it is, what we're doing is saying:

x = vt + x0 = at^2 + v0t + x0, which is equivalent to saying it is instantly accelerated to v, and moves at that constant speed over the t interval.

That just does not compute to me right now.

-Richard
Well, if we add the velocities over equal time intervals for some acceleration to find the distance travelled, we get something like 1+2+3+4+5+... = n*(n+1)/2, so for infinitesimal time intervals, the distance travelled becomes just x=a*t^2/2, but not just over a single time interval. To cut the difference in half each time is to cut the acceleration, so bodies just fly out of orbit. What we need to do is to subtract the value a*ti^2/2 each time somehow as it goes through the loops to get rid of the extra acceleration that way and put the orbital paths precisely where they should be. That would make it much more precise, so I guess that's the next thing to work on. See my post #23.

2. In regard to averaging the velocities, I have to say that I don't really get it either. It would seem like something like that should work out. I've already used it to make orbital paths millions of times more precise, but only for circular orbits, and it is the only thing that works for this so far, by averaging out the original and final velocities for the distance, but with a couple of other things thrown in with it. I'll have to keep going through the values and see if I can't figure out what's the deal.

3. Have you run it without the average velocities yet? Just curious. Try it for qq = 1.0L + f3*(1.0L/6.0L + f3*(1.0L/14.0L + f3/26.0L) ). The orbits should be really tight even at higher time steps with that, except for the natural precession, of course. At least it is for Mercury and the sun, all the way up to 500K time step. I'll keep trying to find the precise formula for it, though. It should become more obvious what it should be with more elliptical orbits.

4. Originally Posted by publius
And that bothers me why it blows up like that with the average, but not the new velocity. I'm going to do some ciperhin' on that............
I think this is what they call the difference between an "implicit" (use new v) and "explicit" (use old v) approach. The average is a combination, which should formally be the most accurate, but accuracy is not the only issue-- stability is also a concern. Implicit modes are more stable, so I'm not surprised the implicit approach does not blow up when the partially explicit approach, though seeming more accurate, does blow up.

5. Originally Posted by grav
Have you run it without the average velocities yet? Just curious. Try it for qq = 1.0L + f3*(1.0L/6.0L + f3*(1.0L/14.0L + f3/26.0L) ). The orbits should be really tight even at higher time steps with that, except for the natural precession, of course. At least it is for Mercury and the sun, all the way up to 500K time step. I'll keep trying to find the precise formula for it, though. It should become more obvious what it should be with more elliptical orbits.

I'll get around to playing some more tonight. As it was, the erroneous averaging logic ("old v" was pretty close to new v for the planets because it was saved only after the most significant contribution, from the sun, was already applied) was pretty close. Only a difference in the 4th-5th digit at most. Your 'qq' was pretty good, at least for the inner planets. I suspect it might be too much for slower speeds and higher distances, but I'd have to look carefully (I think the outer planets were slightly precessing in the opposite direction).

Now, if you can come up with a correction that keeps us from losing the moon in short order at 64K timestep, I will be impressed shore 'nuff. Better hurry, because we lose it in a hundred years or so........

Just for fun, I may let it lose the moon, then tighten up the time step just a bit for better accuracy and see just what happens with another object in earth's solar orbit. Will one or both be thrown out, or will they collide, or just what will happen?

-Richard

6. Originally Posted by Ken G
I think this is what they call the difference between an "implicit" (use new v) and "explicit" (use old v) approach. The average is a combination, which should formally be the most accurate, but accuracy is not the only issue-- stability is also a concern. Implicit modes are more stable, so I'm not surprised the implicit approach does not blow up when the partially explicit approach, though seeming more accurate, does blow up.
Ken,

Thanks for looking at this. So this may be just the way this particular ball bounces -- or orbits. When I see something counter to expectations, I always want to know why.

-Richard

7. Originally Posted by publius
I'll get around to playing some more tonight. As it was, the erroneous averaging logic ("old v" was pretty close to new v for the planets because it was saved only after the most significant contribution, from the sun, was already applied) was pretty close. Only a difference in the 4th-5th digit at most. Your 'qq' was pretty good, at least for the inner planets. I suspect it might be too much for slower speeds and higher distances, but I'd have to look carefully (I think the outer planets were slightly precessing in the opposite direction).
Thanks publius. With the program the way it is now, I don't think you should see much or any precession at 64K, except for that due to the planets affecting each other, of course. I think the average velocities may have been throwing it off some, so if you liked it before, you should really be impressed now. Also, the variable F3 which is used in the terms for qq is F3=MM*ti^2/d^3, so it becomes exponentially smaller with lesser time intervals and greater distance, and will not overcompensate, but actually does an even better job with the correction. It's only at very large time steps and small distances where F3 comes close to 1 that the precession becomes noticeable again, since the terms are not brought out as an infinite sum and because I have not yet found the precise formula for it, as well as the paths of the orbits running off anyway because of the large time step to begin with. By the way, that new formula for qq will just be qq=1 + F3/6.0L + F3*F3/14.0L for now (final answer for the time being). That is a simple and short version that won't take too much time for the computer to run, and it works about as well as anything else I've come up with so far. I'm having trouble reading past that third term.

Now, if you can come up with a correction that keeps us from losing the moon in short order at 64K timestep, I will be impressed shore 'nuff. Better hurry, because we lose it in a hundred years or so........
-Richard
I think the averaged velocities probably did that. It was throwing the orbits way out in mine, but of course yours stayed somewhat closer as it ran through the loops. If not, then we'll need to keep working on a more precise orbit so we can run it at that speed or greater. I think it should work now, though, because the program will plot anything up to 10 points or so per orbit, and the moon would have 40 at 64K, which would be enough to throw it away with the velocities slightly off, but it should only show some mild precession with the new program at that speed.

8. Okay, I've run the Earth-moon system and it is stable, but with only 40 points to plot at such a high time step, F3 is close to 1, so it still precesses quite a bit, although gradually. The planets should be fine, though. So without qq as precise as it could be, we may want to jimmy-rig the equation for the Earth-moon system in the solar system simulation. An idea for one way to do this would be with the formula for qq in the code to be qq=1 + F3/6.0L + F3*F3*q1[j]*q1[k]/14.0L. Here, we would set q1 for all of the planets at 1 at the start of the program, but the moon would be 1.263. That will stabilize the precession precisely. I know, it's not exactly ideal, but it will work for this particular simulation, anyway. We could do that for the rest of the planets as well, if they were slightly off, but we would have to do it for sets of every two bodies and take the highest value so that we don't take away the natural precession with it. I wouldn't suggest making something like this part of the permanent program or anything, though, but it's just a thought for experimenting with it. Now that I think about it, though, since the Earth-moon system precesses so well at high speed, I might try to use it to come up with a closer approximation for qq to begin with, at least for our solar system, and post that instead.

9. I've added an extra loop to the code for qq, so the terms can be brought out indefinitely at will, and I'm using that to find the precise formula now. One can set the value of 'qa' to whatever they want at the beginning of the program to decrease the precession better, but for the formula I've got this time, qa=5 should suffice for the Earth-moon system, anyway. It will only slow that part of the program down slightly, maybe a percent or so, but it's worth it, especially since it allows the simulation to be used at greater time steps. For just the planets, it can probably be lowered to qa=2. I've also found a better formula for when it is brought out like this at higher speeds (although that will probably change in the future as I refine it). The moon will only precess a few hundred arc-seconds per year now at 64K, and other tight orbits at large time steps respond likewise. So here's the new code. The new loop is the one over 'qb'.

Code:
```// CodeOpGrav.cpp : N-body loop for Gravity Simulator
// Use Grav's "qq" thing

#include "stdafx.h"
#include <math.h>   // needed for sqrt() and pow()
#define MSIZE 800
BOOL APIENTRY DllMain( HANDLE hModule,
DWORD  ul_reason_for_call,
LPVOID lpReserved
)
{
return TRUE;
}

__declspec(dllexport) int _stdcall RunLoop(int NumObjects, double * restrict ObjMass, double * restrict objx, double * restrict objy, double * restrict objz, double * restrict objvx, double * restrict objvy, double * restrict objvz, double * restrict objsize, long * restrict CollisionObjectA, long * restrict CollisionObjectB)
{
if (NumObjects < 1) return 0; //Should be 1 or more

// static double tM[MSIZE],tM2[MSIZE];
static double tM,tM2;
// static double dx[MSIZE], dy[MSIZE], dz[MSIZE];
static double dx, dy, dz;
// static double D[MSIZE], f[MSIZE], D2[MSIZE];
static double D, f1, f2, f3, D2, D3;
double fx;
double fy;
double fz;
double qq;
int qa = 5;   // terms in the qq series; qa=2 is probably enough for the planets alone
int Collisions;
Collisions = 0;
for (int k=1;k<=NumObjects-1;k++) {
tM = ObjMass[k] * 398600440000000;

for (int j=k+1;j<=NumObjects;j++) {

dx = objx[j] - objx[k];
dy = objy[j] - objy[k];
dz = objz[j] - objz[k];
D2 = dx*dx+dy*dy+dz*dz;
D3 = (D = sqrt(D2) )* D2;

if  (((objsize[k] + objsize[j])/2) > D) {
Collisions = Collisions + 1;
CollisionObjectA[Collisions] = j;
CollisionObjectB[Collisions] = k;
}
tM2 = ObjMass[j] * 398600440000000;
f3 = (tM + tM2)/D3;
qq = 1;
for (int qb=1;qb<=qa;qb++) {
qq = qq + pow(f3,qb)*(2.0-1.0/qb)/qb/(4.0*qb+2.0);
}
f1 = tM*qq/D3;
f2 = tM2*qq/D3;
if (tM > 0) {
objvx[j] = objvx[j] - dx*f1;
objvy[j] = objvy[j] - dy*f1;
objvz[j] = objvz[j] - dz*f1;
}
if (tM2 > 0) {
objvx[k] = objvx[k] + dx*f2;
objvy[k] = objvy[k] + dy*f2;
objvz[k] = objvz[k] + dz*f2;
}

}

}
//#pragma vector always

for (int h=1;h<=NumObjects;h++) {
objx[h] = objx[h] + objvx[h];
objy[h] = objy[h] + objvy[h];
objz[h] = objz[h] + objvz[h];
}

//End of loop computations ------
return 1; //Finished successfully

}```
Last edited by grav; 2007-Apr-30 at 04:26 AM.

10. Grav,

I was meaning to get back to work at the old VS, looking at this, but I haven't. But I'm going to have to be the Gillianren of computer grammar here.

Code:
```qq = 1;
for (qb=1;qb<=qa;qb++) {
qq = qq + f3^qb*(2-1/qb)/qb/(4*qb+2);
}```
This will not compile by a long shot. In C, one must declare variables before one uses them; there are no implicit typing rules as with BASIC or even FORTRAN. That's what all that stuff about "static double" is at the very top of the function body. So 'qb' and 'qa' must be declared according to which type they are. The intent is integer.

Second, C does not have an exponentiation operator. That's '^' in BASIC and '**' in FORTRAN. One must call a function for that [pow() is the standard library name for it]. In (optimizing) practice, most compilers will make intrinsics out of some of these functions, like sqrt(), effectively making them operators, really.

Now, in C the '^' character is the bitwise XOR operator, so your code above told the compiler you wanted to take the bitwise XOR of a floating point value (something I'm not even sure is legal) and 'qb', then multiply by the remainder of the expression! Not what you intended at all.

That's the syntax and semantics. So, until you learn C, give me your algorithm, and let me write it in C.

-Richard

11. Originally Posted by publius
That's the syntax and sematics. So, until you learn C, give me your algorithm, and let me write it in C.
Thanks. Sounds like a plan to me. I've changed the post according to what you said, but it still may need to be looked at again. I noticed I did forget the 'int' for qb in the loop. I suppose that would apply to qa at the beginning of the program then, too.

12. Originally Posted by grav
Thanks. Sounds like a plan to me. I've changed the post according to what you said, but it still may need to be looked at again. I noticed I did forget the 'int' for qb in the loop. I suppose that would apply to qa at the beginning of the program then, too.
Grav,

I'll get on it sometime. All that verbiage of mine may have been confusing, but '**' is the FORTRAN operator, so it still doesn't work. C has no such operator, you have to use a function call, ie z = pow(x, y), which returns z = x^y.

We want to avoid doing exponentiations. That takes moocho clock ticks. While there is a single hardware square root instruction, there is no general power instruction. Intel's compiler can do it as an intrinsic (if it thinks it will save time), but doing it takes quite a few floating point instructions.

If you're interested, we basically make use of this

x^y = 2^(y* log_2(x)) The IEEE floating point stores numbers in binary, and base 2 is therefore the natural choice of logarithms. But even there, you have some work to do. The x87 has some log2 and 2^x functions, but the range is restricted. You have to remove the exponent yourself, then put it back in. It's complicated. So, when we're going for speed, we do not want to do the power function at all, unless we have to. For example, x^50 would probably take less time than 50 multiplications. And with non integer powers, we have to bite the bullet as well, unless there's a trick we can make use of.

And while this may be more than you want to know, declaring variables in a loop control statement is a C++ feature, not plain C (although some compilers may allow it). There is another C++ feature that Tony used in the original code, so technically, this code is C++ and is being compiled as such, though it does not use the object oriented features nor any of the other stuff. I would've done it as plain C, to prevent the C++ "name decoration" in the object files, but I didn't want to lose the above mentioned things.

-Richard

13. Originally Posted by publius
We want to avoid doing exponentiations. That takes moocho clock ticks.
You're right, it would probably be much faster to do the exponentiation as we go instead of having the computer start from scratch with each one. Since the exponents are just consecutive integers, we could also do it like this.

Code:
```double qc;     /* declared beforehand, then... */

qq = 1.0L;
qc = 1.0L;
for (int qb=1;qb<=qa;qb++) {
qc = qc*f3;
qq = qq + qc*(2.0L-1.0L/qb)/qb/(4.0L*qb+2.0L);
}```
or we could just go ahead and start it off to save a loop with

Code:
```double qc;     /* declared beforehand, then... */

qq = 1.0L + f3/6.0L;
qc = f3;
for (int qb=2;qb<=qa;qb++) {
qc = qc*f3;
qq = qq + qc*(2.0L-1.0L/qb)/qb/(4.0L*qb+2.0L);
}```
I also noticed I forgot the 1.0L thing originally. Is this right for that?

14. Grav,

I didn't get around to putting your last changes in, but I did remove the average stuff completely (with your original 'qq' in). That did pretty darned good. Mercury, Venus, and Earth stay pretty darned tight over 20,000 years. Mars, Jupiter, and Saturn "smear" a bit. Not much (at the screen scale) but it's there. Neptune and Uranus stay pretty fixed. Pluto tends to get more circular. The Neptune "crossing" point stays fixed, but the far side pulls in a bit.

And running the one with the Moon in it, we don't lose the moon even at 64K (at least for a few thousand years). Now, speeding it up beyond that, the Moon does go wild, though I don't know exactly why. I think it collided with the earth (or something) because it disappeared at some high time step and was nowhere to be found when I turned the graphics back on after letting it run.

-Richard

15. Originally Posted by publius
Grav,

I didn't get around to putting your last changes in, but I did remove the average stuff completely (with your original 'qq' in). That did pretty darned good. Mercury, Venus, and Earth stay pretty darned tight over 20,000 years. Mars, Jupiter, and Saturn "smear" a bit. Not much (at the screen scale) but it's there. Neptune and Uranus stay pretty fixed. Pluto tends to get more circular. The Neptune "crossing" point stays fixed, but the far side pulls in a bit.

And running the one with the Moon in it, we don't lose the moon even at 64K (at least for a few thousand years). Now, speeding it up beyond that, the Moon does go wild, though I don't know exactly why. I think it collided with the earth (or something) because it disappeared at some high time step and was nowhere to be found when I turned the graphics back on after letting it run.

-Richard
That sounds great. I'm thinking Mars might also have some natural precession due to its proximity to Jupiter at times. Also, we need at least ten or twenty points to plot per orbit to remain stable, so the moon won't stay with us much past 100K time step or so. It would seem it would just fly away from Earth at too high a time step, but I would have thought it would still orbit the sun or something, though, like you had the first time, so maybe it did collide.

16. Originally Posted by grav
You're right, it would probably be much faster to do the exponentiation as we go instead of having the computer start from scratch with each one. Since the exponents are just consecutive integers, we could also do it like this.
......
I also noticed I forgot the 1.0L thing originally. Is this right for that?
Grav,

Yes, that's the way to do it. When writing floating point code, think about how many operations you're doing with each step. Always ask, how can I get the result I want with the fewest operations and steps.

The 'L' on the end is the way you tell the compiler what kind of floating point constant it is: a plain 1.0 is already double precision, 1.0f is single, and 1.0L makes it long double.

What can get you in trouble is intermediate precision with expressions. The rules about precision promotion are complex, so many times it's best to explicitly state what you mean.

C is a very terse language, and that's the way it was intended. To me, FORTRAN is still king (long live the king) for numerical work. C is the language for utilities/bit-banging/lower level stuff. But it's a matter of taste. C has a rich set of operators. For example in C, rather than this

a = a + b, you can write this

a += b, which does the same thing (or -=, *=, etc)

You don't even need 'if' statements actually. You can use the "ternary operator":

c = (a < b) ? ( a ) : ( b )

That expression assigns a to c if a < b, or b to c otherwise.

C can be so terse that they have contests about it. The IOCCC, the International Obfuscated C Code Contest. The objective is to write something that does something clever in the most obfuscated, terse, un-understandable form possible.

Entries have actually found bugs in compilers by using some of the most arcane and recursive pre-processor language features. And it's sort of a badge of honor to write code that is syntactically correct, but crashes a compiler by its arcaneness and/or lexical complexity.

-Richard

17. Originally Posted by grav
That sounds great. I'm thinking Mars might also have some natural precession due to its proximity to Jupiter at times.
Mars' most fascinating dynamical orbital element is its eccentricity. We are used to thinking of Mars as having an elliptical orbit, but that is only because we are creatures of the 20th and 21st centuries. At times Mars' orbit is rounder than Earth's. And Jupiter is by far the primary culprit. You can verify this by deleting Jupiter, re-simulating, and comparing your results.

It's difficult to observe a changing eccentricity on a plotted diagram, but there is a feature in the File menu called "Output File". It lets you output an object's orbital elements at specified time intervals. Then you can open this .txt file in Excel as a comma-delimited file and graph the elements.

To determine if your simulated delta eccentricity is real or an artifact of the code, run it at two different timesteps and see if you get the same results.

Most of the time Mars will be more eccentric than Earth, as Earth's eccentricity also varies from a max to a min, but with a smaller amplitude than Mars. Again, Jupiter is the primary culprit.

18. Originally Posted by publius
So this may be just the way this particular ball bounces -- or orbits. When I see something counter to expectations, I always want to know why.
I didn't go into the "why" because I'm not an expert, but my understanding is that the problem is when the equations are "stiff". What this really means is that you have behavior on two timescales that you want to study, but one timescale is very much longer than the other (like the gradual evolution of a system that orbits relatively rapidly). Thus, if you want to keep the focus on the orbit, you can have unstable transients creep in when you look at the gradual evolution, but if you want to focus on the gradual evolution, you can't just average over the orbiting aspects without losing too much accuracy. It's a tradeoff between stability and accuracy that you often encounter in "stiff" situations, and it is related to the famous "Courant condition" for partial differential equations (which have feedback mechanisms not present in your ordinary differential equation, so it gets even more complicated). Without getting into all that, I'll just say that oftentimes the more accurate looking approach is not the more stable-- and implicit schemes tend to work best because they don't allow the errors per orbit to accumulate into unstable behavior over the course of many many orbits.

Since you like to get your hands dirty, let's look at a very simple case. Let's say you want to model a simple harmonic oscillator over many many oscillations. A silly thing to do, as there's an analytic solution, but let's just say. OK, so the equation, using complex notation (and just take the real part of everything at the end), is v = ix (using omega=1), so v is 90 degrees out of phase with x. But an explicit scheme would say dx=dt*ix, where x is the current value of x, so the x at the next timestep becomes (1+i*dt) times the old x. Horrors-- the norm of that is sqrt(1+dt^2) > 1! So if you just run this for a very many number of oscillations, your system goes unbounded, and exponentially so. The longer you want to run it, the shorter the timesteps have to be, and the computational time goes through the roof.

So instead you use an implicit scheme, in which you say that dx = dt*ix where x is the new x, not the old one, and this requires doing some algebra to yield that the new x is 1/(1-i*dt) times the old x. The norm of that is 1/sqrt(1+dt^2) < 1, so this is stable. It's still a problem because this oscillator will dissipate energy and decay into no motion, but you can handle that with other techniques designed to force energy to be conserved. What we should really be looking at is not the oscillation of the system itself, but rather oscillatory perturbations to that motion, like a slight ellipticity in an orbit. Then you do want those perturbing oscillations to decay away, rather than unstably grow, if they weren't supposed to be there in the first place. But basically, "stiff" equations are always tricky, so that's why you need real hydrodynamics expertise to be sure the results are reliable. There's a tendency to think a solution that doesn't blow up is accurate, but that's not guaranteed, if the basic conservation laws are not upheld.

19. Ken,

Thank you. That was nicely explained, and the harmonic oscillator example was a very good way to illustrate it. "Implicit vs. Explicit" rang a bell, and lo and behold,

http://en.wikipedia.org/wiki/Implicit_method

which gives an example of both for Euler's method, although for

y' = -y^2, not 1/y^2.

I had a course in numerical analysis back in school, but this was just the elementary stuff. We never got into the heavy theory of numerical attacks on differential equations, other than very simple things. The course had an emphasis on physics; three professors collaborated on it, a math, a computer science, and a physics prof, with the mathematician sort of leading the pack.

I learned a lot there about the nuts and bolts of finite floating point arithmetic, which was the main point. If I remember, our final assignment, well, the one two other students and I received, was doing a numerical solution to Schrodinger's equation for helium, with two interacting electrons. Could we get anything that agreed with the measured energy levels? We did, but we had some fun. I do remember "explicit vs implicit" in there.

They held our hands generally, but there were some little details where things would run amok if we weren't careful and needed to apply the things we'd learned. I'm trying to remember -- I probably have the notes and even code stored buried somewhere -- it involved doing a big volume integral numerically each iteration, then doing something with that (our goal was the energy levels). Anyway, it took loads of computer time.

They had purchased a block of Cray time from somewhere for the course. We had to polish up our algorithms locally, and I remember staying up nearly all night just doing that polishing on a local VAX, which even after all that time barely got started. And I remember getting an e-mail from that VAX sysadmin the next morning because I'd left that job running one time. Something happened where the priority got bumped up because no one else was on it much, and there it was hogging CPU.

Once we were satisfied, we got to run it to completion on the Cray. That was fun.

But danged if I can remember hardly any of it. :sigh:

-Richard

20. I have got more stuff "archived" here than I know what to do with. I can't even remember 10% of what I do have. Poking around, I've got library after library of numerical toolbox routines to do all sorts of good stuff, including various diffy-eq routines of all sorts.

And I think I'm going to try and see if I can still cut some mustard, and use them to solve our N-body here. It will be FORTRAN all the way. First, I'm going to see if I can't get an Intel FORTRAN version of the .DLL working so Tony's program can call it. Don't hold your breath on even this simple step.

Then, I'm going to try to use some of all that good high order diffy-eq stuff and come up with something that will rock 'n roll the ol' N-body here. Definitely don't hold your breath there, because I'm going to have to do serious reading and remembering.....but I'm gonna see if I can't still cut some mustard here.

I've got all this dadgummed computer power here and all I've been using it for is nonsense. Computers are made to crunch numbers, and I've been wasting mine on crap.

-Richard

21. Well, I got tired of just running trial and error formulas to find the precise value for qq, so I decided to look up some extended trig functions that begin with 1+x/6 to see if one of them might be what I'm looking for. I found arcsin(x), which when extended becomes

arcsin(x) =
x * [1 + x^2*(1/2)/3 + x^4*((1*3)/(2*4))/5 + x^6*((1*3*5)/(2*4*6))/7 + ...] =
x * [1 + x^2/6 + x^4*3/40 + x^6*5/112 + x^8*7/230.4 + ...]

The formula I'm using now comes to

qq=1 + f3/6 + f3^2*3/40 + f3^3*5/126 + f3^4*7/288 + ...

As one will notice, they are quite similar, and exactly the same for the first few terms! I've tried it for qq, though, and it overcompensates somewhat. But what I'm thinking is that there may be another formula to be used in this way, perhaps arccos(x) for the old velocities, in a similar manner to how I got the circular orbits millions of times more precise using Q1 and Q2 (post #110), where qq would take the place of Q2 in this case, unless it is to be used with it or something. Since it always works well for circular orbits, regardless of the different masses, it may be that I just need to find a way to work the instantaneous eccentricity of the orbit into the formula. Anyone know how to find the eccentricity knowing only the current positions, velocities, and masses for two bodies (Celestial Mechanic? I tried looking back through some of our earlier work on this, but I got MEGLO from some of my own stuff. I guess I only know what I'm saying while I'm working on it. I'll probably look back on all of this later and not have a clue what I meant).

Now, if arcsin(x) and arccos(x) are the correct formulas for this, then instead of running through the extra loop, we could find qq by simply using the lines

f4=sqrt(f3);
qq=arcsin(f4)/f4;

That would be just wonderful, except for one thing. UBasic won't let me find the arcsin(x) for any value greater than 1, and qq is always greater than 1. A value greater than 1 still works out fine with the extended formula, of course, but the computer just won't let me do it. It's not fair, I tell you! Is there any way around this, some trick to find it some other way for values greater than 1 besides the additional iterations, or something close, even?
Last edited by grav; 2007-May-01 at 03:10 AM.

22. Been doing some perfunctory reading of some of that stuff I've got. Yep, I've got a beaucoup of diffy-eq solvers. RKs of various orders, and all sorts of other algorithm names as well.

And, yes indeedy do, Ken, there's all sorts of good stuff about "stiffness", and explicit vs implicit: Ie use this routine when she's stiff, this one when she's not, and this other one when she's in between. And all sorts of various control routines for each of the main solvers where one can tweak all sorts of things.

With all that, I think I ought to get something going. Only question is how much darn time will they take. Only one way to find out. The way the thing is structured, you the user provide a routine for the various functions:

ie y' = f(t, y): you provide what f is and the solver calls it. For a system, that is one such equation for each y_i, and it goes through them all.

So it will call me asking me to give it what will be, for this problem, the acceleration of the i_th body, which will involve all N bodies. Doing it that way, I'd calculate the same thing several times over, so to speed things up, I'd like to save the a_ij (the g field of the i_th body). But it might jump around on which iteration it asks for, I don't know.

-Richard

23. Tony,

I hope you see this in all the posts. If you're game, in order to use some of these differential equation solving routines, the data will have to be put in a different form. We basically need everything in one big array. It would be similar to the above suggestion about an array of structures.

What you have to do is express the entire N-body equations of motion, which is a system of N 2nd order vector equations, as a system of first order equations. So that means 2N vector equations, and the 3 components of each then give us a total of 6N scalar equations.

We can arrange that however we wish, and I'm not sure which would be the most efficient until I actually get down to it. So rather than individual arrays for each of the N components of x, y, z, and vx, vy, and vz, we need them all together like so:

x1, y1, z1, vx1, vy1, vz1, x2, y2, z2, .......

A C style struct would be this:

struct _objcoord {
    double x, y, z;
    double vx, vy, vz;
} Object[N];

Second, we need to pull the time step out of the masses. The way these routines work is they sort of find their own time step on the fly, based on what's happening and on error estimates the routines make.

You run them like so:

CALL Solve(T_start, T_end, ....... ) (Well, actually T_start is usually taken to be zero, but this is the basic logic.)

The routine then evolves the system from T_start to T_end, finding its own stride. So if we wanted to evolve our N-body, plotting every 'h' units of time, we would make repeated calls with T_end incremented by our desired plot time.

Collisions are something that will require special handling, because if I understand your logic, you just let the colliding bodies stick together inelastically. I'd have to look further when I get into it, but when that happens, we will probably have to "reset" because N changes. It might be possible to just zero out one of them and keep the current run going, but I'm not sure.

And then, what do we do if a collision happens in between calls, during the run? If our external time stepping were small enough, we'd probably see it before we had two masses just sail through each other (or get so close that our 1/r^2 calculation blew up). That is something to worry about once I get the general logic worked out.

-Richard

24. And finally, I wonder just how "stiff" our N-body really is, say on a 1-10 scale with 10 being "stiff as stiff can be"?

I've been reading about that. Like Ken said above, one measure of "stiffness" is having several effects going on at vastly different time scales of the evolution. We pretty much have that big time here. We've got things whipping around each other in near closed curves on the smallest scale. Yet on a larger scale, those curves move around a bit (i.e., precession, and whatever they call the other effects where, say, the orbital plane itself moves around in 3D space), change shape, etc. And then there's collisions, which are certainly a whopping change in the system.

-Richard

25. Over the weekend I'll have some time to look at it. It might take a long time to translate the entire program to that data format, but I might be able to make a quick temporary version so you can test. The Picard method I PM'd to you and Grav works a lot like you describe, where the routine adjusts the time step based on desired accuracy.

26. Originally Posted by tony873004
Over the weekend I'll have some time to look at it. It might take a long time to translate the entire program to that data format, but I might be able to make a quick temporary version so you can test. The Picard method I PM'd to you and Grav works a lot like you describe, where the routine adjusts the time step based on desired accuracy.
If it would be too much trouble to do it that way, another way would be to simply ensure the arrays are all stacked contiguously in memory. That is, allocate one huge array (size 6N), and then stack it like so:

x1, x2, .... xN, y1, y2.....yN, z1...zN, vx1, vx2...vxN, vy1...vyN, etc

Allocate it that way, then use VB's equivalent of "equivalence" of arrays (i.e., mapping one array name to point into another) to treat them as the individual objx, objy, objz, objvx, etc. arrays. In C, all you would do is malloc() the whole thing, then set pointers to the start of each subarray.
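In C that looks something like this minimal sketch (State and state_alloc are hypothetical names, just for illustration):

```c
#include <stdlib.h>

typedef struct {
    double *buf;                      /* the one big 6N array the solver sees */
    double *x, *y, *z, *vx, *vy, *vz; /* named views into it */
} State;

/* Allocate one contiguous block of 6N doubles and point the named
   subarrays into it -- the C analogue of VB "equivalencing". */
static int state_alloc(State *s, int n)
{
    s->buf = calloc(6 * (size_t)n, sizeof *s->buf);
    if (!s->buf) return -1;
    s->x  = s->buf + 0*n;  s->y  = s->buf + 1*n;  s->z  = s->buf + 2*n;
    s->vx = s->buf + 3*n;  s->vy = s->buf + 4*n;  s->vz = s->buf + 5*n;
    return 0;
}
```

A write through one of the named subarrays lands directly in the big array, so the solver and the per-component code share storage with no copying.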

The simplest routine can do a 5th-6th order Runge-Kutta (some variation thereof with another name attached). And then there are more sophisticated ones beyond that.

And don't feel obligated to try to do any of this. Wait until I get a .dll interface and wrapper for the solver routines written. If and when I do, we'll worry about your main program.

Have you thought about a 64-bit (AMD64/x64) version of Gravity Simulator? Gravity Simulator is in VB 6.0, right? I hear it's a bear to convert to VB .NET because of all the changes. Theoretically, .NET (managed) code can run on 64-bit hardware without modification by simply running the CIL on the 64-bit version of the run-time interpreter.

But heck, I always figured that managed code stuff is bloated and slow as heck, so it would probably slow it down anyway.

-Richard

27. I've been reading some stuff about stiffness. One way to detect it (although its absence is not sufficient to establish that the problem is "non-stiff") has to do with the eigenvalues of the Jacobian. If we've got a system of ODE's,

d(y_i)/dt = f_i(y_k, t) (i.e., each dy/dt is a function of possibly t and all the other y's),

then you look at the Jacobian of the f vector. If the eigenvalues of that have large negative real parts, then the problem is stiff. A problem may still be stiff or otherwise inefficient even if that's not true, but if this condition holds, you know you're stiff.
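A one-dimensional example shows why a large negative eigenvalue hurts an explicit method (a standard textbook test problem, not our N-body): for y' = λy, explicit Euler gives y_{k+1} = (1 + λh)y_k, which decays only when |1 + λh| < 1, i.e. h < 2/|λ|, no matter how slowly the behavior you actually care about evolves.

```c
#include <math.h>

/* n explicit Euler steps of size h on y' = lambda*y, starting from y0 = 1.
   Each step multiplies y by (1 + lambda*h). */
static double euler(double lambda, double h, int n)
{
    double y = 1.0;
    for (int i = 0; i < n; i++)
        y += h * lambda * y;
    return y;
}
```

With λ = -1000, a step just under 2/1000 damps the transient, and a step just over it makes |y| grow without bound instead of decaying.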

One of the fancy routines for stiff systems wants you to calculate this Jacobian for it on demand, and it uses that information to adjust itself as it evolves the system. For an N-body, that Jacobian would be one humdinger of an operation for large N.

The Jacobian for the N body here would involve lots of ~1/r^3 terms, where the r are all the distances between bodies (and done for each of the components). So that thing is basically a function of the *tidal* forces, or the gradients in all the components of the g vector on each body.

And note this Jacobian is not some constant for a given problem, but something that depends on the state of the system at any time.

So it looks like we're going to get stiffness when those gradients are significant. I.e., change the system by moving the bodies, and there are relatively large changes in the g on the bodies. Well, not just that, but also how those terms interact to make the eigenvalues.

-Richard

28. Originally Posted by publius
If the eigenvalues of that have large negative real parts, then the problem is stiff.
Indeed, and note physically, this is basically saying you will have trouble if there are processes that happen faster than the scale you are trying to resolve in your calculation. Note further that if all the eigenvalues are similarly large and negative, you don't really have stiffness, you are just not resolving the short timescales you should be resolving. It's really only if you have at least one eigenvalue that is not large, and you are interested in that behavior, that you have true stiffness.

Another point to mention is that in some cases you can solve any stiffness problem by simply not including the stiff contributions, but rather replacing the stiff subspaces by equilibrium assumptions (for large negative eigenvalues) or averaging over the oscillations (for large imaginary eigenvalues). That actually simplifies the problem by reducing the dimension of your system, and is what you do if you, for example, assume that the entire Earth moves as a unit rather than resolving the separate motions of all its rotating and ringing constituents. So again you only have true stiffness if this won't work because of some kind of coupling between the different time scales, such as a resonance effect that extracts tiny signals from the fast scales and projects them in an amplified way onto the slow scales (such as when analyzing the Cassini division in Saturn's rings, for example-- you could not "average over" Mimas' orbit and ever get that).

So it seems there are actually several different types of problems with "stiff" systems, which for all I know have their own terms (maybe not all just stiff). You have the problem of large negative real eigenvalues, which means you have rapidly decaying transients that if you model explicitly over the longer timescales you are interested in, will go unstable and blow up rather than decay away, and you have the problem of large imaginary eigenvalues, which are oscillatory but cannot be averaged away in the presence of resonant couplings. Implicit treatments can solve the first problem, but not the second. I'm not sure what resolution is found for that!
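To make that first point concrete, backward (implicit) Euler on the same scalar test problem y' = λy gives y_{k+1} = y_k/(1 - λh), whose magnitude shrinks for any h > 0 when λ < 0, so the fast transient is damped no matter how coarse the step (again the standard textbook test problem, not our N-body):

```c
#include <math.h>

/* n backward-Euler steps of size h on y' = lambda*y, from y0 = 1.
   Each step divides y by (1 - lambda*h); for lambda < 0 that factor
   has magnitude > 1, so |y| shrinks for ANY positive h. */
static double backward_euler(double lambda, double h, int n)
{
    double y = 1.0;
    for (int i = 0; i < n; i++)
        y /= (1.0 - lambda * h);
    return y;
}
```

The price is that the implicit update requires solving for y_{k+1}, which for a nonlinear system means a (Jacobian-based) nonlinear solve each step; and, as noted, unconditional damping is exactly the wrong medicine for oscillatory modes whose phase couples resonantly to the slow scales.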

29. Ken,

Thanks for more interesting explanations -- feel free to keep it up, assuming I do manage to get something going with this.

I wondered about the imaginary parts of the eigenvalues -- what I was reading didn't go into it -- just sort of an introductory thing.

I do remember something about linear systems with constant coefficients. You can write a solution of the form

y = e^(At) k, where A is a matrix and k a constant vector, and I'll just let the odd (although well defined) notion of raising e to a matrix power go.

Buried in there are similarities with scalar equations, where the eigenvalues of the matrix from the vector equation correspond to the roots of the characteristic equation for a regular scalar equation. The details I can't remember, but that Jacobian, for non-linear systems, can probably be seen playing that role, assuming the system is approximately linear over a short period.

So, just as imaginary roots of the characteristic equation correspond to oscillatory terms and negative real parts to damped transients, so the eigenvalues play similar roles, but in a much more complex manner.
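For the record, the correspondence is exact for a diagonalizable constant matrix (a standard linear-algebra result, not specific to our problem):

```latex
A = P \Lambda P^{-1},\quad \Lambda = \mathrm{diag}(\lambda_1,\dots,\lambda_n)
\;\Longrightarrow\;
e^{At} = P\, e^{\Lambda t}\, P^{-1},\quad
e^{\Lambda t} = \mathrm{diag}\!\left(e^{\lambda_1 t},\dots,e^{\lambda_n t}\right),
```

so each eigenvalue λ = α + iβ contributes a factor e^{αt}(cos βt + i sin βt): negative α is a damped transient, nonzero β an oscillation, exactly as for the roots of a scalar characteristic equation.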

-Richard

30. I'm disappointed no one has posted any pictures of the outcomes of their programs. Here's one I played with many years ago. Two bodies, one ten times the mass of the other, like binary stars.... I think the precession is due to time-step mis-considerations, but I'd rather think of it as due to relativistic effects (which were not written into the program, but hey, they just appeared).
