Thread: A theory of everything in 5 pages of program code!

1. Originally Posted by cjameshuff
He's probably cubing powers of 2, not multiples of 2. 0, 1, 2, 4, 8, 16...it'd fit with his computer theme. As long as you ignore non-binary digital computers, anyway.
256 is not the cube of a power of two.

He's starting with a cube (either a 1x1x1 in the odd case, or 0x0x0 in the even case) and adding a unit thickness to all six sides. That fits the description in the text as well--see the 1-D descriptions and examples too.

2. Originally Posted by Marty Wollner
This is the pseudocode for the actual program, no kidding;
It would be easier to read if you wrapped it in [CODE] tags.

((X.mass + Y.mass) /(X.Location – Y.Location))
That looks wrong for all sorts of reasons: why are you adding the masses? what happened to the inverse square law? and G? and why are you calculating the distance in each dimension, rather than the distance between the two points? and ...

And you are doing all this with 32-bit integers? Have you worked out how large each of your cells is? And what errors that will create in your simulation?

3. Originally Posted by grapes
256 is not the cube of a power of two.
...no, it isn't. So much for that idea.

Originally Posted by grapes
He's starting with a cube (either a 1x1x1 in the odd case, or 0x0x0 in the even case) and adding a unit thickness to all six sides. That fits the description in the text as well--see the 1-D descriptions and examples too.
I'd somehow missed several posts (like the one with that badly-formatted mess of pseudocode), and I gave up quickly at getting any useful information out of his web page.

Originally Posted by Marty Wollner
This is the pseudocode for the actual program, no kidding;
This doesn't even begin to account for observed physics...what about, oh, the entire set of subatomic particles? Nuclear phenomena other than fusion? How do you even define whether a single particle is plasma or not? Or even something relatively simple...how about the photoelectric effect?

Or how about something really basic...how do you get light to travel the same speed in all directions, not just along the axes? As far as I can tell, you're using taxicab geometry for everything...

4. Member (Join Date: Dec 2011; Posts: 86)
Originally Posted by cjameshuff

In relativity and in reality, two clocks can go on different paths through spacetime and be brought back together after experiencing different amounts of time. There is no universal time. There is no universal rest frame and no absolute motion, an object in inertial motion will always measure the same internal physics, and can always consider itself to be at rest. You can't even define simultaneity universally...events one observer sees as taking place at the same time will happen at different times according to another observer. Time does not progress in global ticks, and distances are not measured in global intervals. To have any chance of success, your model must reproduce these effects...does it?
Yes it does; it's based upon it, as you will see below.

Originally Posted by cjameshuff

Where? So far I haven't seen a real attempt at explaining anything. As far as I'm concerned, you have not answered any of my questions.
My post appeared above; it discusses the idea of a unique situation that can possibly account for both exceeding 1/2 C near accelerators and also a possible answer to our newly observed broken speed of light.

Originally Posted by cjameshuff

Are you serious? When observations disagree with your theory, your response is that the observations don't make sense? You don't get to pick and choose which parts of reality you like.
CJ, you honestly are not listening to me... please READ my stuff before you bark. I DID explain how my system handles relativity in the example shown.

Are you ready? This won't all fit into one post, but you are insisting upon it, so I will comply:

First Section:

Thank you for those comments, CJ.

YOU will be the guy I will convince of this theory.

I won’t refer you to any more books; I will do it on this forum right in front of you.

But first, YOU need to decide right now, YES or NO, if it is possible that the Universe is continuous; if not, then I say it IS strictly digital, and there is absolutely no compromise on this. If it is digital, then it has a SOLID framework, it has a sequence counter, it can make decisions in processing, and it can actually be realistically accomplished.

I can't say any of those are true if not digital; in fact, I say they're all false. And I think you're agreeing with me on that (other than the obvious last one, because we wouldn't be having this conversation otherwise), right?

Just to clarify my claims, being as they have been funneled through the meat grinder of this BB forum's requirements: all I ever said was that my theory points out some new IMPLEMENTATION details, and it is THESE DETAILS that make me wonder how our universe CANNOT be digital. And if it is (and here is my contribution)… I HAVE FIGURED OUT what I'm going to call "Digital Relativity". I am really excited about it because I think it shows us a way that someone's theory of everything can really work. That's all I'm saying, and I will try proving it to you right here and now.

OK, back to our point of disagreement, I have stated in a previous post that I won’t try to defend digitalization any more.

So, CJ, if you're just gonna sit there and blast me with analytics of observations (which is all YOU have), I have no choice but to repeat "it's all about causality and emergence" and "look at the program code, none of the observed laws are programmed in…" and repeat this conversation, now for the 4th time.

We are NOT finished, CJ, we’re only starting.

I will prove to you, right here on this forum, that my simple little program implements "Digital Relativity" without bending time and gravity. It's so easy a caveman could do it; it's just a very SIMPLE loop that moves matter and radiation around using a set of loop counters. It's all integer arithmetic… in fact, it removes the need for any event timers in the system; it runs as a simple and pure finite series equation.

Is that a deal? Will that suffice?

Did you see the program code (above)?

OK, CJ, actually I’m already in the process of doing this write-up in another brand new paper I should have out by next week named “Flat-Out Disproving Relativity in 120 Minutes”.

YOU DON’T NEED TO WAIT, I will show you EVERYTHING right here, and RIGHT NOW. OK?

RiderDude is standing on a train traveling at the blazingly fast speed of 3 miles per second (this must be in Japan somewhere; I can't explain why it's not metric, though).

RoadsideDude is standing on the side of the road at the same instant the train happens by and also happens to turn its headlamp on (of course… this guy is still homeless from that big sea wave).

The heart of the paradox is the question of the speed of the train being added onto the speed of light. Does this occur or not? YES OR NO?

To answer this question, we need to observe how far the light goes from both of the Dude’s points of view. We do so by asking this question:

When the light hits the mountain up ahead, will RiderDude see it before RoadsideDude?

If the answer were yes, then that would explain it; the velocity of the train was added onto the velocity of light, so RiderDude watches the light hit the mountain before RoadsideDude.

If the answer is no, then how can this happen???

Well, we all know the answer is no. So how in the heck can this happen?

“NonoGnomes”?

Still with me?

108 minutes to go.

The Train Flash Incident Specifics:

Let’s repeat the scenario again with a more analytical approach.

Both RiderDude and RoadsideDude were standing at mile marker 0 when this whole, stinking thing all went down, EXACTLY at time 12:00:00. The mountain is dead ahead at mile marker 40,003.

Let's call the speed of light "C" and assume it's a fixed, pre-determined number, and in this example universe we're operating in, let's make it 20,000 miles per second.

· Speed of light: 20,000 miles per second
· Speed of the train: 3 miles per second
· Location of Flash: Mile marker 0
· One RiderDude is on-board the moving train
· One RoadsideDude is standing still on the side of the tracks
· Both Dudes are at mile marker 0 the instant of the flash

OK?

12:00:00… At the moment of flashing the lamp:
· RiderDude: The light is right there in front of him.

· RoadsideDude: The light is also right there in front of him.
12:00:01… One second after flashing the lamp:
And here is the paradox; should we add the speed of the train onto the speed of light flowing away from the train’s (and RiderDude’s) point of perception?

If yes, then:

· RiderDude: The light is traveling at C + 3 = 20,003 miles per second, so it is 20,003 miles down the track at this very instant.

· RoadsideDude: HE WAS STANDING STILL, so the light was traveling away from him at C + 0 = 20,000 miles per second, so from his point of view, it is now only 20,000 miles down the track.
12:00:02… Two seconds after flashing the lamp:

If still yes, then:

· RiderDude: The light is still traveling at 20,003 miles per second, so it is now 40,006 miles down the track. HE WATCHES IT HIT THE MOUNTAIN.

· RoadsideDude: The light was traveling away from him at 20,000 miles per second, so from his point of view, it is now only 40,000 miles down the track, and so HE DID NOT SEE THE LIGHT HIT THE MOUNTAIN.

Which is wrong: they both see it hit the mountain at the same time, so what gives?
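For concreteness, here is a tiny sketch of the "add the velocities" bookkeeping that the paradox rests on, using the post's numbers (C = 20,000 mi/s, train = 3 mi/s, mountain at mile 40,003). The function name is my own, and this is the model the thread goes on to reject:

```python
# Sketch of the naive "velocity addition" bookkeeping from the thought
# experiment above. Numbers come from the post; this is the WRONG model
# that the rest of the thread argues against.
C = 20_000          # speed of light in this toy universe, miles/second
TRAIN_SPEED = 3     # miles/second
MOUNTAIN = 40_003   # mile marker of the mountain

def light_front(source_speed, seconds):
    """Where the light front would be if the source's speed simply added on."""
    return (C + source_speed) * seconds

# Two seconds after the flash:
rider = light_front(TRAIN_SPEED, 2)       # 40,006 miles: past the mountain
roadside = light_front(0, 2)              # 40,000 miles: short of the mountain
print(rider >= MOUNTAIN, roadside >= MOUNTAIN)   # True False -> the "paradox"
```

Running it shows one observer's bookkeeping putting the light past the mountain while the other's does not, which is exactly the contradiction described above.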

I can’t describe the problem any clearer than this. If you still don’t understand after re-reading, try another sport.

All righty, now my program implements the speed of light in this SPECIAL WAY that I keep going on and on about, and you keep reminding me about how it's not telling you anything.

Try to pay attention for a few pages, OK?

Heck, the explanation is 10 times longer than the code itself.

I'm implementing everything in terms of GC / tick. There are no fixed speeds; everything is done in relation only to one thing, and that is the maximum resolution of the system: WordSize (the max numerical limitation is 2**WordSize).

In the example above, this number is 20,000. It could have been implemented with ANY resolution and it STILL WORKS PERFECTLY.

5.
SECOND SECTION:

All right now, CJ, here is the bottom line to my big secret discovery:

Nothing moves faster than 1 GC / Tick !!!!!

This is the key to everything, although I’m sure it’s not obvious.

All right, let me walk you through the details of implementation RIGHT HERE AND NOW:

First off, let's clearly define what we're involved with:

We are defining C according to the following logic:

1.1.2 C = One GC per tick

We need to define speeds slower than C, so let’s apply the following logic:

If C is 1 GC per tick, then:

½ C is 1 GC per every 2 ticks
1/3 C is 1 GC every 3 ticks
¼ C is 1 GC every 4 ticks

Hence:

1.1.3 Speeds Slower than C = 1 / N C is 1 GC every N ticks
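As a sketch, the "1 GC every N ticks" rule can be implemented with a simple countdown counter. The class and method names here are my own illustration, not the poster's actual code:

```python
# Minimal sketch of the post's rule "1/N C = 1 GC every N ticks":
# an object at speed C/N advances one grid cell (GC) each time its
# countdown reaches zero. All arithmetic is integer, as the post insists.
class Mover:
    def __init__(self, n_ticks_per_gc):
        self.n = n_ticks_per_gc        # N: speed is C/N
        self.countdown = n_ticks_per_gc
        self.position = 0              # grid cell index

    def tick(self):
        self.countdown -= 1
        if self.countdown == 0:
            self.position += 1         # move exactly 1 GC
            self.countdown = self.n    # start waiting again

half_c = Mover(2)                      # ½ C: 1 GC every 2 ticks
for _ in range(10):
    half_c.tick()
print(half_c.position)                 # 5 GCs covered in 10 ticks
```

With `Mover(1)` the same loop moves 1 GC on every tick, which is the post's definition of C.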

At this point, an arbitrary definition for C is no longer needed… it BECOMES the framework within which all motion in the system must operate. I think you got this far as well. My latest book describes it as:

It’s like some kind of a miracle, and it does something even more, something subtle, unnoticed, hard to explain, but of key importance…

Old way:

Infinity <- ……. -> Infinity

New way:

[ <- …….. -> ]

It makes everything work WITHIN ITS OWN limits; whereas before we had to face the possibility of infinity, now we become automatically bound within our own limits to begin with.

For me, this was a huge breakthrough, but it still needed to be tweaked because:

If we need to define the slowest speed possible according to the following logic:

If C is 1 GC per tick, then ½ C is 1 GC per every 2 ticks, and 1/3 C is 1 GC every 3 ticks… where are my calculus pals… it looks like a speed of 0 is (1/infinity) * C. But we know that can’t work, so we will use the next slowest speed that can be defined: to make this definition, we must use the WordSize of the system.

In other words,

1.1.4 How big can N get? In our world of digital addressing, the biggest computable number is 2 ** WordSize.

Hence:

1.1.5 The slowest speed = 1 GC every 2 ** WordSize ticks

1.2 Next to fastest speed

1.2.1 Is not that easy to picture…

Slowest speed: 1 GC per (2 ** WordSize) ticks
Next to slowest: 1 GC per (2 ** WordSize – 1) ticks

Etc…

Next to fastest: ???

Fastest speed: 1 GC per (1) tick

Next to fastest is tough, because recall:

½ C is 1 GC per every 2 ticks
¼ C is 1 GC every 4 ticks

HENCE

1 / N C is 1 GC every N ticks

The inversion makes it into a fractional speed, but this "Next to fastest" we're looking for occurs between 1 C and ½ C. Let's try ¼ C… hey wait, that's FURTHER AWAY.

1.2.2 We need another approach…

1.2.3 Working all the way from the slowest forward…

Let's continue from the slowest…

1 GC per (2 ** WordSize) ticks
1 GC per (2 ** WordSize – 1) ticks
1 GC per (2 ** WordSize – 2) ticks
1 GC per (2 ** WordSize – 3) ticks

Let's try redefining C and working backward from there:

Fastest speed:

1 GC per (1) tick

… Same as:

(2 ** WordSize) GCs per (2 ** WordSize) ticks

So now, starting from the fastest speed, the next slowest speeds are:

(2 ** WordSize - 1) GCs per (2 ** WordSize) ticks
(2 ** WordSize - 2) GCs per (2 ** WordSize) ticks
(2 ** WordSize - 3) GCs per (2 ** WordSize) ticks

etc…

Q: Does this same method work for the slowest speeds?

A: The new method for the fastest speeds looks like this:

(2 ** WordSize - 1) GCs per (2 ** WordSize) ticks
(2 ** WordSize - 2) GCs per (2 ** WordSize) ticks

The old method for the slowest speeds was:

Slowest speed: 1 GC per (2 ** WordSize) ticks
Next slowest: 1 GC per (2 ** WordSize – 1) ticks

Applying the new method to the slowest speeds:

Slowest speed: 1 GC per (2 ** WordSize) ticks
Same as:
2 ** WordSize GC per ((2 ** WordSize) * (2 ** WordSize)) ticks

Next Slowest speed: 1 GC per (2 ** WordSize – 1) ticks

(same as)

2 ** WordSize GC per ((2 ** WordSize - 1) * (2 ** WordSize)) ticks
2 ** WordSize GC per ((2 ** WordSize - 2) * (2 ** WordSize)) ticks
2 ** WordSize GC per ((2 ** WordSize - 3) * (2 ** WordSize)) ticks

But all of the slow stuff can factor out the 2**WordSize from the equations, leaving our old definitions for slower motion:

1. 1 GC per ((2 ** WordSize)) ticks
2. 1 GC per ((2 ** WordSize) - 1) ticks
3. 1 GC per ((2 ** WordSize) - 2) ticks
4. 1 GC per ((2 ** WordSize) - 3) ticks

BOTTOM LINE, CJ:

AND SO, FAST STUFF is described (from the fastest on down) as:

1. (2 ** WordSize) GCs per (2 ** WordSize) ticks
2. (2 ** WordSize - 1) GCs per (2 ** WordSize) ticks
3. (2 ** WordSize - 2) GCs per (2 ** WordSize) ticks
4. (2 ** WordSize - 3) GCs per (2 ** WordSize) ticks

AND SLOW STUFF is described (from the slowest on up) as:

1. 1 GC per ((2 ** WordSize)) ticks
2. 1 GC per ((2 ** WordSize) - 1) ticks
3. 1 GC per ((2 ** WordSize) - 2) ticks
4. 1 GC per ((2 ** WordSize) - 3) ticks
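The two families above can be compared side by side as exact fractions of C. This is a sketch with a small assumed WordSize of 4, purely for readability:

```python
from fractions import Fraction

# Compare the "fast" and "slow" speed families described above,
# expressed as exact fractions of C. WordSize = 4 is my own choice
# to keep the numbers small; the post leaves WordSize open.
WORD_SIZE = 4
R = 2 ** WORD_SIZE            # resolution: 16

# FAST family: (R - k) GCs per R ticks, k = 0, 1, 2, ...
fast = [Fraction(R - k, R) for k in range(4)]
# SLOW family: 1 GC per (R - k) ticks, k = 0, 1, 2, ...
slow = [Fraction(1, R - k) for k in range(4)]

print(fast)   # steps down from C in increments of 1/16 of C
print(slow)   # creeps up from the slowest speed, 1/16 of C
```

The fast family counts down from C in equal steps of C/R, while the slow family's steps (1/16, 1/15, 1/14, …) are not equally spaced, which is the asymmetry the next section dwells on.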

Implications

What is apparent here is that the descriptions are different for slow stuff vs. fast. The cutoff apparently occurs at the ½ mark.

1.3.1 Does this mean that stuff closer to the fastest needs to be processed differently than the slow stuff?

1.3.2 YES IT DOES !!!!

1.3.3 The slow stuff needs to be processed by waiting a number of ticks, then moving the object 1 grid location. That’s really straightforward.

1.3.4 The fast stuff needs to be processed by waiting a number of ticks and then moving the object that many grid locations. That is NOT straightforward.
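A minimal sketch of the two processing rules just described; the function names are mine, since the post gives no code for this step:

```python
# Sketch of the two processing rules described above:
#   "slow stuff": wait N ticks, then move 1 grid cell.
#   "fast stuff": wait a whole window of ticks, then jump several cells.
def run_slow(n_ticks_per_gc, total_ticks):
    pos = 0
    for t in range(1, total_ticks + 1):
        if t % n_ticks_per_gc == 0:
            pos += 1                       # one GC after every N ticks
    return pos

def run_fast(gcs_per_window, window_ticks, total_ticks):
    pos = 0
    for t in range(1, total_ticks + 1):
        if t % window_ticks == 0:
            pos += gcs_per_window          # jump several GCs at once
    return pos

R = 2 ** 4                                 # WordSize = 4 for readability
print(run_slow(3, 12))                     # speed C/3: 4 GCs in 12 ticks
print(run_fast(R - 1, R, 2 * R))           # just under C: 30 GCs in 32 ticks
```

The slow mover's average speed is 1/N of C by construction, and the fast mover's is (R-1)/R of C, matching the two families of descriptions above.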

Perhaps you need to stare at it for a while, but I'm thinking, NOTHING TRAVELS at speeds between ½ C and C!!! THIS IS CRUCIAL! Please see the Appendix: "Are you stupid, Marty #1?"

When something passes the ½ way point, there is no going back: the next available speed is C, and now it’s all radiation (and electro-static flow) ONLY!!!

How about stuff moving exactly at C? Should it be treated as fast or slow??? Fast, I guess.

The point here is that the numbers indicate that the processing must occur differently!!

Note: my books include a discussion of digital radiation right here, please see them for it.

I describe digital radiation as traversing at 1 GC / tick while attenuating a frequency. I had implemented it by a continuous "flow" of waves traversing from one grid position to the next, one location per EVERY tick of time.

However, with these new considerations for handling matter by waiting "a number of clock ticks" and then moving one grid location, I began questioning whether or not I still needed to make the radiation flow continuously at all. It becomes a problem…

So What’s My problem?

What didn’t occur to me was that the discontinuity of space is not limited to only one grid location at a time… it gets back to finally deciding at some point that the space-time continuum needs to be digitized… that’s simple to do, you just divide it into equal chunks. Right?

Question for you: When I applied this new 1 GC/tick limitation to this scenario, did I wind up simulating an incorrect assumption of what’s occurring? Let’s see:

· When we broke the space-time continuum, and we could say "at any given sequence in the computations" vs. saying "at any moment in time", we found ourselves sitting on a grid in virtual space, and needing to jump from one location to the next as the digital sequence is unraveled, right? Yup, dead horse!

· We discovered that the speed of light should be implemented as ONE LOCATION PER TICK, right? Yup, definitely.

· So to make that happen, as we traverse from one location to the next, we need to jump from one grid location to the next grid location in the next tick of time (I call this “CONTIGUOUS traversal”, as explained above)… it seems intuitively obvious, right?

WRONG!!!

1.5 WRONG!!!

This thinking about CONTIGUOUS traversal is enticingly straightforward, but what keeps me weird and different is always trying to look outside the box and rely upon what I recognize as guiding principles, rather than our innate tendencies to approach the problem from the outside in.

In this case, the outside-in approach was planting visions in my mind of sine-shaped waves, and then me trying to figure out how to make these waves traverse their way through the grid so they all have the same speed, yet exhibit different frequencies.

Hey, it doesn’t have to work that way, and now I KNOW it doesn’t really work that way at all. It’s a lot simpler!!!

1.6 WHY is this wrong? What isn’t obvious??

If it's now OK for "slowly moving matter" to stay still for a number of ticks and then move only one grid position (because of the way I'm implementing the speed of light, above), then WHY DOES LIGHT, as it traverses across the grid, NEED TO MOVE CONTIGUOUSLY during every tick???

1.7 A different approach to digital radiation traversal through the grid

(Here, I’m repeating myself for the 4th time, but into a cohesive summary).

I would now like to introduce an alternative that is even more compelling, because it is so much simpler and yet just as functional as my digital waveforms.

My current algorithm makes “Wave Flow Events [5]” “flow” by traversing the grid from contiguous location to location as the program executes.

To make things simple, in my Universe, I have defined:

The maximum rate of all flow = 1 location per tick.

NOW, think ABOUT IT: that's the same as waiting, say, 16 ticks, and then jumping 16 grid positions, right? Visualizing this occurring in one dimension is so easy, I don't need to illustrate it!

(Wait ten seconds… here it comes… the creepy feeling is… NOW HERE!!)

DUH!

Suddenly, defining "waves" that exhibit "wavelengths" that occur at "frequencies" is all easily explained by this very simple approach… it's not a continuous wave at all; it just jumps from spot to spot!

CREEPY, right? You bet yer’ butt it is…it’s creepy because nobody thought of it before (am I right about THAT?), and it’s just so freaking simple (well, it is)!

Also creepy is the fact that EVERYTHING now moves in jumps according to a set of counters, and that simplifies the code even more… now light motion is handled nearly exactly like the motion of matter; it uses the same counters used for the slower motion of VPs, it just uses these counters a bit differently.

Alright CJ, now HERE is my proof of how this way of implementing velocity implicitly creates “digital relativity”:

IT'S RIGHT HERE, but I bet you won't see it... I didn't see it myself until just a few weeks ago. I will let you in on that secret too, after the proof:

12:00:00… At the moment of flashing the lamp:
· RiderDude: The light is right there in front of him. His speed inversion counter is set to 20,000 – 20, his "velocity".

· RoadsideDude: The light is also right there in front of him. His speed inversion counter is set to 20,000 – 0, his "velocity" (he stands still).

12:00:01… One second after flashing the lamp:
There is no paradox; should we add the speed of the train onto the speed of light flowing away from the train's (and RiderDude's) point of perception? Hell no!! The speed of light is ALWAYS 1 GC / TICK !!

THIS is how it's done:

· RiderDude: The light is traveling at C = 20,000 miles per second, so it is 20,000 miles down the track at this very instant.

· RoadsideDude: The light is also traveling away from him at C = 20,000 miles per second, so from his point of view, it is now only 20,000 miles down the track.

12:00:02… Two seconds after flashing the lamp:

Still, THIS is how it's done:

· RiderDude: The light is still traveling at 20,000 miles per second, so it is now 40,000 miles down the track. HE DID NOT SEE THE LIGHT HIT THE MOUNTAIN (yet).

· RoadsideDude: The light is still traveling at 20,000 miles per second, so from his point of view, it is now 40,000 miles down the track as well, and so HE ALSO DID NOT SEE THE LIGHT HIT THE MOUNTAIN (yet).

BUT, at exactly 12:00:02 plus 6 ticks, both Dudes see it at the same instant.
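Under the constant-C rule, both observers' bookkeeping collapses into one calculation. Here is a sketch, assuming 1 GC = 1 mile and 20,000 ticks per second; those units are my reading, not pinned down in the post (whose own count of 6 ticks evidently uses a different convention), but the relativity point, an identical arrival tick for both observers, does not depend on them:

```python
# Sketch of the constant-C version: light always moves exactly 1 GC per
# tick, for EVERY observer, regardless of the source's velocity.
# Assumed units (mine, not the post's): 1 GC = 1 mile, 20,000 ticks/second.
TICKS_PER_SECOND = 20_000
MOUNTAIN_GC = 40_003                 # mountain's grid cell / mile marker

def light_position(ticks_elapsed):
    """The light front: 1 GC per tick, independent of who is watching."""
    return ticks_elapsed

arrival_tick = MOUNTAIN_GC           # first tick the front reaches the mountain
seconds, extra_ticks = divmod(arrival_tick, TICKS_PER_SECOND)
print(seconds, extra_ticks)          # same arrival tick for both Dudes
```

Because `light_position` takes no observer velocity at all, RiderDude and RoadsideDude necessarily agree on the arrival tick, which is the whole claim being made here.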

There is nothing to account for, but it still works, because we are confining the tick speed to within its own system of resolution. It's as simple as it sounds!!! BUT THE MECHANISM is crazy!!!

No time or gravity was bent, folded, spindled, or mutilated in this explanation.

And now, how’s about a bit of magic:

Now, why does this work?

This is just SOOO cool…

The numbers used for velocity in these examples aren’t like other “numbers”.

These numbers are BOUND WITHIN THE IMPLICIT SPEED OF LIGHT, ALWAYS ONE GC / TICK, and THEN broken up into an appropriate resolution (20,000 in our example).

OK HERE IS HOW “DIGITAL RELATIVITY” REALLY WORKS:

Look at our range of speeds again, lowest to highest:

Slowest: 1 GC / 20,000 ticks…

Next: 1 GC / 20,001 ticks...

½ C: 1 GC / 10,000 ticks…

Not obvious, but the next speed can NOT be defined because it doesn’t fit into an even multiple for this particular resolution. (Please see “are you stupid, Marty #1”)

Are you staring at this and not seeing it? I didn’t either at first, but THINK ABOUT IT... are these velocity numbers really all scaled equally???

EHH??

Each speed number (and there are ALWAYS a finite number of them) really covers slightly different SPEEDS…

If we call speed # 1 GC / 20,000 ticks, that is being scaled into the range between 0 and C (actually ½ C but I’m not gonna argue that any more, I just proved why it should be implemented that way, above.)

The next is 1 GC /20001 ticks.

Are these actually equivalent units? NO they aren’t, each has a very slightly different factoring quotient.

Proof:

1. 1 GC/ 20000 T

2. 1 GC / 20001 T

3. 1 GC /20002 T.

4. 1 GC /20003 T.

5. 1 GC /20004 T.

6. 1 GC /20005 T.

Is the difference between speed # 2 and speed # 4 the same as between speed # 3 and speed # 5?

Lets see:

Is (1/20003 – 1/20001) equal to (1/20004 – 1/20002)?

NOPE!!

Just look at it a bit longer and it dawns upon you… the scaling is implicit.
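The inequality can be checked exactly with rational arithmetic; a one-liner sketch:

```python
from fractions import Fraction

# Exact check of the spacing claim above: the gaps between successive
# 1/N speeds are NOT uniform. (Signs flipped so both gaps are positive:
# 1/20001 > 1/20003, since a larger N means a slower speed.)
gap_2_to_4 = Fraction(1, 20001) - Fraction(1, 20003)   # speed #2 minus speed #4
gap_3_to_5 = Fraction(1, 20002) - Fraction(1, 20004)   # speed #3 minus speed #5

print(gap_2_to_4 == gap_3_to_5)    # False: each step has a slightly different size
```

Algebraically, each gap is 2/(N·(N+2)), so it shrinks as N grows, which is the "implicit scaling" being pointed at.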
Last edited by Marty Wollner; 2011-Dec-11 at 04:16 PM. Reason: clarity

6.
THIRD (last) SECTION:

All right, now CJ let me ask you some questions:

Does this clearly show you how what I'm calling "digital relativity" works?

Do you agree that IT REALLY works?

Is not the burden of proof now upon YOU to try to explain these same observations WITHOUT this simple solution?

In all of its non-compliance with Newton and Schrödinger? With Schrödinger crying "digital, digital" while Feynman's approach was to simply add all transitions together and then take the average value at a moment of infinitesimal time (something impossible to occur).

THEY STILL can't get it to work with a mountain of infinity-based formulas, yet I got it to work with two lines of code, and this scaling can be applied to any size U. and still work perfectly… automatically drawing the scale in according to the resolution.

The speed of light is not an externally defined constant; neither is grid size nor tick rate, leaving only ONE parameter undefined… WordSize, and now I think I may have that figured out as well.

In order for this program to complete all of the proposed state transitions, IT MUST BE programmed exactly the way it is; these generic values for heat, speed, geometry changes, etc. all work in their "relative" context, at any resolution, and there aren't any parameters… what does that tell you, CJ?

COME ON, CJ, WAKE UP!!!

Now, I want you to refute anything I have said here without saying something about what you think you know about what you think somebody has observed. Go right ahead; I won't reply to it any more.

But if you want to try to understand these new concepts, then look at what I'm saying right here and give me some constructive criticism; please, I welcome it. In that case, please tell the world about this, because I think THIS IS BRAND NEW SCIENCE.

I can also prove the double-slit experiment, quantum entanglement, clumping of homogeneous clouds of dust, and EVERY other phenomenon we observe, all from 5 pages of code.

Thanks for your time, CJ. I appreciate your challenges; I feel they make my case stronger.

Marty
Discflicker.com

Appendix: Are you stupid, Marty?

Are you stupid, Marty?

Voice of reason: Red text
Marty’s responses: Black text
NUMBER 1: Why can’t things be stopped, and why can’t there be any speeds faster than ½ C?

If we needed to use max resolution covering 1 to 1000, then

Stopped = 0 GC / tick

OK, I will admit I didn't mention that. I was suggesting that the slowest speed needed to account for WordSize, and trying to explain how it's derived. Of course stuff can sit still for a long time; in fact, I believe that particle #1 never moves from position 0 EVER!

Fastest: C = 1 GC / Tick

Slowest = 1 GC /99 Ticks

Next = 1 GC / 98 ticks

… Nearing ½ way … watch out …

1 GC / 51 ticks

1 GC / 50 ticks

OK, HALF WAY, now what Marty?

1 GC / 49 ticks (“Oops”)

1 GC / 48 ticks

1 GC / 47 ticks

WHAT’S YOUR PROBLEM, Are you stupid?

It's ironic that you should use my own logic against me. Ever hear of a "double-edged sword"?

You are correct my dear critic, and bravo on your comprehension of this fairly simple concept.

Now let's say you have to actually program this thing and you're working on the "slow speed" stuff.

OK

You would try to get your numbers to work in common units for geometric block number changes, heat units, frequencies, distances, etc. Of course… you're programming this thing and you need resolution-independent quantum units just like this, yes?

Of course, “the same way chemists have been doing it for mini-eons”, right?

EXACTLY. So you would define speeds as numbers like “20” in the example Pseudocode for motion of matter.

And you could just assume the program would go ahead and do it correctly, because even if you did not account for scaling at all, it works great at the slowest speeds.

HUH? What?

???

Yup

At low speed, any “pure speed number” can be defined independently from the system’s resolution just like this:

1 GC per ((2 ** WordSize)) ticks ZERO
1 GC per ((2 ** WordSize) - 1) ticks ONE
1 GC per ((2 ** WordSize) - 2) ticks TWO
1 GC per ((2 ** WordSize) - 3) ticks THREE

If you think about it, THAT is the intuitive way to consider it. Only advanced developers like us would go ahead and include a scaling factor right away.

And everything would work just fine.

Until you try using speeds faster than ½ C.

At THAT point, scaling IS required in the code.

But this code is pure and true and simple. So perhaps it works that way because of that reason alone. I actually believe it for non-scientific reasons, but there is another reason, more scientific…

I could have also done it the other way for the fastest speeds…

(2 ** WordSize) GCs per (2 ** WordSize) ticks C
(2 ** WordSize - 1) GCs per (2 ** WordSize) ticks C - ONE
(2 ** WordSize - 2) GCs per (2 ** WordSize) ticks C - TWO
(2 ** WordSize - 3) GCs per (2 ** WordSize) ticks C - THREE

Could I have done it both ways without introducing the concept of resolution?

NOPE. Not without actually performing relativity, but think of the code involved with that?

So, without accounting for resolution, it can easily be programmed one way (only allowing speeds between ½ C and C) or the other (only allowing speeds between stopped and ½ C). If there must be a choice, isn't this obvious? (Ahem… the answer here is YES, in case you're an Aspie like me.)

Please see the Appendix: the proposed mechanism of the ½ point exchange.

Appendix: the proposed mechanism of the ½ point exchange.

I’m proposing that as matter gets hot enough (I’m guessing the same ½ way unit number on the heat scale), it begins shedding off radiation all at C.

The specific mechanism can be either velocity or heat. I don't think it's the mean-free velocity of the component; I think it's heat. Why do I say this? Because otherwise, how can stuff heat up sitting in a vacuum? In a vacuum, there are "no" collisions with the matter. The matter itself might be vibrating really fast, but that won't make much of a mean free distance. So I think the mechanism is heat.

Now get this: the mechanism I'm proposing works like this: when the atomic force algorithm is running, it will be calculating the resulting heat level. I believe that any heat that results in a number past this will be handled as such:

First off, any actions taken by the first VP in a collection of bound VPs will also be done on behalf of all the others. In other words, if a component is constructed of 4 VPs numbered 5, 7, 9, and 2300, only the action of VP 5 will be performed, and its effect will be applied to all of them.

Now let's say that we're in a U. created at WordSize = 11.

2**11 is the max resolution number = 2048. So the ½ point (and maximum heat level) = 1024 in this universe.

Now let's say that VP 5's next heat level will be 1129. This leaves a net of 105 heat units, and I'm saying that now a low-energy wavelength of radiation is emitted. The frequency? It AIN'T 105! It's 2048 – 105 (because this is nirvana, right?)

If the NEXT resulting heat level is 1133, the resulting wavelength would be 2048 – 109.

Note that the act of creating radiation is an EXOTHERMIC one, relieving the component of energy and thus reducing its heat content.

If the NEXT resulting heat level is back to 1129, the resulting wavelength would be 2048 – 105 again.

This can be considered as an "oscillation" between 105 and 109, with a frequency of 4.
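The over-heat emission rule just described can be sketched with the WordSize = 11 numbers from the post; the function name and the no-emission case are my own framing:

```python
# Sketch of the proposed rule: when a component's next heat level exceeds
# the half-resolution point, the excess is shed as radiation whose
# wavelength is (resolution - excess). Numbers follow the post's example.
WORD_SIZE = 11
RESOLUTION = 2 ** WORD_SIZE          # 2048: max resolution number
HALF = RESOLUTION // 2               # 1024: the "1/2 point" / max heat level

def wavelength_for_heat(next_heat_level):
    """Wavelength of the emitted radiation, or None if no emission occurs."""
    excess = next_heat_level - HALF
    if excess <= 0:
        return None                  # below the 1/2 point: nothing is shed
    return RESOLUTION - excess       # the post's rule: e.g. 2048 - 105

print(wavelength_for_heat(1129))     # 1943, i.e. 2048 - 105
print(wavelength_for_heat(1133))     # 1939, i.e. 2048 - 109
```

Alternating heat levels of 1129 and 1133 then yield wavelengths 1943 and 1939, the "oscillation" between excesses 105 and 109 described above.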

If the resulting heat level is an even number, I'm trying to figure out how to handle this, because all waves must have an odd number for their wavelengths. Otherwise they don't fit symmetric patterns.

Now, I have yet to discuss the actual mechanism behind electricity, but if there is one extra particle in all of our even-numbered heat flux exchanges, that might be it. That's just a wild guess.

I'm not certain of the mechanism behind our observation of any of these occurrences… it's just some root causality going on, but I certainly hope everyone can see how these mechanisms might produce what we view to be "reality".
Last edited by Marty Wollner; 2011-Dec-11 at 04:16 PM. Reason: clarity

7. Member
Join Date
Dec 2011
Posts
86

((X.mass + Y.mass) /(X.Location – Y.Location))
That looks wrong for all sorts reasons: why are you adding the masses? what happened to the inverse square law? and G? and why are you calculating the distance in each dimension, rather than the distance between the two points? and ...

My explanation:

I’m glad you asked that; it reveals an important aspect of the program... it works in any number of dimensions, and when it’s run in 3-D, the resulting computations ARE Newton’s laws. The cool part here: when you really get into this stuff, you understand why as well... it is simply the process of derivation (the reverse of integration). It’s so cool to watch this stuff map itself out into the 3-D world.

I have a proof in my book "Big Bang Formation of Matter" showing this exact thing, and also showing how parabolic motion results when the program's momentum is exposed to gravitational force.

Now, please revisit that simple little "movement by gravity" function again, and keep in mind it's called externally for each of the 3 different dimensions.

Another cool thing... all of the processing is done one dimension at a time, and it doesn't even try to combine these. In my U., we all move in each dimension SEPARATELY and INDEPENDENTLY from all other dimensions, and also from the negative direction, if it's defined.
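For concreteness, here is a hedged Python sketch of the posted per-dimension pseudocode, ((X.mass + Y.mass) / (X.Location – Y.Location)), taken at face value: summed masses, a 1-D coordinate difference, integer arithmetic, and one independent call per axis. The function and variable names are illustrative; this reproduces the pseudocode as written, not Newtonian gravity.

```python
def gravity_step_1d(x_mass, x_pos, y_mass, y_pos):
    """One axis of the posted rule: (X.mass + Y.mass) / (X.Location - Y.Location).
    Integer division, since the thread insists on integers throughout.
    (Note: Python's // floors toward -inf, unlike VB's truncating \\.)"""
    if x_pos == y_pos:
        return 0  # co-occupation is handled by a different rule in the thread
    return (x_mass + y_mass) // (x_pos - y_pos)

# Called externally once per dimension, each axis independent of the others:
x_pos3d, y_pos3d = (10, 4, 7), (2, 4, 1)
forces = [gravity_step_1d(5, xa, 3, ya) for xa, ya in zip(x_pos3d, y_pos3d)]
print(forces)  # [1, 0, 1]
```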

Last edited by Marty Wollner; 2011-Dec-11 at 03:56 PM. Reason: Clarify this is in response to a question from "Strange".

8. Member
Originally Posted by cjameshuff
...no, it isn't. So much for that idea.

I'd somehow missed several posts (like the one with that badly-formatted mess of pseudocode), and I gave up quickly at getting any useful information out of his web page.
Everything is clearly explained in the first link I sent everyone, the letter to Greg at:

http://spikersystems.com/FlashNet_Po...regMoxness.htm

and

http://spikersystems.com/FlashNet_Po...entsInTime.htm


Originally Posted by cjameshuff

This doesn't even begin to account for observed physics...what about, oh, the entire set of subatomic particles? Nuclear phenomena other than fusion? How do you even define whether a single particle is plasma or not? Or even something relatively simple...how about the photoelectric effect?

Or how about something really basic...how do you get light to travel the same speed in all directions, not just along the axes? As far as I can tell, you're using taxicab geometry for everything...

COME ON! You must know that the virtual Universe ONLY consists of the grid itself, right? Getting light to behave like I did was too simple to even demo... it simply:
-Waits (freq) # of ticks
-then jumps (freq #) of grid positions, IN EVERY POSSIBLE DIRECTION. That's all there is to it, everything it can possibly hit is right there in those corners of those blocks.

The effect of this is an expanding cube-shaped configuration that fills every possible corner within it, but in (freq) JUMPS of expansion. IT IS THE SAME photon appearing in all of these expanded corners at the very same instant. The algorithm selects only one if multiple substrates are concurrently hit, in order to obey the laws of energy conservation.

The mechanism of quantum entanglement is instantly explained... while the photo cell on the other side of the room is struck, it absorbs the (concurrent) one that they can't observe. When two WFEs travel by each other, there CAN be interference, and I'm assuming it will follow the patterns we observe; if I had some time, I could prove that within a month or so.
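As I read the rule above, a sketch of it in Python (illustrative, not the author's code) would be: after waiting `freq` ticks, the photon appears at every grid position offset by ±(freq × jumps) or 0 along each axis, i.e. the 27 "corners" of an expanding cube in 3-D. Whether the centre and face positions count as occupied is my assumption.

```python
from itertools import product

def photon_positions(origin, freq, jumps):
    """All grid 'corners' the photon occupies after `jumps` expansions,
    offset by -r, 0, or +r along each axis (r = freq * jumps)."""
    r = freq * jumps
    return {tuple(o + d for o, d in zip(origin, off))
            for off in product((-r, 0, r), repeat=len(origin))}

corners = photon_positions((0, 0, 0), freq=2, jumps=1)
print(len(corners))          # 27 positions in 3-D
print((2, 2, 2) in corners)  # True: a corner of the expanded cube
```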

Please don't ask me questions about longs or ints, or any implementation questions. Obviously, I'm trying to lay down the principles, and not ready to verify any observed laws other than by special experiments set up in the code, like my example of "digital relativity".

I account for every effect except the photoelectric effect, and I have a proposed method for that. I think it's easiest to explain by looking at the transitions; I've listed them in a previous post:

In the "nuclear force handler", nuclear force (only during the big bang and when then "Atomic" force exceeds limits, resulting in atomic collapse)

In the "Fusion" handler, mass is created from the joining together of fee plasma. I know if its free or not by virture of it having x.mass > 0. If un-fused, the fusion procees takes next priority.

The "atomic force" handler accounts for collisions, kinetic recoil, and nuclear collapse in case of exceeded limits. It also generates radition, and I'm suggeting that eventually, it all gets collected in one place... at old location 0, still holding particle number 1, which never moves in any of my diagrams. with only 2 lines of code changes, i was able to make VP 0 be the big black hole in the center of the U. It winds up gathering all of the light that's otherwise too slow to get there.

With one other one-line code change, I was able to queue up all of the VPs that drifted out to the end of the range... what I'm calling the "sugar-coat". Next tick of time... they ALL advance forward 1 GC to their ROLLOVER positions... location 0. In the following tick, ka-boom.

IT'S IN THERE!!!

It's so simple it's scary!

9. Originally Posted by Marty Wollner

10. How do you see light hit a mountain, apart from it reflecting back at you and hitting your eye, which does not seem to be part of your scenario?

11. Member
Originally Posted by loglo
How do you see light hit a mountain, apart from it reflecting back at you and hitting your eye, which does not seem to be part of your scenario?
Obviously, the light would have to be reflected back to those guys so they can see it. YES.

And the math for that complicates matters, because the velocity of RiderDude may have just continued on, or may have changed....

HEY, I know... What I'm showing is the "one-way" version for clarity and simplicity.

Do you understand the POINT of this example?

Please check out my explanations for WHY it happens... something I'm calling "digital relativity".

Last edited by Marty Wollner; 2011-Dec-11 at 04:13 PM. Reason: typos

12. I'll just delurk for a bit to comment, since this whole idea kind of relates to work I've done in the past. Before recently changing jobs to something more space-related, I spent many (too many!) years as a games programmer, and even wrote physics code from time to time. I've worked through the code on the web site (do people still really use Basic??) and the pseudo-code posted here, and neither comes even close to dealing with "physics in the real world" as you might call it.

I'm not entirely against the idea of a discrete universe, for both space and time, and I think there are some explorations of that idea going on, but it's going to be far more complex than this one, and almost certainly be described as a mathematical model, not as a fragment of source code.

However, one basic idea here which stands out for me is the idea that velocities can only be integer fractions of C. Apart from the problem, already posed, of particle accelerators regularly exceeding C/2, it must also mean that non-integer fractions aren't possible in this universe, e.g. I can have C/2, C/3, C/4, but not 0.4*C or 0.615*C. Even at low, everyday, velocities this discretisation would, I think, be measurable in certain circumstances, and as far as I know there haven't been any reports of it.
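To illustrate the point numerically (my example, not part of the original post): if speeds must be C/n for integer n, the allowed values thin out quickly and leave gaps that contain everyday fractions like 0.4 C.

```python
# Allowed speeds as fractions of C under the "integer fractions of C" rule,
# for n = 1..10. Purely illustrative.
allowed = sorted(1 / n for n in range(1, 11))
print(allowed)
# C/2 = 0.5 and C/3 = 0.333... are allowed, but nothing in between:
print(0.4 in allowed)  # False: 0.4*C is unreachable
```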

I'm also puzzled by the choice of grid at one spatial unit = C / 1 second. The second is purely a human unit of measurement, so has no actual relationship to the universe at large. For some supposed non-human observer, this could make the grid unit "C / 1.235 rels", which makes the idea of a discrete universe nonsense from their point of view.

To make any idea like this work you need to go back to more basic concepts, and not even think about how to simulate it until you have an approach that works independently of arbitrary units.

13. Member
Originally Posted by molesworth
I'll just delurk for a bit to comment, since this whole idea kind of relates to work I've done in the past. Before recently changing jobs to something more space-related, I spent many (too many!) years as a games programmer, and even wrote physics code from time to time. I've worked through the code on the web site (do people still really use Basic??) and the pseudo-code posted here, and neither comes even close to dealing with "physics in the real world" as you might call it.

I'm not entirely against the idea of a discrete universe, for both space and time, and I think there are some explorations of that idea going on, but it's going to be far more complex than this one, and almost certainly be described as a mathematical model, not as a fragment of source code.
Yup. It's just like this:

Take every one of Feynman's transitions and change the math from "For I = 1 to infinity" into "For I = 1 to (2 ** WordSize)".

PRIORITIZE THE RULES !!!
like nuclear first, then fusion, then collisions (including handling of radiation)

Put an "If " statement to test for co-oocupation and USE the APPROPRITE rule accordingly.

Bingo, you have the theory of everything. You only need one parameter: WordSize. I believe it is determined as the largest possible prime number.
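The recipe above can be sketched in a few lines of Python (my illustration, with made-up rule names and conditions, not the author's code): cap the infinite sum at 2**WordSize, and dispatch the prioritized rules with a plain if/elif chain.

```python
WORD_SIZE = 11

def finite_series(term, word_size=WORD_SIZE):
    """Replace 'For I = 1 to infinity' with 'For I = 1 to 2**WordSize'."""
    return sum(term(i) for i in range(1, 2 ** word_size + 1))

# A truncated geometric series: the infinite sum would be exactly 1.
approx = finite_series(lambda i: 0.5 ** i)
print(approx)

def apply_rule(vp_a, vp_b, co_occupied):
    """Prioritized dispatch, per the post: nuclear, then fusion, then
    collision. The conditions are illustrative assumptions."""
    if co_occupied and vp_a["nuclear_limit_exceeded"]:
        return "nuclear"
    elif co_occupied and vp_a["mass"] > 0 and vp_b["mass"] > 0:
        return "fusion"
    elif co_occupied:
        return "collision"
    return "none"
```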

Originally Posted by molesworth

However, one basic idea here which stands out for me is the idea that velocities can only be integer fractions of C. Apart from the problem, already posed, of particle accelerators regularly exceeding C/2, it must also mean that non-integer fractions aren't possible in this universe, e.g. I can have C/2, C/3, C/4, but not 0.4*C or 0.615*C. Even at low, everyday, velocities this discretisation would, I think, be measurable in certain circumstances, and as far as I know there haven't been any reports of it.

I'm also puzzled by the choice of grid at one spatial unit = C / 1 second. The second is purely a human unit of measurement, so has no actual relationship to the universe at large. For some supposed non-human observer, this could make the grid unit "C / 1.235 rels", which makes the idea of a discrete universe nonsense from their point of view.

To make any idea like this work you need to go back to more basic concepts, and not even think about how to simulate it until you have an approach that works independently of arbitrary units.
WOW !!!

That was a totally AWESOME reply!!

You are all over it... the INTEGER definitions of speed being scaled into FRACTIONS of C.

BRAVO !!!

I'm calling this "digital relativity".

I've been posting a lot, so you might have missed the section "are you stupid marty"; it's full of typos, but YEP, that's what you're saying (not the stupid part lol) !!

This is so cool. Do you think anyone will try to consider THAT as the answer to what we see? Can you imagine if we are right about "digital relativity"... how much intellect was spent on infinity-based math formulas that only estimate the end result of infinite resolution, when 5 pages of code shows it ALL?

Originally Posted by molesworth
"I'm also puzzled by the choice of grid at one spatial unit = C / 1 second"
I'm not sure what you're talking about here... if you actually got it from an old piece of code or a typo on my part, but SECONDS is NEVER a consideration; that is only something observed from within what I call the "inner U.".

Hey, what do you think about my new ideas that there is only ONE WAY for this to be constructed, and there are NO parameters left other than WordSize, and I think that IT might be "the largest prime number possible", confining our existence to one and only one "pre-determined story"?

Also, and much more important: if there is ANY degree of POSSIBILITY to this, then the idea of the "TheTruth" machine becomes a REAL POSSIBILITY, and so everyone should just go ahead and assume it will be sitting there on the Judge's bench in every courtroom in the near future, and so everyone should start to follow the golden rule, RIGHT AWAY, eh? That's my real reason for this... just any hint of solid mathematical possibility, and we all have a non-religious basis for accountability and harmony for everyone. I mean, if you KNEW that in 10 years this technology was available, would you rape that puppy next door? Would you dump a vat of toxic waste into the community swimming hole on your way home from work? Me neither!
My game server is up right now if you want to check out my 28 casino style games, and play 'em world-wide.

THANK YOU !!!!
Last edited by Marty Wollner; 2011-Dec-11 at 06:21 PM. Reason: spelling, add Feynman's decision points

14. Originally Posted by Marty Wollner
how much intellect was WAITED [wasted?] on infinity-based math
...
and I think that IT might be "the largest prime number possible"
That will be infinitely large, then. Isn't there a slight contradiction there?

Now back to your simulation method:

1. Did you invent this way of simulating yourself or did you learn it somewhere?
2. Do you know how to determine the errors in this sort of simulation and if it will converge or not?
3. How large are your cells?
4. How large is the universe you are simulating? (presumably based on cell size and word size)
5. Are you using integers or reals?
6. What is the maximum value you can represent? And at what resolution?
8. What do you do at the boundaries of your simulation?
9. You said, "the universe will end up in the shape of a perfect cube." This wouldn't be because you have chosen a cube as your simulation grid, would it?

15. Member
Originally Posted by Strange
That will be infinitely large, then. Isn't there a slight contradiction there?

Now back to your simulation method:

1. Did you invent this way of simulating yourself or did you learn it somewhere?
2. Do you know how to determine the errors in this sort of simulation and if it will converge or not?
3. How large are your cells?
4. How large is the universe you are simulating? (presumably based on cell size and word size)
5. Are you using integers or reals?
6. What is the maximum value you can represent? And at what resolution?
8. What do you do at the boundaries of your simulation?
9. You said, "the universe will end up in the shape of a perfect cube." This wouldn't be because you have chosen a cube as your simulation grid, would it?

Now back to your simulation method:

1. Did you invent this way of simulating yourself or did you learn it somewhere?

I TOTALLY came up with everything, ENTIRELY on my own. I used to be a production engineering programmer for Ford. One of my data collection techniques was using “double buffering”. That’s where you store incoming data in one array and concurrently process the last collection from a different array.

What does that have to do with reality? Feynman? EVERYTHING!!! I haven’t discussed this yet; this is a perfect time. It’s what I call “Perfect reproducibility”:

If you’re going to apply changes to the list of VPs as you go through it, you’re going to mess up subsequent calculations while you’re working on it. “Marty’s solution”: use the double-buffering technique on the universe.

I start out with TWO sets of VPs. One is called the CURRENT SET, and the other is called the ALTERNATIVE SET. In the first tick of time, the program just sequentially considers each VP one at a time. For I = 1 to the count of VPs in the list, it goes through the “theory of everything” rules (I call it God’s program), and FOR EACH ONE, DETERMINES THE NEXT values for location, heat, mass, and “S”, the inversion speed counter. The results are stored in the ALTERNATIVE set’s associated VP record by index, so it doesn’t overwrite anything needed perhaps in subsequent calculations in the same TICK cycle.

When all of ‘em are processed,

and here’s that creepy feeling for me big-time, I just say:

“The alternative set is now the current set”

THAT is a tick of time. Creepy, eh?

THAT is how I have turned a continuous mathematical equation like y = 2x into a set of sequential steps and applied it to a collection of numbers without disturbing the other calculations.

That is how I turned “God’s program” into a simple FINITE SERIES EQUATION.
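The double-buffered "tick" described above is a standard technique; a minimal Python sketch (illustrative field names, not the actual VB program) looks like this:

```python
def tick(current, rule):
    """Compute every VP's next state from the CURRENT set, writing results
    into the ALTERNATIVE set so in-progress reads are never disturbed."""
    alternative = [rule(vp, current) for vp in current]
    # "The alternative set is now the current set" -- that swap IS the tick.
    return alternative

# Toy rule (y = 2x per particle) applied for one tick:
double = lambda vp, _current: {"x": vp["x"] * 2}
state = [{"x": 1}, {"x": 3}]
state = tick(state, double)
print(state)  # [{'x': 2}, {'x': 6}]
```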

2. Do you know how to determine the errors in this sort of simulation and if it will converge or not?

If you read the FINE comments from MOLESWORTH, you can see that I’m really still very much in the development phases, and I’m NOT saying that my rendition is THE theory of everything; it’s just there as a teaching aid!

(Although I really believe it is the real deal as shown, that has yet to be proven and, as MOLESWORTH points out, the theory should be worked out first. Hence the teaching aid.)

I certainly didn’t expect anyone to ask me questions of implementation!

That said, in the theoretical sense, this program will have zero errors!

Every movement is under control, and the 1 GC/T restriction really puts rock solid control on what can happen.

From the 20 zillion foot point of view it’s a masterpiece, but from within the code, it’s simply a list of numbers that keeps changing.

When you say “CONVERGE” I assume you’re talking about my overall universal flow… orderliness > mass > gravity > collisions > heat and radiation > separation of light from matter > orderliness.

No, I have not yet done this, I’m in process, and you’re welcome to my newest code under development.

I am tracking the following in it:

UniverseEntrophy
Rem: The stretch and twist of the rubber band; the quantity of orderliness.

Particle_0_Mass

OtherThanParticle_0_Mass

TotalMass

KineticVelocities
Rem: we start out with no movement (particle 0 never moves)

KineticForceLevel
Rem: displays the sum of (velocity * "Non-P0" mass) for all components

TotalHeat
Rem: we start out with heat = 0 too! (particle 0 stays cold the entire time)

KineticHeatLevel
Rem: displays the sum of (heat * mass) for all components

Rem: but ALL the particles just can't ALL get along, so big bang boom. We start by increasing velocity and heat, and decreasing mass, ALL in tick 1. This makes a plasma vapor spray about according to a couple of patterns. In tick 2, this instantly results in fusion of every particle except P0.

FusionCounter
Rem: Counts fusion events for display purposes, rolls over at (WordSize**2) / 2

WFECounter
Rem: Counts radiation events for display purposes, rolls over at (WordSize**2) / 2

WFEActiveCount
Rem: Displays the sum of radiation events currently active during the display tick, rolls over at (WordSize**2) / 2

WFERecoveredCount
Rem: Displays the count of radiation events converted into mass in particle 0.

WFERecoveredMass
Rem: Displays the overall mass converted from the photons hitting particle 0. This should equal the negative of the mass of particle 0 !!

SugarCoatCount
Rem: The count of VPs queued at the last position

3. How large are your cells?

I'm not a great VB coder; my older stuff was always done in C, but I started this project out in VB in the last century because I felt it would be very simple, like the pseudocode shows, and that I could demonstrate it as such.

Obviously, I always wanted to make it all ADJUSTABLE according to WordSize, and so in my more recent versions I’m using a stupid VB “REDIM” statement to do so, and I’m finding that I’m wasting a lot of time trying to get it to work. The VB limitations have me hand-tied… as far as my max cell size goes.

If you look at octopus_2, I DO have an adjustable parameter for the current cell size (I call this the RANGE), BUT I also allow adjustment in the dimension count! This really messes with the max size, within these stupid VB bounds… right now I think my max cell size is around 400 in 1-D, but it's 200 in 2-D, 100 in 3-D,…

I introduced a bug into the array processing, and now I’m just trying to get it to work again!

The old octopus_2 did work OK; it showed the nuclear expansion and the fusion of plasma. You can get it from the web, source code is included.

4. How large is the universe you are simulating? (presumably based on cell size and word size)

Once again, I don’t want to get too hung up on implementation; the theory is that it's just gonna work.

I have thought about programming this on a super-huge scale. My books claim that I can do it all on an old atari-64 hooked up to an external storage unit.

I’m proposing, for my next major overhaul version, a new kind of data type that can store HUGE integers… let's call it a V_Int data type, and all the code’s equations would call special functions for addition and other arithmetic functions upon it.

If I had such a data type, and if I started WordSize out at 6,400 (only 1/10 the address capacity of the atari-64), I could get a few of them in memory at the same time and do the calculations on them as such, store the output in the ALTERNATIVE set on external storage, and fetch the next VP record to be processed.

The external storage then needs some really huge capacity.

At that point, it's just a matter of making it appear in our inner-universe that there are a zillion VPs, but in reality, the program only uses a small list of them, and overlaps these as a kind of “paging algorithm”. This way, a realistic VP capacity can be used to implement a nearly un-realistic virtual capacity, at a trade-off of additional computations.

BUT IT COULD work… I could run the theory of everything on an old atari-64 with enough resolution to account for our own universe.

Do you agree, at least in theory?
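For what it's worth, the paging idea itself is easy to sketch: Python's built-in integers are already arbitrary-precision (the proposed V_Int), and the VP list can be processed a page at a time so only a few records sit in memory. This is my illustration, with invented names; the external-storage format is left out.

```python
def process_paged(vps, rule, page_size=4):
    """Apply `rule` to the VP list one page at a time, yielding results
    for the ALTERNATIVE set; only one page is in memory per step."""
    for start in range(0, len(vps), page_size):
        for vp in vps[start:start + page_size]:  # fetch a page
            yield rule(vp)

# Arbitrary-precision integers come for free in Python:
huge = 2 ** 6400
out = list(process_paged([huge, huge + 1], lambda v: v + 1, page_size=1))
print(out == [huge + 1, huge + 2])  # True
```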

5. Are you using integers or reals

(You touched a raw nerve here, lol)

In my 25-plus years as a professional programmer I have yet to use anything but integers in all of my code, if I could in any way avoid it. I haven’t worked in the past 10 years, but MY SYSTEMS ARE STILL running… they are the backbone of the entire Ford Motor Company production systems architecture, which is a conglomeration of many, many vendor-purchased solutions… except for the backbone that glues the flow of data everywhere: the Alarm Notification System, for which I wrote every line of code over 14 years on the job.

So, no reals; it's all integers, always, every time, no matter what, period!

6. What is the maximum value you can represent? And at what resolution?
See 5.

WOOOOAAAHHH..

YOU DON’T GET IT !!

THIS IS KEY…

It's not simulating reality, IT IS REALITY. The numbers themselves are the “matter” we see. The grid is our home. We are living in a sea of changing NUMBERS, and that’s all there is to reality.

Creepy, ehh?

I hope you get this. If you read any of my books you would; I literally beat a horse AND a pony to death trying to explain this concept.

That’s what they mean by calling it a theory of everything.

8. What do you do at the boundaries of your simulation?

I’m in the process of writing this up right now. My new program will provide a switch so you can play around with it in several ways.

The cool thing here: it works great for 1-D, and when it gets run in 3-D, YES, then it ends up in the cube shape you keep asking me about.

All Newtonian motion is really creepy slow, 1 GC/T, and then only happening every so many ticks. The construction of that last sentence was due to my Asperger’s syndrome, so PLEASE BEAR WITH ME on my styles and mannerisms and lengthy descriptions. My books allow me time to sit down and think these concepts out so they can be expressed in a way that can be understood, building one concept on top of the other. So, anyway, when a VP is sitting at position (max – 1), IT IS TOTALLY under control, and at some point, the motion counter says, OK, move 1 GC. OK?

If we are in what I call a “Queueing Universe”, the rule says:
If (the count of VPs all at the max location = the count of VPs) then
Rem: they are all there, go ahead and let it advance 1 location… the next location past the end of the range is back to 0.

When this overall processing loop is done, ALL VPs will be in location 0, and the next tick of time will be another big bang.

If we are in what I’m calling a “Fountain Universe”, I just let each of them continue motion right on through the rollover, handling the next position past the last location as position 0.
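The two boundary behaviours described above can be sketched as follows (my Python rendering, with an assumed RANGE_MAX; the real program parameterizes this by WordSize):

```python
RANGE_MAX = 2047  # assumed last grid position for WordSize = 11

def advance_fountain(pos):
    """'Fountain Universe': each VP rolls over independently past the end."""
    return (pos + 1) % (RANGE_MAX + 1)

def advance_queueing(positions):
    """'Queueing Universe': advance only when ALL VPs sit at the max;
    the next tick of time is then another big bang at location 0."""
    if all(p == RANGE_MAX for p in positions):
        return [0] * len(positions)
    return positions

print(advance_fountain(2047))          # 0
print(advance_queueing([2047, 2047]))  # [0, 0]
```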

9. You said, "the universe will end up in the shape of a perfect cube." This wouldn't be because you have chosen a cube as your simulation grid, would it?
It's because of (8): when implemented in 3-D, that's what happens.

16. Originally Posted by Marty Wollner
I TOTALLY came up with everything, ENTIRELY on my own.
I kinda had a feeling that was the case. I think you need to go and do some studying. People have been working with, and analysing, these sorts of simulations for many years. There are many not-so-obvious problems which can prevent the simulation from accurately representing the thing being simulated.

“The alternative set is now the current set”

THAT is a tick of time. Creepy, eh?
Well, not exactly creepy. But I remember thinking it was pretty cool when I learnt this technique (well done for figuring it out by yourself).

I certainly didn't expect anyone to ask me questions of implementation!
But you claim that your implementation accurately represents reality.

That said, in the theoretical sense, this program will have zero errors!
And that is clearly wrong. All simulation techniques have errors. The important point is to understand the sources of the errors, their effects and how to minimize them.

When you say “CONVERGE” I assume you’re talking about my overall universal flow… orderliness > mass > gravity > collisions > heat and radiation > separation of light from matter > orderliness.
No. And the fact you don't know what this means is a clear indication of problems.

Rem: Counts fusion events for display purposes, rolls over at (WordSize**2) / 2
Rem: Counts radiation events for display purposes, rolls over at (WordSize**2) / 2
Rem: Displays the sum of radiation events currently active dring the display tick, rolls over at (WordSize**2) / 2
You don't see this "roll over" as a possible problem?

3. How large are your cells?
You didn't really answer my question. I meant: how large, in terms of physical space, are the cells in your model? 1 light year, 1 mile, 1 cm, 1 um, 1 Planck length?

4. How large is the universe you are simulating? (presumably based on cell size and word size)
Again, I really meant: how large is the physical universe you are simulating? 1 cubic millimeter? 1 cubic parsec?

BUT IT COULD work… I could run the theory of everything on an old atari-64 with enough resolution to account for our own universe.
Do you agree, at least in theory?
Well, as similar simulations of the evolution of the universe tend to run for months on clusters of supercomputers .... no.

5. Are you using integers or reals

So, no reals; it's all integers, always, every time, no matter what, period!

6. What is the maximum value you can represent? And at what resolution?
See 5.
I'll take that non-answer to mean 32-bit integers. What does this say about the range of sizes of objects you can model? Do you model things at the level of atoms? Or galaxies? Or?

WOOOOAAAHHH..

YOU DON’T GET IT !!

THIS IS KEY…

It's not simulating reality, IT IS REALITY.
No. You have written a crude simulator. You need to understand why it is not an accurate model of reality (even in principle).

9. You said, "the universe will end up in the shape of a perfect cube." This wouldn't be because you have chosen a cube as your simulation grid, would it?
It's because of (8): when implemented in 3-D, that's what happens.
So it is just an artefact of your model. It doesn't say anything about reality.

17. Established Member
Join Date
Jan 2008
Posts
464
Marty Wollner: the thing you are failing to reinvent is the subset of "Scientific Computing" called "Numeric Simulations". This is an extremely well-studied subject, and while I don't know your background in computers and math, I think you would progress much faster if you got an introductory textbook.

18. Established Member
Join Date
Oct 2009
Posts
1,410
Originally Posted by Marty Wollner
The cool thing here: it works great for 1-D, and when it gets run in 3-D, YES, then it ends up in the cube shape you keep asking me about.

All Newtonian motion is really creepy slow, 1 GC/T, and then only happening every so many ticks. The construction of that last sentence was due to my Asperger’s syndrome, so PLEASE BEAR WITH ME on my styles and mannerisms and lengthy descriptions.
Pursuant to the preceding post's recommendations, you would indeed do well to review the literature. Quite a bit of thinking has already been done on the subject. You could start, e.g., with Konrad Zuse's work (from the late 1960s) on what has come to be called "digital physics"; Ed Fredkin's classic cellular-automata proposal; and Wolfram's development of these ideas in his A New Kind of Science. These have all been much discussed and critiqued.

19. Member
Originally Posted by Geo Kaplan
Pursuant to the preceding post's recommendations, you would indeed do well to review the literature. Quite a bit of thinking has already been done on the subject. You could start, e.g., with Konrad Zuse's work (from the late 1960s) on what has come to be called "digital physics"; Ed Fredkin's classic cellular-automata proposal; and Wolfram's development of these ideas in his A New Kind of Science. These have all been much discussed and critiqued.

Thanks for the information. I really did look into it, and have yet to find anyone who did it keeping it pure and simple. I have yet to see anyone come up with the idea of using the digitized geometric expansion of matter into adjacent locations to implement heat, implicitly result in PV = nRT, and quantize heat flux in square levels without explicitly programming it in. I have yet to see any of those works have the capacity to go through the entire life cycle of the universe, both beginning and ending in a big bang. And I did it all without needing relativity!

Of course I know it sounds ridiculous for someone to just show up and say LOOK LOOKY look what I invented all by myself!! I really do have solid reasons for taking this blind approach. Please see the memo to Strange coming up shortly.

I HAVE TWO QUESTIONS FOR EVERYONE HERE
... I hope you have read my statements about this introducing a sliver of possibility that our existence is pre-programmed, that it can be analyzed and interpreted as a single finite series equation, and, because it IS digital, that it can be replicated with 100% accuracy. If so, this introduces another sliver of possibility that we could run this program on computers right here on earth and replicate our own U. for the purpose of seeing what the real truth is.

1) After reading through all my ideas, and, if this IS a DIGITAL u., is there any chance of this possibility, in your professional opinion?

2) Do you think my introduction of this very simple explanation and solid operational mechanism could help sway someone into considering the possibility?

THIS is so important to me.

Thanks again.

Marty

20. Established Member
Originally Posted by Marty Wollner
1) After reading through all my ideas, and, if this IS a DIGITAL u., is there any chance of this possibility, in your professional opinion?
No, you are making the classic beginning student's mistake of thinking that the model is reality. Furthermore, your model obviously gives the wrong values and lacks any error analysis. Error analysis is the backbone of scientific computing; otherwise we couldn't trust the cars, buildings and airplanes designed using such methods.

2) Do you think my introduction of this very simple explanation and solid operational mechanism could help sway someone into considering the possibility?
Superficially similar ideas like Finite Element Methods are used to do things like heat transfer and many other things. FEM, however, is not simple (the math just to prove they work is pretty horrid), but they do work, and billions are spent on them every year. Grid-based methods exist for simpler (mostly "toy") problems but are limited in what they can do. Also, while they use regular-shaped grids, they ALWAYS use floating point values, not integers or integer fractions. This is vital, and if you don't understand why, you really need to study this properly, because you lack too much background for people to teach it to you on a forum. The first scientific computing class I took, covering just the basics, assumed a total of 200 hours of study. I have since taken 4 equally long courses on top of that, and one twice as long.

If you are really serious about this you either need to get yourself to a good university or plan to do some serious studying on your own.

21. Order of Kilopi
Join Date
Mar 2010
Location
United Kingdom
Posts
4,156
1) After reading through all my ideas: if this IS a DIGITAL u., is there any chance of this possibility, in your professional opinion?
Seems to rather go against Quantum Mechanics, which is profoundly non-deterministic. I think a lot of effort is required on your part to show that the current reasoning behind abandoning determinism is wrong. It has been quite extensively tested. The argument is, I guess, that you don't accept QM, or in fact any physics at all, because it all comes from your model. But that makes your job huge. Now not only do you have to explain the universe's appearance but its laws...

2) Do you think my introduction of this very simple explanation and solid operational mechanism could help sway someone into considering the possibility?
It is perhaps too simple for what it claims to do. Ignoring the "this is reality" bits any model of the universe that throws out as much established physics as yours does is going to have to be studied extensively. In order for it to be interesting enough to do that it has to make some new predictions that shed light on existing problems, or are easily testable. The way it is presented now makes it easy to ignore to be honest.

22. Member
Join Date
Dec 2011
Posts
86
Dear Strange. I really appreciate you taking the time to respond, and I am taking all of your comments as positive and constructive. Thank you.

I keep trying to get you to understand that I’m trying to ask some questions about how the speed of light might really operate in our universe, and that is the direction I want this discussion heading… the issues of what I’m calling “digital relativity”. I am, however, a programmer and I can answer all of your implementation questions as well.

And when I’m done answering them, and when I remind you for the 7th time now that the snippets of code I’m showing are only here to demonstrate a concept, will you finally, at last, listen to the concept?

I know a lot about simulations. I have a degree from MSU where I took theoretical computing classes and I have applied these concepts in ways most people would not understand.

As we speak, machines all over the world are drilling holes, assembling COMPONENTS together, etc. and a lot of that real-time production data is being fed into systems that they let me, ON MY OWN, WITHOUT ANYONE’S HELP, design FROM SCRATCH, and program every line of.

A big part of that was simulation testing. I used to work with Randy Newton (a direct descendant of Sir Isaac) back at Volvo Automated Systems in 1983. He wrote the simulator for the automated guided vehicle systems that I programmed the control system for.

What I admittedly DON’T know about is our current technology used in creating simulations of the observed universe. That’s a SIMULATION APPLICATION.

These applications can be very complex, have their own set of high-tech jargon, and terms like cell, parsec and omega, etc… terms that

1. I sense you are very familiar with, and,
2. You (correctly) sense I‘m not familiar with.

Am I right about both of these?

If so, then certainly you must be questioning why I have any business at all even posting in this site. AM I RIGHT? What are my qualifications, right?

Of course, of course I DON’T belong in a conversation with someone who is into it. I FULLY AGREE!

“I kinda had a feeling that was the case. I think you need to go and do some studying. People have been working with, and analysing, these sorts of simulations for many years. There are many not-so-obvious problems which can prevent the simulation from accurately representing the thing being simulated.

This one paragraph captures our misunderstanding. I’m suggesting that a couple of these not-so-obvious things that we think we need to purposefully compensate for are misunderstood, and the reason why boils all the way back to the discussion that you finally dropped… the one about whether the U. is digital or not, yes or no.

If it IS a digital U., and if it does get implemented something like the way I’m suggesting, these not-so-obvious problems totally disappear, because they don’t exist! I don’t need to worry about gravity compression or time-flow-displacement or any such (sorry) nonsense, because it just wouldn’t happen in this simple model; it is designed to unfold like the real thing, yet it doesn’t have the issue of replicating the paradox of gravity-time-displacement, because of the inversion of internally scaled time.

That’s the exact point I’M trying to make in the first place… please always keep this in mind… IF it IS a digital U., THEN what…

I have yet to see anyone else take this approach. Are you aware of any? Can you point me to them, because this is something I did look for, and didn’t find anything other than speculation on the original Turing machine?

The misunderstanding, as I see it, is this: You want a simulation application to run in which it accounts for all of the observable data we know of. I am NOT providing that, I know I said that, (and actually believe it), but my actual demo is not in any kind of condition to show it run to completion.

I also know I have to prove it. But how? I have already shown you how the basic principles will begin to create these “COMPONENTS”.

How do I know what these “components” will do when there are 10 gazillion of ‘em? I have no idea, how can I in 64mb of array space for all of my dimensions?

Originally Posted by Strange

Well, not exactly creepy. But I remember thinking it was pretty cool when I learnt this technique
It’s SUPER creepy for me because I’m thinking about me becoming aware of things from instant to instant, with no clue of what it means to wait in-between.

· Is it like a computer simulation, running along with a sequential and synchronous beat of the CPU clock?

· If they paused the program and then resumed it, WOULD I DETECT IT?

· Ever see that TV commercial where a guy’s watching a video, pauses it, walks in the next room and resumes it?

· Ever hear the old philosophy: Is it possible that everything you are and everything around you was all suddenly created 1 zillionth of a second ago?

· If they paused the program and then changed it before they resumed it, WOULD I DETECT IT?

I could take this whole thing a LOT further, but that’s far enough to show why it’s creepy to me.

Originally Posted by Strange

But you claim that your implementation accurately represents reality.
Where did I say that? I said it was a demo, and I privately said I think it will work as is. “As is” means just a few changes. First off, a new data type for all my integers with unlimited capacity, up to a specified scaling factor, WordSize.

Originally Posted by Strange

And that is clearly wrong. All simulation techniques have errors. The important point is to understand the sources of the errors, their effects and how to minimize them.
That is where you are WAY wrong. This is not a simulation. This is a program that manipulates data according to a tightly defined set of rules. Every transaction performed by the sequence processor is under complete control; the data that gets moved cannot be moved out of the bounds of this system.

This is the truth, right here: as far as I know, I am the world record holder for continuous up-time on a remote data collection system. If you don’t believe me, just shoot a memo off to jkarr1@ford.com, a computer room manager at the Rouge complex, and ask him about the FCIS at the ROB and “Marty”.

This is a data collection system (Ford Cell Information System) that I conceived, was allowed to create from scratch (I wrote every line of code, about a ½ million total), and installed personally in 14 sites worldwide; I can still name most of ‘em off the top of my head.

FCIS:
The data gets sliced, diced, re-hashed, re-packaged and shipped to various applications in the format required by the application. Doing it this way allowed the apps to be independent of the data collection layer.

This system was loved and hated by most, because the configuration of the database was tough, and without somebody to write some really imaginative database configuration screens, it was a difficult chore. But once the configuration was done, all the machine controllers were cranking out stuff in whatever protocol they talked: Allen-Bradley, Square D, Modicon. All of the host applications had their own proprietary feed protocols as well, so the FCIS not only glued itself all over the data sources, it also glued itself to the applications it fed.

JOB SECURITY, BABY!

But my point is this: I wrote that system BULLET-PROOF, and this was PROVEN as a fact with trillions of transactions served thus far, and I think it’s still running in a few plants because it’s baked into the metal of the assembly line protocols.

So, bottom line is I TOTALLY DISAGREE WITH YOU on the requirement for error checking; the code is so simple, pure integer arithmetic, don’t you think I can control it?

Originally Posted by Strange
No. And the fact you don't know what this means is a clear indication of problems.
I addressed this terminology/application expertise issue above.

Originally Posted by Strange
You don't see this "roll over" as a possible problem?
This is a DISPLAY COUNTER; rollover is as meaningless as your butcher’s “NEXT IN LINE” counter.

Originally Posted by Strange

You didn't really answer my question. I meant, how large, in terms of physical space, are the cells in your model? 1 light year, 1 mile, 1 cm, 1 um, 1 Planck length?

Originally Posted by Strange

Again, I really meant, how large is the physical universe you are simulating: 1 cubic millimeter? 1 cubic parsec?
The universe range is always:

2**WordSize.

The universe is always totally scalable. This also applies to:

The maximum heat level
The maximum frequency
The count of VPs in the list
The count of “live” radiation events

Is the number of universe ticks also limited to this max? Not necessarily; if the U. is open, it runs forever, but it still runs… if I have a display counter of the number of ticks the U. has tocked, does it have to be infinite while everything else is finite? No, the system doesn’t need the counter; it’s just there to let me watch it tick-tock by. It can have a ROLLOVER, and there is no problemo.
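The two quantities Marty keeps distinguishing here can be made concrete in a few lines. This is a minimal sketch of my own, not his code, and the word size and counter modulus are arbitrary illustration values:

```python
# A minimal sketch (mine, not Marty's code) of the two separate
# quantities discussed above: a grid range that scales as
# 2**WordSize, and a display-only tick counter that may roll over
# harmlessly because nothing in the system depends on it.

WORD_SIZE = 16
UNIVERSE_RANGE = 2 ** WORD_SIZE      # maximum grid coordinate

COUNTER_MODULUS = 1000               # where the display counter wraps
display_ticks = 0

def tick(count):
    # advance the display counter; the rollover is cosmetic only
    return (count + 1) % COUNTER_MODULUS

for _ in range(2500):
    display_ticks = tick(display_ticks)
```

After 2500 ticks the display reads 500, having wrapped twice, while the grid range stays fixed at 65,536 cells; the wrap loses only display information, which is the distinction being argued.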

Originally Posted by Strange

Well, as similar simulations of the evolution of the universe tend to run for months on clusters of supercomputers .... no.
How can you make such a statement? What does any arbitrary number have to do with “IN THEORY”?

This equation has a pre-determined end; you’re saying that it can never be hit, in theory???

I hit it every time I run it right here in my living room; I run it for 4 ticks… long enough to kick off the big bang, fuse plasma creating components with mass, velocity and direction, and then they continue on due to momentum.

It does this every time when I define the particle count as 7 and the max range 33. (See the example).

The pseudocode is pure, unrestricted by any sizes whatsoever, I just say “integer”, and I mean a scalable one, OF COURSE!!!

Scalability is what the entire point of my discussion is really about… everything I have said thus far relates to WordSize, especially all of the discussions about C.

Strange, for you to keep on digging me with these questions and then use them to prove to me that I’m clueless about scaling is a bit of an insult; however, I totally understand, as described above.

Originally Posted by Strange

I'll take that non-answer to mean 32-bit integers. What does this say about the range of sizes of objects you can model? Do you model things at the level of atoms? Or galaxies? Or?
grrrrrr

Originally Posted by Strange

No. You have written a crude simulator. You need to understand why it is not an accurate model of reality (even in principle).
I agree I have not proven that the runtime activities of my snippets of code are an accurate model of reality... but I never really came out and said for a fact that I could; I said that I think it should be.

HOWEVER,

The “in principle” part really bugs me, though, and this is very important if we keep this discussion going (you will see why later… this discussion is just the start of a series of topics like it). But let’s try to clear the air once and for all.

HOW DO YOU KNOW what’s happening on the sub-atomic level? Nobody has a clue, BY DEFINITION!

Read up on Max Tegmark and Ray Kurzweil, and THEN talk to me about it.

YOU STILL DON’T GET IT… these numbers are not a simulation of matter… matter is not matter it IS numbers!!! You need to think like I’m thinking here:

· What you believe is an atom getting hit with a photon, and you’re all worried about momentum and spin and orbital displacements,

· To me, IS JUST A SINGLE ADDITION CALCULATION on a number in a list. PERIOD.

Try to envision it this way… simple sequential data processing, period.

Originally Posted by Strange

So it is just an artefact of your model. It doesn't say anything about reality.
Like I said, only because I included an “if statement” that says “if we want to run the U. as a big bang, stop right here at the max range, until everyone is here as well”. In one dimension and in one direction, that looks like a pin-head. In one dimension and in both directions, that looks like a dumbbell. In 2-D, it looks like a square version of a Domino’s stuffed crust pizza. In 3-D, it’s a cube. I spent 8 man-years on, and have 2 US patents pending based upon, the mathematics of dice and gaming. I found the coolest intrinsic relationship between 2 dice and 3 dice, check it out!

http://spikersystems.com/FlashNet_Po...AA_Information
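For what it's worth, the growth pattern Marty describes (and grapes decoded at the top of the thread) is easy to state exactly, on my reading of it: a single seed cell grown by one unit of thickness per tick occupies (2n + 1)^d cells after n ticks in d dimensions. A one-line sketch of mine, not Marty's code:

```python
def cells(n, d):
    # cells occupied after n ticks of unit-thickness growth around
    # a single seed cell, in d dimensions: a line segment in 1-D,
    # a square in 2-D, a cube in 3-D (grapes's odd-seeded reading)
    return (2 * n + 1) ** d
```

In 3-D this gives the sequence 1, 27, 125, 343, ... of odd cubes; the even-seeded (0x0x0) case would give even cubes instead.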

BOTTOM LINE:

I believe I have found the underlying principles of operation of the universe, but I certainly did not do it from the outside in which is what most are focused upon.

Starting from the inside out, starting with a block, then a line and then the block marching along the line, I’m trying to design the universe from the inside-out. But my universe is NOT the simulation APPLICATION, my universe is the SIMULATION ITSELF, the numbers themselves REPRESENTING matter, and that is the point that I think you’re missing.

I would love it if I could somehow hook it up to a big display unit and watch it create stars from gravity acting upon the components.

I would love to watch the first super-nova explode and watch the display counters of “VPs unbound” shoot up and then fall back in the next tick of time, when the force of fusion bonds them all in the next instant because it’s the next priority on the list of actions to take.

I would love to watch the first cold particle of space dust hit the sugar-coat and stick there.

I would love to watch the last one finish the dusting job as well… that last bit of matter in the universe standing at the edge of a very long, lost journey, stepping up to the very last virtual interval of distance left in the entire universe.

The last bit will step right on through, like it’s just another single step in the long procession of ‘em, carrying every other one of its siblings along with it back into their common next location. Back to that one little location upon which the remainder of the universe is anchored, upon which I can feel the framework to be as solid as iron, back to the beginning… location zero… to find its old and dear brother, who sat there for all eternity, single-handedly holding the entire universe in place... separated at birth, but at last… rejoined.

How many foot pounds of force did it take to get that to occur?

I CAN TELL YOU THAT EXACTLY, EVERY STEP OF THE WAY; every movement of matter is totally accounted for. Tell me 2 things, WordSize and Universe tick, and I’ll show you what’s there.

At least, in concept!
Last edited by Marty Wollner; 2011-Dec-12 at 11:20 PM. Reason: bullit proof?

23. Member
Join Date
Dec 2011
Posts
86
Originally Posted by glappkaeft
If you are really serious about this you either need to get yourself to a good university or plan to do some serious studying on your own.
I appreciate everyones comments and want to thank everyone for taking the time to read through all of this.

My plans right now are to re-write the program in C, unleash it, and see what happens. THEN I will come back and show everyone what I come up with in a scientific proof based upon solid evidence.

As far as everyone here insisting that I take some classes and study other people's work before I proceed, I TOTALLY disagree. I'm NOT stupid; I said I have a solid reason for it, and I am even more certain that this is the only approach that will be successful. I truly believe this is a unique situation and that it requires thinking outside of the box. Whenever I do read anyone's work on anything, it makes me more and more excited, because I have the entire thing visualized out, and when I find commonality, for example my approach to perfect reproducibility, I KNOW I'm onto something. The percentage of stuff that I dream up and then find to be true fact is pretty high, and I want to keep riding this wave of pure unbiased discovery all the way until it explains everything.

Check out the chapter "Marty's manic approach to discovery" in "Moments in time" for examples of my success using this method. It really works.

http://spikersystems.com/FlashNet_Po...#_Toc311422489

http://spikersystems.com/FlashNet_Po...entsInTime.htm

Last edited by Marty Wollner; 2011-Dec-12 at 09:04 AM. Reason: add link to moments in time

24. Member
Join Date
Dec 2011
Posts
86
Originally Posted by Shaula
Seems to rather go against Quantum Mechanics, which is profoundly non-deterministic. I think a lot of effort is required on your part to show that the current reasoning behind abandoning determinism is wrong. It has been quite extensively tested. The argument is, I guess, that you don't accept QM, or in fact any physics at all, because it all comes from your model. But that makes your job huge. Now not only do you have to explain the universe's appearance but its laws...
It sounds rebellious, but I don't believe in relativity, and I think quantum mechanics suffers from the huge veil of fog placed upon it because of it.

Originally Posted by Shaula
It is perhaps too simple for what it claims to do. Ignoring the "this is reality" bits any model of the universe that throws out as much established physics as yours does is going to have to be studied extensively. In order for it to be interesting enough to do that it has to make some new predictions that shed light on existing problems, or are easily testable. The way it is presented now makes it easy to ignore to be honest.
I got a couple of simple tests we could perform to help support my theories. First off, BOLT the WONDER DOG!!

http://spikersystems.com/FlashNet_Po...#_Toc279374249

25. Originally Posted by Marty Wollner
It sounds rebellious, but I don't believe in relativity
Do you really think that relativity is a question of belief?

26. Member
Join Date
Dec 2011
Posts
86
Originally Posted by Perikles
Do you really think that relativity is a question of belief?
I have demonstrated in this paper why I believe that we have made a huge mistake in assuming that the universe is continuous, and that this has led us to the theory of relativity because there is no other way of explaining what we see. When we see a train moving by, we ASSUME it moves continuously. But what if I told you that the train is not moving continuously, that in "reality" it pauses and then (somehow) instantly JUMPS into its next location.

Now, there are overlaps in timing. Now, what we observe CAN be explained in this context. Please see the above long explanations, or wait till next week; by then I'll have finished a new book named "Flat-out disproving relativity in 120 minutes".

Thanks for your interest.

27. Originally Posted by Marty Wollner
When we see a train moving by, we ASSUME it moves continuously. But what if I told you that the train is not moving continuously, that in "reality" it pauses and then (somehow) instantly JUMPS into its next location.
My first reaction would be to imagine a very uncomfortable ride for the passengers. But how does your theory improve on relativity if you still have to include a (somehow) ? Actually, I'll wait till next week.

28. Member
Join Date
Dec 2011
Posts
86
Originally Posted by Perikles
My first reaction would be to imagine a very uncomfortable ride for the passengers. But how does your theory improve on relativity if you still have to include a (somehow) ? Actually, I'll wait till next week.
I can tell you pretty fast. I didnt have to, and THAT is my point.

All I had to do was to put everything into a grid, and split time into a sequence of steps. I then let a list of numbers act as a set of pointers to locations in the grid. These list records each represent a bit of matter. Changing the numbers in the list creates motion in this virtual world by moving the bits of virtual matter into different locations.

I didnt have to program relativity into it because of the way I handled motion in the computer routines.

HERE is the secret.

By limiting all motion to no more than 1 grid location per computing cycle (the speed of light), I was able to confine all motion within this system, and I never need to define the speed of light as an arbitrary setting. So, how can things move SLOWER than the speed of light here? They need to wait a number of computing cycles and THEN move 1 grid location.

This causes the train to pause, but the passengers don't feel it, because they are in the exact same time frame as the rest of the universe... it jumps through the changes in the list of numbers all at the same time, including the atoms that make up the brains of the people on the train trying to sense it.
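The motion rule described above can be sketched in a few lines. This is my own reading of Marty's description, not his actual code, and the class and parameter names are invented for illustration:

```python
# A sketch (my reading, not Marty's code) of the motion rule above:
# each particle is a record holding an integer grid position, and
# per tick it may move at most one cell ("the speed of light").
# Slower motion means waiting `period` ticks between unit moves.

class Particle:
    def __init__(self, position, period):
        self.position = position   # integer grid cell
        self.period = period       # ticks per unit move (1 = light speed)
        self.waited = 0

    def tick(self):
        self.waited += 1
        if self.waited >= self.period:
            self.position += 1     # one cell per move, never more
            self.waited = 0

photon = Particle(0, period=1)     # moves every tick
train = Particle(0, period=3)      # moves every third tick

for _ in range(12):
    photon.tick()
    train.tick()
```

After 12 ticks the period-1 particle has crossed 12 cells while the period-3 "train" has crossed 4, i.e. one third of the maximum speed, with the pauses built into the update rule rather than felt by anything inside the grid.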

29. Originally Posted by Marty Wollner
And when I’m done answering them, and when I remind you for the 7th time now that the snippets of code I’m showing are only here to demonstrate a concept, will you finally, at last, listen to the concept?
Maybe I took your "theory of everything" claim too literally.

If so, then certainly you must be questioning why I have any business at all even posting in this site. AM I RIGHT?
No, I don't question that at all. Hopefully it will be a good opportunity for you to learn.

What I would question is your confidence in your simplistic simulation telling us anything new. But if it leads you on to do some real work in this area, that would be great.

This one paragraph captures our misunderstanding. I’m suggesting that a couple of these not-so-obvious things that we think we need to purposefully compensate for are misunderstood, and the reason why boils all the way back to the discussion that you finally dropped… the one about whether the U. is digital or not, yes or no.
Actually, I think it is irrelevant. Discrete simulations are used all the time to model continuous systems. It doesn't prove that fluids flow "digitally" or that analog circuits have steps in their operation.
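Strange's point can be illustrated with a toy example (mine, not from the thread): a discrete-time simulation of a continuous analog system, here an RC low-pass filter charging toward an input voltage. The simulation proceeds in steps, yet converges on the continuous closed-form solution as the steps shrink, so the stepping is a property of the method, not of the circuit. All component values below are arbitrary illustration choices.

```python
import math

# A discrete-time simulation of a continuous analog system: an RC
# circuit charging toward vin, stepped with forward Euler. The
# discrete result approaches the continuous closed-form solution
# V(t) = vin * (1 - exp(-t/RC)) as the step size shrinks, even
# though nothing about the real circuit is "digital".

def rc_charge(vin, rc, t, steps):
    dt = t / steps
    v = 0.0
    for _ in range(steps):
        v += (vin - v) / rc * dt   # forward-Euler step
    return v

vin, rc, t = 5.0, 1.0, 2.0
exact = vin * (1.0 - math.exp(-t / rc))
coarse = rc_charge(vin, rc, t, 10)
fine = rc_charge(vin, rc, t, 100000)
```

With 10 steps the result is off by roughly 0.14 V; with 100,000 steps it matches the continuous solution to well under a millivolt, which is the sense in which a discrete model says nothing about whether the thing modelled is discrete.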

I have yet to see anyone else take this approach. Are you aware of any? Can you point me to them, because this is something I did look for, and didn’t find anything other than speculation on the original Turing machine?
I have read of many, many such simulations for a range of things including the behaviour of a quark-gluon plasma, catalysis of chemical reactions, nanomachines, fluid flow, solar systems, galaxies, galaxy clusters, dark matter and even the evolution of the whole universe....

The misunderstanding, as I see it, is this: You want a simulation application to run in which it accounts for all of the observable data we know of. I am NOT providing that, I know I said that, (and actually believe it), but my actual demo is not in any kind of condition to show it run to completion.
So what you have is a limited simulation of unknown accuracy? And we are supposed to get excited because?

I also know I have to prove it. But how?
Well, you could start by simulating something and demonstrating you get the correct (i.e. same as the real world) results. Have you tried simulating something like the orbit of the moon around the earth? We understand this very well. Your simulation should produce a stable orbit of known period. If you find the moon flying away or crashing into the earth, you know your model is not accurate (it does not converge).

If that works, then you could try something subtler and model the orbit of Mercury around the sun. Does your model produce the Newtonian (i.e. wrong) result, or does it produce the result predicted by relativity?

Then try something really tricky, like a black hole: is that going to work with your crude model of Newtonian gravity? Will you get the predicted results for rotating/non-rotating, charged/uncharged black holes?
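The first sanity check proposed above can be sketched in a few lines (my own illustration, not anything from the thread): a Newtonian Earth-Moon run with the Earth held fixed at the origin and a velocity-Verlet (leapfrog) integrator. A sound model keeps the orbital radius essentially constant over a full period; a broken one lets the Moon drift away or spiral in.

```python
import math

# Sanity check: Newtonian two-body run, Earth fixed at the origin,
# Moon started on a circular orbit, velocity-Verlet integration.
# Over one full ~27.4-day period the orbital radius should stay
# essentially constant if the model and integrator are sound.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # Earth mass, kg
R = 3.844e8            # mean Earth-Moon distance, m

def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -G * M * x / r3, -G * M * y / r3

def orbit(steps, dt):
    x, y = R, 0.0
    vx, vy = 0.0, math.sqrt(G * M / R)   # circular-orbit speed
    ax, ay = accel(x, y)
    radii = []
    for _ in range(steps):
        vx += 0.5 * ax * dt; vy += 0.5 * ay * dt
        x += vx * dt; y += vy * dt
        ax, ay = accel(x, y)
        vx += 0.5 * ax * dt; vy += 0.5 * ay * dt
        radii.append(math.hypot(x, y))
    return radii

period = 2 * math.pi * math.sqrt(R ** 3 / (G * M))   # ~27.4 days
radii = orbit(steps=10000, dt=period / 10000)
drift = max(abs(r - R) for r in radii) / R           # worst radius error
```

With these settings the worst-case radius drift over a full orbit is a tiny fraction of a percent; a model that produced large drift here would fail Strange's test before ever being trusted on Mercury or black holes.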

How do I know what these “components” will do when there are 10 gazillion of ‘em?

I CAN TELL YOU THAT EXACTLY, EVERY STEP OF THE WAY; every movement of matter is totally accounted for. Tell me 2 things, WordSize and Universe tick, and I’ll show you what’s there.
You need more than that. You need rules to represent gravity. You need rules to represent electromagnetic force. That is just to model the coarse-grained interactions of solid objects. If you want to model everything then you are going to have to also model the weak and strong forces. You will also need to model quantum effects such as probability functions, entanglement, etc. Unless you expect these to be emergent properties of your simulation. Now that would be impressive.

30. Member
Join Date
Dec 2011
Posts
86
Originally Posted by Strange
Unless you expect these to be emergent properties of your simulation. Now that would be impressive.
That is exactly what I expect to happen.

I have an entirely different reason to believe that this program is extremely simple, but it's difficult to explain right here, and impossible to prove... this gets back to the question of who might have written the code... if it's not very complex... if it can be compressed into, say, 50 "lines of pure code", then I can see it occurring all by itself as a sort of mathematical wave that our universe got defined as. If it's really, really complex, I'm thinking that dramatically cuts down on this possibility and leaves us with an unexplained "power of God", and I don't like just accepting it on faith. I want to see it really working.

This is explained right here:

http://spikersystems.com/FlashNet_Po...#_Toc305761043

Well alright, I want to thank everyone again, especially Strange for all of your time and kind words of advice.

"I'll be BAAACK"