# Thread: Finite Theory of the Universe, Dark Matter Disproof and Faster-Than-Light Speed

1. Established Member
Join Date
Jun 2011
Posts
227
Originally Posted by Marakai
Wow. That statement is so awesome, it should become the motto of every scientific institute on the planet...

"Trial and Error - a perfectly valid scientific method."

They should put this on the headers of the Nobel awards!

</facepalm>
"You can solve many problems the same way many great discoveries have been made - by trial and error or by using gradual, systematic, steady, analytical, and judicial reasoning and logic. "

http://scientificmethod.com/i_5.htm

2. Established Member
Join Date
Jun 2011
Posts
227
Originally Posted by caveman1917
Isn't that what i just said? In the limit (ie close to the center) there is no "time contraction", yet the force doesn't behave the same as newton (which it should without time contraction). Ergo you did indeed change the force (in the wrong direction).
I deleted my last post because I forgot that a newton is a kg*m/s^2, so the force does indeed change. In my understanding a newton is not the most basic measurement unit and can therefore be disregarded.

So no, the force of FT as x -> 0 is completely different from Newton's. But it shows how FT matches the observations.

Not that it matters, the equation speaks for itself, irrespective of whether you want to call it "time contraction" or whatever.
When you get closer to the center the time "dilates".

3. Originally Posted by philippeb8
"You can solve many problems the same way many great discoveries have been made - by trial and error or by using gradual, systematic, steady, analytical, and judicial reasoning and logic. "

http://scientificmethod.com/i_5.htm
Quoting out of context and/or consciously misunderstanding the gist of a statement isn't going to make many friends here, I would imagine.

Note that NOWHERE does it state even in that sentence that "trial and error is a scientific method". Not in the sense that you seem to use it: pick fudge factors out of thin air, write bad computer code that throws out some numbers, adjust fudge factor when numbers are pointed out by others to be utterly wrong.

4. Established Member
Join Date
Jun 2011
Posts
227
Originally Posted by Marakai
Quoting out of context and/or consciously misunderstanding the gist of a statement isn't going to make many friends here, I would imagine.

Note that NOWHERE does it state even in that sentence that "trial and error is a scientific method".
Look at the URL.

Not in the sense that you seem to use it: pick fudge factors out of thin air, write bad computer code that throws out some numbers, adjust fudge factor when numbers are pointed out by others to be utterly wrong.
Why do you say the code is bad? You complained of some spaghetti C code you had before; this one is written in C++ and couldn't be simpler.

5. Originally Posted by philippeb8
Look at the URL.
I did. You *did* notice that that page is mostly a summary of "pithy" sayings about science (and the scientific method)? It's not a page of definitions, nor a scientific tractate.

Why do you say the code is bad? You complained of some spaghetti C code you had before; this one is written in C++ and couldn't be simpler.
This may be the third or fourth time I'm pointing it out: your code lacks test cases and calibration against real-world data and observations. I'll leave aside criticisms of having your formulas embedded instead of putting them into separate classes, and of not using verified math libraries/classes for scientific calculations - others have already pointed out the potential issues with precision.

6. Established Member
Join Date
Jun 2011
Posts
227
Originally Posted by Marakai
I did. You *did* notice that that page is mostly a summary of "pithy" sayings about science (and the scientific method)? It's not a page of definitions, nor a scientific tractate.
"The scientific method can be regarded as containing an element of trial and error in its formulation and testing of hypotheses"

http://en.wikipedia.org/wiki/Trial_and_error#Examples

This may be the third or fourth time I'm pointing it out: Your code lacks test cases, calibration against real world data and observations.
I already compared the results with observations in post #335. This is how it is tested; there is nothing more to do.

I'll leave criticisms of having your formulas embedded instead of putting them into separate classes and not using verified math libraries/classes for scientific calculations
The equations are too simple to be put into a class.

- others have already pointed out the potential issues with precision.
All I would have to do is replace the "typedef long double" with a class handling relative errors.

7. Originally Posted by philippeb8
All I would have to do is replace the "typedef long double" with a class handling relative errors.
This right there is (IMO) part of your problem: what I would at best call a lackadaisical attitude towards the evidence and presentation of what you are trying to sell, which is nothing less than an overturning of a century of established scientific theory.

"All I would have to do"? It's that simple? Really? Why didn't you do it from the start?

I'm also not sure what the simplicity of the equations has to do with whether proper coding practices would see them in a separate class: it would improve error handling, testing (because that's where you'd put unit tests, for instance), encapsulation and all those wonderful things that are intended to minimise "garbage-in, garbage-out", as somebody else has already mentioned.

Just because your code doesn't make the front page of DailyWTF or CodingHorror doesn't mean it's well-designed and clean code, *especially* in regard to what you are trying to accomplish with it.

8. Originally Posted by philippeb8
"The scientific method can be regarded as containing an element of trial and error in its formulation and testing of hypotheses"

http://en.wikipedia.org/wiki/Trial_and_error#Examples
You have not presented a falsifiable hypothesis. When are you going to do so?

9. Established Member
Join Date
Jun 2011
Posts
227
Originally Posted by Marakai
This right there is (IMO) part of your problem: what I would at best call a lackadaisical attitude towards the evidence and presentation of what you are trying to sell, which is nothing less than an overturning of a century of established scientific theory.
What you see as an overturning of a century of established scientific theory, I see as a wake-up call before it's too late. I couldn't care less about prestige; I just want technology to move forward, and fast.

"All I would have to do"? It's that simple? Really? Why didn't you do it from the start?
It's extremely simple, I did that in chemistry. But that is not necessary for the moment.

I'm also not sure what the simplicity of the equations has to do with whether proper coding practices would see them in a separate class: it would improve error handling, testing (because that's where you'd put unit tests, for instance), encapsulation and all those wonderful things that are intended to minimise "garbage-in, garbage-out" as somebody else has already mentioned.
Adding classes for nothing is what you call "lasagna code".

Just because your code doesn't make the front page of DailyWTF or CodingHorror doesn't mean it's well-designed and clean code, *especially* in regard to what you are trying to accomplish with it.
My code couldn't be better implemented. Let's move.

10. Established Member
Join Date
Jun 2011
Posts
227
Originally Posted by Van Rijn
You have not presented a falsifiable hypothesis. When are you going to do so?
It was falsifiable in the beginning but now all tests have proven it to be working.

11. Established Member
Join Date
Aug 2008
Location
Wellington, New Zealand
Posts
386
Originally Posted by philippeb8
My code couldn't be better implemented. Let's move.
That is a matter of opinion.
What cannot be argued with are the basics of computer science, and especially of simulations: Garbage In, Garbage Out.
You are using garbage as the basis of your "couldn't be better implemented" simulation. Thus whatever you get out is garbage.

12. Established Member
Join Date
Aug 2008
Location
Wellington, New Zealand
Posts
386
Originally Posted by philippeb8
It was falsifiable in the beginning but now all tests have proven it to be working.
You are wrong. So far it has failed every test unless you fake it by changing the fudge factor h. Given that you cannot even give a coherent definition of h, this is not surprising.

13. Order of Kilopi
Join Date
Nov 2002
Posts
6,235
Originally Posted by philippeb8
It was falsifiable in the beginning but now all tests have proven it to be working.
I beg to differ. You have not provided the information I have requested below:

I'll start by pointing out that you didn't even come close to providing what I asked for. You provided a mish-mash of some values, some equations, and some final values. To help you out, I'll spell it out specifically. What I want for each question is the following:
A. What equation are you using for that question, and the definition of each of the variables used in each equation.
B. What are the initial values for each variable along with the source of that value.
C. Provide each of the calculations, starting with the initial values.
D. Provide the final value for each of the equations.

Note that each question below indicates what is missing from the answers, using the definitions of the letters above. This is the second time I've requested the information.

Originally Posted by philippeb8
Originally Posted by Tensor
1. What is the new value of h?
h is still either 1.35e27 kg/m or 6e26 kg/m for the solar system.
You are missing A, B, and C. Can you please provide the equation you are using to get the final value, the definition of each of the variables used in each equation, the initial values for each variable along with the source of that value, and the calculations, starting with the initial values?

Originally Posted by philippeb8
Originally Posted by Tensor
2. What are the precession values predicted by your idea for the following planets:
They don't change. From post #72:

Again, you are missing A, B, and C. Can you please provide the equation you are using to get the final value, the definition of each of the variables used in each equation, the initial values for each variable along with the source of that value, and the calculations, starting with the initial values?

Originally Posted by philippeb8
Originally Posted by Tensor
3. What is the value predicted for your idea for the Viking Relativity Experiment.
The time it takes for a light ray to travel between 2 planets is given by:
t = (m*log(|x-i|) + n*log(|x-j|) + hx) / (m/|i| + n/|j| + h) * 1/c

Where:
• the position of the observer is: 0 m
• m is mass of planet 1
• n is mass of planet 2
• i is position of center of planet 1
• j is position of center of planet 2
• h is the scaling factor for the solar system, either 1.3e27 kg/m or 6e26 kg/m

Since the article is extremely imprecise I can't plug in the variables and calculate the equivalent of 2t.
You are missing B, C, and D.

Originally Posted by philippeb8
Note: I apologize this disregards the Sun. I need to plug in the Sun. Is that question really necessary?
It's very necessary. In post #290, concerning the Viking Relativity experiment, you state "Thank you, you just confirmed that the Sun's contribution is relevant. FT was already predicting that as you saw."
Your equations ARE NOT using the sun, so your claim that FT was already predicting the value is invalid. You have not shown, in question 1, where that value of h comes from.

Originally Posted by philippeb8
Originally Posted by Tensor
4. Provide the Galactic Rotation Curves for Andromeda, the Milky Way, and NGC2742.
If the equation is:
y = √(Gm/x) * (m/r + h) / (m/x + h)

For the Milky Way if the visible mass is only 30% (3.5e41 kg) then to have approximately the same curve we need to divide h by 3 or h = 8.3e21 kg/m.

But truly we have an equation with 2 unknown variables (h & m) so ideally we could make the observations fit best into the equation; just like a regression.
You are missing part of A (definition of variables), B, C and D. Can you please provide the definition of each of the variables used in each equation, the initial values for each variable along with the source of that value, the calculations, starting with the initial values, and the final value for each of the equations?

Originally Posted by philippeb8
Originally Posted by Tensor
5. What is the value predicted by your idea for the GPS gravitational time dilation? Again, provide the equations, the values used in the equations and the source for those values.
From post #191:

y = (m/|x-i| + n/|x-j| + h) / (m/|i| + n/|j| + h)

Where:
• m = 5.9736e24 kg (mass of the Earth)
• n = 1.98892e30 kg (mass of the Sun)
• i = -6371000 m (position of center of the Earth)
• j = 1.49597870691e11 m (position of the Sun)
• h = 1.3450632e27 kg/m (scaling factor of the Milky Way)
You are missing part of B (the source for the initial value), C and D. Can you please provide the source of the initial values, the calculations, starting with the initial values, and the final value for each of the equations?

Finally, you have not provided A, B, and C for the Milky Way h value. Can you please provide the equation and the definition of each of the variables used in each equation, the initial values for each variable along with the source of that value, and the calculations, starting with the initial values?

14. Established Member
Join Date
Jul 2006
Posts
1,260
Originally Posted by philippeb8
It was falsifiable in the beginning but now all tests have proven it to be working.
I'm not sure you understand what the term falsifiable means in scientific theory. It means that a hypothesis is falsifiable if there are tests or observations which can possibly disprove it. If there is no way to do that, the hypothesis is called unfalsifiable, which stops it dead in its tracks. Posters have been asking you to explain how your idea is falsifiable, or in other words, what tests or observations can you suggest to meet that requirement (it has already been shown that your "simulator" does not).

In short, you want your idea to be falsifiable -- that's a good thing. That means it can be tested for viability.

15. Originally Posted by philippeb8
It's extremely simple, I did that in chemistry. But that is not necessary for the moment.
Wha...? Sorry, you're babbling. OK, I'm out, this is just getting silly.

My code couldn't be better implemented. Let's move.
You're either implying that it's the best code there is - or that YOU are incapable of writing better code. The former would be hubris, the latter an admission of failure.

Doesn't matter anymore. I'll watch this tragi-comedy from the sidelines now.

Laters...

16. Originally Posted by philippeb8
It was falsifiable in the beginning but now all tests have proven it to be working.
Luckmeister covered this nicely. All scientific hypotheses and theories are falsifiable. They never stop being falsifiable. There must be an argument that is testable through objective evidence, where a failure to meet the test would indicate the argument is false.

You have not presented a falsifiable hypothesis. You have presented some assertions, but when others have pointed out problems with them, you have been unconcerned. I've asked you what tests you would accept, where you'd agree that a failure would indicate the argument was wrong, but you have not provided any solid criteria at all. Rather, from the quote above and your earlier statements, it seems likely that you are not willing to accept any test.

Until you specify a solidly testable argument, it isn't science. Also, this discussion seems pointless if you will not accept any tests for your assertions.

17. Originally Posted by philippeb8
You're an engineer? Then you can understand that the square root algorithm is quite complex:
http://stackoverflow.com/questions/4...length-numbers

"The laws of nature are defined by additions and divisions, not square roots!" - Phil

Dark matter, dark energy, MOND and the cosmological constant are just patches; "diff" files you apply to a bug in your code.

Really? What's the length of the hypotenuse of a right triangle with two sides of length 1? If you know the area of a circle and nothing else, how do you calculate the radius?

EDIT:

What is "The laws of nature are defined by additions and divisions, not square roots"? Is that a postulate you made up? Upon what do you base that? I use square roots every day when doing things like calculating the reactance of a load. Are you saying that I am doing it wrong? I use imaginary numbers to track amplitude and phase of electrical waveforms. i is the square root of -1. Are you saying I am wrong to do that? Did you know that none of our telecommunications equipment would work without i?

You do realize that a square root is the quotient of a division, don't you? When we find a square root all we are doing is looking for the number that will be both quotient and divisor of a dividend. And calculating a square root is not complex. It is a long calculation, but it's just arithmetic. The more accurate you want the answer, the more calculations you must do. Is this why you are saying GR and SR are wrong? Because they have square roots? You do realize that the root function is the inverse of the power function, or that division is the inverse of multiplication? How can you seriously use that as your reasoning to invalidate centuries of thought?

If DM and the cosmological constant are just 'patches', then what in the world is h? Do you understand that just because we don't know what something is it doesn't mean that it's not there? We didn't know what things like air and light were made from for thousands of years. That didn't make either one of them imaginary.

You didn't answer anything I asked. And now you also need to give me a real answer as to why the diagonal of a square with sides of 1 is not the square root of 2.
Last edited by primummobile; 2012-Aug-17 at 02:50 PM.

18. Member
Join Date
Oct 2004
Posts
85
Originally Posted by philippeb8
My code couldn't be better implemented. Let's move.
I'm not going to start a code review here, but your code wouldn't be acceptable in just about every place I've worked (which covers over 30 years software development ranging from embedded to real-time systems, including simulators, and nearly all involving maths in some way). Generally, I've found that if anyone thinks their code is "perfect", it's usually in fact far from it...

Originally Posted by philippeb8
others have already pointed out the potential issues with precision.
All I would have to do is replace the "typedef long double" with a class handling relative errors.
This kind of statement tells me you really don't understand how computers handle numerical values, and how much precision they allow. All floating point calculations are subject to error related to both the limits of precision and to the scales (i.e. relative exponents) of the variables involved. You can't just fudge away these problems easily, even with extended precision libraries (and believe me, there's a lot more involved than just "knocking up a class to handle relative errors").

Even one of your magic numbers - 1.3450632e27 - is actually held as 1345063199999999968047792128 when you check. You've already lost precision before you've even used it to calculate anything!

19. Established Member
Join Date
Feb 2012
Posts
451
Originally Posted by philippeb8
I apologize, but I am not considering the mass distribution of the galaxy. An h of 6e27 kg/m can most likely be explained by the 80 million stars within a radius of 2000 light years from the solar system:
You keep throwing out guesses for where the mass is, but never show that they are in fact enough.

Let us assume 80 million solar masses at a distance of 1000 light years. (The actual average distance is closer to 1600 light years.)

m = 8e7 * 2e30 = 1.6e38 kg
r = 9.5e18 m
h = m/r = 1.7e19 kg/m

This is far short of the required h = 1e27, and the largest effect you have identified is still due to the Virgo Super Cluster's h = 8e22, of which this is a part.

Unless you can identify where this extra mass is, this still requires some form of dark matter having 12500 parts per one part observable matter.

I will still prefer GR's 5 parts dark matter to 1 part observable matter. And the fact that GR does this without requiring a fudge factor beyond the Newtonian G and the c of Maxwell's equations is a plus.

For the record, a 5:1 ratio is supported by the observed gravitational lensing, unlike a 12500:1 ratio.

Also, you have not answered my question in post #343

20. Established Member
Join Date
Feb 2012
Posts
451
Originally Posted by Van Rijn
You have not presented a falsifiable hypothesis. When are you going to do so?
His theory rests on the falsifiable statement that his h factor can be calculated from the mass of the universe, without the need for Dark Matter.

So far, every suggestion he has made for where this mass exists has been shown to be inadequate, with the most significant component short by a factor of 12500.

Also, the velocities of the objects should appear as a change in his h locally that should appear in Lunar Laser Ranging experiments, and the variations of h with space should appear in pulsar timing data. These have been ruled out to the precision we would expect from his theory. (3.6.3)

The confrontation between GR and Experiment

Thus his theory is falsifiable, and is falsified.

21. Originally Posted by utesfan100
Thus his theory is falsifiable, and is falsified.
As I noted, philippeb8 has not seemed to be concerned about the problems shown with his assertions. I'm asking what arguments he's willing to have tested, and what specific tests he would accept where failure to meet the test criteria would indicate the argument is falsified. He seems extremely reluctant to give a solid answer to the question, which is very telling.

But until he does, what is the point of this thread? I think the problems with his assertions are obvious to other readers of the thread, but until he's willing to be tested, this thread isn't going anywhere.

22. Member
Join Date
Oct 2004
Posts
85
Originally Posted by primummobile
And calculating a square root is not complex. It is a long calculation, but it's just arithmetic. The more accurate you want the answer, the more calculations you must do...
If you want to look at some extremely interesting computer square-rootery, then google "0x5f3759df", which is the "magic inverse square root number" usually given in hexadecimal form, and equal to 1,597,463,007 in decimal. Square roots accurate to a fraction of a percent, blindingly fast, in a few lines of simple code, and yet its origins are a mystery (unless of course philippeb8 knows the answer).

OK - wandering off topic a bit, but it shows that even in the apparently logical world of computing, there are weird and wonderful things to explore.

23. Established Member
Join Date
Feb 2012
Posts
451
Originally Posted by Van Rijn
But until he does, what is the point of this thread? I think the problems with his assertions are obvious to other readers of the thread, but until he's willing to be tested, this thread isn't going anywhere.
He seems to be taking my evidence that his h is not consistent with the observed mass of the universe as a serious threat to his theory.

24. Established Member
Join Date
Jun 2011
Posts
227
Originally Posted by utesfan100
He seems to be taking my evidence that his h is not consistent with the observed mass of the universe as a serious threat to his theory.
You're right. I've been testing it on my side as well:

#include <iostream>
#include <ctime>
#include <cstdlib>

using namespace std;

// Uniform random double in [lowest, highest).
double random(double lowest, double highest)
{
    return lowest + (highest - lowest) * (rand() / (RAND_MAX + 1.0));
}

int main()
{
    srand((unsigned) time(0));

    double h = 0;

    // Sum m/r over 80 million one-solar-mass stars at random
    // distances between 4 and 2000 light years.
    for (long long i = 0; i < 80000000; ++i)
    {
        double mass = 1.98892e30;                             // kg (one solar mass)
        double distance = 9.4605284e15 * random(4.0, 2000.0); // metres per light year * ly

        h += mass / distance;
    }

    cout << h << endl;
}

And I get:

5.23399e+19

I've been analyzing a lot of possible answers and I think my initial guess on the mass of the Milky Way of 1.5e47 kg (post #344) was more likely to be right. The galactic rotation curve needs to take into account the spin of the kernel.

25. Established Member
Join Date
Jun 2011
Posts
227
Originally Posted by molesworth
If you want to look at some extremely interesting computer square-rootery, then google "0x5f3759df", which is the "magic inverse square root number" usually given in hexadecimal form, and equal to 1,597,463,007 in decimal. Square roots accurate to a fraction of a percent, blindingly fast, in a few lines of simple code, and yet its origins are a mystery (unless of course philippeb8 knows the answer).

OK - wandering off topic a bit, but it shows that even in the apparently logical world of computing, there are weird and wonderful things to explore.
Interesting. Thanks.

"There are two kinds of people in the world - those with loaded guns and those who dig" - Clint

26. Established Member
Join Date
Jun 2011
Posts
227
Originally Posted by Van Rijn
As I noted, philippeb8 has not seemed to be concerned about the problems shown with his assertions. I'm asking what arguments he's willing to have tested, and what specific tests he would accept where failure to meet the test criteria would indicate the argument is falsified. He seems extremely reluctant to give a solid answer to the question, which is very telling.

But until he does, what is the point of this thread? I think the problems with his assertions are obvious to other readers of the thread, but until he's willing to be tested, this thread isn't going anywhere.
You'll be happy when I'm able to explain everything; then you'll see that c^2/G is not acceptable. Ask any mathematician.

27. Established Member
Join Date
Jun 2011
Posts
227
Originally Posted by utesfan100
Are you defining precession as the amount of time delay per cycle in the orbit of a planet in your program?
The precession is the difference in the angle of the perihelion from one cycle to the next.

28. Established Member
Join Date
Feb 2012
Posts
451
Originally Posted by philippeb8
I've been analyzing a lot of possible answers and I think my initial guess on the mass of the Milky Way of 1.5e47 kg (post #344) was more likely to be right. The galactic rotation curve needs to take into account the spin of the kernel.
I am going to stick with 2-3e42 kg, 70% of which is dark matter.

It is up to you to show that this is off by a factor of 50,000.

Also, the kernel is surrounded by a much larger spherical region where the velocity drops off according to the classical 1/r^2 force; no additional effects are needed near the middle of the Galaxy.

Which is interesting because this is where your h should be varying the most and we should see the most impact from your dynamics.
Last edited by utesfan100; 2012-Aug-18 at 02:53 AM. Reason: Add kernel comment

29. Established Member
Join Date
Feb 2012
Posts
451
Originally Posted by philippeb8
You'll be happy when I'm able to explain everything; then you'll see that c^2/G is not acceptable. Ask any mathematician.
Actually, using dimensional analysis, and knowing that we are dealing with gravity (introducing G) in the context of relativity (introducing c), this is exactly what I would expect to appear from the Buckingham π theorem.

http://en.wikipedia.org/wiki/Buckingham_pi_theorem

30. Originally Posted by philippeb8
You'll be happy when I'm able to explain everything
I will "be happy" if and when you present a falsifiable hypothesis. Without that, you have nothing of interest here.