# Thread: Hubble expansion versus quantum effect

1. Member
Join Date
Nov 2005
Posts
80

## Hubble expansion versus quantum effect

Hi, it's me again, hoping to establish a quantum mechanical explanation for the red shift rather than recession. My last argument failed the units test, as you all correctly pointed out (thread: Apparent expansion of space alternative). So let me run this past you again, please.
This first bit is standard physics without controversy. I just want to get the Hubble recession rate into SI units.

Hubble rate of recession 73 km s^-1 Mpc^-1. This breaks down to:

73000m s^-1/ 30.857 x 10^21 m = 2.3675 x 10^-18s

The distance to the edge of the observable universe without invoking relativity would be the speed of light divided by this value.

c/(2.3675 x 10^-18) = 1.266 x 10^26 m = the edge of the observable universe.
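These two conversions are easy to check with a short script (a sketch only; the 73 km s^-1 Mpc^-1 figure and the metres-per-megaparsec factor are the ones used above):

```python
# Convert the Hubble rate from km s^-1 Mpc^-1 to SI units,
# then form the "Hubble distance" c/H0 as in the post.
H0_kms_Mpc = 73.0        # km s^-1 Mpc^-1 (value used in the post)
Mpc_in_m = 30.857e21     # metres per megaparsec
c = 2.998e8              # speed of light, m s^-1

H0_SI = H0_kms_Mpc * 1000.0 / Mpc_in_m   # units: s^-1
hubble_distance = c / H0_SI              # metres

print(H0_SI)             # roughly 2.37e-18 s^-1
print(hubble_distance)   # roughly 1.27e26 m
```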

Now back to speculation:
I think we had established that the Heisenberg Uncertainty Principle, when substituting in the momentum of a photon, could be rewritten as Delta E . Delta x > hc/(4Pi), where E is the energy and x is the distance, bearing in mind we are still really talking about momentum and distance.

Now, we are going to say that there is no uncertainty in the energy measurement and that all the uncertainty is in the distance measurement. Setting Delta E to unity we now have:

Delta x > hc/(4Pi . Delta E)
So Delta x > 1.58 x 10^-26 m

In fact, in any one interaction (say with a virtual electron positron pair) the uncertainty in the distance measurement can be anything between 0 and 1.58 x 10^-26 m but no greater. So we take the average by dividing by 2: the average uncertainty in the distance measurement, extrapolated across the universe, is 7.9 x 10^-27 m.
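For what it's worth, the numbers in this step reproduce as stated (a sketch that simply takes the disputed Delta E = 1 J at face value):

```python
import math

h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m s^-1

dE = 1.0        # joules: the post's (disputed) choice
dx_bound = h * c / (4 * math.pi * dE)   # the post's Delta x bound, metres
dx_average = dx_bound / 2               # the post's "average uncertainty"

print(dx_bound)     # roughly 1.58e-26 m
print(dx_average)   # roughly 7.9e-27 m
```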

Now for a simple, but not scientifically exact, analogy. If I have a bag of coins and lose 1/100th of them for every metre I travel, how far will I go before losing them all? The naïve answer is the inverse: 100 m.
So now the argument runs: if I lose this tiny fraction of information, 7.9 x 10^-27, for every metre of space, how far do I go before I lose the whole? The answer is the inverse, 1.265 x 10^26 m: the edge of the observable universe.
OR: a graph of 1 - 7.9 x 10^-27 x cuts the x axis at 1.265 x 10^26 m.

So the simple linear red shift in our own locality of the universe is the ratio of distance travelled compared to the distance to the edge of the observable universe:

E.g. what is the red shift at 10^25 m?

10^25 m/1.26 x 10^26 m = 0.079

However, we know the coin analogy is only correct for a short distance because it should really be an exponential loss and the same applies to the universe.
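The gap between the naïve linear answer and a true exponential loss can be made concrete with a short sketch of the coin analogy (nothing cosmological is assumed here):

```python
loss_per_metre = 0.01   # lose 1/100 of the remaining coins per metre

# Linear (naive) picture: all coins gone after 1/0.01 = 100 m.
linear_distance = 1.0 / loss_per_metre

# Exponential picture: each metre keeps 99% of what remains,
# so after 100 m a substantial fraction is still left.
remaining_after_100m = (1.0 - loss_per_metre) ** 100

print(linear_distance)        # 100.0
print(remaining_after_100m)   # roughly 0.366
```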

So, the function that covers all distances is:

Z = e^(x/(1.26 x 10^26)) - 1, where x is the distance travelled in metres; this gives the slightly higher red shift observed at very large distances.
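As a numerical illustration (a sketch using the 1.266 x 10^26 m figure derived above), the exponential form tracks the linear ratio at small distances but runs higher at large ones:

```python
import math

L = 1.266e26   # metres, the "edge" distance derived above

def z_linear(x):
    # the simple ratio used earlier in the post
    return x / L

def z_exponential(x):
    # the exponential form proposed here
    return math.exp(x / L) - 1.0

x_near = 1e25    # the post's worked example
x_far = 2e26

print(z_linear(x_near), z_exponential(x_near))   # ~0.079 vs ~0.082
print(z_linear(x_far), z_exponential(x_far))     # ~1.58 vs ~3.85
```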

Are these scientifically sound steps or am I just manipulating numbers to get the answer I want? When you have put yourself in a hole it’s a bit difficult to see out of it. Your advice is appreciated.
Last edited by Roy Caswell; 2008-Sep-25 at 06:34 PM.

2. Order of Kilopi
Join Date
Sep 2004
Posts
5,475
It is dp.dx > h, not dE.dx > h

Energy and position commute, so there is no uncertainty.

Second, you must justify making dE=1

Third, you must justify a linear loss of information.

Basically, the whole thing needs to be rethought from the start, since the first bit is wrong

3. Established Member
Join Date
Oct 2003
Posts
1,527

Originally Posted by Roy
73000m s^-1/ 30.857 x 10^21 m = 2.3675 x 10^-18s
Hey, careful with those units! This should be:

73000m s^-1/ 30.857 x 10^21 m = 2.3675 x 10^-18 s^(-1)

You later treat the result with correct units, so this is just wrongly written here.

Originally Posted by Roy
I think we had established that the Heisenberg Uncertainty Principle, when substituting in the momentum of a photon, could be rewritten as Delta E . Delta x > hc/(4Pi), where E is the energy and x is the distance, bearing in mind we are still really talking about momentum and distance.
Well, actually I think this is wrong. In the first thread you did this:

Originally Posted by Roy
Dp.Dx > h/(4Pi) (Original Heisenberg Uncertainty Principle)

D (h/lamda) . Dx > h/(4Pi) for the momentum of a photon
Now, let's write down the uncertainty principle:

Dp Dx > h/(4Pi)

=> sqrt( <p^2> - <p>^2 ) * sqrt( <x^2> - <x>^2 ) > h/(4Pi)

You have suggested that Dp = D (h/lamda), but as Dp = sqrt( <p^2> - <p>^2 ), you really should say that:

Dp = sqrt( <(h/lamda)^2> - <h/lamda>^2 )

In other words, I don't think Dp is something where you can just substitute p or D with something else.

I won't go further than that for now, as this seems to be quite a critical point for the rest of your derivation.

4. Member
Join Date
Nov 2005
Posts
80
Originally Posted by Ari Jokimaki

In other words, I don't think Dp is something where you can just substitute p or D with something else.

I won't go further than that for now, as this seems to be quite a critical point for the rest of your derivation.
The D, normally the capital Delta, is just short for "the uncertainty in". So there is no substitution. With the p we are just putting in the correct relationship when applied to a photon, h/wavelength. Traditionally it's applied to electrons, so there is no need to do this.

5. Member
Join Date
Nov 2005
Posts
80
Originally Posted by korjik
It is dp.dx > h, not dE.dx > h

Energy and position commute, so there is no uncertainty.

Second, you must justify making dE=1

Third, you must justify a linear loss of information.
dEdx>hc/(4Pi)

Yes making dE=1 may be suspicious. I was just thinking 1 joule is exactly 1 joule with no uncertainty.

Perhaps the uniform sea of virtual electron-positron pairs in the vacuum may be responsible and continually maintain the vacuum temperature at 2.7K. Mind you if it is a quantum effect it may be beyond visualization like the two slit experiment and others.

6. Banned
Join Date
Sep 2008
Posts
178
I don't know about anyone else, but what are you on about, Roy?
Could you describe your idea simply and briefly, without mathematical formulae, in a single post, as an introduction for people like me?
That is, why does the uncertainty principle give a redshift, and why is that redshift distance dependent?

7. Order of Kilopi
Join Date
May 2004
Posts
4,139
Originally Posted by Roy Caswell
dEdx>hc/(4Pi)

Yes making dE=1 may be suspicious. I was just thinking 1 joule is exactly 1 joule with no uncertainty.

Perhaps the uniform sea of virtual electron-positron pairs in the vacuum may be responsible and continually maintain the vacuum temperature at 2.7K. Mind you if it is a quantum effect it may be beyond visualization like the two slit experiment and others.
Another way to look at this is to make dE=375.65325 Joules. Why is this value less valid than the value that you selected?

8. Order of Kilopi
Join Date
Sep 2004
Posts
5,475
Originally Posted by Roy Caswell
dEdx>hc/(4Pi)

Yes making dE=1 may be suspicious. I was just thinking 1 joule is exactly 1 joule with no uncertainty.

Perhaps the uniform sea of virtual electron-positron pairs in the vacuum may be responsible and continually maintain the vacuum temperature at 2.7K. Mind you if it is a quantum effect it may be beyond visualization like the two slit experiment and others.
E != pc

E= sqrt(p^2c^2+m^2c^4)

If a joule is a joule, then dE = 0 by your thinking.

There is no continually maintained 'vacuum temperature'. The 2.7 K you mention is the temperature of the blackbody radiation curve of the CMB.

9. Established Member
Join Date
Oct 2003
Posts
1,527
Originally Posted by Roy
The D, normally the capital Delta, is just short for "the uncertainty in". So there is no substitution. With the p we are just putting in the correct relationship when applied to a photon, h/wavelength. Traditionally it's applied to electrons, so there is no need to do this.
Yes, I think you are correct. My thoughts got lost there.

Continuing...

Originally Posted by Roy
So Delta x > 1.58 x 10^-26 m

In fact, in any one interaction (say with a virtual electron positron pair) the uncertainty in the distance measurement can be anything between 0 and 1.58 x 10^-26 m but no greater.
Doesn't the equation explicitly say that Delta x is greater than that?

Like others, I also wonder about setting Delta E to unity. Why does one joule equate to no uncertainty in the energy measurement?

Originally Posted by Roy
So, the function that covers all distances is:

Z = e^(x/(1.26 x 10^26)) - 1, where x is the distance travelled in metres; this gives the slightly higher red shift observed at very large distances.
Where does this last equation come from?

10. Member
Join Date
Nov 2005
Posts
80
Originally Posted by Noble Ox
I don't know about anyone else, but what are you on about, Roy?
Could you describe your idea simply and briefly, without mathematical formulae, in a single post, as an introduction for people like me?
That is, why does the uncertainty principle give a redshift, and why is that redshift distance dependent?
I'll try. The red shift may not be due to recession, therefore no "big bang". All EM photons may be subject to a quantum mechanical effect in the vacuum of space that systematically robs them of their energy (this gives the red shift) and contributes to the background radiation. This MAY be due to interactions with virtual electron positron pairs in the vacuum of space. The vacuum is really a uniform "sea" of these particles. In any interaction there is the probability of losing a tiny amount of energy, between zero and an amount allowed for under the Heisenberg uncertainty principle. The amount lost depends on the distance travelled and is therefore distance dependent. Since it is an energy loss it would really be an exponential loss, but because of the large distances involved it looks linear in our part of the universe. This leads to the linear Hubble relationship, which is now found to be not quite working.

11. Member
Join Date
Nov 2005
Posts
80
Originally Posted by Fortis
Another way to look at this is to make dE=375.65325 Joules. Why is this value less valid than the value that you selected?
Both you and Korjik may have me on this one. Having looked at various examples of the application of the principle, it was just my way of saying that all the uncertainty manifests itself in a loss of information on distance/wavelength. It may be a flaw or a gaping hole. Am I just "making it fit"?

12. Member
Join Date
Nov 2005
Posts
80
Originally Posted by korjik
There is no continually maintained 'vacuum temperature'. The 2.7 K you mention is the temperature of the blackbody radiation curve of the CMB.
This is true in the "big bang" model. However, in this model it is the energy lost that gave the red shift. It is continually reabsorbed by galaxies and maintained by this process. Some researchers are suggesting that some of the background is not truly background, but also comes from the foreground and midground as well. If they are correct, this would explain it. One can but try.

13. Member
Join Date
Nov 2005
Posts
80
Originally Posted by Ari Jokimaki
Doesn't the equation explicitly say that Delta x is greater than that?

Like others, I also wonder about setting Delta E to unity. Why does one joule equate to no uncertainty in the energy measurement?

Where does this last equation come from?
Yes, you have me wondering as well.

The last equation is just treating the ratio as an exponential, since if the effect were an energy loss as opposed to Doppler, then the loss would be exponential instead of linear. However, it looks linear in our region at the start.
I attach my revised graph which I should have included in the start of this thread.

14. Banned
Join Date
Sep 2008
Posts
178
Originally Posted by Roy Caswell
I'll try. The red shift may not be due to recession, therefore no "big bang". All EM photons may be subject to a quantum mechanical effect in the vacuum of space that systematically robs them of their energy (this gives the red shift) and contributes to the background radiation. This MAY be due to interactions with virtual electron positron pairs in the vacuum of space. The vacuum is really a uniform "sea" of these particles. In any interaction there is the probability of losing a tiny amount of energy, between zero and an amount allowed for under the Heisenberg uncertainty principle. The amount lost depends on the distance travelled and is therefore distance dependent. Since it is an energy loss it would really be an exponential loss, but because of the large distances involved it looks linear in our part of the universe. This leads to the linear Hubble relationship, which is now found to be not quite working.
But why do the photons always systematically lose energy in these interactions. Why can't they gain some from the uniform 'sea' of particles?
That way, on average, there would be no overall redshift.

15. Established Member
Join Date
Oct 2003
Posts
1,527
Originally Posted by Roy Caswell
The last equation is just treating the ratio as an exponential, since if the effect were an energy loss as opposed to Doppler, then the loss would be exponential instead of linear.
My problem with it is that I don't understand how you have arrived to that exact equation. In your opening post you stated that:

Originally Posted by Roy
What is the red shift at 10^25 m?

10^25 m/1.26 x 10^26 m = 0.079
From here I take it that:

z = x / (1.26 * 10^26 m)

If this should be treated as an exponential, my next move would be this:

e^z = e^(x / (1.26 * 10^26 m))

So, I don't know how you get to the equation you are using. It traces the Hubble line quite well, so I assume that it is more or less correct in that sense. Perhaps you just need to give me a brief mathematics lesson...

16. Member
Join Date
Nov 2005
Posts
80
Originally Posted by Noble Ox
But why do the photons always systematically lose energy in these interactions. Why can't they gain some from the uniform 'sea' of particles?
That way, on average, there would be no overall redshift.
This is only an imaginary model: Every now and again the photon has a head-on collision with a virtual electron positron pair. They instantaneously recoil, radiate and disappear. The "sea" of virtual electron positron pairs is perfectly uniform, so the probability of encounters is fixed. The energy loss is one way.
These particles have been detected by the Casimir effect (use a search engine), and interestingly enough 'hc' crops up in the formula. The density of the virtual electron positron pairs has only been estimated, or so I believe, and then it's by the use of the Heisenberg uncertainty principle.

17. Member
Join Date
Nov 2005
Posts
80
Originally Posted by Ari Jokimaki
My problem with it is that I don't understand how you have arrived to that exact equation. In your opening post you stated that:
From here I take it that:
z = x / (1.26 * 10^26 m)
If this should be treated as an exponential, my next move would be this:
e^z = e^(x / (1.26 * 10^26 m))
So, I don't know how you get to the equation you are using. It traces the Hubble line quite well, so I assume that it is more or less correct in that sense. Perhaps you just need to give me a brief mathematics lesson...
Me give a mathematics lesson! I often feel I could do with one myself. I'm not bad at the intuitive part but I fall over all the nuts and bolts.
The best way I can think of it is like the negative exponential in radioactive decay where the rate of decay is proportional to what you have already or the rate of discharge of a capacitor. In each case you have a constant like the radioactive decay constant which is directly connected to the gradient of the line at the very start. We could in fact do a negative exponential plot to illustrate the proportionate loss of energy with distance. E = E(0) exp(-x/1.26x10^26). With photons, as the energy goes down, the wavelength goes up so I found it more convincing to show how red shift changes with distance. the 1/1.26x10^26m can be thought of as the coefficient of wavelength change. So, the coefficient appears in the positive exponential Z = exp (x/1.26x10^26) -1. Don't ask me why you subtract 1. Its something to do with the nature of a positive exponential. Everyone else does it, so I will. See I need the lesson. Actually, I'm not sure I've answered your question. I'll go away and have a think about it. By the way, where is that superscript button you all keep using. I can't find it.

PS to earlier posts, re making dE = 1. Perhaps I should have just said that if we are trying to accurately measure the energy of photons, then all the uncertainty will manifest itself in the distance measurements. If we are trying to measure distance/wavelength, then all the uncertainty will manifest itself in the energy. Since for photons the energy and the wavelength are inversely linked, it appears to make no odds. If all the uncertainty manifests itself in the energy measurements, then it will by nature also manifest itself in the distance/wavelength measurements.
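The decay analogy above can at least be checked for internal consistency (a sketch only; it verifies the algebra linking the two forms, not the physics): if E falls as exp(-x/L) and wavelength is inversely proportional to energy, then Z = lambda/lambda(0) - 1 = exp(x/L) - 1.

```python
import math

L = 1.26e26   # metres, the constant used in the post

def energy(x, E0=1.0):
    # exponential energy loss with distance, as in radioactive decay
    return E0 * math.exp(-x / L)

def redshift(x):
    # wavelength is inversely proportional to photon energy, so
    # lambda/lambda(0) = E(0)/E(x) and Z = lambda/lambda(0) - 1
    return energy(0.0) / energy(x) - 1.0

x = 3e25
print(redshift(x))              # matches exp(x/L) - 1
print(math.exp(x / L) - 1.0)
```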

18. Order of Kilopi
Join Date
May 2004
Posts
4,139
But why make dE=1? Why not dE=5744.78784?

19. Member
Join Date
Nov 2005
Posts
80
Originally Posted by Fortis
But why make dE=1? Why not dE=5744.78784?
Forget dE = 1. Put it down as an over-enthusiastic attempt to get to the right answer, dE.dx > 7.9 x 10^-27 J m.

As said above, photons appear to be a special case, because the energy is inversely proportional to the wavelength. So the whole of the uncertainty introduced applies to both. If you are measuring energy, then you will lose out on the distance/wavelength by any amount between 0 & hc/4Pi for each metre travelled. If you are measuring the wavelength, you will lose out on the energy. But they are, in effect, the same thing, with them being inversely proportional. We divide by 2 again to get the average uncertainty produced per metre of travel.

I like this forum. It really helps to get your ideas sorted out whether they be misguided or not.

20. Order of Kilopi
Join Date
May 2004
Posts
4,139
Originally Posted by Roy Caswell
Forget dE = 1. Put it down as an over-enthusiastic attempt to get to the right answer, dE.dx > 7.9 x 10^-27 J m.

As said above, photons appear to be a special case, because the energy is inversely proportional to the wavelength. So the whole of the uncertainty introduced applies to both. If you are measuring energy, then you will lose out on the distance/wavelength by any amount between 0 & hc/4Pi for each metre travelled. If you are measuring the wavelength, you will lose out on the energy. But they are, in effect, the same thing, with them being inversely proportional. We divide by 2 again to get the average uncertainty produced per metre of travel.

I like this forum. It really helps to get your ideas sorted out whether they be misguided or not.
Wavelength isn't related to any positional uncertainty. Photons are actually a good example to use here, as the em field amplitude is proportional to the amplitude of their wave-function. Let's take a photon with a well defined momentum. This would look like a plane wave extending from minus infinity to plus infinity. (This can be thought of as a momentum eigenstate.) Now the intensity of the wave is proportional to the probability of finding the photon at a particular location in space. We can see that though this photon has a perfectly well defined momentum, its position is completely undefined. We can localise the photon in space by giving it a wavefunction that looks like a spike. Using Fourier analysis, it is possible to build this spike by summing together plane waves of different wavelengths. This means that it no longer has a well defined momentum, as we cannot describe it using a single wavelength. Needless to say, most photons fall somewhere between these two extremes. Once you grasp this, you should have a better feel for how the uncertainty principle works. (In many ways, it just falls out of the framework of Fourier analysis.)
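This Fourier construction can be illustrated numerically (a sketch in natural units where momentum is just the wavenumber k; the grid sizes are arbitrary choices): summing plane waves with a Gaussian spread of k produces a localized packet whose position and wavenumber spreads multiply to the minimum value, 1/2.

```python
import numpy as np

# Localize a "photon" by superposing plane waves exp(i k x)
# with a Gaussian spread of wavenumbers around k0.
x = np.linspace(-50.0, 50.0, 4001)
k0, sk = 5.0, 0.5
k = np.linspace(k0 - 5 * sk, k0 + 5 * sk, 801)
amp = np.exp(-(k - k0) ** 2 / (2 * sk ** 2))   # amplitude per plane wave

# sum the plane waves: each row is one exp(i k x), weighted by amp
psi = (amp[:, None] * np.exp(1j * np.outer(k, x))).sum(axis=0)

def spread(values, weights):
    # standard deviation of `values` under (unnormalized) `weights`
    w = weights / weights.sum()
    mean = (values * w).sum()
    return np.sqrt(((values - mean) ** 2 * w).sum())

dx = spread(x, np.abs(psi) ** 2)   # position uncertainty
dk = spread(k, amp ** 2)           # wavenumber uncertainty

# For a Gaussian packet the product sits at the minimum, 1/2
print(dx * dk)   # approximately 0.5
```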

21. Banned
Join Date
Sep 2008
Posts
178
Originally Posted by Roy Caswell
This is only an imaginary model: Every now and again the photon has a head-on collision with a virtual electron positron pair. They instantaneously recoil, radiate and disappear. The "sea" of virtual electron positron pairs is perfectly uniform, so the probability of encounters is fixed. The energy loss is one way.
These particles have been detected by the Casimir effect (use a search engine), and interestingly enough 'hc' crops up in the formula. The density of the virtual electron positron pairs has only been estimated, or so I believe, and then it's by the use of the Heisenberg uncertainty principle.
OK, I have Yahoo'd the Casimir effect and, with respect, it is not conclusive that it proves 'virtual particles' as you suggest here.
Wiki is more disrespectful, but since this is not altogether reliable, let's try:
Particles other than the photon also contribute a small effect but only the photon force is measurable.
But accepting what you say, what is the collision cross-section for this interaction and the density of virtual particles?
This is only an imaginary model
So do you want us to take it seriously or not?

22. Member
Join Date
Nov 2005
Posts
80
Originally Posted by Fortis
Wavelength isn't related to any positional uncertainty. Photons are actually a good example ...Once you grasp this, you should have a better feel for how the uncertainty principle works. (In many ways, it just falls out of the framework of Fourier analysis.)
You are just about taking me out of my comfort zone here. Are you saying that the proposal is scientifically impossible? We do appear to have a very good red shift predictor without having to do any direct observation. It could of course be coincidence or just number manipulation.

23. Member
Join Date
Nov 2005
Posts
80
Originally Posted by Noble Ox
OK, I have Yahoo'd the Casimir effect and, with respect, it is not conclusive that it proves 'virtual particles' as you suggest here.
Wiki is more disrespectful, but since this is not altogether reliable, let's try:
Particles other than the photon also contribute a small effect but only the photon force is measurable.
But accepting what you say, what is the collision cross-section for this interaction and the density of virtual particles?

So do you want us to take it seriously or not?
Never having had full faith in the expanding universe/big bang scenario, I have spent over 10 years looking at alternatives, and this first one, which I stumbled across by accident, appears the most convincing to me. So yes, I am very serious, although being the eternal optimist I may come across otherwise.

As regards the collision cross section and the density of the virtual particles, we are in uncharted territory here. I have "An Introduction to Modern Astrophysics" by B. W. Carroll. On p. 1240 he works out the energy density of these particles in the early universe as 10^111 J m^-3, indicating their contribution to rapid inflation. He goes on to imply that the present vacuum energy density is a yet-to-be-solved problem. It follows that the collision cross section is too. So I don't think it takes us any further in our understanding, unfortunately.

24. Established Member
Join Date
Oct 2003
Posts
1,527

Originally Posted by Roy
So Delta x > 1.58 x 10^-26 m

In fact, in any one interaction (say with a virtual electron positron pair) the uncertainty in the distance measurement can be anything between 0 and 1.58 x 10^-26 m but no greater.
Originally Posted by Ari
Doesn't the equation explicitly say that Delta x is greater than that?
--------------------------------------------------

On the one joule problem:

Originally Posted by Roy
Delta x > hc/(4Pi. Delta E)
One possible justification for inserting one joule here is to find out how much Delta x corresponds to one joule, which is a perfectly valid thing to do. But then you still would need to explain why the value of one joule is left there permanently. I wonder how much your plot changes with different energy values...
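This question about other energy values can be made quantitative (a sketch that just re-runs the post's arithmetic, including Fortis's alternative value): the implied "edge" distance scales linearly with whatever Delta E is chosen, so the 1 J choice is doing all the work.

```python
import math

h, c = 6.626e-34, 2.998e8   # Planck constant (J s), speed of light (m/s)

def implied_edge(dE):
    # the post's chain of reasoning: average uncertainty per metre is
    # dx_avg = hc/(8*pi*dE), and the "edge" is taken as its inverse
    dx_avg = h * c / (8 * math.pi * dE)
    return 1.0 / dx_avg

for dE in (0.5, 1.0, 2.0, 375.65325):
    print(dE, implied_edge(dE))   # edge grows in direct proportion to dE
```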

Originally Posted by Roy
Me give a mathematics lesson! I often feel I could do with one myself. I'm not bad at the intuitive part but I fall over all the nuts and bolts.
The best way I can think of it is like the negative exponential in radioactive decay, where the rate of decay is proportional to what you have already, or the rate of discharge of a capacitor. In each case you have a constant, like the radioactive decay constant, which is directly connected to the gradient of the line at the very start. We could in fact do a negative exponential plot to illustrate the proportionate loss of energy with distance: E = E(0) exp(-x/(1.26 x 10^26)). With photons, as the energy goes down, the wavelength goes up, so I found it more convincing to show how red shift changes with distance. The 1/(1.26 x 10^26 m) can be thought of as the coefficient of wavelength change. So, the coefficient appears in the positive exponential Z = exp(x/(1.26 x 10^26)) - 1. Don't ask me why you subtract 1. It's something to do with the nature of a positive exponential. Everyone else does it, so I will. See, I need the lesson. Actually, I'm not sure I've answered your question. I'll go away and have a think about it.
I think it is important to clarify this last step, because until to this point you have a clear derivation (ignoring the "one joule" and "greater than" problems), but this last step then feels rather arbitrary if there's no clear route to it.

Originally Posted by Roy
By the way, where is that superscript button you all keep using. I can't find it.
You have to surround the text with SUP tags (or SUB for subscript), and surround the word SUP with "[" and "]". Close it by making the same thing with "/SUP". There was a help page for this but right now I couldn't find it.

25. Member
Join Date
Nov 2005
Posts
80
Originally Posted by Ari Jokimaki

1. Doesn't the equation explicitly say that Delta x is greater than that?

2. On the one joule problem:
One possible justification for inserting one joule here is to find out how much Delta x corresponds to one joule, which is perfectly valid thing to do. But then you still would need to explain why the value of one joule is left there permanently. I wonder how much your plot changes with different energy values...

3. I think it is important to clarify this last step, because until to this point you have a clear derivation (ignoring the "one joule" and "greater than" problems), but this last step then feels rather arbitrary if there's no clear route to it.
1. As I understand it, the expression dp.dx > h/(4Pi) is saying that the maximum uncertainty in the product of both the momentum and the distance measurement is at least as large as or equal to this value. It's a bit misleading, because it sounds like it is saying it will always be greater than. In an interaction, then, the uncertainty in the product of both can be anywhere between 0 and this value but not a great deal above. Therefore, for our derived expression for a photon, dE.dx > hc/(4Pi), the uncertainty in the product of energy and distance can be anywhere between 0 and hc/(4Pi) but not higher.

2. dE = 1. As I think I said in a previous post, I stand corrected on this issue. It was borne of desperation to get to the right answer. You have all put me right. The argument now runs as follows. Photons are a special case in that, in the QM interpretation, energy and distance are inversely linked. Doubling the energy halves the wavelength and vice versa. Any uncertainty in one affects the other. So, we now say: if we consider ourselves to be measuring the energy accurately, then all the uncertainty manifests itself in the distance measurement. If we consider we are measuring distance accurately, then all the uncertainty manifests itself in the energy. I think that may be a better way of looking at it.

3. The Doppler effect just implies equal changes throughout space and hence a straight line graph (the Hubble line). We are saying that the change in Z is also dependent on what Z is already at a certain point in space, because it's caused by energy loss. Mathematically we write this as:
dZ/dx is proportional to Z (a small change in Z with a small change in distance dx is proportional to what Z is already at that point).
Therefore, dZ/dx = constant . Z
Therefore, dZ/Z = constant . dx
Now we integrate both sides: the left hand side from 0 to Z and the right hand side from 0 to x, where x is the distance in metres.
ln Z (from 0 to Z) = constant . x

Therefore Z + 1 = exp(constant . x)
OR Z = exp(constant . x) - 1, where the constant is the coefficient of wavelength increase, 1/(1.265 x 10^26 m)
Last edited by Roy Caswell; 2008-Sep-29 at 06:48 PM.

26. Order of Kilopi
Join Date
May 2004
Posts
4,139
Roy, I think we have identified one problem. The uncertainty principle says that dp.dx must always be greater than h/4pi. It doesn't say anything about the maximum product of the uncertainty. It says that the smallest that you can possibly make the product is h/4pi. Through incompetence or bad luck, the product could be much greater than that, but it cannot be less than that.

Hope that helps.

27. Established Member
Join Date
Oct 2003
Posts
1,527
Originally Posted by Roy
1. As I understand it, the expression dp.dx > h/(4Pi) is saying that the maximum uncertainty in the product of both the momentum and the distance measurement is at least as large as or equal to this value. It's a bit misleading, because it sounds like it is saying it will always be greater than. In an interaction, then, the uncertainty in the product of both can be anywhere between 0 and this value but not a great deal above. Therefore, for our derived expression for a photon, dE.dx > hc/(4Pi), the uncertainty in the product of energy and distance can be anywhere between 0 and hc/(4Pi) but not higher.
Sorry, but this doesn't make sense to me. If it were as you say, then surely the equation would read dp dx < h/(4Pi).

It seems to me that perhaps your selection of dE = 1 J is not correct. It seems that you should select dE = 2 J; it would fix things, this one especially.

Originally Posted by Roy
2. dE =1. As I think I said on a previous post, I stand corrected on this issue.
Yes, I was just thinking about it a little.

Originally Posted by Roy
3. The Doppler effect just implies equal changes throughout space and hence a straight line graph (the Hubble line). We are saying that the change in Z is also dependent on what Z is already at a certain point in space, because it's caused by energy loss. Mathematically we write this as:
dZ/dx is proportional to Z (a small change in Z with a small change in distance dx is proportional to what Z is already at that point).
Therefore, dZ/dx = constant . Z
Therefore, dZ/Z = constant . dx
Now we integrate both sides: the left hand side from 0 to Z and the right hand side from 0 to x, where x is the distance in metres.
ln Z (from 0 to Z) = constant . x

Therefore Z + 1 = exp(constant . x)
OR Z = exp(constant . x) - 1, where the constant is the coefficient of wavelength increase, 1/(1.265 x 10^26 m)
Thanks. I tried to duplicate these steps, but I had trouble with the integrating. This is how it goes (I use "R" for your "constant", and "int" for integral):

int_0^z (dz/z) = int_0^x (R dx)

ln(z) - ln(0) = Rx - 0

e^(ln(z) - ln(0)) = e^(Rx)

e^(ln(z)) / e^(ln(0)) = e^(Rx)

z / 0 = e^(Rx)

Division by 0... am I doing something wrong there?
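The division by zero here reflects a genuine divergence, which a quick numerical check makes visible (a sketch only): the integral of dz/z from a lower limit eps up to Z is ln(Z) - ln(eps), and it grows without bound as eps shrinks towards 0.

```python
import math

Z = 1.0
for eps in (1e-3, 1e-6, 1e-9, 1e-12):
    # integral of dz/z from eps to Z is ln(Z) - ln(eps);
    # it keeps growing as eps approaches 0
    print(eps, math.log(Z) - math.log(eps))
```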

28. Member
Join Date
Nov 2005
Posts
80
Originally Posted by Ari Jokimaki
1. Sorry, but this doesn't make sense to me. If it would be like you say, then surely the equation would read dp dx < h/(4Pi).
2. Thanks. I tried to duplicate these steps, but I had trouble with the integrating. This is how it goes (I use "R" for your "constant", and "int" for integral):
int_0^z (dz/z) = int_0^x (R dx)
ln(z) - ln(0) = Rx - 0
e^(ln(z) - ln(0)) = e^(Rx)
e^(ln(z)) / e^(ln(0)) = e^(Rx)
z / 0 = e^(Rx)
Division by 0... am I doing something wrong there?
1. (Including Fortis in this 1st bit)
We are either talking at cross purposes or looking at the same problem from a different point of view. It stands to reason that for any object the uncertainty in the product of its momentum and position cannot be any value, that is greater than h/(4Pi). Otherwise all sorts of bizarre effects would be occurring in the macro world. These don't happen so there has to be a maximum value below which uncertainty effects begin to manifest themselves, and above which they don't. I can only quote from an Astrophysics textbook "he demonstrated that the uncertainty in the product of a particle's position and momentum must be at least as large as h/(4Pi)
delta x. delta p greater than or equal to h/(4Pi).
This is known as H..U..P.. The equality is rarely realized in nature and the from employed in making estimates is delta x. delta p is approximately equal to (2 wavy lines) h/4Pi"
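For reference, the Delta x bound quoted at the start of the thread can be checked numerically; a minimal sketch using standard SI constants (rounded values, which are sufficient at this precision):

```python
import math

# Assumed SI values (rounded; CODATA-level precision is not needed here):
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
dE = 1.0        # the Delta E of 1 J chosen in the opening post

dx_min = h * c / (4 * math.pi * dE)  # lower bound on Delta x, in metres
print(f"Delta x > {dx_min:.3e} m")   # ~1.58e-26 m, matching the thread
```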

2. I may have made things unnecessarily complicated by using boundary conditions. I am by no means a mathematical whizz; I just copy everyone else. Let's simplify things:

dZ/Z = Rdx (using your notation)

Integrate

ln Z = Rx + constant of integration

This can be equally written as

Z = exp(Rx) + constant of integration. When Z = 0 then x = 0. So

0 = 1 + constant of integration. The constant is therefore -1

Z = exp(Rx) - 1
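As a numerical sanity check on the small-distance behaviour of this formula (a sketch with R taken from the post; the distance value of 10^24 m, roughly 32 Mpc, is an arbitrary illustrative choice):

```python
import math

R = 1.0 / 1.265e26   # coefficient from the post, per metre
x = 1e24             # ~32 Mpc, a "nearby" distance in metres

z_exp = math.exp(R * x) - 1  # the exponential model Z = exp(Rx) - 1
z_lin = R * x                # the straight Hubble-line approximation

print(z_exp, z_lin)  # nearly equal while R*x is small
```

For small R*x the two agree to better than half a percent, so the exponential form reduces to the linear Hubble relation at small distances.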

29. Banned
Join Date
Sep 2008
Posts
178
Originally Posted by Roy Caswell
1. (Including Fortis in this 1st bit)
We are either talking at cross purposes or looking at the same problem from a different point of view. It stands to reason that for any object the uncertainty in the product of its momentum and position cannot be any value greater than h/(4Pi). Otherwise all sorts of bizarre effects would be occurring in the macro world. These don't happen, so there has to be a maximum value below which uncertainty effects begin to manifest themselves, and above which they don't. I can only quote from an astrophysics textbook: "he demonstrated that the uncertainty in the product of a particle's position and momentum must be at least as large as h/(4Pi):
delta x . delta p greater than or equal to h/(4Pi).
This is known as the Heisenberg Uncertainty Principle (HUP). The equality is rarely realized in nature, and the form employed in making estimates is delta x . delta p approximately equal to h/(4Pi)."

2. I may have made things unnecessarily complicated by using boundary conditions. I am by no means a mathematical wizzo. I just copy everyone else. Lets simplify things

dZ/Z = Rdx (using your notation)

Integrate

ln Z = Rx + constant of integration

This can be equally written as

Z = exp(Rx) + constant of integration. When Z = 0 then x = 0. So

0 = 1 + constant of integration. The constant is therefore -1

Z = exp(Rx) - 1
Redshift z is defined as Δλ/λ, i.e. it is a constant for a particular galaxy. So what is your justification for saying that dZ/Z = Rdx? Why should the change in redshift dZ per unit redshift Z be proportional to the change in distance dx in your theory?

30. Member
Join Date
Nov 2005
Posts
80
Originally Posted by Noble Ox
Redshift z is defined Δλ/λ ie it is a constant for a particular galaxy. So what is your justification for saying that dZ/Z = Rdx? Why should the change in redshift Δz per unit redshift,z be proportional to x in your theory?
In this model the redshift is also constant, or fixed, for a particular galaxy, because x, the distance, is fixed for that galaxy. In the Doppler interpretation of redshift, space should expand by equal amounts everywhere, hence the straight Hubble line. Recent research is giving higher redshifts than expected at larger distances (over Z = 0.2). In this model it's the energy loss due to quantum effects that produces the redshift and feeds the background radiation, so, as with any other loss that is distance dependent, it has to be an exponential loss, not a linear one. Since this loss directly causes the redshift, the redshift too has to show an exponential increase. The amount of redshift produced depends on how far the radiation has already travelled, but in each case the change is only a very small amount, yet enough to produce the required deviation from the Hubble line.
I have revamped my web page in the light of everyone's comments. This shows the energy loss graph as well as the redshift graph in much more clarity than I can reproduce here.
Big Bang or Big Illusion/Uncertainty
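The claimed deviation from the Hubble line at larger distances can be tabulated directly; a sketch with R taken from the thread (the sample distances are arbitrary illustrative choices):

```python
import math

R = 1.0 / 1.265e26   # coefficient of wavelength increase, per metre

# The exponential model pulls away from the straight Hubble line as
# distance grows; the deviation is small below z ~ 0.2 and increases
# with distance.
for x in (1e25, 2.5e25, 5e25, 1e26):
    z_model = math.exp(R * x) - 1  # exponential model from the thread
    z_line = R * x                 # straight Hubble line
    print(f"x = {x:.1e} m: model z = {z_model:.3f}, line z = {z_line:.3f}")
```

Since exp(Rx) - 1 > Rx for all positive Rx, the model always sits above the line, with the gap growing with distance.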
