Supernova Ia studies have become the single most important facet of the latest iteration of big bang cosmology. In the last ten years, they have not only furnished the first “concrete evidence” that time dilation actually occurs; they have been used both to “pinpoint” Hubble’s constant and to provide the “acceleration” needed to address the age and CMB crises. Most recently, the Riess team announced that a ‘jerk’ has been observed in the magnitudes of the most distant supernovae, once again prompting a major rewrite of cosmology.
We should expect the analytical techniques used in the supernova data reduction and interpretive routines to meet the highest scientific standards, within the obvious constraints of our limited observation platform.
This is far from reality. There are systemic errors in the analysis of supernovae Ia that make a correct quantitative assessment virtually impossible.
First, the high-redshift supernova teams search for supernovae with a predicted spectral signature and light-curve width, and exclude from the study events that do not fall within the expected ranges.
Second, the initial data reduction of supernovae Ia builds into the ‘K corrections’ both time dilation and relativistic magnitude reduction. If these assumptions are to any degree incorrect, they impose circular parameters on every subsequent analytical step.
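To make the circularity concrete: in the standard reduction, the observed light-curve width is divided by an assumed (1+z) dilation factor before any fitting takes place. A minimal sketch of that single step (illustrative only; real K-correction pipelines are far more involved):

```python
def rest_frame_width(observed_width_days: float, z: float) -> float:
    """Convert an observed light-curve width to the rest frame by
    dividing out the assumed (1+z) time-dilation factor."""
    return observed_width_days / (1.0 + z)

# A supernova at z = 0.5 with a 30-day observed width is assigned a
# 20-day rest-frame width: the dilation is built in before any fit.
print(rest_frame_width(30.0, 0.5))
```

Any subsequent “detection” of time dilation in the rest-frame widths is then, in part, a restatement of this input assumption.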
Third, the researchers are ignoring the most recently observed local events, which have longer light curves and higher magnitudes than previously observed supernovae Ia. Although these events are classified as Ic, their spectral signature in fact appears to be more of a homogenization of Ic and Ia spectra. There is also compelling evidence that this curious class of event involves the merger of a binary pair of white dwarf stars, creating an intense gamma-ray burst. More important, the light curves of these events are of the same width as the light curves of the highest-redshifted supernova events, most of which are identified as supernovae of type Ia.
Fourth, the critical data reductions of Perlmutter (the Stretch Factor) normalize at a midpoint z-shift of 0.48. A similar normalization is used by Hamuy in calculating the Δm15(B) value. In both cases, if there is a Malmquist bias in the collection of the data, that bias will run parallel to any time-dilation trend and therefore be interpreted as time dilation.
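How a magnitude limit mimics a distance trend can be shown with a toy truncated-Gaussian model (a hypothetical illustration with made-up numbers, not any team’s actual selection function): objects surviving a fixed apparent-magnitude cut are, on average, intrinsically brighter the farther away the sample sits.

```python
import math

def norm_pdf(x: float) -> float:
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mean_detected_abs_mag(mu_M: float, sigma_M: float,
                          m_lim: float, dist_mod: float) -> float:
    """Mean absolute magnitude of a Gaussian population after a
    magnitude cut: detected if M + dist_mod < m_lim, i.e.
    M < m_lim - dist_mod (brighter = more negative).
    Uses the standard truncated-normal mean formula."""
    a = (m_lim - dist_mod - mu_M) / sigma_M
    return mu_M - sigma_M * norm_pdf(a) / norm_cdf(a)

# Same population, same survey limit, two distance moduli:
near = mean_detected_abs_mag(-19.3, 0.4, 24.0, 42.0)
far = mean_detected_abs_mag(-19.3, 0.4, 24.0, 43.2)
print(near, far)  # the more distant sample's mean is markedly brighter
```

A normalization pinned to the sample midpoint cannot distinguish this purely selection-driven brightening from a genuine dilation trend, which is the point being made above.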
Fifth, the combination of the first and fourth errors in this sequence magnifies any Malmquist selection bias.
Sixth, in spite of the “Malmquist bias”-burying routines mentioned above, as the number of supernovae characterized at high redshift has increased, an embarrassing trend has crept into the Δm15(B) numbers: they get smaller with increasing distance. This should be interpreted as a failure of supernovae Ia to completely satisfy the Wilson hypothesis: the amount of time dilation is less than predicted. In my opinion, it is because of this obvious and embarrassing trend that supernova researchers stopped publishing charts showing both the z-shift and the Δm15(B) values.
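For reference, Δm15(B) feeds the luminosity standardization through a Phillips-type decline-rate relation; the sketch below uses placeholder coefficients (not a published calibration) simply to show that smaller Δm15(B) values correspond to intrinsically brighter events, exactly the population a magnitude-limited survey preferentially catches at large distance.

```python
def standardized_peak_mag(dm15: float) -> float:
    """Phillips-style decline-rate relation: slower decliners
    (smaller dm15) are intrinsically brighter. The slope and zero
    point here are placeholder values for illustration only."""
    return -19.3 + 0.8 * (dm15 - 1.1)

print(standardized_peak_mag(0.9))  # brighter (more negative)
print(standardized_peak_mag(1.3))  # fainter
```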
Seventh, a careful analysis of Perlmutter’s ‘Stretch Factor’ methodology reveals that a similar trend is emerging in the stretch factors calculated for the growing sample at high redshift.
Eighth, the multi-color light-curve methodologies assume the high-redshift events do not experience the relativistic flows characterized in the local hypernovae. They cannot, because that would nullify the assertion that these light-curve variations are due to time dilation.
Ninth, even though researchers claim error analysis indicates there is no Malmquist bias in the sample, the midpoint normalization routines, as pointed out above, hide any potential bias as a dilation factor. If the relativistic distance modulus is correct, there should be a selection bias of at least 4%, and potentially much greater if the true attenuation factor is greater. Good scientific practice dictates that the researchers either provide plausible explanations for these fortuitous sampling quirks or reevaluate these shaky analytical techniques.
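For scale, each extra factor of (1+z) attenuation corresponds to a definite magnitude shift; the 4% selection-bias figure above is the author’s, but the dimming itself is simple arithmetic (a sketch assuming only that the flux picks up n factors of 1/(1+z)):

```python
import math

def extra_dimming_mag(z: float, n_factors: int = 1) -> float:
    """Magnitude shift from n extra factors of (1+z) flux attenuation:
    delta_m = 2.5 * n * log10(1+z)."""
    return 2.5 * n_factors * math.log10(1.0 + z)

print(extra_dimming_mag(0.5))     # about 0.44 mag at z = 0.5
print(extra_dimming_mag(0.5, 2))  # about 0.88 mag for two factors
```

Shifts of this size, pushed against a fixed survey limit, are what drive the selection effects discussed throughout this section.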
Everyone in the field of astronomy is well aware of the potential for, and consequences of, failure to account for distance selection effects. It was a combination of selection-effect errors that led Hubble to originally conclude that the universe was much younger and smaller than the current constraints allow.
Adam Riess’s disturbing acknowledgement that a potential Ia at high redshift was thrown out because the “light curve was too small” adds tremendous credence to the claim that distant hypernovae are being improperly identified as supernovae Ia. In Tonry’s paper in late 2003, a severe shortage of supernova Ia observations also adds credence to the claim that the attenuation rate of space is greater than calculated, and that the distant sample of supernovae Ia are really hypernovae. It is also worth noting that the high supernova sighting frequency is of the same order of magnitude as the probable occurrence rate of local hypernovae.
If the greater scientific community continues to accept the broad pronouncements of the supernova Ia researchers at face value, and systemically bad practices are allowed to continue while the authors make astounding claims about both the reliability and the meaning of the data, this branch of science will remain in the crippling grasp of dark energy. It is already inevitable that the last fifty years will be dubbed the “Dark Ages of Astronomy”.