Accelerating Expansion of the Universe Is Based on Mistakes
This post is long (like the document it reviews). However, the conclusion involves the paper whose authors were awarded the Nobel Prize in Physics 2011
"for the discovery of the accelerating expansion of the Universe through observations of distant supernovae."
There are many mistakes, involving both redshifts and supernova light curves, so the award rests on one big mistake arising from many smaller ones.
This very long, detailed review was worthwhile for its conclusion. By now the length is not a surprise.
The conclusion might be a surprise.
Title of the paper:
Supernovae, Dark Energy, and the Accelerating Universe: The Status of the Cosmological Parameters
excerpt from Introduction:
I was asked to present the status of the cosmological parameters, and in particular the status of the recent results concerning the accelerating universe—and the possible cosmological constant or dark energy that is responsible for the universe’s acceleration. This result comes most directly from the recent type Ia supernova work, so although I will mention a few of the approaches to the cosmological parameters, I will emphasize the work with the type Ia supernovae.
The type Ia supernova is the basis for the conclusions in this paper. This event is not a standard candle with a verified light curve. It is important to note at the start that a supernova is defined as an explosive event.
Section 2 is titled: "Supernovae as a simple, direct cosmological measurement tool"
my comment: A supernova is an inaccurate tool, produced by a poorly defined process. A supernova is certainly not suitable as a direct measurement tool. I posted about this lack of definition on 01/20/2020.
The basic idea is that you want to find an object of known brightness, a “standard candle," and then plot it on the astronomer’s Hubble diagram (Fig. 1), which is a plot of brightness (magnitude) against redshift. We should interpret this graph as follows: for an object of known brightness, the fainter the object the farther away it is and the further back in time you are looking, so you can treat the y-axis as the time axis. The x-axis, the redshift, is a very direct measurement of the relative expansion of the universe, because as the universe expands the wavelengths of the photons travelling to us stretch exactly proportionately—and that is the redshift. Thus the Hubble diagram is showing you the “stretching" of the universe as a function of time. As you look farther and farther away, and further back in time, you can find the deviations in the expansion rate that are caused by the cosmological parameters.
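The relation the excerpt asserts (whether or not one accepts it) can be written out: observed wavelengths are claimed to scale with the expansion as (1 + z), so z is recovered from the shift of a known spectral line. A minimal sketch, using the standard hydrogen-alpha rest wavelength; the numbers are illustrative, not from the paper.

```python
# Sketch of the redshift relation the excerpt describes:
# observed wavelength = emitted wavelength * (1 + z),
# so z is recovered from the shift of a known spectral line.

def redshift(lambda_observed, lambda_emitted):
    """Redshift z from an observed and a rest-frame wavelength."""
    return (lambda_observed - lambda_emitted) / lambda_emitted

# Hydrogen-alpha rest wavelength, 6563 Angstroms (a standard line).
H_ALPHA = 6563.0

# A line observed at 9844.5 Angstroms gives z = 0.5.
z = redshift(9844.5, H_ALPHA)
print(z)  # 0.5
```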
my comment: It is a certain mistake to treat a redshift as a 'direct measurement of the expansion of the universe.'
This paper's statement about stretching simply violates thermodynamics and the conservation of energy.
Photons cannot stretch as a function of time. This is physically impossible. This is an explicit loss of energy with no transfer.
An object of known brightness will dim by distance. The rest of this section 2 is wrong.
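The dimming-with-distance point, which is not in dispute, can be stated numerically: flux falls as 1/d², which astronomers express as the distance modulus m − M = 5 log10(d / 10 pc). A minimal sketch; the sample absolute magnitude is the commonly quoted type Ia peak value, not a number from this paper.

```python
import math

# Inverse-square dimming of a source of known brightness, expressed as
# the distance modulus: m - M = 5 * log10(d / 10 pc).

def apparent_magnitude(M, d_parsec):
    """Apparent magnitude of an object with absolute magnitude M at d parsecs."""
    return M + 5.0 * math.log10(d_parsec / 10.0)

# A type Ia peak is often quoted near M = -19.3; at 100 Mpc (1e8 pc)
# it would appear at about magnitude 15.7.
m = apparent_magnitude(-19.3, 1e8)
print(round(m, 1))  # 15.7
```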
It is difficult to tell from the paper the impact of this mistake on subsequent analysis and conclusions.
This figure 1 is based on mistakes. It is true the hydrogen absorption line appears to increase with distance, but only because it increases with the amount of hydrogen in the line of sight.
If a galaxy has more hydrogen atoms in the intergalactic medium along its line of sight, then its redshift increases, meaning the redshift is being misused, since the redshift is unrelated to the galaxy itself.
The statement a redshift is "exactly" proportional to anything (time or distance) is certainly false.
Any deviations observed in this ratio are due to the intervening hydrogen atoms and have nothing to do with an expansion rate.
excerpt after figure 1 from section 2 on page 734:
In particular, you can think of making a measurement of a supernova explosion at one given time in history. If, for example, you found more redshift at that time than expected from the current expansion rate, that would imply that the expansion was faster in the past and has been slowing down. This would lead you to conclude that there was a higher mass density in the universe.
my comment: These wrong conclusions are based on the redshift mistake. I posted about the critical mistakes with redshifts on 01/16/2020.
from section 3 titled "Type Ia supernovae as ‘difficult’ standard candles: A strategy to make them manageable"
For a standard candle, we use the type Ia supernova, the brightest of the supernova types. They brighten in just a few weeks and fade away within a few months,
so it is necessary to explain what I mean here when I refer to this standard candle’s brightness. Generally, we will use the magnitude at peak, which turns out to be a very consistent brightness (after a bit of calibration, as discussed below).
These are the brightest of the supernova explosion events by about a factor of six, so at high redshift most of the supernovae that we find are type Ia.
Although such a bright, standard candle should make an excellent cosmological measurement tool, the problem with using the supernovae is that they turn out to be a real “pain in the neck” for any kind of research work: one can never predict a supernova explosion and supernovae only explode a couple of times per millennium in any given galaxy.
The first statement reveals a critical problem. A supernova is an explosion. The brightness must change drastically or it is not a supernova. This 'brighten and fade away' is no different from an ordinary variable star, which is not unusual.
Analyzing a mix of events where none is a supernova means there is no benchmark for comparison. Different types of events cannot be mixed without a basis.
The paper continues by describing their system: record images of wide fields over a span of about three weeks, repeat this over a long duration, and then search the images for differences using digital analysis.
These three-week imaging runs can be repeated a number of times.
The events found at the end of a run could be followed up by the Hubble Space Telescope.
A spectrum can be requested for each detected difference.
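The search strategy just described (image a field, re-image later, subtract, flag what brightened) can be sketched in a few lines. This is a minimal illustration, not the project's pipeline; the array names, image sizes, and threshold are all hypothetical.

```python
import numpy as np

# Minimal sketch of a difference-imaging search: subtract a reference
# exposure from a later exposure and flag pixels that brightened well
# above the noise. Names and the threshold are illustrative only.

rng = np.random.default_rng(0)

reference = rng.normal(100.0, 1.0, size=(32, 32))            # first-epoch image
new_epoch = reference + rng.normal(0.0, 1.0, size=(32, 32))  # later epoch

new_epoch[16, 16] += 50.0  # a source that brightened between epochs

difference = new_epoch - reference
candidates = np.argwhere(difference > 10.0)  # generous noise cut
print(candidates)  # [[16 16]]
```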
Figure 3 (on page 737) shows the difference in images for SN1998ba
According to the IAU Supernova Working Group Transient Name Server: SN 1998ba was discovered at magnitude 22.3
Figure 4 (on page 738) has ": The redshift distribution of the first 81 high-redshift supernovae discovered by the Supernova Cosmology Project."
Figure 4 shows the number of supernovae at each redshift increment, from 0.1 to 1.2.
That range is not 'high redshift' in my opinion. The actual values might be scaled for the chart.
from section 4 titled: "The (only) two other analysis steps needed"
there are two small additional analysis steps necessary in order to compare the distant supernovae to the nearby supernovae on the same Hubble plot. First of all, although most type Ia supernovae follow a very similar light curve, there are a few outliers that are a little bit brighter or a little bit fainter. In the early 1990s, it was pointed out by Mark Phillips at Cerro Tololo Interamerican Observatory in Chile that there is an easy way to distinguish these supernovae, and recognize the slightly brighter ones and slightly fainter ones, using the timescale of the events. Phillips noted that the decline rate in the first 15 days after maximum provides a good parameterization of the timescale, and that this is a good predictor of how bright the supernova will be. Later, Riess, Press, and Kirshner showed another elegant statistical method which effectively added and subtracted shoulders on the light curve to achieve the same sort of timescale characterization.
This paper assumes a type Ia supernova is a predictable standard candle, yet acknowledges there are outliers that are brighter or fainter.
A standard candle must have a predictable maximum. This usage fails that requirement.
One person noted 'the decline rate in the first 15 days after maximum' 'is a good predictor of how bright the supernova will be.'
One might not be surprised a higher maximum affected the decline rate.
However, 'another elegant statistical method which effectively added and subtracted shoulders on the light curve to achieve the same sort of timescale characterization' is clearly data manipulation.
Finally, our group developed a third method which we call the timescale stretch factor method, in which we simply stretch or contract the timescale of the event by a linear stretch factor, s. This also predicts very nicely the brightness of the supernova: The s > 1 supernovae are the brighter ones and the s < 1 supernovae are the fainter ones.
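The stretch operation the excerpt describes amounts to rescaling the light curve's time axis by s so that broad (s > 1) and narrow (s < 1) curves line up with the s = 1 template. A minimal sketch; the sample epochs and stretch value are illustrative, not data from the paper.

```python
# Sketch of the timescale "stretch factor" method quoted above: the
# observed epochs are divided by s to map a light curve onto the
# s = 1 template timescale. Sample data are illustrative only.

def destretch(times_days, s):
    """Map observed epochs onto the s = 1 template timescale."""
    return [t / s for t in times_days]

broad_event = [0.0, 15.0, 30.0]     # a slow, brighter event (s = 1.5)
print(destretch(broad_event, 1.5))  # [0.0, 10.0, 20.0]
```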
Their new method is without precedent and must be verified. They are stretching the curve based on its dimming. This is not a true standard candle, where the dimming is the critical measurement, not an input to curve fitting.
We can now put together the whole range of type Ia supernovae light curves on a single plot.
The upper panel of Fig. 5 shows a sample of relatively nearby supernovae from the Calan Tololo survey, for which we can use the redshifts to give us the relative distances. Their relative brightnesses can then be compared, after adjusting for the different distances. Most of the supernovae follow the typical s = 1 light curve on this graph, but there are some brighter ones and some fainter ones. We fit the stretch of the light curve time scale, use this to predict the supernova luminosity, and then normalize the supernova’s light curve to the standard s = 1 luminosity. After also accounting for small differences in supernova reddening (as discussed below), this calibration procedure results in the remarkably tight distribution of light curves shown in the lower panel of Fig. 5. The dispersion at peak is approximately 10 to 12 percent, which makes this one of the most impressive standard candles available in astronomy.
This figure is very impressive! However for a very different reason.
There are no supernova events in this figure!
By its definition, a supernova must have an explosion.
This figure shows this survey never detected a light curve with an explosion or even a brief anomaly.
An explosion would probably be a transient peak or an abrupt stop when the light source is disrupted by the explosion.
Every light curve looks like one from a variable star, with a brightening followed by dimming, with a fairly smooth curve.
The tight distribution implies all these objects were a similar type of variable star.
This observation is confirmed in subsequent figures.
One striking cosmological effect is immediately apparent: events at a high redshift, z ∼ 0.5, last 1.5 times longer than events at low redshift. This is one of the most dramatic examples of a macroscopic time dilation that you will get to see. If you take out that (1+z) time dilation, and also remove the small variations in the stretch factor, the low redshift and high redshift composite light curves now lay right on top of each other. This shows that the supernovae are very similar across redshifts and that the K-correction does an excellent job in bringing them in line with each other.
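The (1 + z) relation the paper applies here (and which this review disputes) is simple to state in numbers: an event lasting some number of days in its own frame would be observed to last (1 + z) times longer. At z = 0.5 that is the factor of 1.5 the excerpt cites. A minimal sketch of the paper's claim:

```python
# The paper's (1 + z) time-dilation claim, in numbers: an event lasting
# 20 days in its own frame would be observed to last 20 * (1 + z) days.
# At z = 0.5 that is the factor of 1.5 cited in the excerpt.

def observed_duration(rest_days, z):
    """Observed duration under the paper's (1 + z) time-dilation claim."""
    return rest_days * (1.0 + z)

print(observed_duration(20.0, 0.5))  # 30.0
```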
"This is one of the most dramatic examples of a [mistake] that you will get to see [in such an important paper]"
Figure 6 is comparing the thermal radiation of two objects.
This thermal radiation is coming from the surface of a star.
"This is one of the most dramatic examples of [the kappa-mechanism] that you will get to see."
This observation of figure 6 means the time dilation claim in the paper is a mistake.
This wavelength distribution is thermal radiation with the peak indicating a temperature so curve B appears hotter than curve R, with the B peak at a shorter wavelength.
When the peaks and patterns in thermal radiation are shifted between B and R, this indicates different temperatures. They treat this shift as a time dilation.
Instead this difference in spectra is just a Doppler effect or a redshift. This perceived dilation is wrong. B has a peak wavelength at about 4000 while curve R has a peak at about 6000.
B peaking at 4000 Å corresponds to 7244 K by Wien's law.
R peaking at 6000 Å corresponds to 4829 K.
The calculated redshift from 4000 to 6000 (Figure 10 uses Angstroms) is z = 0.5.
The figure shows z = 0.45, but the exact wavelengths are not provided for a better calculation here.
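The arithmetic above can be checked directly with Wien's displacement law, T = b / λ_peak, and the redshift definition z = (λ_obs − λ_emit) / λ_emit:

```python
# Checking the stated numbers with Wien's displacement law and the
# redshift definition z = (l_obs - l_emit) / l_emit.

WIEN_B = 2.897771955e-3  # Wien displacement constant, m*K (CODATA)
ANGSTROM = 1e-10         # metres per Angstrom

def wien_temperature(peak_angstrom):
    """Blackbody temperature from the peak wavelength in Angstroms."""
    return WIEN_B / (peak_angstrom * ANGSTROM)

print(round(wien_temperature(4000.0)))  # 7244 K, as stated for curve B
print(round(wien_temperature(6000.0)))  # ~4830 K, close to the 4829 K stated for R
print((6000.0 - 4000.0) / 4000.0)       # z = 0.5
```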
These spectrum curves represent a blackbody temperature curve of an object in thermal equilibrium. The actual calculated temperature is irrelevant because the value is not part of the paper's analysis.
The figure shows the surface of the star moving at the velocity implied by this shift.
Any Doppler effect measured applies only to this heat source, the surface. This separate motion is not related to any assumed motion of either the star or the host galaxy. The redshift of the surface certainly cannot be used to calculate a distance.
A body's measured temperature can change by its motion due to the Doppler effect, as observed by this study.
excerpts from Wikipedia:
kappa–mechanism is the driving mechanism behind the changes in luminosity of many types of pulsating variable stars.
The pulsating [variable] stars swell and shrink, affecting their brightness and spectrum. Pulsations are generally split into: radial, where the entire star expands and shrinks as a whole; and non-radial, where one part of the star expands while another part shrinks.
Figure 6 shows in the spectrum the motion of the variable star's surface as described by the kappa-mechanism.
This star is not a supernova; this is just a pulsating variable star.
This is critical for this paper: There is no time dilation here; conclusions based on that mistake are wrong.
If the low-redshift stars had light curves spanning different numbers of days than the high-redshift stars, then the curves cannot be fitted together by invoking time dilation.
Rather than confirming light curves are the same for low and high redshift galaxies, the study confirms they are different when the time dilation mistake is discarded.
This variation in pulsating variable stars depending on the host galaxy is an interesting scenario.
One problem here is that this paper never questioned whether the assumed supernovae were pulsating variable stars. Perhaps the descriptions are misleading because they were written in the wrong context.
Perhaps the real problem here is the galactic redshifts.
Those values essentially change by region of the sky based on variations in intergalactic hydrogen. They cannot be used as in this paper.
This study worked with data from 81 pulsating variable stars, not supernovae.
from section 5 titled: "Cosmological results from the Hubble plot"
We can now plot the low and high redshift supernovae together on the Hubble diagram and look to see which curves—representing different values of the cosmological parameters—fit best. If we find more redshift in the past this would imply that the expansion has been slowing down and hence there is more mass in the universe to slow it down. However, we now know that there are other possible cosmological parameters that can work in the other direction; for example, a vacuum energy density can make the universe expand faster. Thus there is a degeneracy here; it is hard to tell apart a situation with more mass or less vacuum energy density.
As described above for section 2, a redshift is not a reliable indicator of velocity or time.
Everything in section 5 was based on the mistakes with redshifts and pulsating variable stars.
The mistake with redshifts resulted in all galaxies having a wrong velocity, directly leading to the mistake of universe expansion and its associated mistake of dark energy, which is needed as an undefined entity responsible for the false expansion.
This paper demonstrated many mistakes with redshifts in several ways, and a drastic mistake never realizing there were no supernovae being observed. From the data in the paper they found only pulsating variable stars.
This study did not confirm an accelerating expansion of the universe.