
No Dark Energy?

Dark Energy is the dominant constituent of the universe, accounting for 2/3 of the mass-energy balance at present.

At least that is the canonical concordance cosmology, known as the ΛCDM or Lambda Cold Dark Matter model. Here Λ is the symbol for the cosmological constant: the simplest model for dark energy, and apparently the correct one, according to most cosmologists.

Models of galaxy formation and clustering use N-body simulations run on supercomputers to model the growth of structure (galaxy groups and clusters) in the universe. The cosmological parameters in these models are varied and then the models are compared to observed galaxy catalogs at various redshifts, representing different ages of the universe.

It all works pretty well except that the models assume a fully homogeneous universe on the large scale. While the universe is quite homogeneous for scales above a billion light-years, there is a great deal of filamentary web-like structure at scales above clusters, including superclusters and voids, as you can easily see in this map of our galactic neighborhood.


Galaxies and clusters in our neighborhood. Image credit: IPAC/Caltech, by Thomas Jarrett. “Large Scale Structure in the Local Universe: The 2MASS Galaxy Catalog”, Jarrett, T.H. 2004, PASA, 21, 396

Well why not take that structure into account when doing the modeling? It has long been known that more local inhomogeneities such as those seen here might influence the observational parameters such as the Hubble expansion rate. Thus even at the same epoch, the Hubble parameter could vary from location to location.

Now a team from Hungary and Hawaii has modeled exactly that, in a paper entitled “Concordance cosmology without dark energy” (https://arxiv.org/pdf/1607.08797.pdf). They simulate structure growth while estimating the local values of the expansion parameter in many regions as their model evolves.

Starting with a completely matter-dominated (Einstein–de Sitter) cosmology, they find that they can reasonably reproduce the average expansion history of the universe (the scale factor and the Hubble parameter), and do so somewhat better than the Planck-derived canonical cosmology.

Furthermore, they claim that they can explain the tension between the Type Ia supernovae value of the Hubble parameter (around 73 kilometers per second per Megaparsec) and that determined from the Planck satellite observations of the cosmic microwave background radiation (67 km/s/Mpc).

Future surveys of higher resolution should be able to distinguish between their model and ΛCDM, and they also acknowledge that their model needs more work to fully confirm consistency with the cosmic microwave background observations.

Meanwhile I’m not ready to give up on dark energy and the cosmological constant, since supernova observations, cosmic microwave background observations, and the large-scale galactic distribution (labeled BAO in the figure below) collectively give a consistent result of about 70% dark energy and 30% matter. But their work addresses something that has been a nagging issue for quite a while, and one looks forward to further developments.


Measurements of the dark energy and matter content of the universe


Galaxy Clusters Probe Dark Energy

Rich (large) clusters of galaxies are significant celestial X-ray sources. In fact, large clusters of galaxies typically contain around 10 times as much mass in the form of very hot gas as is contained in their constituent galaxies.

Moreover, the dark matter content of clusters is even greater than the gas content; it typically amounts to 80% to 90% of the cluster mass. In fact, the first detection of dark matter’s gravitational effects was made by Fritz Zwicky in the 1930s, from observations of the Coma cluster. His measurements indicated that its galaxies were moving around much faster than expected from the known galaxy masses within the cluster.


Image credit: X-ray: NASA/CXC/Univ. of Alabama/A. Morandi et al; Optical: SDSS, NASA/STScI (X-ray emission is shown in purple)

The dark matter’s gravitational field controls the evolution of a cluster. As a cluster forms via gravitational collapse, ordinary matter falling into the strong gravitational field interacts via frictional processes and shocks, and thermalizes at a high temperature in the range of 10 to 100 million kelvins. The gas is so hot that it emits X-rays due to thermal bremsstrahlung.

Recently, Drs. Morandi and Sun at the University of Alabama have implemented a new test of dark energy using the observed X-ray emission profiles of clusters of galaxies. Since clusters are dominated by the infall of primordial gas (ordinary matter) into dark-matter-dominated gravitational wells, their X-ray emission profiles, especially in the outer regions, are expected to be similar from cluster to cluster, after correcting for temperature variations and the redshift distance. Their analysis also considers variation in gas fraction with redshift; this is found to be minimal.

Because of the self similar nature of the X-ray emission profiles, X-ray clusters of galaxies can serve as cosmological probes, a type of ‘standard candle’. In particular, they can be used to probe dark energy, and to look at the possibility of the variation of the strength of dark energy over multi-billion year cosmological time scales.

The reason this works is that cluster development and mass growth, with the corresponding temperature increase due to deeper gravitational potential wells, are governed by a tradeoff between dark matter and dark energy. While dark matter causes a cluster to grow, dark energy inhibits further growth.

This varies with the redshift of a cluster, since dark energy is constant per unit volume as the universe expands, but dark matter was denser in the past in proportion to (1 + z)^3, where z is the cluster redshift. In the early universe, dark matter thus dominated, as it had a much higher density, but in the last several billion years, dark energy has come to dominate and impede further growth of clusters.

The table below shows the percentage of the mass-energy of the universe which is in the form of dark energy and in the form of matter (both dark and ordinary) at a given redshift, assuming constant dark energy per unit volume. This is based on the best estimate from Planck of 68% of the total mass-energy density due to dark energy at present (z = 0). Higher redshift means looking farther back in time. At z = 0.5, around 5 billion years ago, matter still dominated over dark energy, but by around z = 0.3 the two are about equal, and since then (for smaller z) dark energy has dominated. It is only since the Sun and Earth formed that the universe has entered the current dark energy dominated era.
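The percentages in the table below can be reproduced with a few lines of arithmetic. This is a minimal sketch, assuming a flat universe with 68% dark energy today (the Planck value quoted above) and matter density scaling as (1 + z)^3:

```python
# Sketch: dark energy vs. matter fractions as a function of redshift,
# assuming 68% dark energy at z = 0 and matter density scaling as
# (1 + z)^3 (illustrative values, as stated in the text).

OMEGA_DE_0 = 0.68   # dark energy fraction at z = 0 (Planck estimate)
OMEGA_M_0 = 0.32    # matter fraction (dark + ordinary) at z = 0

def fractions(z):
    """Return (dark energy %, matter %) at redshift z."""
    matter = OMEGA_M_0 * (1 + z) ** 3   # matter dilutes with volume
    de = OMEGA_DE_0                     # cosmological constant: fixed density
    total = matter + de
    return 100 * de / total, 100 * matter / total

for z in (0.0, 0.3, 0.5, 1.0):
    de_pct, m_pct = fractions(z)
    print(f"z = {z:.1f}: dark energy {de_pct:.0f}%, matter {m_pct:.0f}%")
```

Running this recovers the crossover near z ≈ 0.3, where the two components contribute about equally.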

Table: Total Matter & Dark Energy Percentages vs. z

Redshift z          Dark Energy percent          Matter percent

0.0                        68                          32

0.3                        49                          51

0.5                        39                          61

1.0                        21                          79

The authors analyzed data from a large sample of 320 clusters of galaxies observed with the Chandra X-ray Observatory. The clusters ranged in redshift from 0.056 up to 1.24 (almost 9 billion years ago), and all of the selected clusters had measured temperatures at or above 3 keV (above 35 million kelvins). For such hot clusters, non-gravitational astrophysical effects are expected to be small.

Their analysis evaluated the equation of state parameter, w, of dark energy. If dark energy adheres to the simplest model, that of the cosmological constant (Λ) found in the equations of general relativity, then w = -1 is expected.

The equation of state governs the relationship between pressure and energy density; dark energy is observed to have a negative pressure, for which w < 0, unlike for matter.
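The link between w and acceleration can be made concrete. In the second Friedmann equation the deceleration is sourced by a term proportional to (1 + 3w) times the energy density, so a dominant fluid accelerates the expansion only when w < -1/3. A minimal sketch of that criterion:

```python
# Sketch: why negative pressure drives acceleration. The source term
# in the second Friedmann equation is proportional to (1 + 3w) in
# units of the energy density; expansion accelerates when it is
# negative, i.e. when w < -1/3.

def accelerates(w):
    """True if a fluid with equation-of-state parameter w, when
    dominant, makes the cosmic expansion accelerate."""
    return (1 + 3 * w) < 0

print(accelerates(-1.0))   # cosmological constant: True
print(accelerates(0.0))    # pressureless matter: False
print(accelerates(1/3))    # radiation: False
```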

Their resulting value for the equation of state parameter is

w = -1.02 +/- 0.058,

equal to -1 within the statistical errors.

The results from combining three other experiments, namely

  1. Planck satellite cosmic microwave background (CMB) measurements
  2. WMAP satellite CMB polarization measurements
  3. optical observations of Type Ia supernovae

yield a value

w = -1.09 +/- 0.19,

also consistent with a cosmological constant. And combining both the X-ray cluster results with the CMB and optical results yields a tight constraint of

w = -1.01 +/- 0.03.

Thus a simple cosmological constant appears sufficient to explain dark energy, to within a few percent accuracy.

The authors were also able to constrain the evolution in w. For a model with

w(z) = w(0) + wa * z / (1 + z),

they find that the evolution parameter is zero within statistical errors:

wa = -0.12 +/- 0.4.
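To illustrate how small the allowed evolution is, here is a sketch evaluating the w(z) parametrization above using the central fit values quoted in this section (w(0) = -1.02, wa = -0.12):

```python
# Sketch: the w(z) parametrization above, evaluated with the central
# fit values quoted in this section (w0 = -1.02, wa = -0.12).

W0, WA = -1.02, -0.12

def w_of_z(z):
    """Dark energy equation-of-state parameter at redshift z."""
    return W0 + WA * z / (1 + z)

for z in (0.0, 0.5, 1.0):
    print(f"w({z}) = {w_of_z(z):+.3f}")
```

Even out at z = 1 the central value drifts only to about -1.08, consistent with a constant w = -1 given the quoted error bars.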

This is a powerful test of dark energy’s existence, equation of state, and evolution, using hundreds of X-ray clusters of galaxies. There is no evidence for evolution in dark energy with redshift back to around z = 1, and a simple cosmological constant model is supported by the data from this technique as well as from other methods.


  1. A. Morandi, M. Sun, “Probing dark energy via galaxy cluster outskirts”, arXiv:1601.03741v3 [astro-ph.CO], 4 Feb 2016
  2. http://chandra.harvard.edu/photo/2016/clusters/

Dark Sector Experiments

A dark energy experiment recently searched for a so-called scalar “chameleon field”. Chameleon particles could be an explanation for dark energy. Their field strength would have to become vanishingly small in regions of significant matter density, coupling to matter more weakly than gravity does. But in low-density regions, say between the galaxies, the chameleon field would exert a long-range force.

Chameleons can decay to photons, so that provides a way to detect them, if they actually exist.

Chameleon particles were originally suggested by Justin Khoury of the University of Pennsylvania and Amanda Weltman around 2003. Now Khoury and Holger Müller and collaborators at UC Berkeley have performed an experiment which pushed millions of cesium atoms toward an aluminum sphere in a vacuum chamber. By changing the orientation in which the experiment is performed, the researchers can correct for the effects of gravity and compare the putative chameleon field strength to gravity.

If there were a chameleon field, then the cesium atoms should accelerate at different rates depending on the orientation, but no difference was found. The level of precision of this experiment is such that only chameleons that interact very strongly with matter have been ruled out. The team is looking to increase the precision of the experiment by additional orders of magnitude.

For now the simplest explanation for dark energy is the cosmological constant (or energy of the vacuum) as Einstein proposed almost 100 years ago.


The Large Underground Xenon experiment to detect dark matter (CC BY 3.0)

Dark matter search broadens

“Dark radiation” has been hypothesized for some time by some physicists. In this scenario there would be a “dark electromagnetic” force and dark matter particles could annihilate into dark photons or other dark sector particles when two dark matter particles collide with one another. This would happen infrequently, since dark matter is much more diffusely distributed than ordinary matter.

Ordinary matter clumps since it undergoes frictional and ordinary radiation processes, emitting photons. This allows it to cool off and become denser under mutual gravitational forces. Dark matter rarely decays or interacts, and does not interact electromagnetically, so no friction or ordinary radiation occurs. Essentially, dark matter helps ordinary matter clump together initially, since it dominates on large scales, but on small scales ordinary matter dominates in certain regions. Thus the density of dark matter in the solar system is very low.

Earthbound dark matter detectors have focused on direct interaction of dark matter with atomic nuclei for the signal. John Cherry and co-authors have suggested that dark matter may not interact directly, but rather it first annihilates to light particles, which then scatter on the atomic nuclei used as targets in the direct detection experiments.

So in this scenario dark matter particles annihilate when they encounter each other, producing dark radiation, and then the dark radiation can be detected by currently existing direct detection experiments. If this is the main channel for detection, then much lower mass dark matter particles can be observed, down to of order 10 MeV (million electron-volts), whereas current direct detection is focused on masses of several GeV (billion electron-volts) to 100 GeV or more. (The proton rest mass is about 1 GeV.)

A Nobel Prize awaits, most likely, the first unambiguous direct detection of either dark matter, or dark energy, if it is even possible.


https://en.wikipedia.org/wiki/Chameleon_particle – Chameleon particle

http://news.sciencemag.org/physics/2015/08/tiny-fountain-atoms-sparks-big-insights-dark-energy?rss=1 – dark energy experiment

http://www.preposterousuniverse.com/blog/2008/10/29/dark-photons/ – dark photons

http://scitechdaily.com/physicists-work-on-new-approach-to-detect-dark-matter/ – article on detecting dark matter generated dark radiation

http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.231303 – Cherry et al. paper in Physical Review Letters

Planck 2015 Constraints on Dark Energy and Inflation

The European Space Agency’s Planck satellite gathered data for over 4 years, and a series of 28 papers releasing the results and evaluating constraints on cosmological models has recently been released. In general, the Planck mission’s complete results confirm the canonical cosmological model, known as Lambda Cold Dark Matter, or ΛCDM. In approximate percentage terms the Planck 2015 results indicate 69% dark energy, 26% dark matter, and 5% ordinary matter as the mass-energy components of the universe.


Dark Energy

We know that dark energy is the dominant component of the universe, comprising 69% of the total energy content. And it exerts a negative pressure, causing the expansion to continuously speed up. The universe is not only expanding, but the expansion is even accelerating! What dark energy is we do not know, but the simplest explanation is that it is the energy of empty space, of the vacuum. Significant departures from this simple model are not supported by observations.

The dark energy equation of state is the relation between the pressure exerted by dark energy and its energy density. Planck satellite measurements are able to constrain the dark energy equation of state significantly. Consistent with earlier measurements of this parameter, which is usually denoted as w, the Planck Consortium has determined that w = -1 to within 4 or 5 percent (95% confidence).

According to the Planck Consortium, “By combining the Planck TT+lowP+lensing data with other astrophysical data, including the JLA supernovae, the equation of state for dark energy is constrained to w = −1.006 ± 0.045 and is therefore compatible with a cosmological constant, assumed in the base ΛCDM cosmology.”

A value of -1 for w corresponds to a simple cosmological constant model with a single parameter Λ that is the present-day energy density of empty space, the vacuum. The measured value of 0.69 is this vacuum energy density normalized to the critical mass-energy density. Since the vacuum is permeated by various fields, its energy density is non-zero. (The critical mass-energy density is that which results in a topologically flat space-time for the universe; it is the equivalent of about 5.2 proton masses per cubic meter.)
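The figure of roughly 5.2 proton masses per cubic meter follows from the critical density formula ρ_c = 3H₀²/(8πG). A quick sketch, assuming H₀ ≈ 67.7 km/s/Mpc for illustration:

```python
import math

# Sketch: the critical mass-energy density quoted above, from
# rho_c = 3 H0^2 / (8 pi G), assuming H0 = 67.7 km/s/Mpc.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.0857e22   # one megaparsec in meters
M_PROTON = 1.673e-27   # proton mass, kg

H0 = 67.7 * 1000 / MPC_IN_M                # Hubble constant in s^-1
rho_crit = 3 * H0**2 / (8 * math.pi * G)   # kg per cubic meter

print(f"critical density: {rho_crit:.2e} kg/m^3")
print(f"  = {rho_crit / M_PROTON:.1f} proton masses per cubic meter")
```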

Such a model has a negative pressure, which leads to the accelerated expansion that has been observed for the universe; this acceleration was first discovered in 1998 by two teams using certain supernovae as standard candle distance indicators, and measuring their luminosity as a function of redshift distance.

Modified gravity

The phrase modified gravity refers to models that depart from general relativity. To date, general relativity has passed every test thrown at it, on scales from the Earth to the universe as a whole. The Planck Consortium has also explored a number of modified gravity models with extensions to general relativity. They are able to tighten the restrictions on such models, and find that overall there is no need for modifications to general relativity to explain the data from the Planck satellite.

Primordial density fluctuations

The Planck data are consistent with a model of primordial density fluctuations that is close to, but not precisely, scale invariant. These are the fluctuations which gave rise to overdensities in dark matter and ordinary matter that eventually collapsed to form galaxies and the observed large scale structure of the universe.

The concept is that the spectrum of density fluctuations is a simple power law of the form

P(k) ∝ k^(ns−1),

where k is the wave number (the inverse of the wavelength scale). The Planck observations are well fit by such a power law assumption. The measured spectral index of the perturbations has a slight tilt away from 1, with the existence of the tilt being valid to more than 5 standard deviations of accuracy.

ns = 0.9677 ± 0.0060

The existence and amount of this tilt in the spectral index has implications for inflationary models.
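The "more than 5 standard deviations" statement is simple arithmetic on the quoted measurement:

```python
# Sketch: significance of the spectral tilt, i.e. how far the measured
# ns = 0.9677 +/- 0.0060 lies below exact scale invariance (ns = 1).

ns, sigma = 0.9677, 0.0060
tilt_significance = (1 - ns) / sigma
print(f"tilt away from ns = 1: {tilt_significance:.1f} sigma")
```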


The Planck Consortium authors have evaluated a wide range of potential inflationary models against the data products, including the following categories:

  • Power law
  • Hilltop
  • Natural
  • D-brane
  • Exponential
  • Spontaneously broken supersymmetry
  • Alpha attractors
  • Non-minimally coupled

Figure 12 from Planck 2015 results XX, Constraints on Inflation. The Planck 2015 data constraints are shown with the red and blue contours. Steeper models with V ~ φ³ or V ~ φ² appear ruled out, whereas R² inflation looks quite attractive.

Their results appear to rule out some of these, although many models remain consistent with the data. Power law models with indices greater than or equal to 2 appear to be ruled out. Simple slow roll models such as R² inflation, which is actually the first inflationary model proposed 35 years ago, appear more favored than others. Brane inflation and exponential inflation are also good fits to the data. Again, many other models still remain statistically consistent with the data.

Simple models with a few parameters characterizing the inflation suffice:

“Firstly, under the assumption that the inflaton* potential is smooth over the observable range, we showed that the simplest parametric forms (involving only three free parameters including the amplitude V(φ∗), no deviation from slow roll, and nearly power-law primordial spectra) are sufficient to explain the data. No high-order derivatives or deviations from slow roll are required.”

* The inflaton is the name cosmologists give to the field that drives inflation

“Among the models considered using this approach, the R² inflationary model proposed by Starobinsky (1980) is the most preferred. Due to its high tensor-to-scalar ratio, the quadratic model is now strongly disfavoured with respect to R² inflation for Planck TT+lowP in combination with BAO data. By combining with the BKP likelihood, this trend is confirmed, and natural inflation is also disfavoured.”

Isocurvature and tensor components

They also evaluate whether the cosmological perturbations are purely adiabatic, or include an additional isocurvature component as well. They find that an isocurvature component would be small, less than 2% of the overall perturbation strength. A single scalar inflaton field with adiabatic perturbations is sufficient to explain the Planck data.

They find that the tensor-to-scalar ratio is less than 9%, which again rules out or constrains certain models of inflation.


The simplest LambdaCDM model continues to be quite robust, with the dark energy taking the form of a simple cosmological constant. It’s interesting that one of the oldest and simplest models for inflation, characterized by a power law relating the potential to the inflaton amplitude, and dating from 35 years ago, is favored by the latest Planck results. A value for the power law index of less than 2 is favored. All things being equal, Occam’s razor should lead us to prefer this sort of simple model for the universe’s early history. Models with slow-roll evolution throughout the inflationary epoch appear to be sufficient.

The universe started simply, but has become highly structured and complex through various evolutionary processes.


Planck Consortium 2015 papers are at http://www.cosmos.esa.int/web/planck/publications – This site links to the 28 papers for the 2015 results, as well as earlier publications. Especially relevant are these – XIII Cosmological parameters, XIV Dark energy and modified gravity, and XX Constraints on inflation.

Dark Energy Survey First Light!

Last month the Dark Energy Survey project achieved first light from its remote location in Chile’s Atacama Desert. The term first light is used by astronomers to refer to the first observation by a new instrument.

And what an instrument this is! It is in fact the world’s most powerful digital camera. This Dark Energy Camera, or DECam, is a 570-megapixel optical survey camera with a very wide field of view. The field of view is over 2 degrees, which is rather unusual in optical astronomy. And the camera requires special CCDs that are sensitive in the red and infrared parts of the spectrum. This is because distant galaxies have their light shifted toward the red and the infrared by the cosmological expansion. If the galaxy redshift is one, the light travels for about 8 billion years, and the wavelength of light that the DECam detects is doubled relative to what it was when originally emitted.
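The wavelength doubling mentioned above is just the (1 + z) stretch factor of cosmological redshift; a small sketch:

```python
# Sketch: cosmological redshift stretches wavelengths by (1 + z),
# which is why DECam needs red- and infrared-sensitive CCDs.

def observed_wavelength(emitted_nm, z):
    """Observed wavelength (nm) for light emitted at emitted_nm."""
    return emitted_nm * (1 + z)

# Blue-green light (500 nm) from a z = 1 galaxy arrives at 1000 nm,
# in the near infrared.
print(observed_wavelength(500, 1.0))   # 1000.0
```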

Dark Energy Camera

Image: DECam, near center of image, is deployed at the focus of the 4-meter Victor M. Blanco optical telescope in Chile (Credit: Dark Energy Survey Collaboration)

The DECam has been deployed to further our understanding of dark energy through not just one experimental method, but in fact four different methods. That’s how you solve tough problems – by attacking them on multiple fronts.

It’s taken 8 years to get to this point, and there have been some delays, as normal for large projects. But now this new instrument is mounted at the focal plane of the existing 4-meter telescope of the National Science Foundation’s Cerro Tololo Inter-American observatory in Chile. It will begin its program of planned measurements of several hundred million galaxies starting in December after several weeks of testing and calibration. Each image from the camera-telescope combination can capture up to 100,000 galaxies out to distances of up to 8 billion light years. This is over halfway back to the origin of the universe almost 14 billion years ago.

In a previous blog entry I talked about the DES and the 4 methods in some detail. In brief they are based on observations of:

  1. Type Ia supernovae (the method used to first detect dark energy)
  2. Very large scale spatial correlations of galaxies separated by 500 million light-years (this experiment is known as Baryon Acoustic Oscillations since the galaxy separations reflect the imprint of sound waves in the very early universe, prior to galaxy formation)
  3. The number of clusters of galaxies as a function of redshift (age of the universe)
  4. Gravitational lensing, i.e. distortion of background images by gravitational effects of foreground clusters in accordance with general relativity

NGC 1365

Image: NGC 1365, a barred spiral galaxy located in the Fornax cluster located 60 million light years from Earth (Credit: Dark Energy Survey Collaboration)

What does the Dark Energy Survey team, which has over 120 members from over 20 countries, hope to learn about dark energy? We already have a good handle on its magnitude, at around 73% presently of the universe’s total mass-energy density.

The big issue is: does it behave as a cosmological constant, or as something more complex? In other words, how does the dark energy vary over time, and is there possibly some spatial variation as well? And what is its equation of state, the relationship between its pressure and density?

With a cosmological constant explanation the relationship is Pressure = –Energy_density, a negative pressure, which is necessary in any model of dark energy in order for it to drive the accelerated expansion seen for the universe. Current observations from other experiments, especially those measuring the cosmic microwave background, support an equation of state parameter within around 5% of the value -1, as represented in the equation in the previous sentence. This is consistent with the interpretation as a pressure resulting from the vacuum. Dark energy also appears to have a constant, or nearly constant, density per unit volume of space. It is unlike ordinary matter and dark matter, which both drop in mass density (and thus energy density) as the volume of the universe grows. Thus dark energy becomes ever more dominant over dark matter and ordinary matter as the universe continues to expand.

We can’t wait to see the first publication of results from research into the nature of dark energy using the DECam.


http://www.noao.edu/news/2012/pr1204.php – Press release from National Optical Astronomical Observatory on DECam first light


http://www.ctio.noao.edu/noao/ – Cerro Tololo Inter-American Observatory page

http://lambda.gsfc.nasa.gov/product/map/dr4/pub_papers/sevenyear/basic_results/wmap_7yr_basic_results.pdf – WMAP 7 year results on cosmic microwave background


Future of Our Runaway Universe (the next Trillion Years)

Future for our Sun: Ultraviolet image of the planetary nebula NGC 7293 also known as the Helix Nebula. It is the nearest example of what happens to a star, like our own Sun, as it approaches the end of its life when it runs out of fuel, expels gas outward and evolves into a much hotter, smaller and denser white dwarf star. Image Credit: NASA/JPL-Caltech/SSC

In the future, the average density of matter in the universe (both ordinary matter and dark matter) will continue to drop in proportion to the increasing spatial volume as the universe expands ever more rapidly. The dark energy density, however, behaves differently. Dark energy is an irreducible property of even empty space, so as new space is created, the dark energy density remains the same; it is believed to not only take the same value in all portions of space at a given time, but to also have had the same value (per unit volume) for many billions of years.

Since around 5 billion years ago, when the universe was 9 billion years old, the dark energy has dominated over both types of matter (ordinary and dark) and this dominance is only increasing with the universe’s continued expansion. Today it is 73% of the total mass-energy density and it will approach close to 100% in the future. The assumption is made that the cosmological constant or dark energy term that we measure today remains constant into the future. However it cannot be ruled out that it is changing very slowly or might change suddenly at some future date.

In the cosmological constant case, the scale factor for the size of the universe grows exponentially with time. This is known as the de Sitter solution to the equations of general relativity, and it indicates that the expansion of the universe is accelerating into a runaway condition. There is a single parameter, a timescale. Cosmological measurements indicate that the value is such that the size of the universe for each spatial dimension will double and redouble every 11 billion years (the volume will thus grow by 8 times each 11 billion years).

When the universe is 25 billion years old (now it’s 14 billion years old), distant galaxies will be about twice as far away as today (and 4 times fainter). Well before that time we’ll need to evacuate the Earth as the Sun will go into its red giant phase some 5 billion years from now, followed by a white dwarf phase – as shown in the image of the Helix planetary nebula above. When the universe is around 124 billion years old, distant galaxies on average will be 1000 times farther away from us than now. And after 234 billion years they will be an incredible million times farther away than now!
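The numbers in the preceding paragraph (and in the table below) follow from the doubling rule; a sketch assuming the scale factor doubles every 11 billion years:

```python
# Sketch: with the scale factor doubling every 11 billion years,
# relative distance at age t (years) is 2^((t - 14e9) / 11e9), and
# apparent brightness falls as 1 / distance^2.

DOUBLING_TIME = 11e9   # years per doubling
NOW = 14e9             # current age of the universe, years

def relative_distance(age_years):
    """Distance to a distant galaxy, relative to its distance today."""
    return 2 ** ((age_years - NOW) / DOUBLING_TIME)

for age in (14e9, 25e9, 124e9, 234e9):
    d = relative_distance(age)
    print(f"age {age/1e9:.0f} Gyr: distance x{d:.3g}, brightness x{1/d**2:.2g}")
```

Ten doublings (110 billion years) give a factor of 1024, i.e. roughly the thousandfold distance increase quoted for an age of 124 billion years.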

Year                  Relative Distance      Relative Brightness

14 billion (Now)      1                      1

25 billion            2                      1/4

124 billion           1,000                  one-millionth

234 billion           1,000,000              one-trillionth

The distant galaxies that we detect with the Hubble telescope and large Earth-bound telescopes will become invisible since their apparent luminosity will drop as the square of the increasing distance. For example at the time of 124 billion years, they will be 1 million times fainter (1000 squared). At the time of 234 billion years they will be a trillion times fainter (one million squared). Actually it will be worse than this since their light will be redshifted (stretched out by the cosmological expansion) by the same relative distance factor, so light emitted in the visible will be detected in the millimeter radio region when the universe is 100+ billion years old. This is without considering the evolution in their stellar populations, but only their lower mass, fainter stars will survive, further aggravating the situation.

Galaxies themselves are not changing very much in their size or in internal density, rather it is the spacing between galaxies that is on average growing rapidly. Galaxy groups and clusters that are today gravitationally bound will remain bound. Our home, the Milky Way galaxy, and its large neighbor the Andromeda galaxy, will stay together since they are gravitationally bound, and they may very well merge in several billion years due to tidal effects. All of the 40 or so galaxies and dwarf galaxies in our gravitationally bound Local Group may coalesce after 1 trillion years have passed.

Our light cone horizon, which determines which galaxies are even theoretically visible to us, is shrinking in relative terms. Sufficiently distant galaxies are already receding faster than the speed of light from our vantage point and are entirely hidden from us; if the inflationary model is correct as seems to be the case, the universe is immensely larger than what we are able to detect. This is possible and indeed happening because there are no constraints in special relativity or general relativity on the expansion rate of space itself; only the objects within space are constrained to moving at less than the speed of light relative to their local frames of reference.

An intelligent society in the very distant future, possibly our descendants who have moved to a planet in orbit around another star, would observe only one galaxy, namely their own. This would be a larger galaxy formed from the Milky Way and other members of the Local Group. All other galaxies would no longer be visible: first they would become too distant and too faint, and then they would pass entirely beyond our light horizon. These descendants or other observers would believe their galaxy to be the only one in the universe, unless they had access to (and a willingness to believe in) very ancient research publications.

We are fortunate to live in this epoch – despite dark matter, dark energy, and dark gravity, the universe is young, and we are immersed in light.



The Five Ages of the Universe, Fred Adams and Greg Laughlin, Simon and Schuster, 1999

The Runaway Universe, Donald Goldsmith, Perseus Books, 2000

Dark Matter, Dark Energy, Dark Gravity, Stephen Perrenod, 2011, https://darkmatterdarkenergy.wordpress.com/where-to-find/

2011 Nobel Prize for Dark Energy Discovery


Dark Energy and Matter content of Universe: The intersection of the supernova (SNe), cosmic microwave background (CMB) and baryon acoustic oscillation (BAO) ellipses indicate a topologically flat universe composed 74% of dark energy (y-axis) and 26% of dark matter plus normal matter (x-axis).

The 2011 Nobel Prize in Physics, the most prestigious award given in the physics field, was announced on October 4. The winners are astronomers and astrophysicists who produced the first clear evidence of an accelerating universe. Not only is our universe as a whole expanding rapidly, it is in fact speeding up! It is not often that astronomers win the Nobel Prize since there is not a separate award for their discipline. The discovery of the acceleration in the universe’s expansion was made more or less simultaneously by two competing teams of astronomers at the end of the 20th century, in 1998, so the leaders of both teams share this Nobel Prize.

The new Nobel laureates, Drs. Saul Perlmutter, Adam Riess, and Brian Schmidt, were the leaders of the two teams studying distant supernovae, in remote galaxies, as cosmological indicators. Cosmology is the study of the properties of the universe on the largest scales of space and time. Supernovae are exploding stars at the ends of their lives. They occur only about once every fifty to one hundred years in a given galaxy, so one must survey a very large number of galaxies in an automated fashion to find enough of them to be useful. The two teams introduced new automated search techniques to find enough supernovae and achieve their results.

During a supernova explosion the star can temporarily become as bright as the entire galaxy in which it resides. The astrophysicists studied a particular type of supernova known as Type Ia. These are due to white dwarf stellar remnants exceeding a critical mass. Typically these white dwarfs are found in binary stellar systems with another, more normal, star as a companion. If a white dwarf accretes enough material from the companion, that matter can “push it over the edge” and cause it to go supernova, driven by runaway nuclear fusion of the carbon and oxygen in the white dwarf. Since every event of this type involves an exploding star of the same mass (about 1.4 times the Sun’s mass), the resultant supernova has a consistent brightness or luminosity from one event to the next.

This makes them very useful as so-called standard candles. We know the absolute brightness, which we can calibrate for this class of supernova, and thus we can calculate the distance (called the luminosity distance) by comparing the observed brightness to the absolute. An alternative measure of the distance can be obtained by measuring the redshift of the companion galaxy. The redshift is due to the overall expansion of the universe, and thus the light from galaxies when it reaches us is stretched out to longer, or “redder” wavelengths. The amount of the shift provides what we call the redshift distance.
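The standard-candle arithmetic is just the distance modulus relation m − M = 5 log10(d / 10 pc). A minimal sketch, assuming a typical Type Ia peak absolute magnitude of about −19.3 and an illustrative apparent magnitude of 24 (both numbers are assumptions for illustration, not taken from this article):

```python
def luminosity_distance_pc(apparent_mag, absolute_mag):
    """Distance in parsecs from the distance modulus: m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Type Ia supernovae peak near absolute magnitude M ~ -19.3 (typical value).
# Suppose one is observed at apparent magnitude m = 24.0 (illustrative number):
d_pc = luminosity_distance_pc(24.0, -19.3)
print(f"{d_pc / 1e9:.1f} billion parsecs")  # prints: 4.6 billion parsecs
```

A few billion parsecs is a genuinely cosmological distance, which is why these explosions can probe the expansion history.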

Comparing these two different distance techniques provides a cosmological test of the overall properties of the universe: the expansion rate, the shape or topology, and whether the expansion is slowing down, as was expected, or not. The big surprise is that the expansion from the original Big Bang has stopped slowing down due to gravity and has instead been accelerating in recent epochs! The Nobel winners did not expect such a result; they thought they had made errors in their analyses, and checked and rechecked. The acceleration did not go away. And when they compared the results between the two teams, they realized they had confirmed each other’s profound discovery of the reality of a dark energy driven acceleration.

The acceleration result is now well founded since it can be seen in the high spatial resolution measurements of the cosmic microwave background radiation as well. This is the radiation left over from the Big Bang event associated with the origin of our universe.

The acceleration is now increasingly important, dominating during the past 5 billion years of the 14 billion year history of the universe. Coincidentally, this is about how long our Earth and Sun have been in existence. The acceleration has to overcome the self-gravitational attraction of all the matter of the universe upon itself, and is believed to be due to a nonzero energy field known as dark energy that pervades all of space. As the universe expands to create more volume, more dark energy is also created! Empty space is not empty, due to the underlying quantum physics realities. The details, and why dark energy has the observed strength, are not yet understood.

Amazingly, Einstein had added a cosmological constant term, which acts as a dark energy, to his equations of general relativity even before the Big Bang itself was discovered. But he later dropped the term and called it his greatest blunder, after the expansion of the universe was first demonstrated by Edwin Hubble over 80 years ago. It turns out Einstein was in fact right; his simple term explains the observed data, and the Perlmutter, Riess, and Schmidt measurements indicate that 3/4 of the mass-energy content of the universe is found in dark energy, with only 1/4 in matter.

Our universe is slated to expand in an exponential fashion for trillions of years and more, unless some other physics that we don’t yet understand kicks in. This is rather like the ever-increasing pace of modern technology and modern life and the continuing inflation of prices.

We honor the achievements of Drs. Perlmutter, Riess, and Schmidt and of their research teams in increasing our understanding of our universe and its underlying physics. Interestingly, only a few weeks ago a very important supernova was discovered in the nearby M101 galaxy, and it is also a Type Ia. Because it is so close, only 25 million light years away, it is yielding a lot of high quality data. Perhaps this celestial fireworks display was a harbinger of their Nobel Prize?




http://www.nobelprize.org/mediaplayer/index.php?id=1633 (Telephone interview with Adam Riess)

http://supernova.lbl.gov/ (Supernova Cosmology Project)




M101 Supernova and the Cosmic Distance Ladder

Last week, on August 24, there was a fortuitous and major discovery by UC Berkeley and Lawrence Berkeley National Lab astronomers of a nearby Type Ia supernova, named PTF 11kly, in the nearby Pinwheel Galaxy. This galaxy in Ursa Major is also known as M101 (the 101st member of the Messier catalog). Type Ia supernovae are key to measuring the cosmological distance scale since they act as “standard candles”; that is, they all have more or less the same absolute brightness. Dark energy was first discovered through Type Ia supernova measurements. These supernovae result from the runaway thermonuclear explosions of certain white dwarfs.


Supernova in M101 (Credit: Lawrence Berkeley National Laboratory, Palomar Transient Factory team)

Three photos on three successive nights: the supernova is not detectable on 22 August (left image), is detectable (green arrow) on 23 August (middle image), and is brighter on 24 August (right image).

A white dwarf is the evolutionary end state for most stars, including eventually our Sun, reached after the hydrogen and helium in the core has been exhausted via thermonuclear fusion. Some of the star’s outer envelope is ejected during a nova phase, but the remaining portion of the star collapses dramatically, until it is only about the size of the Earth. This is due to the lack of the pressure support that was previously generated by nuclear fusion at high temperatures. This is not a supernova event; it is the prior phase that forms the white dwarf. The white dwarf core is usually composed primarily of carbon and oxygen. The collapse of the core is halted by electron degeneracy pressure. The electron degenerate matter of which a white dwarf is composed has its pressure determined by quantum rules (the Pauli exclusion principle) that forbid any two electrons from occupying the same state of position and momentum.

A Type Ia supernova occurs when a white dwarf of a particular mass undergoes a supernova explosion. It was shown in the 1930s by Chandrasekhar that the maximum mass supportable in the white dwarf state is 1.38 solar masses (1.38 times our Sun’s mass). In essence, at this limit the electrons are pushed as close together as possible. If a white dwarf near this limit gains sufficient additional mass, it will ignite thermonuclear burning of its carbon and oxygen nuclei during a very rapid interval of a few seconds and explode as a supernova. The explosion is catastrophic, with all or nearly all of the star’s matter being thrown out into space. The supernova at maximum is very bright, for a while perhaps as bright as an entire galaxy. The additional mass that triggers the supernova is typically supplied by accretion from a companion star in a binary system with the white dwarf.

Because they all explode with the same mass, Type Ia supernovae all have more or less the same absolute brightness. This is key to their usefulness as standard candles.

In 1998 two teams of astronomers used these Type Ia supernovae to make the most significant observational discovery in cosmology since the detection of the cosmic microwave background radiation over 30 years earlier. They searched for these standard candle supernovae in very distant galaxies in order to measure the evolution and topology of the universe. Both teams determined the need for a non-zero cosmological constant, or dark energy term, in the equations of general relativity; with their initial results and others gathered later, its strength is seen to be nearly 3/4 of the total mass-energy density of the universe. These results have been confirmed by other techniques, including detailed studies of the cosmic microwave background.

One needs two measurements for each galaxy to perform this test: a measurement of the redshift distance and a measurement of the luminosity distance. The redshift distance is determined by the amount of shift toward the red of identifiable spectral lines in the host galaxy’s spectrum, due to the expansion of the universe (the host galaxy is the galaxy in which a given supernova is contained). The apparent brightness of the supernova relative to its absolute brightness provides the luminosity distance. Basically, the two teams found that the distant galaxies were further away than expected, implying a greater rate of continuing expansion – indeed an acceleration – of the universe during the past several billion years compared to what would occur without dark energy.
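The test can be sketched numerically. In a spatially flat universe the luminosity distance at redshift z is d_L = (1+z)(c/H0) ∫ dz′/E(z′) from 0 to z, with E(z) = √(Ωm(1+z)³ + Ω_Λ). The pure-Python sketch below assumes H0 = 70 km/s/Mpc (the teams’ actual fitting machinery was far more elaborate); it shows that at a given redshift a supernova is farther away, and hence fainter, in a dark energy universe than in a matter-only one:

```python
import math

C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # assumed Hubble constant, km/s/Mpc

def luminosity_distance_mpc(z, omega_m, omega_lambda, steps=10000):
    """Luminosity distance in a spatially flat universe, via a trapezoidal
    integral of 1/E(z), where E(z) = sqrt(omega_m*(1+z)**3 + omega_lambda)."""
    def inv_e(zp):
        return 1.0 / math.sqrt(omega_m * (1 + zp) ** 3 + omega_lambda)
    h = z / steps
    integral = 0.5 * (inv_e(0.0) + inv_e(z))
    for i in range(1, steps):
        integral += inv_e(i * h)
    integral *= h
    return (1 + z) * (C_KM_S / H0) * integral

z = 0.5
with_dark_energy = luminosity_distance_mpc(z, 0.26, 0.74)
matter_only = luminosity_distance_mpc(z, 1.0, 0.0)
# Dark energy makes the supernova more distant (fainter) at the same redshift:
print(with_dark_energy > matter_only)  # prints: True
```

This fainter-than-expected signature at fixed redshift is exactly what both teams measured in 1998.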

What is exciting about the M101 supernova discovery last week is that it is so nearby, so easy to measure, and was caught very soon after the initial explosion. By studying how bright it is each day as the supernova explosion progresses, first brightening and then fading (this is known as the light curve), it can help us tie down more tightly the determination of the distance. This in turn helps to provide further precision and confidence around the measurement of the strength of dark energy.



http://arxiv.org/abs/astro-ph/9812133  Perlmutter et al. 1999 “Measurements of Omega and Lambda from 42 High-Redshift Supernovae” Astrophys.J.517:565-586

http://arxiv.org/abs/astro-ph/9805201  Riess et al. 1998 “Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant” Astron.J.116:1009-1038




Dark Energy Drives Runaway Universe


Accelerating universe graphic. Credit: NASA/STScI/Ann Field

Dark energy first entered physics as a possibility arising from Einstein’s equations of general relativity. When Einstein considered how the universe as a whole would behave under the general relativity description of gravity, he added a term to his equations, known as the cosmological constant. At the time the prevailing view was that the universe was static, neither expanding nor contracting. The term was intended to balance the self-gravitational energy of the universe, and it thus acts as a repulsive force rather than an attractive one. His basis for introducing the cosmological constant was erroneous in two respects. The first problem is that the static solution was unstable, as if balanced on a knife edge: nudge the matter density in some region slightly upward and that region would collapse; lower the density ever so slightly and that region would expand indefinitely. The second problem is that by 1929 Edwin Hubble had demonstrated that the universe is actually expanding at a significant rate overall.

Subsequently, Einstein called the introduction of the cosmological constant his “greatest blunder”. After the realization that we live in an expanding universe, the possibility of the cosmological constant having a non-zero value was sometimes entertained in cosmological theory, but it was mostly ignored (set to zero). Over the next several decades, attention turned to better measuring the expansion rate of the universe and the inventory of matter, both ordinary matter and dark matter, with the amount of the latter implied by long range gravitational effects seen both within galaxies and between galaxies. Was there enough matter of both types to halt the expansion? It seemed not; rather, there was only about 1/4 of the required density of matter, and that was mostly in the form of dark, not ordinary, matter. Matter of either type would slow down the expansion of the universe due to its gravitational effects.

After 1980, the inflationary version of the Big Bang gained acceptance due to its ability to explain the flat topology of the universe and the homogeneity of the cosmic microwave background radiation, the relic light from the Big Bang itself. The inflationary model strongly indicated that the total energy density should be about 4 times greater than seen from the matter components alone. It is the total of energy and matter (the energy content of matter) which determines the universe’s fate, since E = mc^2.

In 1998 the astounding discovery was made that the universe’s expansion rate is accelerating! This was determined by two different teams, each of which was making measurements of distant supernovae (exploding stars). And it was confirmed by measurements of tiny fluctuations in the intensity of the microwave background radiation. The two techniques are consistent, and a third technique based on X-ray emission from clusters of galaxies, as well as a fourth based on very large scale measurements of relative galaxy positions, also gives results consistent with the first two. The inflationary predictions are satisfied, with dark energy presently three times more dominant than the rest mass energy equivalent of dark matter plus ordinary matter. Further measurements have refined our understanding of the relative strength of dark energy in comparison to dark matter and ordinary matter. The best estimates are that, today, dark energy is 74% of the universe’s total mass-energy balance.

In the cosmological constant formulation, dark energy is constant in time, while the matter density drops as the universe expands, in proportion to the cube of the scale factor. So if we consider the universe in its early days the energy contained in the dark matter would have dominated over dark energy, as the mass density would have been much greater than today. The crossover from matter dominated to dark energy dominated came after the universe was about 9 billion years old, or about 5 billion years ago. This emergence of dark energy as the dominant force, due to its nature as a repulsive property of “empty” space-time, results in an accelerating expansion of the universe, which has been called the “runaway universe”. Our universe is apparently slated to become hugely larger than its current enormous size.
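The crossover epoch follows directly from the scalings in this paragraph: matter density falls as the cube of the scale factor a while the dark energy density stays constant, so the two matched when Ωm a⁻³ = Ω_Λ, i.e. a = (Ωm/Ω_Λ)^(1/3). A quick check with the 26%/74% split quoted earlier:

```python
# Matter density scales as a**-3 (a = scale factor) while the dark energy
# density stays constant, so matter and dark energy were equal when
# omega_m * a**-3 == omega_lambda, i.e. a = (omega_m / omega_lambda)**(1/3).
omega_m, omega_lambda = 0.26, 0.74   # present-day fractions quoted in the text

a_eq = (omega_m / omega_lambda) ** (1 / 3)
z_eq = 1 / a_eq - 1                  # corresponding redshift
print(f"scale factor at equality: {a_eq:.2f}, redshift: {z_eq:.2f}")
# prints: scale factor at equality: 0.71, redshift: 0.42
```

A redshift near 0.42 corresponds to a lookback time of roughly five billion years in the standard cosmology, consistent with the crossover epoch quoted above.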

Why is dark energy important, then? Since five billion years ago, and on into the indefinite future, it has dominated the mass-energy content of the universe. It drives the acceleration of the universe’s expansion. It inhibits the re-collapse (“Big Crunch”) of our entire universe or even substantial portions of it. Thus it naturally extends the life of the entire universe to trillions of years or much more – far beyond what would occur were the universe dominated by matter alone with density at the critical value or above. Dark energy thus works to maximize the available time and space for life to develop and to evolve on planets found throughout the universe.