Category Archives: Dark Energy

Emergent Gravity in the Solar System

In a prior post I outlined Erik Verlinde’s recent proposal for Emergent Gravity that may obviate the need for dark matter.

Emergent gravity is a statistical, thermodynamic phenomenon that emerges from the underlying quantum entanglement of micro-states found in dark energy and in ordinary matter. Most of the entropy is in the dark energy, but the presence of ordinary baryonic matter can displace entropy in its neighborhood, and the dark energy exerts a restoring force that provides an additional contribution to gravity.

Emergent gravity yields both an area entropy term that reproduces general relativity (and Newtonian dynamics) and a volume entropy term that provides extra gravity. The interesting point is that this is coupled to the cosmological parameters, basically the dark energy term which now dominates our de Sitter-like universe and which acts like a cosmological constant Λ.

In a paper that appeared last month, a trio of astronomers, Hees, Famaey and Bertone, claim that emergent gravity fails by seven orders of magnitude in the solar system. They look at the advance of the perihelion for six planets out through Saturn and claim that Verlinde’s formula predicts perihelion advances seven orders of magnitude larger than should be seen.


No emergent gravity needed here. Image credit: NASA GSFC

But Verlinde’s formula does not apply in the solar system.

“..the authors claiming that they have ruled out the model by seven orders of magnitude using solar system data. But they seem not to have taken into account that the equation they are using does not apply on solar system scales. Their conclusion, therefore, is invalid.” – Sabine Hossenfelder, theoretical physicist (quantum gravity) Forbes blog 

Why is this the case? Verlinde makes 3 main assumptions: (1) a spherically symmetric, isolated system, (2) a system that is quasi-static, and (3) a de Sitter spacetime. Well, check for (1) and check for (2) in the case of the Solar System. However, the Solar System is manifestly not a dark energy-dominated de Sitter space.

It is overwhelmingly dominated by ordinary matter. In our Milky Way galaxy the average density of ordinary matter is some 45,000 times larger than the dark energy density (which corresponds to only about 4 protons per cubic meter). In our Solar System the matter is concentrated in the Sun, but even averaged over a sphere reaching out to the orbit of Saturn, the density is a whopping 3.7 \cdot 10^{17} times the dark energy density.

The whole derivation of the Verlinde formula comes from looking at the incremental entropy (contained in the dark energy) that is displaced by ordinary matter. Well, with over 17 orders of magnitude more energy density, one can be assured that all of the dark energy entropy was long ago displaced within the Solar System, and one is well outside the domain of Verlinde’s formula, which only becomes relevant when the acceleration drops near to or below c \cdot H. The Verlinde acceleration parameter takes the value of 1.1 \cdot 10^{-8} centimeters/second/second for the observed value of the Hubble parameter. The Newtonian acceleration at Saturn is 0.006 centimeters/second/second, over 500,000 times larger.
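These numbers are easy to check with a back-of-envelope calculation in CGS units; a minimal Python sketch (the constants are standard textbook values, not taken from Verlinde’s paper):

```python
# Compare Verlinde's acceleration scale a0/6 = c*H/6 with the
# Newtonian acceleration of the Sun at Saturn's orbit (CGS units).
c = 3.0e10          # speed of light, cm/s
H = 2.2e-18         # Hubble parameter, 1/s
G = 6.674e-8        # Newton's constant, cm^3 g^-1 s^-2
M_sun = 1.99e33     # solar mass, g
r_saturn = 1.43e14  # Sun-Saturn distance, cm

a_verlinde = c * H / 6               # ~1.1e-8 cm/s^2
a_newton = G * M_sun / r_saturn**2   # ~6.5e-3 cm/s^2

print(f"Verlinde scale:   {a_verlinde:.1e} cm/s^2")
print(f"Newton at Saturn: {a_newton:.1e} cm/s^2")
print(f"ratio:            {a_newton / a_verlinde:.1e}")
```

The ratio comes out in the several-hundred-thousand range, so the Solar System is far outside the low-acceleration regime where the formula applies.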

The conditions where dark energy is being displaced only occur when the gravity has dropped to much smaller values; his approximation is not simply a second order term that can be applied in a domain where dark energy is of no consequence.

There is no entropy left to displace, and thus the Verlinde formula is irrelevant at the orbit of Saturn, or at the orbit of Pluto, for that matter. The authors have not disproven Verlinde’s proposal for emergent gravity.







Emergent Gravity: Verlinde’s Proposal

In a previous blog entry I give some background around Erik Verlinde’s proposal for an emergent, thermodynamic basis of gravity. Gravity remains mysterious 100 years after Einstein’s introduction of general relativity – because it is so weak relative to the other main forces, and because there is no quantum mechanical description within general relativity, which is a classical theory.

One reason that it may be so weak is because it is not fundamental at all, that it represents a statistical, emergent phenomenon. There has been increasing research into the idea of emergent spacetime and emergent gravity and the most interesting proposal was recently introduced by Erik Verlinde at the University of Amsterdam in a paper “Emergent Gravity and the Dark Universe”.

A lot of work has been done assuming anti-de Sitter (AdS) spaces with negative cosmological constant Λ – just because it is easier to work under that assumption. This year, Verlinde extended this work from the unrealistic AdS model of the universe to a more realistic de Sitter (dS) model. Our runaway universe is approaching a dark energy dominated dS solution with a positive cosmological constant Λ.

The background assumption is that quantum entanglement dictates the structure of spacetime, and its entropy and information content. Quantum states of entangled particles are coherent: observing a property of one, say the spin orientation, tells you about the other particle’s attributes. This has been observed in long distance experiments, with separations exceeding 100 kilometers.

If space is defined by the connectivity of quantum entangled particles, then it becomes almost natural to consider gravity as an emergent statistical attribute of the spacetime. After all, we learned from general relativity that “matter tells space how to curve, curved space tells matter how to move” – John Wheeler.

What if entanglement tells space how to curve, and curved space tells matter how to move? What if gravity is due to the entropy of the entanglement? Actually, in Verlinde’s proposal, the entanglement entropy from particles is minor; it is the entanglement of the vacuum state, of dark energy, that dominates, and by a very large factor.

One analogy is thermodynamics, which allows us to represent the bulk properties of the atmosphere, which is nothing but a collection of a very large number of molecules and their micro-states. Verlinde posits that the information and entropy content of space are due to the excitations of the vacuum state, which is manifest as dark energy.

The connection between gravity and thermodynamics has been around for 3 decades, through research on black holes, and from string theory. Jacob Bekenstein and Stephen Hawking determined that a black hole possesses entropy proportional to its area divided by the gravitational constant G. String theory can derive the same formula for quantum entanglement in a vacuum. This is known as the AdS/CFT (conformal field theory) correspondence.

So in the AdS model, gravity is emergent and its strength, the acceleration at a surface, is determined by the mass density on that surface surrounding matter with mass M. This is just the inverse square law of Newton. In the more realistic dS model, the entropy in the volume, or bulk, must also be considered. (This is the Gibbs entropy relevant to excited states, not the Boltzmann entropy of a ground state configuration).

Newtonian dynamics and general relativity can be derived from the surface entropy alone, but do not reflect the volume contribution. The volume contribution adds an additional term to the equations, strengthening gravity over what is expected, and as a result, the existence of dark matter is ‘spoofed’. But there is no dark matter in this view, just stronger gravity than expected.

This is what the proponents of MOND have been saying all along. Mordehai Milgrom observed that galactic rotation curves go flat at a characteristic low acceleration scale of order 10^{-8} centimeters per second per second (a fraction of a centimeter per second per year). MOND is phenomenological: it captures a trend in galaxy rotation curves, but it does not have a theoretical foundation.

Verlinde’s proposal is not MOND, but it provides a theoretical basis for behavior along the lines of what MOND states.

Now the volume in question turns out to be of order the Hubble volume, the volume of a sphere of radius c/H, where H is the Hubble parameter denoting the rate at which galaxies expand away from one another. Reminder: Hubble’s law is v = H \cdot d where v is the recession velocity and d the distance between two galaxies. The lifetime of the universe is approximately 1/H.


The value of c/H is over 4 billion parsecs (one parsec is 3.26 light-years), so it is at the scale of galaxies, clusters of galaxies, and the largest structures in the universe that departures from general relativity (GR) would be expected.

Dark energy in the universe takes the form of a cosmological constant Λ, whose value is measured to be 1.2 \cdot 10^{-56} cm^{-2} . Hubble’s parameter is 2.2 \cdot 10^{-18} sec^{-1} . A characteristic acceleration is thus H^2/\sqrt{\Lambda} or 4 \cdot 10^{-8} cm per sec per sec (cm = centimeters, sec = seconds).

One can also define a cosmological acceleration scale simply by c \cdot H ; its value is about 6 \cdot 10^{-8} cm per sec per sec (around 2 cm per sec per year), about 15 billion times weaker than Earth’s gravity at its surface! Note that the two estimates are quite similar.

This is no coincidence, since we live in an approximately dS universe with a measured dark energy fraction of ~ 0.7 when cast in terms of the critical density for the universe, assuming the canonical ΛCDM cosmology. That is the case if there is actually dark matter responsible for 1/4 of the universe’s mass-energy density; otherwise the dark energy fraction could be close to 0.95 times the critical density. In a fully dS universe, \Lambda \cdot c^2 = 3 \cdot H^2 , so the two estimates should agree to within a factor of \sqrt{3} , which is approximately the difference between them.
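As a sanity check, both acceleration estimates can be computed directly from the quoted values of Λ and H; a minimal Python sketch (using the dimensionally consistent combination H²/√Λ):

```python
import math

c = 3.0e10     # speed of light, cm/s
H = 2.2e-18    # Hubble parameter, 1/s
Lam = 1.2e-56  # cosmological constant, cm^-2

a_lambda = H**2 / math.sqrt(Lam)  # acceleration scale from Lambda, ~4e-8 cm/s^2
a_hubble = c * H                  # acceleration scale from c*H,   ~6.6e-8 cm/s^2

# In a fully de Sitter universe (Lambda * c^2 = 3 * H^2) the ratio
# would be exactly sqrt(3) ~ 1.7; here it comes out ~1.5.
print(a_lambda, a_hubble, a_hubble / a_lambda)
```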

So from a string theoretic point of view, excitations of the dark energy field are fundamental. Matter particles are bound states of these excitations; they move freely and have much lower entropy. Matter creation removes both energy and entropy from the dark energy medium. General relativity describes the response of the area law entanglement of the vacuum to matter (but does not take into account volume entanglement).

Verlinde proposes that dark energy (Λ) and the accelerated expansion of the universe are due to the slow rate at which the emergent spacetime thermalizes. The time scale for the dynamics is 1/H and a distance scale of c/H is natural; we are measuring the time scale for thermalization when we measure H. High degeneracy and slow equilibration means the universe is not in a ground state, thus there should be a volume contribution to entropy.

When the surface mass density falls below c \cdot H / (8 \pi \cdot G) , things change: Verlinde states that the spacetime medium becomes elastic. The effective additional ‘dark’ gravity is proportional to the square root of the ordinary matter (baryon) density and also to the square root of the characteristic acceleration c \cdot H.

This dark gravity additional acceleration satisfies the equation g_D = \sqrt{a_0 \cdot g_B / 6} , where g_B is the usual Newtonian acceleration due to baryons and a_0 = c \cdot H is the dark gravity characteristic acceleration. The total gravity is g = g_B + g_D . For large accelerations this reduces to the usual g_B and for very low accelerations it reduces to \sqrt{a_0 \cdot g_B / 6} .

The value a_0/6 \approx 1.1 \cdot 10^{-8} cm per sec per sec, derived from first principles by Verlinde, is quite close to the MOND value of Milgrom, determined from galactic rotation curve observations: 1.2 \cdot 10^{-8} cm per sec per sec.

So suppose we are in a region where g_B is only 1 \cdot 10^{-8} cm per sec per sec. Then g_D takes the same value, and the gravity is just double what is expected. Since the square of the orbital velocity is proportional to the acceleration, the orbital velocity is observed to be \sqrt{2} higher than expected.
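The two regimes of the formula can be sketched in a few lines of Python (assuming a_0 = c \cdot H \approx 6.6 \cdot 10^{-8} cm/s², as above):

```python
import math

a0 = 6.6e-8  # c * H, cm/s^2

def total_gravity(g_B):
    """Total acceleration: Newtonian baryonic term plus Verlinde's dark-gravity term."""
    g_D = math.sqrt(a0 * g_B / 6)
    return g_B + g_D

# High-acceleration regime: the dark term is negligible
print(total_gravity(980.0) / 980.0)  # ~1.0 (Earth surface gravity)

# Low-acceleration regime: at g_B = a0/6 the dark term equals g_B,
# so total gravity doubles and v = sqrt(g*r) rises by sqrt(2)
g_B = a0 / 6
print(total_gravity(g_B) / g_B)      # ~2.0
```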

In terms of gravitational potential, the usual Newtonian potential goes as 1/r, resulting in a 1/r^2 force law, whereas for very low accelerations the potential now goes as log(r) and the resultant force law is 1/r. We emphasize that while the appearance of dark matter is spoofed, there is no dark matter in this scenario, the reality is additional dark gravity due to the volume contribution to the entropy (that is displaced by ordinary baryonic matter).


Flat to rising rotation curve for the galaxy M33

Dark matter was first proposed by Swiss astronomer Fritz Zwicky when he observed the Coma Cluster and the high velocity dispersions of the constituent galaxies. He suggested the term dark matter (“dunkle Materie”). Horace Babcock in 1939 measured the rotation curve for the Andromeda galaxy and it turned out to be flat, also suggestive of dark matter (or dark gravity). Decades later, in the 1970s and 1980s, Vera Rubin (who recently passed away) and others mapped many rotation curves for galaxies and saw the same behavior. She herself preferred the idea of a deviation from general relativity over an explanation based on exotic dark matter particles. One needs about 5 times more matter, or about 5 times more gravity, to explain these curves.

Verlinde is also able to derive the Tully-Fisher relation by modeling the entropy displacement of a dS space. The Tully-Fisher relation is the strong observed correlation between galaxy luminosity and rotation velocity (measured via emission line widths) for spiral galaxies, L \propto v^4 . With Newtonian gravity one would expect M \propto v^2 . And since luminosity is essentially proportional to ordinary matter in a galaxy, there is a clear deviation by a ratio of v².
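In the regime where the dark gravity term dominates, the g_D formula yields a flat rotation curve with v^4 \propto M, i.e. Tully-Fisher. A sketch (the 10^{11} solar mass galaxy is an arbitrary illustrative value, not from Verlinde's paper):

```python
G = 6.674e-8   # Newton's constant, cm^3 g^-1 s^-2
a0 = 6.6e-8    # c * H, cm/s^2

def v_flat(M):
    """Asymptotic circular velocity when g_D = sqrt(a0 * G * M / 6) / r dominates:
    v^2 = g_D * r = sqrt(a0 * G * M / 6); the radius r cancels, so the curve is flat."""
    return (a0 * G * M / 6) ** 0.25

M = 1e11 * 1.99e33  # 10^11 solar masses, in grams
print(v_flat(M) / 1e5, "km/s")  # a plausible ~200 km/s for a large spiral

# Tully-Fisher scaling: doubling v requires 16x the mass, since v ~ M^(1/4)
assert abs(v_flat(16 * M) / v_flat(M) - 2.0) < 1e-9
```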


 Apparent distribution of spoofed dark matter,  for a given ordinary (baryonic) matter distribution

When one moves to the scale of clusters of galaxies, MOND is only partially successful, explaining a portion of the apparent mass discrepancy but coming up short by a factor of about 2. Verlinde’s emergent gravity does better. By modeling a general mass distribution he gains a factor of 2 to 3 relative to MOND, and it appears that he can explain the velocity distribution of galaxies in rich clusters without the need to resort to any dark matter whatsoever.

And, impressively, he is able to calculate what the apparent dark matter ratio should be in the universe as a whole. The value is \Omega_D^2 = (4/3) \Omega_B where \Omega_D is the apparent mass-energy fraction in dark matter and \Omega_B is the actual baryon mass density fraction. Both are expressed normalized to the critical density determined from the square of the Hubble parameter, 8 \pi G \rho_c = 3 H^2 .

Plugging in the observed \Omega_B \approx 0.05 one obtains \Omega_D \approx 0.26 , very close to the observed value from the cosmic microwave background observations. The Planck satellite results have the proportions for dark energy, dark matter, and ordinary matter as 0.68, 0.27, and 0.05 respectively, assuming the canonical ΛCDM cosmology.
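The claimed relation is a one-line numerical check, using the observed baryon fraction:

```python
import math

Omega_B = 0.05                             # observed baryon mass-energy fraction
Omega_D = math.sqrt(4.0 / 3.0 * Omega_B)   # Verlinde: Omega_D^2 = (4/3) * Omega_B
print(round(Omega_D, 2))  # 0.26, close to the observed dark matter fraction 0.27
```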

The main approximations Verlinde makes are a fully dS universe and an isolated, static (bound) system with a spherical geometry. He also does not address the issue of galaxy formation from the primordial density perturbations. At first guess, the fact that he can get the right universal \Omega_D suggests this may not be a great problem, but it requires study in detail.

Breaking News!

Margot Brouwer and co-researchers have just published a test of Verlinde’s emergent gravity with gravitational lensing. Using a sample of over 33,000 galaxies they find that general relativity and emergent gravity can provide an equally statistically good description of the observed weak gravitational lensing. However, emergent gravity does it with essentially no free parameters and thus is a more economical model.

“The observed phenomena that are currently attributed to dark matter are the consequence of the emergent nature of gravity and are caused by an elastic response due to the volume law contribution to the entanglement entropy in our universe.” – Erik Verlinde


Erik Verlinde 2011 “On the Origin of Gravity and the Laws of Newton” arXiv:1001.0785

Stephen Perrenod, 2013, 2nd edition, “Dark Matter, Dark Energy, Dark Gravity” Amazon, provides the traditional view with ΛCDM  (read Dark Matter chapter with skepticism!)

Erik Verlinde 2016 “Emergent Gravity and the Dark Universe” arXiv:1611.02269

Margot Brouwer et al. 2016 “First test of Verlinde’s theory of Emergent Gravity using Weak Gravitational Lensing Measurements” arXiv:1612.03034

Dark Gravity: Is Gravity Thermodynamic?

This is the first in a series of articles on ‘dark gravity’ that look at emergent gravity and modifications to general relativity. In my book Dark Matter, Dark Energy, Dark Gravity I explained that I had picked Dark Gravity to be part of the title because of the serious limitations in our understanding of gravity. It is not like the other 3 forces; we have no well accepted quantum description of gravity. And it is some 33 orders of magnitude weaker than those other forces.
I noted that:

The big question here is ~ why is gravity so relatively weak, as compared to the other 3 forces of nature? These 3 forces are the electromagnetic force, the strong nuclear force, and the weak nuclear force. Gravity is different ~ it has a dark or hidden side. It may very well operate in extra dimensions…

My major regret with the book is that I was not aware of, and did not include a summary of, Erik Verlinde’s work on emergent gravity. In emergent gravity, gravity is not one of the fundamental forces at all.

Erik Verlinde is a leading string theorist in the Netherlands who in 2009 proposed that gravity is an emergent phenomenon, resulting from the thermodynamic entropy of the microstates of quantum fields.

 In 2009, Verlinde showed that the laws of gravity may be derived by assuming a form of the holographic principle and the laws of thermodynamics. This may imply that gravity is not a true fundamental force of nature (like e.g. electromagnetism), but instead is a consequence of the universe striving to maximize entropy. – Wikipedia article “Erik Verlinde”

This year, Verlinde extended this work from an unrealistic anti-de Sitter model of the universe to a more realistic de Sitter model. Our runaway universe is approaching a dark energy dominated de Sitter solution.

He proposes that general relativity is modified at large scales in a way that mimics the phenomena that have generally been attributed to dark matter. This is in line with MOND, or Modified Newtonian Dynamics. MOND is a long standing proposal from Mordehai Milgrom, who argues that there is no dark matter, rather that gravity is stronger at large distances than predicted by general relativity and Newton’s laws.

In a recent article on cosmology and the nature of gravity, Dr. Thanu Padmanabhan lays out 6 issues with the canonical Lambda-CDM cosmology based on general relativity and a homogeneous, isotropic, expanding universe. Observations are highly supportive of such a canonical model, with a very early inflation phase and with about 2/3 of the mass-energy content in dark energy and 1/3 in matter, mostly dark matter.

And yet,

1. The equation of state (pressure vs. density) of the early universe is indeterminate in principle, as well as in practice.

2. The history of the universe can be modeled based on just 3 energy density parameters: i) density during inflation, ii) density at radiation–matter equality, and iii) dark energy density at late epochs. Both the first and last are dark energy driven inflationary de Sitter solutions, apparently unconnected, one very rapid and one very long lived. (No mention of dark matter density here).

3. One can construct a formula for the information content at the cosmic horizon from these 3 densities, and the value works out to be 4π to high accuracy.

4. There is an absolute reference frame, for which the cosmic microwave background is isotropic. There is an absolute reference scale for time, given by the temperature of the cosmic microwave background.

5. There is an arrow of time, indicated by the expansion of the universe and by the cooling of the cosmic microwave background.

6. The universe has, rather uniquely for physical systems, made a transition from quantum behavior to classical behavior.

“The evolution of spacetime itself can be described in a purely thermodynamic language in terms of suitably defined degrees of freedom in the bulk and boundary of a 3-volume.”

Now in fluid mechanics one observes:

“First, if we probe the fluid at scales comparable to the mean free path, you need to take into account the discreteness of molecules etc., and the fluid description breaks down. Second, a fluid simply might not have reached local thermodynamic equilibrium at the scales (which can be large compared to the mean free path) we are interested in.”

Now it is well known that general relativity as a classical theory must break down at very small scales (very high energies). But also with such a thermodynamic view of spacetime and gravity, one must consider the possibility that the universe has not reached a statistical equilibrium at the largest scales.

One could have reached equilibrium at macroscopic scales much less than the Hubble distance scale c/H (14 billion light-years, H is the Hubble parameter) but not yet reached it at the Hubble scale. In such a case the standard equations of gravity (general relativity) would apply only for the equilibrium region and for accelerations greater than the characteristic Hubble acceleration scale of  c \cdot H (2 centimeters per second / year).

This lack of statistical equilibrium implies the universe could behave similarly to non-equilibrium thermodynamic systems observed in the laboratory.

The information content of the expanding universe reflects that of the quantum state before inflation, and this result is 4π in natural units by information theoretic arguments similar to those used to derive the entropy of a black hole.

The black hole entropy is  S = A / (4 \cdot L_p^2) where A is the area of the black hole using the Schwarzschild radius formula and L_p is the Planck length, \sqrt{G \hbar / c^3} , where G is the gravitational constant and \hbar is the reduced Planck constant.

This beautiful Bekenstein-Hawking entropy formula connects thermodynamics, the quantum world  and gravity.
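For concreteness, here is the Bekenstein-Hawking formula evaluated for a one-solar-mass black hole (a sketch in CGS units, entropy in units of Boltzmann’s constant; the worked example is ours, not from the article):

```python
import math

G = 6.674e-8      # Newton's constant, cm^3 g^-1 s^-2
hbar = 1.055e-27  # reduced Planck constant, erg s
c = 3.0e10        # speed of light, cm/s
M_sun = 1.99e33   # solar mass, g

L_p = math.sqrt(G * hbar / c**3)  # Planck length, ~1.6e-33 cm
r_s = 2 * G * M_sun / c**2        # Schwarzschild radius, ~3 km
A = 4 * math.pi * r_s**2          # horizon area, cm^2
S = A / (4 * L_p**2)              # entropy, ~1e77 in units of k_B

print(f"L_p = {L_p:.2e} cm, r_s = {r_s:.2e} cm, S ~ {S:.2e} k_B")
```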

This same value of the universe’s entropy can also be used to determine the number of e-foldings during inflation to be 6 π² or 59, consistent with the minimum value to enforce a sufficiently homogeneous universe at the epoch of the cosmic microwave background.

If inflation occurs at a reasonable ~ 10^{15}  GeV, one can derive the observed value of the cosmological constant (dark energy) from the information content value as well, argues Dr. Padmanabhan.

This provides a connection between the two dark energy driven de Sitter phases, inflation and the present day runaway universe.

The table below summarizes the 4 major phases of the universe’s history, including the matter dominated phase, which may or may not have included dark matter. Erik Verlinde in his new work, and Milgrom for over 3 decades, question the need for dark matter.

Epoch  /  Dominated  /   Ends at  /   a-t scaling  /   Size at end

Inflation /  Inflaton (dark energy) / 10^{-32} seconds / e^{Ht} (de Sitter) / 10 cm

Radiation / Radiation / 40,000 years / \sqrt t /  10 million light-years

Matter / Matter (baryons) Dark matter? /  9 billion years / t^{2/3} /  > 100 billion light-years

Runaway /  Dark energy (Cosmological constant) /  “Infinity” /  e^{Ht} (de Sitter) / “Infinite”

In the next article I will review the status of MOND – Modified Newtonian Dynamics, from the phenomenology and observational evidence.


E. Verlinde. “On the Origin of Gravity and the Laws of Newton”. JHEP. 2011 (04): 29

T. Padmanabhan, 2016. “Do We Really Understand the Cosmos?”


S. Perrenod, 2011. Dark Matter, Dark Energy, Dark Gravity

S. Carroll and G. Remmen, 2016,

Galaxy Clusters Probe Dark Energy

Rich (large) clusters of galaxies are significant celestial X-ray sources. In fact, large clusters of galaxies typically contain around 10 times as much mass in the form of very hot gas as is contained in their constituent galaxies.

Moreover, the dark matter content of clusters is even greater than the gas content; typically it amounts to 80% to 90% of the cluster mass. In fact, the first detection of dark matter’s gravitational effects was made by Fritz Zwicky in the 1930s. His measurements indicated that the galaxies were moving around much faster than expected from the known galaxy masses within the cluster.


Image credit: X-ray: NASA/CXC/Univ. of Alabama/A. Morandi et al; Optical: SDSS, NASA/STScI (X-ray emission is shown in purple)

The dark matter’s gravitational field controls the evolution of a cluster. As a cluster forms via gravitational collapse, ordinary matter falling into the strong gravitational field interacts via frictional processes and shocks, and thermalizes at a high temperature in the range of 10 to 100 million degrees Kelvin. The gas is so hot that it emits X-rays due to thermal bremsstrahlung.

Recently, Drs. Morandi and Sun at the University of Alabama have implemented a new test of dark energy using the observed X-ray emission profiles of clusters of galaxies. Since clusters are dominated by the infall of primordial gas (ordinary matter) into dark matter dominated gravitational wells, the X-ray emission profiles – especially in the outer regions of clusters – are expected to be similar, after correcting for temperature variations and the redshift distance. Their analysis also considers variation in gas fraction with redshift; this is found to be minimal.

Because of the self similar nature of the X-ray emission profiles, X-ray clusters of galaxies can serve as cosmological probes, a type of ‘standard candle’. In particular, they can be used to probe dark energy, and to look at the possibility of the variation of the strength of dark energy over multi-billion year cosmological time scales.

The reason this works is that cluster development and mass growth, and corresponding temperature increase due to stronger gravitational potential wells, are essentially a tradeoff of dark matter and dark energy. While dark matter causes a cluster to grow, dark energy inhibits further growth.

This varies with the redshift of a cluster, since dark energy is constant per unit volume as the universe expands, but dark matter was denser in the past in proportion to (1 + z)^3, where z is the cluster redshift. In the early universe, dark matter thus dominated, as it had a much higher density, but in the last several billion years, dark energy has come to dominate and impede further growth of clusters.

The table below shows the percentage of the mass-energy of the universe which is in the form of dark energy and in the form of matter (both dark and ordinary) at a given redshift, assuming constant dark energy per unit volume. This is based on the best estimate from Planck of 68% of the total mass-energy density due to dark energy at present (z = 0). Higher redshift means looking farther back in time. At z = 0.5, around 5 billion years ago, matter still dominated over dark energy, but by around z = 0.3 the two are about equal and since then (for smaller z) dark energy has dominated. It is only since after the Sun and Earth formed that the universe has entered the current dark energy dominated era.
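These percentages follow directly from holding the dark energy density fixed while scaling the matter density as (1 + z)³; a minimal Python sketch, assuming the Planck value of 68% dark energy at z = 0:

```python
def dark_energy_percent(z, omega_de=0.68):
    """Percent of mass-energy in dark energy at redshift z, assuming a
    constant dark-energy density and matter density scaling as (1+z)^3."""
    matter = (1.0 - omega_de) * (1.0 + z)**3
    return 100.0 * omega_de / (omega_de + matter)

for z in (0.0, 0.3, 0.5, 1.0):
    print(f"z = {z}: dark energy {dark_energy_percent(z):.0f}%")
# dark energy dominates today (68%), roughly ties matter near z ~ 0.3,
# and falls to ~21% of the total at z = 1
```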

Table: Total Matter & Dark Energy Percentages vs. z 


z / Dark Energy percent / Matter percent

0 / 68 / 32

0.3 / 49 / 51

0.5 / 39 / 61

1.0 / 21 / 79

1.24 / 16 / 84
The authors analyzed data from a large sample consisting of 320 clusters of galaxies observed with the Chandra X-ray Observatory. The clusters ranged in redshift from 0.056 up to 1.24 (almost 9 billion years ago), and all of the selected clusters had temperatures measured to be equal to or greater than 3 keV (above 35 million Kelvin). For such hot clusters, non-gravitational astrophysical effects are expected to be small.

Their analysis evaluated the equation of state parameter, w, of dark energy. If dark energy adheres to the simplest model, that of the cosmological constant (Λ) found in the equations of general relativity, then w = -1 is expected.

The equation of state governs the relationship between pressure and energy density; dark energy is observed to have a negative pressure, for which w < 0, unlike for matter.

Their resulting value for the equation of state parameter is

w = -1.02 +/- 0.058,

equal to -1 within the statistical errors.

The results from combining three other experiments, namely

  1. Planck satellite cosmic microwave background (CMB) measurements
  2. WMAP satellite CMB polarization measurements
  3. optical observations of Type 1a supernovae

yield a value

w = -1.09 +/- 0.19,

also consistent with a cosmological constant. And combining both the X-ray cluster results with the CMB and optical results yields a tight constraint of

w = -1.01 +/- 0.03.

Thus a simple cosmological constant explanation for dark energy appears to be a sufficient explanation to within a few percent accuracy.

The authors were also able to constrain the evolution in w and find, for a model with

w(z) = w(0) + wa * z / (1 + z), that the evolution parameter is zero within statistical errors:

wa = -0.12 +/- 0.4.
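This parameterization is simple to evaluate; a sketch using the central fitted values quoted above as illustrative defaults:

```python
def w_of_z(z, w0=-1.02, wa=-0.12):
    """Dark-energy equation-of-state evolution: w(z) = w0 + wa * z / (1 + z)."""
    return w0 + wa * z / (1.0 + z)

print(w_of_z(0.0))  # -1.02 today
print(w_of_z(1.0))  # ~ -1.08 at z = 1, still -1 within the quoted errors
```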

This is a powerful test of dark energy’s existence, equation of state, and evolution, using hundreds of X-ray clusters of galaxies. There is no evidence for evolution in dark energy with redshift back to around z = 1, and a simple cosmological constant model is supported by the data from this technique as well as from other methods.


  1. A. Morandi and M. Sun, arXiv:1601.03741 [astro-ph.CO], 4 Feb 2016, “Probing dark energy via galaxy cluster outskirts”

Gravitational Waves and Dark Matter, Dark Energy

What does the discovery of gravitational waves imply about dark matter and dark energy?

The first detection of gravitational waves resulted from a pair of merging black holes, and is yet another magnificent confirmation of the theory of general relativity. Einstein’s theory of general relativity has passed every test thrown at it during the last 100 years.

While the existence of gravitational waves was fully expected to be confirmed, the discovery took several decades and represents a technological tour de force. Detected at the two LIGO sites, one in Louisiana and one in Washington State, the main event lasted only 0.2 seconds, and was seen as a change of length in the “arms” of the detector (laser interferometers) of only one part in a thousand billion billion.

LIGO signal 2

The LIGO detection of gravitational waves. The blue curve is from the Louisiana site and the red curve from the Washington state site. The two curves are shifted by 7 milliseconds to account for the speed-of-light delay between the two sites. Note that most of the power in the signal occurs within less than 0.2 seconds. The strain is a measure of proportional change in length of the detector arm and is less than 1 part in 10²¹.

Nevertheless, this is the most energetic event ever seen by mankind. The merger of two large black holes totaling over 60 times the Sun’s mass resulted in the conversion of 3 solar masses of material into gravitational wave energy. Imagine – three Suns’ worth of matter obliterated in the blink of an eye. During this brief period, the generated power was greater than that from the light of all of the stars of all of the galaxies in our known universe.

What the discovery of gravitational waves has to say about dark matter and dark energy is essentially that it further confirms their existence.

Although there is as of now no direct detection of dark matter, we infer the existence of dark matter by using the equations of general relativity (GR), in a number of cases, including:

  1. Gravitational lensing – Typically, a foreground cluster of galaxies distorts and magnifies the image of a background galaxy. GR is used to calculate the bending and magnification, primarily caused by the dark matter in the foreground cluster.
  2. Cosmic microwave background radiation (CMBR) – The CMBR has spatial fluctuation peaks (harmonics) and the first peak tells us about ordinary matter and the third peak about the density of dark matter. A GR-based cosmological model is used to determine the dark matter average density.

Dark matter is also inferred from the way in which galaxies rotate and from the velocities of galaxies within galaxy clusters, but general relativity is not needed to calculate the dark matter densities in such cases. However, results from these methods are consistent with results from the methods listed above.

In the case of dark energy, it turns out to be a parameter in the equations of general relativity as first formulated by Einstein. The parameter, lambda, (Λ) is known as the cosmological constant, and represents the minimum energy of the vacuum. For many years astronomers and cosmologists thought it might take the value of zero. However in 1998 multiple teams confirmed that the value is positive and not zero, and it turns out that dark energy has more than twice the energy content of dark matter. Its non-zero value is actually another stunning success for general relativity.

Thus the detection of gravitational waves indirectly provides further support for the canonical cosmological model ΛCDM, with both dark matter and dark energy, and fully consistent with general relativity.

References – ScienceMag article

B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), Phys. Rev. Lett. 116, 061102, published 11 February 2016

NEW BOOK just released:

S. Perrenod, 2016, 72 Beautiful Galaxies (especially designed for iPad, iOS; ages 12 and up)


Eternal Inflation and the Multiverse


Figure 1 from Andrei Linde’s paper “Brief History of the Multiverse”. Each blob represents a pocket universe, occupying a different region of space, and being born at a different time during eternal inflation. A particular pocket universe may be connected to its parent by some sort of bridge, or that connection may have broken or decayed. Different pocket universes will have different physics.

“Inflationary cosmology therefore suggests that, even though the observed universe is incredibly large, it is only an infinitesimal fraction of the entire universe” states Alan Guth, the original father of the inflationary Big Bang, in his article from 2007, “Eternal inflation and its implications”.

Inflation is the very brief – yet extremely significant – period in our own universe’s history, perhaps of duration only a billionth of a trillionth of a trillionth of a second. During the inflation event, a very submicroscopic bubble of energy and space expanded tremendously, doubling in scale perhaps 100 times or more in each of the 3 spatial dimensions. That’s an increase in volume of around 90 factors of 10! This inflationary epoch drove the universe to become macroscopic in scale, and also to become highly homogeneous and topologically flat at large scales.
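To see where "around 90 factors of 10" comes from (simple arithmetic, with the 100 doublings per dimension taken as the illustrative assumption stated above):

```python
import math

doublings = 100                      # assumed doublings of linear scale during inflation
linear_growth = 2 ** doublings       # growth factor in each of the 3 spatial dimensions

# Volume grows as the cube of linear scale; log10 gives the number of "factors of 10"
factors_of_ten = 3 * doublings * math.log10(2)

print(f"Linear growth: 10^{math.log10(linear_growth):.0f} per dimension")  # about 10^30
print(f"Volume growth: 10^{factors_of_ten:.0f}")                           # about 10^90
```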

The inflationary Big Bang models solved a number of outstanding problems in cosmology, such as the horizon problem and the flatness problem. Basically, at large scales we see a homogeneous and topologically flat universe in all directions. Without inflation, parts of the universe seen on opposite ends of the sky would never have been causally connected. With the inflation models, however, those regions were originally within each other's causally connected 'light cones' before the inflation phase pushed them out to much larger physical scales, at which point they became highly separated.

Andrei Linde is another one of the fathers of inflationary Big Bang theory, and the originator of the chaotic inflation models. Chaotic inflation, and another leading model, ‘new’ inflation, both appear to result in eternal inflation; this gives rise to the multiverse scenario. That is, inflation keeps going in most of space, while multiple universes form and separate from the inflation process.

The multiverse scenario states that our universe is only one of a very large number of universes, and in such a case, our particular universe may be referred to as a 'mini-universe' or 'pocket universe'. Of course our universe is already enormously large; it's just that the multiverse is giganormously larger than that. With eternal inflation the multiverse keeps inflating in other regions, portions of which will later settle out into other 'pocket universes'.

Linde has recently published a summary “A Brief History of the Multiverse” which describes the developments in inflationary Big Bang theory and models for the multiverse since 1982. I encourage those who are interested in multiverses to read his paper.

With this eternal inflation our universe was (most likely) not the first, it was just one of many and inflation has been going on for a very long time. Inflation would continue forever into the future. New mini-universes would continue to be spawned and settle out from the overall inflation. It appears that eternal inflation is not eternal into the past, however, just into the future (see Guth paper referenced below).

Each of these mini-universes could have different values of the fundamental physical parameters. This ties into string theory models which admit of a very large number of possibilities for physical parameters.

Some sets of these parameters are favorable to life, but many (most) would not be. In order to get life as we know it we need carbon and other heavy elements, formed in stars (and not during the Big Bang nucleosynthesis), and we need a long-lived mini-universe. Other mini-universes might have different values of dark matter and dark energy than in our own universe. This could lead to very short lifetimes with no chance to form galaxies and stars.

Sidebar: These models are motivated by string theory and inflationary cosmology. It makes more sense in this context to think of ‘mini-universes’ rather than ‘parallel universes’ that often get popularized in discussions of quantum physics e.g. the Many Worlds discussions. Sorry to break the news to you, but there is not another you in each of these other mini-universes, since, even though they are endless in number, they all have different physical conditions and different histories.


Guth, Alan 2007. “Eternal Inflation and its Implications”

Linde, Andrei 2015. “Brief History of the Multiverse”

Dark Sector Experiments

A recent dark energy experiment searched for a so-called scalar "chameleon field". Chameleon particles could be an explanation for dark energy. The chameleon field strength would have to become vanishingly small in regions of significant matter density, coupling to matter more weakly than gravity does. But in low-density regions, say between the galaxies, the chameleon particle would exert a long-range force.

Chameleons can decay to photons, so that provides a way to detect them, if they actually exist.

Chameleon particles were originally suggested by Justin Khoury of the University of Pennsylvania and another physicist around 2003. Now Khoury and Holger Muller and collaborators at UC Berkeley have performed an experiment which pushed millions of cesium atoms toward an aluminum sphere in a vacuum chamber. By changing the orientation in which the experiment is performed, the researchers can correct for the effects of gravity and compare the putative chameleon field strength to gravity.

If there were a chameleon field, then the cesium atoms should accelerate at different rates depending on the orientation, but no difference was found. The level of precision of this experiment is such that only chameleons that interact very strongly with matter have been ruled out. The team is looking to increase the precision of the experiment by additional orders of magnitude.

For now the simplest explanation for dark energy is the cosmological constant (or energy of the vacuum) as Einstein proposed almost 100 years ago.


The Large Underground Xenon experiment to detect dark matter (CC BY 3.0)

Dark matter search broadens

“Dark radiation” has been hypothesized for some time by some physicists. In this scenario there would be a “dark electromagnetic” force and dark matter particles could annihilate into dark photons or other dark sector particles when two dark matter particles collide with one another. This would happen infrequently, since dark matter is much more diffusely distributed than ordinary matter.

Ordinary matter clumps since it undergoes frictional and ordinary radiation processes, emitting photons. This allows it to cool off and become denser under mutual gravitational forces. Dark matter rarely decays or interacts, and does not interact electromagnetically, thus no friction or ordinary radiation occurs. Essentially, dark matter helps ordinary matter clump together initially, since it dominates on large scales, but on small scales ordinary matter becomes dominant in certain regions. Thus the density of dark matter in the solar system is very low.

Earthbound dark matter detectors have focused on direct interaction of dark matter with atomic nuclei for the signal. John Cherry and co-authors have suggested that dark matter may not interact directly, but rather it first annihilates to light particles, which then scatter on the atomic nuclei used as targets in the direct detection experiments.

So in this scenario dark matter particles annihilate when they encounter each other, producing dark radiation, which can then be detected by currently existing direct detection experiments. If this is the main channel for detection, then much lower mass dark matter particles can be observed, down to masses of order 10 MeV (million electron-volts), whereas current direct detection is focused on masses of several GeV (billion electron-volts) to 100 GeV or more. (The proton rest mass is about 1 GeV.)

A Nobel Prize awaits, most likely, the first unambiguous direct detection of either dark matter, or dark energy, if it is even possible.

References – Chameleon particle – dark energy experiment – dark photons – article on detecting dark matter generated dark radiation – Cherry et al. paper in Physical Review Letters

The Supervoid

The largest known structure in the universe goes by the name of the Supervoid. It is an enormously large under-dense region about 1.8 billion light-years in extent. Voids (actually low density regions) in galaxy and cluster density have been mapped over several decades.

The cosmic microwave background radiation map from the Planck satellite and earlier experiments is extremely uniform. The temperature is about 2.7 Kelvins everywhere in the universe at present. There are small microKelvin scale fluctuations due to primordial density perturbations. The over-dense regions grow over cosmic timescales to become galaxies, groups and clusters of galaxies, and superclusters made of multiple clusters. Under-dense regions have fewer galaxies and groups per unit volume than the average.

The largest inhomogeneous region detected in the cosmic microwave background map is known as the Cold Spot and has a very slightly lower temperature, by about 70 microKelvins (a microKelvin being only a millionth of a degree). It may be partly explained by a supervoid of radius 320 Megaparsecs, or around 1 billion light-years.
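The unit conversion here is straightforward (1 parsec ≈ 3.26 light-years):

```python
LY_PER_PC = 3.262       # light-years per parsec
radius_mpc = 320        # supervoid radius in megaparsecs

# megaparsecs -> parsecs -> light-years -> billions of light-years
radius_gly = radius_mpc * 1e6 * LY_PER_PC / 1e9

print(f"Supervoid radius ≈ {radius_gly:.2f} billion light-years")  # ≈ 1.04
```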

Superclusters heat cosmic microwave background photons slightly when they pass through, if there is significant dark energy in the universe. Supervoids cool the microwave background photons slightly. The reason is that, once dark energy becomes significant, during the second half of the universe’s expansion to date, it begins to smooth out superclusters and supervoids. It pushes the universe back towards greater uniformity while accelerating the overall expansion.

A photon will gain energy (blueshift) when it heads into a supercluster on its way to the Earth. This is an effect of general relativity. And as it leaves the other side of the supercluster as it continues its journey, it will lose energy (redshift) as it climbs out of the gravitational potential well. But while it is passing through the supercluster, that structure is spreading out due to the Big Bang overall expansion, and its gravitational potential is weakening. So the redshift or energy loss is smaller than the original energy gain or blueshift. So net-net, photons gain energy passing through a supercluster.

The opposite happens with a supervoid. Photons lose energy on the way in. They gain energy on the way out, but less than they lost. Net-net, photons lose energy and become colder when passing through supervoids. All of this is relative to the overall redshift that all photons experience as they travel from the Big Bang's last scattering surface to the Earth. During each period in which the universe doubles in size, the Big Bang radiation doubles in wavelength, or halves in temperature.
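This late-time effect (the integrated Sachs-Wolfe effect) can be caricatured with a toy calculation. The potential depths below are made-up illustrative numbers, not measured values; the point is only the sign of the net shift:

```python
# Toy model of the integrated Sachs-Wolfe effect for a supercluster crossing.
# A photon falling into a gravitational potential well gains |phi_entry| in
# fractional energy, and pays back only |phi_exit| climbing out; if dark energy
# has made the well shallower during the crossing, the photon keeps the difference.
phi_entry = -1.0e-5   # dimensionless potential (phi / c^2) on entry (illustrative)
phi_exit = -0.8e-5    # shallower potential on exit, after partial decay (illustrative)

net_shift = abs(phi_entry) - abs(phi_exit)   # fractional energy change of the photon
print(f"Net fractional energy change: {net_shift:+.1e}")  # positive: net blueshift

# For a supervoid the signs flip: the photon loses energy entering, regains
# less leaving, and ends up slightly redshifted, i.e. colder.
```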

In a newly published paper titled “Detection of a Supervoid aligned with the Cold Spot in the Cosmic Microwave Background”, astronomers looked at the distribution of galaxies in the direction of the well-established Cold Spot. The supervoid core redshift distance is in the range z = 0.15 to z = 0.25, corresponding to a distance of roughly 2 to 3 billion light-years from Earth.

They find a reduction in galaxy density of about 20%, and of dark matter around 14%, in the supervoid, relative to the overall average density values in the universe. The significance of the detection is high, around 5 standard deviations. The center of the low density region is well aligned with the position of the Cold Spot in the galactic Southern Hemisphere.

Both the existence of this supervoid and its alignment with the Cold Spot are highly significant. The chance of the two being closely aligned to this degree is calculated as just 1 chance in 20,000. The image below is Figure 2 from the authors’ paper and maps the density of galaxies in the left panel and the temperature differential of the microwave background radiation in the right panel. The white dot in the middle of each panel marks the center of the Cold Spot in the cosmic microwave background.


A lower density of galaxies is indicated by a blue color in the left panel. Red and orange colors denote a higher density of galaxies. The right panel shows slightly lower temperature of the cosmic microwave background in blue, and slightly higher temperature in red.

The authors have calculated the expected temperature reduction due to the supervoid; using a first-order model it is about 20 microKelvins. While this is not sufficient to explain the entire Cold Spot temperature decrease, it is a significant portion of the overall 70 microKelvin reduction.

Dark Energy is gradually smearing out the distinction between superclusters and supervoids. Dark Energy has come to dominate the universe’s mass-energy balance fairly recently, since about 5 billion years ago. If there is no change in the Dark Energy density, over many billions of years it will push all the galaxies so far apart from one another that no other galaxies will be detectable from our Milky Way.


I. Szapudi et al. 2015, M.N.R.A.S., Volume 450, Issue 1, p. 288, "Detection of a supervoid aligned with the cold spot of the cosmic microwave background"

S. Perrenod and M. Lesser, 1980, P.A.S.P. 91:764, “A Redshift Survey of a High-Multiplicity Supercluster”


Planck 2015 Constraints on Dark Energy and Inflation

The European Space Agency's Planck satellite gathered data for over 4 years, and a series of 28 papers releasing the results and evaluating constraints on cosmological models have recently been released. In general, the Planck mission's complete results confirm the canonical cosmological model, known as Lambda Cold Dark Matter, or ΛCDM. In approximate percentage terms the Planck 2015 results indicate 69% dark energy, 26% dark matter, and 5% ordinary matter as the mass-energy components of the universe (see an earlier post on this blog).

Dark Energy

We know that dark energy is the dominant component of the universe, comprising 69% of the total mass-energy content. It exerts a negative pressure, causing the expansion to continuously speed up: the universe is not only expanding, but the expansion is accelerating! What dark energy is we do not know, but the simplest explanation is that it is the energy of empty space, of the vacuum. Significant departures from this simple model are not supported by observations.

The dark energy equation of state is the relation between the pressure exerted by dark energy and its energy density. Planck satellite measurements are able to constrain the dark energy equation of state significantly. Consistent with earlier measurements of this parameter, which is usually denoted as w, the Planck Consortium has determined that w = -1 to within 4 or 5 percent (95% confidence).

According to the Planck Consortium, “By combining the Planck TT+lowP+lensing data with other astrophysical data, including the JLA supernovae, the equation of state for dark energy is constrained to w = −1.006 ± 0.045 and is therefore compatible with a cosmological constant, assumed in the base ΛCDM cosmology.”
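It is easy to check how compatible that measurement is with a pure cosmological constant, for which w = −1 exactly (simple arithmetic on the quoted numbers):

```python
w = -1.006        # measured dark energy equation-of-state parameter (Planck 2015)
sigma = 0.045     # quoted uncertainty

# How many standard deviations is the measurement from w = -1?
deviation_sigma = abs(w - (-1.0)) / sigma
print(f"Deviation from w = -1: {deviation_sigma:.2f} sigma")  # ~0.13 sigma
```

A deviation of only about 0.13 standard deviations is entirely consistent with a cosmological constant.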

A value of −1 for w corresponds to a simple cosmological constant model with a single parameter Λ that is the present-day energy density of empty space, the vacuum. The Λ value measured to be 0.69 is normalized to the critical mass-energy density. Since the vacuum is permeated by various fields, its energy density is non-zero. (The critical mass-energy density is that which results in a topologically flat space-time for the universe; it is the equivalent of 5.2 proton masses per cubic meter.)
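The "5.2 proton masses per cubic meter" figure follows from the Friedmann-equation definition of the critical density, ρ_c = 3H₀²/(8πG). A quick check with the measured Hubble constant:

```python
import math

H0 = 67.8e3 / 3.086e22   # Hubble constant: 67.8 km/s/Mpc converted to 1/s
G = 6.674e-11            # Newton's gravitational constant, m^3 kg^-1 s^-2
M_PROTON = 1.673e-27     # proton mass, kg

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density, kg/m^3
protons_per_m3 = rho_crit / M_PROTON

print(f"Critical density: {rho_crit:.2e} kg/m^3")              # ~8.6e-27
print(f"Equivalent to {protons_per_m3:.1f} protons per m^3")   # ~5.2
```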

Such a model has a negative pressure, which leads to the accelerated expansion that has been observed for the universe; this acceleration was first discovered in 1998 by two teams using certain supernovae as standard-candle distance indicators, measuring their apparent brightness as a function of redshift.

Modified gravity

The phrase modified gravity refers to models that depart from general relativity. To date, general relativity has passed every test thrown at it, on scales from the Earth to the universe as a whole. The Planck Consortium has also explored a number of modified gravity models with extensions to general relativity. They are able to tighten the restrictions on such models, and find that overall there is no need for modifications to general relativity to explain the data from the Planck satellite.

Primordial density fluctuations

The Planck data are consistent with a model of primordial density fluctuations that is close to, but not precisely, scale invariant. These are the fluctuations which gave rise to overdensities in dark matter and ordinary matter that eventually collapsed to form galaxies and the observed large scale structure of the universe.

The concept is that the spectrum of density fluctuations is a simple power law of the form

P(k) ∝ k^(n_s − 1),

where k is the wave number (the inverse of the wavelength scale). The Planck observations are well fit by such a power law assumption. The measured spectral index of the perturbations has a slight tilt away from 1, with the existence of the tilt significant at more than 5 standard deviations:

n_s = 0.9677 ± 0.0060

The existence and amount of this tilt in the spectral index has implications for inflationary models.
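Two quick consequences of that measurement (simple arithmetic on the quoted numbers): the statistical significance of the tilt, and how much the power-law spectrum differs between two scales:

```python
ns = 0.9677        # measured scalar spectral index (Planck 2015)
sigma = 0.0060     # quoted uncertainty

# Significance of the deviation from exact scale invariance (ns = 1)
tilt_sigma = (1.0 - ns) / sigma
print(f"Tilt significance: {tilt_sigma:.1f} sigma")   # ~5.4 sigma

# P(k) proportional to k^(ns - 1): power ratio between wavenumbers a factor 100 apart
k_ratio = 100.0
power_ratio = k_ratio ** (ns - 1.0)
print(f"Power ratio over a factor-100 range in k: {power_ratio:.3f}")  # ~0.86
```

The tilt is tiny (about a 14% change in power across two decades in scale), yet measured with enough precision to discriminate between inflationary models.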


The Planck Consortium authors have evaluated a wide range of potential inflationary models against the data products, including the following categories:

  • Power law
  • Hilltop
  • Natural
  • D-brane
  • Exponential
  • Spontaneously broken supersymmetry
  • Alpha attractors
  • Non-minimally coupled

Figure 12 from Planck 2015 Results XX, Constraints on Inflation. The Planck 2015 data constraints are shown with the red and blue contours. Steeper models with V ~ φ³ or V ~ φ² appear ruled out, whereas R² inflation looks quite attractive.

Their results appear to rule out some of these, although many models remain consistent with the data. Power law models with indices greater than or equal to 2 appear to be ruled out. Simple slow-roll models such as R² inflation, which is actually the first inflationary model proposed 35 years ago, appear more favored than others. Brane inflation and exponential inflation are also good fits to the data. Again, many other models still remain statistically consistent with the data.

Simple models with a few parameters characterizing the inflation suffice:

“Firstly, under the assumption that the inflaton* potential is smooth over the observable range, we showed that the simplest parametric forms (involving only three free parameters including the amplitude V(φ∗), no deviation from slow roll, and nearly power-law primordial spectra) are sufficient to explain the data. No high-order derivatives or deviations from slow roll are required.”

* The inflaton is the name cosmologists give to the inflation field

“Among the models considered using this approach, the R² inflationary model proposed by Starobinsky (1980) is the most preferred. Due to its high tensor-to-scalar ratio, the quadratic model is now strongly disfavoured with respect to R² inflation for Planck TT+lowP in combination with BAO data. By combining with the BKP likelihood, this trend is confirmed, and natural inflation is also disfavoured.”

Isocurvature and tensor components

They also evaluate whether the cosmological perturbations are purely adiabatic, or include an additional isocurvature component as well. They find that an isocurvature component would be small, less than 2% of the overall perturbation strength. A single scalar inflaton field with adiabatic perturbations is sufficient to explain the Planck data.

They find that the tensor-to-scalar ratio is less than 9%, which again rules out or constrains certain models of inflation.


The simplest LambdaCDM model continues to be quite robust, with the dark energy taking the form of a simple cosmological constant. It’s interesting that one of the oldest and simplest models for inflation, characterized by a power law relating the potential to the inflaton amplitude, and dating from 35 years ago, is favored by the latest Planck results. A value for the power law index of less than 2 is favored. All things being equal, Occam’s razor should lead us to prefer this sort of simple model for the universe’s early history. Models with slow-roll evolution throughout the inflationary epoch appear to be sufficient.

The universe started simply, but has become highly structured and complex through various evolutionary processes.


Planck Consortium 2015 papers are at – This site links to the 28 papers for the 2015 results, as well as earlier publications. Especially relevant are these – XIII Cosmological parameters, XIV Dark energy and modified gravity, and XX Constraints on inflation.

Planck Mission Full Results Confirm Canonical Cosmology Model

Dark Matter, Dark Energy values refined

The Planck satellite, launched by the European Space Agency, made observations of the cosmic microwave background (CMB) for a little over 4 years, beginning in August, 2009 until October, 2013.

Preliminary results based on only the data obtained over the first year and a quarter of operation, and released in 2013, established high confidence in the canonical cosmological model. This ΛCDM (Lambda-Cold Dark Matter) model is of a topologically flat universe, initiated in an inflationary Big Bang some 13.8 billion years ago and dominated by dark energy (the Λ component), and secondarily by cold dark matter (CDM). Ordinary matter, of which stars, planets and human beings are composed, is the third most important component from a mass-energy standpoint. The amount of dark energy is over twice the mass-energy equivalent of all matter combined, and the dark matter is well in excess of the ordinary matter component.


This general model had been well-established by the Wilkinson Microwave Anisotropy Probe (WMAP), but the Planck results have provided much greater sensitivity and confidence in the results.

Now a series of 28 papers have been released by the Planck Consortium detailing results from the entire mission, with over three times as much data gathered. The first paper in the series, Planck 2015 Results I, provides an overview of these results. Papers XIII and XIV detail the cosmological parameters measured and the findings on dark energy, while several additional papers examine potential departures from a canonical cosmological model and constraints on inflationary models.

In particular they find that:

Ωb h² = 0.02226, to within 1%.

In this expression Ωb is the baryon (basically ordinary matter) mass-energy fraction (the fraction of total mass-energy in ordinary matter) and h = H0/100, with H0 expressed in kilometers/sec/Megaparsec. H0 is the Hubble constant, which measures the expansion rate of the universe and, indirectly, its age. The best value for H0 is 67.8 kilometers/sec/Megaparsec (a Megaparsec is a million parsecs, where 1 parsec = 3.26 light-years). H0 has an uncertainty of about 1.3% (two standard deviations). In this case h = 0.678 and the expression above becomes:

Ωb = 0.048, with uncertainty around 3% of its value. Thus, just under 5% of the mass-energy density in the universe is in ordinary matter.

The cold matter density is measured to be:

Ωc h² = 0.1186, with uncertainty less than 2%, and with the h value substituted we have Ωc = 0.258 with similar uncertainty.

Since the radiation density in the universe is known to be very low, the remainder of the mass-energy fraction is from dark energy,

Ωe = 1 − 0.048 − 0.258 = 0.694
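The arithmetic above can be laid out end to end (using the quoted Planck 2015 central values; uncertainties and the tiny radiation density are ignored in this sketch):

```python
h = 0.678                 # dimensionless Hubble parameter, H0 / (100 km/s/Mpc)
omega_b_h2 = 0.02226      # physical baryon density (Planck 2015)
omega_c_h2 = 0.1186       # physical cold dark matter density (Planck 2015)

omega_b = omega_b_h2 / h**2              # ordinary matter fraction
omega_c = omega_c_h2 / h**2              # cold dark matter fraction
omega_de = 1.0 - omega_b - omega_c       # dark energy fraction, assuming flatness
                                         # and neglecting radiation

print(f"Ordinary matter: {omega_b:.3f}")   # ~0.048
print(f"Dark matter:     {omega_c:.3f}")   # ~0.258
print(f"Dark energy:     {omega_de:.3f}")  # ~0.694
```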

So in approximate percentage terms the Planck 2015 results indicate 69% dark energy, 26% dark matter, and 5% ordinary matter as the mass-energy balance of the universe. These results are essentially the same as the ratios found from the preliminary results reported in 2013. It is to be emphasized that these are present-day values of the constituents. The components evolve differently as the universe expands. Dark energy is manifested with its current energy density in every new unit of volume as the universe continues to expand, while the average dark matter and ordinary matter densities decrease inversely as the volume grows. This implies that in the past, dark energy was less important, but it will dominate more and more as the universe continues to expand.

Why is dark energy produced as the universe expands? The simplest explanation is that it is the irreducible quantum energy of empty space, of the vacuum. Empty space – space with no particles whatsoever – still has fields (scalar fields, in particular) permeating it, and these fields have a minimum energy. It also has ‘virtual’ particles popping in and out of existence very briefly. This is the cosmological constant (Λ) model for the dark energy.

This is the ultimate free lunch in nature. The dark energy works as a negative gravity; it enters into the equations of general relativity as a negative pressure which causes space to expand. And as space expands, more dark energy is created! A wonderful self-reinforcing process is in place. Since the dark energy dominates over matter, the expansion of the universe is accelerating, and has been for the last 5 billion years or so. Why wonderful? Because it adds billions upon billions of years of life to our universe.

The Planck Consortium also find the universe is topologically flat to a very high degree, with an upper limit of 1/2 of 1% deviation from flatness at large scales. This is an impressive observational result.

One of the most interesting results is Planck’s ability to constrain inflationary models. While a massive inflation almost certainly happened during the first billionth of a trillionth of a trillionth of a second as the Universe began, as indicated by the very uniformity of the CMB signal, there are many possible models of the inflationary field’s energy potential.

We’ll take a look at this in a future blog entry.