Category Archives: Big Bang & Inflation

Dark Gravity: Is Gravity Thermodynamic?

This is the first in a series of articles on ‘dark gravity’ that look at emergent gravity and modifications to general relativity. In my book Dark Matter, Dark Energy, Dark Gravity I explained that I had picked Dark Gravity as part of the title because of the serious limitations in our understanding of gravity. It is not like the other 3 forces: we have no well-accepted quantum description of gravity, and it is some 33 orders of magnitude weaker than those other forces.
I noted that:

The big question here is ~ why is gravity so relatively weak, as compared to the other 3 forces of nature? These 3 forces are the electromagnetic force, the strong nuclear force, and the weak nuclear force. Gravity is different ~ it has a dark or hidden side. It may very well operate in extra dimensions…

My major regret with the book is that I was not aware of, and did not include a summary of, Erik Verlinde’s work on emergent gravity. In emergent gravity, gravity is not one of the fundamental forces at all.

Erik Verlinde is a leading string theorist in the Netherlands who in 2009 proposed that gravity is an emergent phenomenon, resulting from the thermodynamic entropy of the microstates of quantum fields.

 In 2009, Verlinde showed that the laws of gravity may be derived by assuming a form of the holographic principle and the laws of thermodynamics. This may imply that gravity is not a true fundamental force of nature (like e.g. electromagnetism), but instead is a consequence of the universe striving to maximize entropy. – Wikipedia article “Erik Verlinde”

This year, Verlinde extended this work from an unrealistic anti-de Sitter model of the universe to a more realistic de Sitter model. Our runaway universe is approaching a dark energy dominated de Sitter solution.

He proposes that general relativity is modified at large scales in a way that mimics the phenomena that have generally been attributed to dark matter. This is in line with MOND, or Modified Newtonian Dynamics. MOND is a long-standing proposal from Mordehai Milgrom, who argues that there is no dark matter; rather, gravity is stronger at large distances than predicted by general relativity and Newton’s laws.

In a recent article on cosmology and the nature of gravity, Dr. Thanu Padmanabhan lays out 6 issues with the canonical Lambda-CDM cosmology based on general relativity and a homogeneous, isotropic, expanding universe. Observations are highly supportive of such a canonical model, with a very early inflation phase and with roughly 2/3 of the mass-energy content in dark energy and 1/3 in matter, mostly dark matter.

And yet,

1. The equation of state (pressure vs. density) of the early universe is indeterminate in principle, as well as in practice.

2. The history of the universe can be modeled based on just 3 energy density parameters: i) the density during inflation, ii) the density at radiation-matter equality, and iii) the dark energy density at late epochs. The first and last are both dark energy driven inflationary de Sitter solutions, apparently unconnected, one very rapid and one very long lived. (Note that dark matter density does not appear in this list.)

3. One can construct a formula for the information content at the cosmic horizon from these 3 densities, and the value works out to be 4π to high accuracy.

4. There is an absolute reference frame, for which the cosmic microwave background is isotropic. There is an absolute reference scale for time, given by the temperature of the cosmic microwave background.

5. There is an arrow of time, indicated by the expansion of the universe and by the cooling of the cosmic microwave background.

6. The universe has, rather uniquely for physical systems, made a transition from quantum behavior to classical behavior.

“The evolution of spacetime itself can be described in a purely thermodynamic language in terms of suitably defined degrees of freedom in the bulk and boundary of a 3-volume.”

Now in fluid mechanics one observes:

“First, if we probe the fluid at scales comparable to the mean free path, you need to take into account the discreteness of molecules etc., and the fluid description breaks down. Second, a fluid simply might not have reached local thermodynamic equilibrium at the scales (which can be large compared to the mean free path) we are interested in.”

Now it is well known that general relativity as a classical theory must break down at very small scales (very high energies). But also with such a thermodynamic view of spacetime and gravity, one must consider the possibility that the universe has not reached a statistical equilibrium at the largest scales.

One could have reached equilibrium at macroscopic scales much less than the Hubble distance scale c/H (14 billion light-years, where H is the Hubble parameter) but not yet at the Hubble scale itself. In such a case the standard equations of gravity (general relativity) would apply only for the equilibrium region and for accelerations greater than the characteristic Hubble acceleration scale c·H (about 2 centimeters per second per year).
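As a sanity check, the Hubble acceleration scale can be computed directly. This is a rough sketch; the assumed H0 = 70 km/s/Mpc only affects the result at the ten percent level.

```python
# Back-of-the-envelope check of the Hubble acceleration scale c*H.
# Assumes H0 = 70 km/s/Mpc (illustrative value).

c = 2.998e10            # speed of light, cm/s
Mpc = 3.086e24          # one megaparsec in cm
H0 = 70e5 / Mpc         # Hubble parameter, 1/s
sec_per_year = 3.156e7

a0 = c * H0                      # acceleration scale, cm/s^2
per_year = a0 * sec_per_year     # expressed as cm/s per year
print(f"c*H = {a0:.2e} cm/s^2 = {per_year:.1f} cm/s per year")
```

The result is about 2 cm/s per year, matching the figure quoted above.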

This lack of statistical equilibrium implies the universe could exhibit behavior similar to the non-equilibrium thermodynamics observed in the laboratory.

The information content of the expanding universe reflects that of the quantum state before inflation, and this result is 4π in natural units by information theoretic arguments similar to those used to derive the entropy of a black hole.

The black hole entropy is  S = A / (4 Lp^2) , where A is the area of the event horizon (from the Schwarzschild radius formula) and Lp is the Planck length,  \sqrt{G \hbar / c^3} , where G is the gravitational constant and \hbar is the reduced Planck constant.

This beautiful Bekenstein-Hawking entropy formula connects thermodynamics, the quantum world, and gravity.
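To get a feel for the numbers, here is a quick evaluation of the formula for a one-solar-mass black hole, a sketch using standard values for the physical constants:

```python
import math

# Bekenstein-Hawking entropy S = A / (4 Lp^2), in units of Boltzmann's constant,
# for an illustrative one-solar-mass black hole.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J s
M_sun = 1.989e30     # solar mass, kg

Lp = math.sqrt(G * hbar / c**3)    # Planck length, ~1.6e-35 m
Rs = 2 * G * M_sun / c**2          # Schwarzschild radius, ~2.95 km
A = 4 * math.pi * Rs**2            # horizon area, m^2

S = A / (4 * Lp**2)                # entropy in units of k_B
print(f"S ~ {S:.1e} k_B")
```

The answer is of order 10^77 in units of Boltzmann's constant, an enormous entropy for a single object.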

This same value of the universe’s entropy can also be used to determine the number of e-foldings during inflation to be 6π², or about 59, consistent with the minimum value needed to produce a sufficiently homogeneous universe at the epoch of the cosmic microwave background.

If inflation occurs at a reasonable ~ 10^{15} GeV, one can derive the observed value of the cosmological constant (dark energy) from the information content value as well, argues Dr. Padmanabhan.

This provides a connection between the two dark energy driven de Sitter phases, inflation and the present day runaway universe.

The table below summarizes the 4 major phases of the universe’s history, including the matter dominated phase, which may or may not have included dark matter. Erik Verlinde in his new work, and Milgrom for over 3 decades, question the need for dark matter.

| Epoch | Dominated by | Ends at | a(t) scaling | Size at end |
|---|---|---|---|---|
| Inflation | Inflaton (dark energy) | 10^{-32} seconds | e^{Ht} (de Sitter) | 10 cm |
| Radiation | Radiation | 40,000 years | t^{1/2} | 10 million light-years |
| Matter | Matter (baryons; dark matter?) | 9 billion years | t^{2/3} | > 100 billion light-years |
| Runaway | Dark energy (cosmological constant) | “Infinity” | e^{Ht} (de Sitter) | “Infinite” |

In the next article I will review the status of MOND – Modified Newtonian Dynamics, from the phenomenology and observational evidence.


E. Verlinde. “On the Origin of Gravity and the Laws of Newton”. JHEP. 2011 (04): 29

T. Padmanabhan, 2016. “Do We Really Understand the Cosmos?”

S. Perrenod, 2011.

S. Perrenod, 2011. Dark Matter, Dark Energy, Dark Gravity 2011

S. Carroll and G. Remmen, 2016,


WIMPs or MACHOs or Primordial Black Holes

A decade or more ago, the debate about dark matter was, is it due to WIMPs (weakly interacting massive particles) or MACHOs (massive compact halo objects)? WIMPs would be new exotic particles, while MACHOs are objects formed from ordinary matter but very hard to detect due to their limited electromagnetic radiation emission.


Schwarzenegger (MACHO), not Schwarzschild (Black Holes)

Image credit: Georges Biard, CC BY-SA 3.0

Candidates in the MACHO category such as white dwarf or brown dwarf stars have been ruled out by observational constraints. Black holes formed in the very early universe, dubbed primordial black holes, were thought by many to have been ruled out as well, at least across many mass ranges, such as between the mass of the Moon and the mass of the Sun.

The focus during recent years, and most of the experimental searches, has shifted to WIMPs or other exotic particles (axions or sterile neutrinos primarily). But the WIMPs, which were motivated by supersymmetric extensions to the Standard Model of particle physics, have remained elusive. Most experiments have only placed stricter and stricter limits on their possible abundance and interaction cross-sections. The Large Hadron Collider has not yet found any evidence for supersymmetric particles.

Have primordial black holes (PBHs) as the explanation for dark matter been given short shrift? The recent detections by the LIGO instruments of two gravitational wave events, well explained by black hole mergers, have sparked new interest. A previous blog entry addressed this possibility:

The black holes observed in these events have masses in a range from about 8 to about 36 solar masses, and they could well be primordial.

There are a number of mechanisms to create PBHs in the early universe, prior to the very first second and the beginning of Big Bang nucleosynthesis. At any era, if there is a total mass M confined within a radius R, such that

2GM/R > c^2,

then a black hole will form. This equation defines the Schwarzschild limit (G is the gravitational constant and c the speed of light). A PBH doesn’t even have to form from matter, whether ordinary or exotic; a region where the energy and radiation density is high enough can also collapse to a black hole.
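The criterion can be turned around: for any mass M, collapse requires confinement within the Schwarzschild radius Rs = 2GM/c². A small sketch, with illustrative masses spanning the ranges discussed above:

```python
# The collapse criterion 2GM/R > c^2, rearranged: a black hole forms if
# mass M is confined within its Schwarzschild radius Rs = 2GM/c^2.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(M_kg):
    """Radius (m) within which mass M must be confined to form a black hole."""
    return 2 * G * M_kg / c**2

# Illustrative masses: a minimum-mass surviving PBH, the Moon, the Sun
for name, M in [("10^12 kg PBH", 1e12), ("Moon", 7.3e22), ("Sun", 1.989e30)]:
    print(f"{name}: Rs = {schwarzschild_radius(M):.3e} m")
```

A trillion-kilogram PBH has a horizon smaller than a proton, which is why such objects could only have formed in the extreme densities of the very early universe.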


Cosmic Strings

Image credit: David Daverio, Université de Genève, CSCS supercomputer simulation data

The mechanisms for PBH creation include:

  1. Cosmic string loops – If string theory is correct the very early universe had very long strings and many short loops of strings. These topological defects intersect and form black holes due to the very high density at their intersection points. The black holes could have a broad range of masses.
  2. Bubble collisions from symmetry breaking – As the very early universe expanded and cooled, the strong force, weak force and electromagnetic force separated out. Bubbles would nucleate at the time of symmetry breaking as the phase of the universe changed, just as bubbles nucleate in boiling water. Collisions of bubbles could lead to high density regions and black hole formation. Symmetry breaking at the GUT scale (for the strong force separation) would yield BHs of mass around 100 kilograms. Symmetry breaking of the weak force from the electromagnetic force would yield BHs with masses of order 10^25 kilograms.
  3. Density perturbations – These would be a natural result of the mechanisms in #1 and #2, in any case. When observing the cosmic microwave background radiation, which dates from a time when the universe was only 380,000 years old, we see density perturbations at various scales, with amplitudes of only a few parts in a million. Nevertheless these serve as the seeds for the formation of the first galaxies when the universe was only a few hundred million years old. Some perturbations could be large enough on smaller distance scales to form PBHs ranging from above a solar mass to as high as 100,000 solar masses.

For a PBH to be an effective dark matter contributor, it must have a lifetime longer than the age of the universe. BHs radiate due to Hawking radiation, and thus have finite lifetimes. For stellar mass BHs, the lifetimes are incredibly long, but for smaller BHs the lifetimes are much shorter since the lifetime is proportional to the cube of the BH mass. Thus a minimum mass for PBHs surviving to the present epoch is around a trillion kilograms (a billion tons).
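The mass floor quoted above can be checked from the standard Hawking lifetime formula, t ≈ 5120π G² M³ / (ħ c⁴). Inverting it at the age of the universe gives the minimum initial mass of a PBH surviving to today:

```python
import math

# Hawking evaporation lifetime scales as M^3:
#   t_evap ~ 5120 * pi * G^2 * M^3 / (hbar * c^4)
# Solving for M at t = age of the universe gives the minimum mass of a
# primordial black hole that could survive to the present epoch.

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
hbar = 1.055e-34    # reduced Planck constant, J s
t_universe = 13.8e9 * 3.156e7   # age of the universe, s

M_min = (t_universe * hbar * c**4 / (5120 * math.pi * G**2)) ** (1 / 3)
print(f"minimum surviving PBH mass ~ {M_min:.1e} kg")
```

This gives a few times 10^11 kg, consistent at the order-of-magnitude level with the trillion-kilogram figure in the text.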

Carr et al. (paper referenced below) summarized the constraints on what fraction of the matter content of the universe could be in the form of black holes. Traditional black holes, of several solar masses, created by stellar collapse and detectable due to their accretion disks, do not provide enough matter density. Neither do supermassive black holes of over a million solar masses found at the centers of most galaxies. PBHs may be important in seeding the formation of the supermassive black holes, however.

Limits on the PBH abundance in our galaxy and its halo (which is primarily composed of dark matter) are obtained from:

  1. Cosmic microwave background measurements
  2. Microlensing measurements (gravitational lensing)
  3. Gamma-ray background limits
  4. Neutral hydrogen clouds in the early universe
  5. Wide binaries (disruption limits)

Microlensing surveys such as MACHO and EROS have searched for objects in our galactic halo that act as gravitational lenses for light originating from background stars in the Magellanic Clouds or the Andromeda galaxy. The galactic halo is composed primarily of dark matter.

A couple of dozen objects with less than a solar mass have been detected. Based on these surveys, the fraction of dark matter that can be in PBHs of less than a solar mass is at most 10%. The constraints from 1 solar mass up to 30 solar masses are weaker, and a PBH explanation for most of the galactic halo mass remains possible.

Similar studies conducted toward distant quasars and compact radio sources address the constraint in the supermassive black hole domain, apparently ruling out an explanation due to PBHs with masses from 1 million to 100 million solar masses.

Lyman-alpha clouds are neutral hydrogen clouds (Lyman-alpha is an important ultraviolet absorption line for hydrogen) that are found in the early universe at redshifts above 4. Simulations of the effect of PBH number density fluctuations on the distribution of Lyman-alpha clouds appear to limit the PBH contribution to dark matter for a characteristic PBH mass above 10,000 solar masses.

Distortions in the cosmic microwave background are expected if PBHs above 10 solar masses contributed substantially to the dark matter component. However, these limits assume that PBH masses do not change. Merging and accretion after the recombination era, when the cosmic microwave background was emitted, could allow a spectrum of PBH masses initially below a solar mass to evolve into one dominated by PBHs of tens, hundreds, and thousands of solar masses today. This could be a way around some of the limits apparently placed by the cosmic microwave background temperature fluctuations.

Thus it appears there could be a window in the region of 30 to several thousand solar masses for PBHs as an explanation of cold dark matter.

As the Advanced LIGO gravitational wave detectors come online, we expect many more black hole merger discoveries that will help to elucidate the nature of primordial black holes and the possibility that they contribute substantially to the dark matter component of our Milky Way galaxy and the universe.


B. Carr, K. Kohri, Y. Sendouda, J. Yokoyama, 2010 “New cosmological constraints on primordial black holes”

S. Clesse and J. Garcia-Bellido, 2015 “Massive Primordial Black Holes from Hybrid Inflation as Dark Matter and the Seeds of Galaxies”

P. Frampton, 2015 “The Primordial Black Hole Mass Range”

P. Frampton, 2016 “Searching for Dark Matter Constituents with Many Solar Masses”

Green, A., 2011 “Primordial Black Hole Formation”

P. Pani, and A. Loeb, 2014 “Exclusion of the remaining mass window for primordial black holes as the dominant constituent of dark matter”

S. Perrenod, 2016

NEW BOOK just released:

S. Perrenod, 2016, 72 Beautiful Galaxies (especially designed for iPad, iOS; ages 12 and up)


Primordial Black Holes as Dark Matter?

LIGO Gravitational Wave Detection Postulated to be Due to Primordial Black Holes

Dark matter remains elusive, with overwhelming evidence for its gravitational effects, but no confirmed direct detection of exotic dark matter particles.

Another possibility which is being re-examined as an explanation for dark matter is that of black holes that formed in the very early universe, which in principle could be of very small mass, or quite large mass. And they may have initially formed at smaller masses and then aggregated gravitationally to form larger black holes.

Recently, gravitational waves were discovered for the first time by the two LIGO instruments, located in Louisiana and Washington State. The gravitational wave signal (GW150914) indicates that the source was a pair of black holes, of about 29 and 36 solar masses respectively, spiraling together into a single black hole of about 62 solar masses. A full 3 solar masses’ worth of gravitational energy was radiated away in the merger. Breaking news: LIGO has just this month announced gravitational waves from a second black hole binary of 22 solar masses total. One solar mass of energy was radiated away in that merger.


Image credit: NASA/JPL

Most of the black holes that we detect are stellar-sized, in the range of 10 to 100 solar masses, and are believed to be the evolutionary endpoints of massive stars. We detect them indirectly, when they are surrounded by accretion disks of hot, luminous matter outside their event horizons. The other main category of black holes exceeds a million solar masses, can even surpass a billion solar masses, and is known as supermassive black holes.

It is possible that some of the stellar-sized and even elusive intermediate black holes were formed in the Big Bang. Such black holes are referred to as primordial black holes. There are a variety of theoretical formation mechanisms, such as cosmic strings whose loops in all dimensions are contained within the event horizon radius (Schwarzschild radius). In general such primordial black holes (PBHs) would be distributed in a galaxy’s halo, would interact rarely and not have accretion disks and thus would not be detectable due to electromagnetic radiation. That is, they would behave as dark matter.

Dr. Simon Bird and coauthors have recently proposed that the gravitational wave event (GW150914) could be due to two primordial black holes encountering each other in a galactic halo, radiating enough of their kinetic energy away in gravitational waves to become bound to each other, and then inspiraling into a single black hole with a final burst of gravitational radiation. The frequency of such events is estimated to be of order a few per year per cubic gigaparsec (a gigaparsec is 3.26 billion light-years), if the dark matter abundance is dominated by PBHs.

While low-mass PBHs have been mostly ruled out, except for a window around one hundred-millionth of a solar mass, the authors suggest a window also remains for PBHs in the range from 20 to 100 solar masses.

Dr. A. Kashlinsky has gone further to suggest that the cosmic infrared background (CIB) of unresolved 2 to 5 micron near-infrared sources is due to PBHs. In this case the PBHs would be the dominant dark matter component in galactic halos and would mediate early star and galaxy formation. Furthermore there is an unresolved soft cosmic X-ray background which appears to be correlated with the CIB.

This would be a trifecta, with PBHs explaining much or most of the dark matter, the CIB, and the soft X-ray CXB! But at this point it’s all rather speculative.

The LIGO instruments have now been upgraded to Advanced LIGO, and as more gravitational wave events from black holes are detected, we can gain further insight into this possible explanation for dark matter, in whole or in part. Improved satellite-borne experiments to further resolve the CIB and CXB will also help to explore the possibility of PBHs as a major component of dark matter.


S. Bird et al. arXiv:1603.00464v2 “Did LIGO Detect Dark Matter?”

A. Kashlinsky arXiv:1605.04023v1 “LIGO gravitational wave detection, primordial black holes and the near-IR cosmic infrared background anisotropies”

Video (artist’s representation) of inspiral and merger of binary black hole GW151226 (second gravitational wave detection):



Eternal Inflation and the Multiverse


Figure 1 from Andrei Linde’s paper “Brief History of the Multiverse”. Each blob represents a pocket universe, occupying a different region of space, and being born at a different time during eternal inflation. A particular pocket universe may be connected to its parent by some sort of bridge, or that connection may have broken or decayed. Different pocket universes will have different physics.

“Inflationary cosmology therefore suggests that, even though the observed universe is incredibly large, it is only an infinitesimal fraction of the entire universe” states Alan Guth, the original father of the inflationary Big Bang, in his article from 2007, “Eternal inflation and its implications”.

Inflation is the very brief – yet extremely significant – period in our own universe’s history, perhaps of duration only a billionth of a trillionth of a trillionth of a second. During the inflation event, a very submicroscopic bubble of energy and space expanded tremendously, doubling in scale perhaps 100 times or more in each of the 3 spatial dimensions. That’s an increase in volume of around 90 factors of 10! This inflationary epoch drove the universe to become macroscopic in scale, and also to become highly homogeneous and topologically flat at large scales.
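The arithmetic behind that volume figure is simple enough to verify: 100 doublings in each of 3 dimensions is a factor of 2^300 in volume.

```python
import math

# Sanity check: ~100 doublings per spatial dimension means the volume
# grows by 2^(3*100) = 2^300, i.e. roughly 90 orders of magnitude.

doublings_per_dimension = 100
decades = 3 * doublings_per_dimension * math.log10(2)
print(f"volume grows by ~10^{decades:.0f}")
```

Since log10(2) ≈ 0.301, the exponent works out to just over 90, matching the "90 factors of 10" in the text.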

The inflationary Big Bang models solved a number of outstanding problems in cosmology, such as the horizon problem and the flatness problem. At large scales we see a homogeneous and topologically flat universe in all directions. Without inflation, parts of the universe seen at opposite ends of the sky would never have been causally connected. In the inflation models, however, those regions were originally within each other’s causally connected ‘light cones’ before the inflation phase pushed them out to much larger physical scales, at which point they became highly separated.

Andrei Linde is another one of the fathers of inflationary Big Bang theory, and the originator of the chaotic inflation models. Chaotic inflation, and another leading model, ‘new’ inflation, both appear to result in eternal inflation; this gives rise to the multiverse scenario. That is, inflation keeps going in most of space, while multiple universes form and separate from the inflation process.

The multiverse scenario states that our universe is only one of a very large number of universes; in such a case, our particular universe may be referred to as a ‘mini-universe’ or ‘pocket universe’. Of course our universe is already enormously large; it’s just that the multiverse is ginormously larger than that. With eternal inflation the multiverse keeps inflating in other regions, portions of which will later settle out into other ‘pocket universes’.

Linde has recently published a summary “A Brief History of the Multiverse” which describes the developments in inflationary Big Bang theory and models for the multiverse since 1982. I encourage those who are interested in multiverses to read his paper.

With this eternal inflation our universe was (most likely) not the first, it was just one of many and inflation has been going on for a very long time. Inflation would continue forever into the future. New mini-universes would continue to be spawned and settle out from the overall inflation. It appears that eternal inflation is not eternal into the past, however, just into the future (see Guth paper referenced below).

Each of these mini-universes could have different values of the fundamental physical parameters. This ties into string theory models which admit of a very large number of possibilities for physical parameters.

Some sets of these parameters are favorable to life, but many (most) would not be. In order to get life as we know it we need carbon and other heavy elements, formed in stars (and not during the Big Bang nucleosynthesis), and we need a long-lived mini-universe. Other mini-universes might have different values of dark matter and dark energy than in our own universe. This could lead to very short lifetimes with no chance to form galaxies and stars.

Sidebar: These models are motivated by string theory and inflationary cosmology. In this context it makes more sense to think of ‘mini-universes’ rather than the ‘parallel universes’ often popularized in discussions of quantum physics, e.g. the Many Worlds interpretation. Sorry to break the news to you, but there is not another you in each of these other mini-universes, since, even though they are endless in number, they all have different physical conditions and different histories.


Guth, Alan 2007. “Eternal Inflation and its Implications”

Linde, Andrei 2015. “Brief History of the Multiverse”

Planck 2015 Constraints on Dark Energy and Inflation

The European Space Agency’s Planck satellite gathered data for over 4 years, and a series of 28 papers releasing the results and evaluating constraints on cosmological models have recently been released. In general, the Planck mission’s complete results confirm the canonical cosmological model, known as Lambda Cold Dark Matter, or ΛCDM. In approximate percentage terms the Planck 2015 results indicate 69% dark energy, 26% dark matter, and 5% ordinary matter as the mass-energy components of the universe.

Dark Energy

We know that dark energy is the dominant component of the universe, comprising 69% of the total energy content. It exerts a negative pressure, causing the expansion to continuously speed up: the universe is not only expanding, the expansion is accelerating! What dark energy is we do not know, but the simplest explanation is that it is the energy of empty space, of the vacuum. Significant departures from this simple model are not supported by observations.

The dark energy equation of state is the relation between the pressure exerted by dark energy and its energy density. Planck satellite measurements are able to constrain the dark energy equation of state significantly. Consistent with earlier measurements of this parameter, which is usually denoted as w, the Planck Consortium has determined that w = -1 to within 4 or 5 percent (95% confidence).

According to the Planck Consortium, “By combining the Planck TT+lowP+lensing data with other astrophysical data, including the JLA supernovae, the equation of state for dark energy is constrained to w = −1.006 ± 0.045 and is therefore compatible with a cosmological constant, assumed in the base ΛCDM cosmology.”

A value of -1 for w corresponds to a simple cosmological constant model with a single parameter Λ that represents the present-day energy density of empty space, the vacuum. The measured value of 0.69 for Λ is normalized to the critical mass-energy density. Since the vacuum is permeated by various fields, its energy density is non-zero. (The critical mass-energy density is that which results in a topologically flat space-time for the universe; it is the equivalent of 5.2 proton masses per cubic meter.)
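The quoted figure of 5.2 proton masses per cubic meter follows directly from the critical density formula ρ_c = 3H²/(8πG). A quick check, assuming the Planck 2015 value H0 = 67.8 km/s/Mpc:

```python
import math

# Critical mass-energy density rho_c = 3 H0^2 / (8 pi G), expressed in
# proton masses per cubic meter. Assumes H0 = 67.8 km/s/Mpc (Planck 2015).

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
m_p = 1.673e-27          # proton mass, kg
Mpc = 3.086e22           # one megaparsec in m
H0 = 67.8e3 / Mpc        # Hubble parameter, 1/s

rho_c = 3 * H0**2 / (8 * math.pi * G)              # kg/m^3
print(f"rho_c ~ {rho_c / m_p:.1f} protons per m^3")
```

The result is about 5.2, in agreement with the parenthetical above; a slightly different choice of H0 shifts it only marginally.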

Such a model has a negative pressure, which leads to the accelerated expansion that has been observed for the universe; this acceleration was first discovered in 1998 by two teams using certain supernovae as standard candle distance indicators and measuring their luminosity as a function of redshift distance.

Modified gravity

The phrase modified gravity refers to models that depart from general relativity. To date, general relativity has passed every test thrown at it, on scales from the Earth to the universe as a whole. The Planck Consortium has also explored a number of modified gravity models with extensions to general relativity. They are able to tighten the restrictions on such models, and find that overall there is no need for modifications to general relativity to explain the data from the Planck satellite.

Primordial density fluctuations

The Planck data are consistent with a model of primordial density fluctuations that is close to, but not precisely, scale invariant. These are the fluctuations which gave rise to overdensities in dark matter and ordinary matter that eventually collapsed to form galaxies and the observed large scale structure of the universe.

The concept is that the spectrum of density fluctuations is a simple power law of the form

P(k) ∝ k^(ns − 1),

where k is the wavenumber (the inverse of the wavelength scale). The Planck observations are well fit by such a power law. The measured spectral index of the perturbations has a slight tilt away from 1, with the tilt detected at a significance of more than 5 standard deviations.

ns = 0.9677 ± 0.0060

The existence and amount of this tilt in the spectral index has implications for inflationary models.
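To see what such a small tilt means concretely, compare the power at two scales separated by three decades in wavenumber, using the quoted value of ns:

```python
# Since P(k) ∝ k^(ns - 1) with ns slightly below 1, power falls off
# slowly toward smaller scales (larger k). Ratio of power across three
# decades in wavenumber:

n_s = 0.9677
ratio = 1000.0 ** (n_s - 1)
print(f"P(1000*k0) / P(k0) = {ratio:.2f}")
```

The power drops by only about 20% over a factor of 1000 in scale, which is why the spectrum is described as close to, but not precisely, scale invariant.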


The Planck Consortium authors have evaluated a wide range of potential inflationary models against the data products, including the following categories:

  • Power law
  • Hilltop
  • Natural
  • D-brane
  • Exponential
  • Spontaneously broken supersymmetry
  • Alpha attractors
  • Non-minimally coupled

Figure 12 from Planck 2015 results XX, Constraints on Inflation. The Planck 2015 data constraints are shown with the red and blue contours. Steeper models with V ~ φ³ or V ~ φ² appear ruled out, whereas R² inflation looks quite attractive.

Their results appear to rule out some of these, although many models remain consistent with the data. Power law models with indices greater than or equal to 2 appear to be ruled out. Simple slow roll models such as R² inflation, which was actually the first inflationary model proposed, 35 years ago, appear more favored than others. Brane inflation and exponential inflation are also good fits to the data. Again, many other models remain statistically consistent with the data.

Simple models with a few parameters characterizing the inflation suffice:

“Firstly, under the assumption that the inflaton* potential is smooth over the observable range, we showed that the simplest parametric forms (involving only three free parameters including the amplitude V (φ∗ ), no deviation from slow roll, and nearly power-law primordial spectra) are sufficient to explain the data. No high-order derivatives or deviations from slow roll are required.”

* The inflaton is the name cosmologists give to the inflation field

“Among the models considered using this approach, the R² inflationary model proposed by Starobinsky (1980) is the most preferred. Due to its high tensor-to-scalar ratio, the quadratic model is now strongly disfavoured with respect to R² inflation for Planck TT+lowP in combination with BAO data. By combining with the BKP likelihood, this trend is confirmed, and natural inflation is also disfavoured.”

Isocurvature and tensor components

They also evaluate whether the cosmological perturbations are purely adiabatic, or include an additional isocurvature component as well. They find that an isocurvature component would be small, less than 2% of the overall perturbation strength. A single scalar inflaton field with adiabatic perturbations is sufficient to explain the Planck data.

They find that the tensor-to-scalar ratio is less than 9%, which again rules out or constrains certain models of inflation.


The simplest LambdaCDM model continues to be quite robust, with the dark energy taking the form of a simple cosmological constant. It’s interesting that one of the oldest and simplest models for inflation, characterized by a power law relating the potential to the inflaton amplitude, and dating from 35 years ago, is favored by the latest Planck results. A value for the power law index of less than 2 is favored. All things being equal, Occam’s razor should lead us to prefer this sort of simple model for the universe’s early history. Models with slow-roll evolution throughout the inflationary epoch appear to be sufficient.

The universe started simply, but has become highly structured and complex through various evolutionary processes.


Planck Consortium 2015 papers – the Planck collaboration site links to the 28 papers for the 2015 results, as well as earlier publications. Especially relevant are these: XIII Cosmological parameters, XIV Dark energy and modified gravity, and XX Constraints on inflation.

BICEP2 Apparently Detects Quantum Nature of Gravity and Supports Inflationary Big Bang

Can a single experiment do all of the following?

  1. Provide significant confirmation of the inflationary version of the Big Bang model (and help constrain which model of inflation is correct)
  2. Confirm the existence of gravitational waves
  3. Support the quantum nature of gravity (at very high energies)
  4. Provide the first direct insight into the highest energy levels imagined by physicists – 10^16 GeV (10,000 trillion GeV) – 12 orders of magnitude beyond the LHC

Apparently it can! BICEP2 is a radio telescope experiment located at the South Pole, taking advantage of the very cold, dry air at that remote location for greater sensitivity. It is focused on measuring polarization of the cosmic microwave background radiation that is a remnant of the hot Big Bang of the early universe. (BICEP is an abbreviation of Background Imaging of Cosmic Extragalactic Polarization; this is the second version of the experiment).

The results announced by the BICEP2 team on March 17 at the Harvard-Smithsonian Center for Astrophysics, if they have been correctly interpreted, are the most important in cosmology in the 21st century to date. They are of such enormous significance that a Nobel Prize in Physics is highly likely, if the results and interpretation are confirmed.

We infer from a number of previous observations that there was likely an inflationary period very early on in the universe’s history. We are talking very, very early – in the first billionth of a trillionth of a trillionth of a second. See this earlier post of mine for more background. This new result from BICEP2 is very supportive of inflationary Big Bang models, and that includes very simple models for inflation.

What is the observation? It is B-mode polarization in the cosmic microwave background radiation. The cosmic microwave background (CMB) is the thermal radiation left over from a time when the universe became transparent, at age 380,000 years, almost 14 billion years ago. There are two polarization modes for alignment of CMB radiation, the E-mode and the B-mode. The B-mode measures the amount of “curliness” in the alignment of CMB microwave photons (as you can easily see in the image below).

There are two known causes of B-mode polarization in the CMB. The first, also detected by the BICEP2 experiment, is gravitational lensing by intervening clusters of galaxies along the line of sight. These clusters bend the light paths due to their immense masses, in accordance with general relativity, and the effect is seen at smaller spatial scales. At larger spatial scales we have the more significant effect, whereby the gravitational waves generated during the inflation epoch imprint the polarization.




You can easily see the curly B-mode polarization with a quick glance at BICEP2’s results.


What is being seen in today’s CMB is due to this second and more profound cause, which is nothing less than quantum fluctuations in space-time in the very, very early universe revealing themselves due to the gravitational waves that they generated. And these gravitational waves in turn caused a small curling effect on the cosmic microwave background, until the time of decoupling of radiation and matter. This is seen in the image at angular scales of a few degrees.

At age 380,000 years the universe became transparent to the CMB radiation, which then traveled for another 13.8 billion years and was redshifted by a factor of about 1100 as the universe expanded. So what was near-infrared and optical radiation at that time becomes microwave radiation today, with a characteristic temperature of 2.7 kelvins (degrees above absolute zero), while still retaining the curly pattern seen by BICEP2.
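These numbers follow from simple blackbody scaling: the CMB temperature redshifts as (1 + z), and z ≈ 1100 is the standard value for the last-scattering surface. A quick back-of-the-envelope check, using round values:

```python
# Blackbody scaling check: the CMB temperature redshifts as (1 + z).
# z ≈ 1100 is the standard redshift of the last-scattering surface (an input here).
T_now = 2.725          # CMB temperature today, kelvins
z_dec = 1100           # approximate decoupling redshift

T_then = T_now * (1 + z_dec)
print(f"CMB temperature at decoupling: ~{T_then:.0f} K")

# Wien's displacement law locates the peak of the blackbody spectrum.
b = 2.898e-3           # Wien displacement constant, m*K
peak_then_nm = b / T_then * 1e9
peak_now_mm = b / T_now * 1e3
print(f"Spectrum peaked near {peak_then_nm:.0f} nm then (near-infrared),")
print(f"and peaks near {peak_now_mm:.2f} mm today (microwave).")
```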

This is the first observation that provides some direct insight into extremely high energy scales within the context of a single experiment. We are talking here of approximately a trillion times higher energy than the Large Hadron Collider, the world’s most powerful particle accelerator, where the Higgs boson was discovered.

The BICEP2 results are a single experiment that for the first time apparently ties quantum mechanics and gravity together. It supports the quantum nature of gravity, which occurs at very high energy scales. The Planck scale at which space-time would be quantized corresponds to an energy level of 10^18 or 10^19 GeV (ten million trillion GeV), and inflation in many models begins when the universe has an energy level somewhat lower, at 10^16 GeV (ten thousand trillion GeV, where 1 GeV is a little more than the rest mass-energy of a proton).

And take a look at this interview of Sean Carroll by PBS’s Gwen Ifill to get some more context around this (hopefully correct!) universe-expanding discovery. Other astronomers are already racing to confirm it.


References: – BICEP2 web site at Harvard-Smithsonian Center for Astrophysics

More Dark Matter: First Planck Results


Credit: European Space Agency and Planck Collaboration 

Map of CMB temperature fluctuations with slightly colder areas in blue, and hotter areas in red.


The first results from the European Space Agency’s Planck satellite have provided excellent confirmation for the Lambda-CDM (Dark Energy and Cold Dark Matter) model. The results also indicate somewhat more dark matter, and somewhat less dark energy, than previously thought. These are the most sensitive and accurate measurements of fluctuations in the cosmic microwave background (CMB) radiation to date.

Results from Planck’s first 15 months of observations were released in March 2013. The new proportions for the mass-energy density of the current universe are:

  • Ordinary matter 5%
  • Dark matter 27%
  • Dark energy 68%


Credit: European Space Agency and Planck Collaboration

The prior best estimate for dark matter, based primarily on NASA’s WMAP satellite observations, was 23%. So the dark matter fraction is higher, and the dark energy fraction correspondingly lower, than the WMAP measurements had indicated.

Dark energy still dominates by a very considerable degree, although somewhat less than had been thought prior to the Planck results. This dark energy – Lambda – drives the universe’s expansion to speed up, which is known as the runaway universe. At one time dark matter dominated, but for the last 5 billion years, dark energy has been dominant, and it grows in importance as the universe continues to expand.

The Planck results also added a little to the age of the universe, which is measured to be about 13.8 billion years, about 3 times the age of the Earth. The CMB radiation itself was emitted when the universe was only 380,000 years old. It was originally in the infrared and optical portions of the spectrum, but has been massively red-shifted, by a factor of about 1100, due to the expansion of the universe.

There are many other science results from the Planck Science Team in cosmology and astrophysics. These include initial support for relatively simple models of “slow roll” inflation in the extremely early universe. You can find details at the ESA web sites referenced below, and in the large collection of papers from the 47th ESLAB Conference link.

References: – news article at ESA site – runaway universe blog – Planck Science Team site – 47th ESLAB Conference presentations on Planck science results

Why the Higgs Boson is not Dark Matter

The Higgs boson is considered a necessary part of the Standard Model of particle physics. In the Standard Model there are 3 main forces of nature: the electromagnetic force, the weak nuclear force, and the strong nuclear force. The Standard Model does not address gravity and we do not yet have a proven theory for the unification of gravity with the other 3 forces.

On July 4th CERN, the European particle physics lab near Geneva, announced that two experiments using the Large Hadron Collider accelerator, ATLAS and CMS, have both amassed strong statistical evidence (around 5 sigma) for a new particle. This new particle has a mass of about 126 GeV* and “smells” very much like it is the long sought after, and elusive, Higgs boson. The prediction of the Higgs dates from 1964. For comparison, the proton mass is about 0.94 GeV, so the Higgs is around 134 times more massive. Further work is necessary to determine all of its properties, but at this point it looks as if the new particle decays into other particles in the expected manner. It is these decay products that are actually detected.

This decades-long search has proceeded in fits and starts, principally at CERN in Europe and Fermilab in the U.S., with different accelerators and detectors. Over time the experiments were able to exclude possible masses for the Higgs, since the rate of creation of different decay products varies for different putative masses. By the end of 2011 it looked like there was a preliminary signal, not yet of sufficient statistical strength, but that the mass would have to be in the range of about 115 to 130 GeV.


The CMS detector at the Large Hadron Collider. Credit: Mark Thiessen/National Geographic Society/Corbis

One of my professors, Steven Weinberg, won the Nobel Prize in Physics years ago for his work on unifying the electromagnetic force and the weak force. The Standard Model and the body of work in particle physics provide a theoretical underpinning for all of the particles we observe and their quantum properties, and describe a unification of the strong force (which holds together the quarks inside a proton or neutron) with these other two forces. But the model also requires an additional mechanism to explain why most particles have non-zero masses.

The Higgs mechanism is the favored explanation, and it predicts a particle as the mediator to provide masses to other particles. The Higgs mechanism is theorized as an all-pervasive Higgs field, which slows down particles as they move through it. As you swim through water you feel a drag that slows you down. A fish with a very hydrodynamic design will feel less drag. In the particle world, more massive ones slow down more than the lighter ones, since they interact more strongly with the Higgs field.

The particle corresponding to this mechanism is known as the Higgs boson. Particles carry quantum spin in either half-integer or integer multiples; those with integer spin are called bosons. The spin of the Higgs boson is in fact zero. All of the force-mediator particles, such as the photon (spin = 1), which mediates electromagnetism, are bosons.

The Large Hadron Collider is in some sense recreating the conditions of the very early universe by smashing particles together at 7000 GeV, or 7 TeV. The Higgs would originally have been created in nature in the very early part of the Big Bang, around the first one-trillionth of a second. The appearance of the Higgs broke the unification, or symmetry, between the electromagnetic and weak forces, which Steven Weinberg showed are one and the same at very high energies. And the Higgs gave mass to particles.

Without the Higgs mechanism, all particles would be massless, and thus travelling at the speed of light, and structure in the universe – stars, planets, galaxies, human beings, would not be possible. Even the existence of the proton itself requires that quarks have mass, although most of the proton mass comes from the energy of the gluons (strong force mediation particle) and ‘virtual’ quark-antiquark pairs found inside it.

The Higgs boson cannot be the explanation for dark matter for a very simple reason. Dark matter must be stable with a very long lifetime, persisting over the universe’s present age of 14 billion years. It mostly sits in space doing nothing except providing additional gravitational interaction with ordinary matter. The favored candidate for dark matter is the least massive supersymmetric particle; being the least massive, it would have nothing to decay into. Supersymmetry is a theoretical extension beyond the Standard Model. No supersymmetric particles are detected as of yet, but the theory has a lot of support and has the benefit of stabilizing the mass of the Higgs itself.

The Higgs boson, on the other hand, decays very rapidly. There are various decay channels, including into quarks, W/Z bosons, leptons or photons, producing these in pairs (two Zs, two top quarks etc.). Sometimes even four particles are produced from a single Higgs decay. It is these decay products that are actually detected in the Large Hadron Collider at CERN.

There are a few experiments claiming to have directly detected dark matter. The favored mass range from the CoGeNT and DAMA/LIBRA experiments is around 10 GeV for dark matter, much more massive than a proton, but less than 10% of the Higgs’ mass. Now that the Higgs appears to have been found, work will proceed on confirming and elucidating its properties. And the next great hunt for particle physics may be the direct detection of dark matter particles, and the beginning of a determination of whether supersymmetry is real.

* GeV = giga-electronvolt, or 1 billion electron volts. 1 TeV (tera-electronvolt) = 1000 GeV.

References: – Don Lincoln of Fermilab on how we search for the Higgs at particle accelerators – BBC documentary

2011 Nobel Prize for Dark Energy Discovery

Measurements of Dark Energy and Matter content of Universe

Dark Energy and Matter content of Universe: The intersection of the supernova (SNe), cosmic microwave background (CMB) and baryon acoustic oscillation (BAO) ellipses indicates a spatially flat universe composed 74% of dark energy (y-axis) and 26% of dark matter plus normal matter (x-axis).

The 2011 Nobel Prize in Physics, the most prestigious award given in the physics field, was announced on October 4. The winners are astronomers and astrophysicists who produced the first clear evidence of an accelerating universe. Not only is our universe as a whole expanding rapidly, it is in fact speeding up! It is not often that astronomers win the Nobel Prize since there is not a separate award for their discipline. The discovery of the acceleration in the universe’s expansion was made more or less simultaneously by two competing teams of astronomers at the end of the 20th century, in 1998, so the leaders of both teams share this Nobel Prize.

The new Nobel laureates, Drs. Saul Perlmutter, Adam Riess, and Brian Schmidt, were the leaders of the two teams studying distant supernovae, in remote galaxies, as cosmological indicators. Cosmology is the study of the properties of the universe on the largest scales of space and time. Supernovae are exploding stars at the ends of their lives. They occur only about once every fifty to one hundred years in a given galaxy, so one must study a very large number of galaxies in an automated fashion to find enough of them to be useful. The two teams introduced new automated search techniques to find sufficient supernovae and achieve their results.

During a supernova explosion, driven by rapid nuclear fusion of heavy elements, the supernova can temporarily become as bright as the entire galaxy in which it resides. The astrophysicists studied a particular type of supernova known as Type Ia. These are due to white dwarf stellar remnants exceeding a critical mass. Typically these white dwarfs would be found in binary stellar systems with another, more normal, star as a companion. If a white dwarf grabs enough material from the companion via gravitational tidal effects, that matter can “push it over the edge” and cause it to go supernova. Since all occurrences of this type of supernova event have the same mass for the exploding star (about 1.4 times the Sun’s mass), the resultant supernova has a consistent brightness or luminosity from one event to the next.

This makes them very useful as so-called standard candles. We know the absolute brightness, which we can calibrate for this class of supernova, and thus we can calculate the distance (called the luminosity distance) by comparing the observed brightness to the absolute. An alternative measure of the distance can be obtained by measuring the redshift of the companion galaxy. The redshift is due to the overall expansion of the universe, and thus the light from galaxies when it reaches us is stretched out to longer, or “redder” wavelengths. The amount of the shift provides what we call the redshift distance.
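The standard-candle comparison is just the distance modulus relation, m − M = 5 log₁₀(d / 10 pc). A minimal sketch of the idea; the Type Ia peak absolute magnitude of about −19.3 and the apparent magnitude of 24 are illustrative fiducial values, not measured data from these teams:

```python
# Distance modulus for a standard candle: m - M = 5 * log10(d / 10 pc).
# M_TYPE_IA (~ -19.3) and the apparent magnitude 24 below are illustrative values.

def luminosity_distance_pc(m_apparent, M_absolute):
    """Invert the distance modulus relation to get distance in parsecs."""
    return 10 ** ((m_apparent - M_absolute) / 5 + 1)

M_TYPE_IA = -19.3                      # fiducial Type Ia peak absolute magnitude
d_pc = luminosity_distance_pc(24.0, M_TYPE_IA)
print(f"Luminosity distance: {d_pc / 1e9:.1f} billion parsecs")
```

Comparing this luminosity distance against the redshift distance for many supernovae is exactly the cosmological test described in the next paragraph.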

Comparing these two different distance techniques provides a cosmological test of the overall properties of the universe: the expansion rate, the shape or topology, and whether the expansion is slowing down, as was expected, or not. The big surprise is that the expansion from the original Big Bang has stopped slowing down due to gravity and has instead been accelerating over the past several billion years! The Nobel winners did not expect such a result; they thought they had made errors in their analyses, and checked and rechecked. The acceleration did not go away. And when they compared the results between the two teams, they realized they had confirmed each other’s profound discovery of the reality of a dark energy driven acceleration.

The acceleration result is now well founded since it can be seen in the high spatial resolution measurements of the cosmic microwave background radiation as well. This is the radiation left over from the Big Bang event associated with the origin of our universe.

The acceleration is now increasingly important, dominating during the past 5 billion years of the 14 billion year history of the universe. Coincidentally, this is about how long our Earth and Sun have been in existence. The acceleration has to overcome the self-gravitational attraction of all the matter of the universe upon itself, and is believed to be due to a nonzero energy field known as dark energy that pervades all of space. As the universe expands to create more volume, more dark energy is also created! Empty space is not empty, due to the underlying quantum physics realities. The details, and why dark energy has the observed strength, are not yet understood.

Amazingly, Einstein had added a cosmological constant term, which acts as a dark energy, to his equations of General Relativity even before the Big Bang itself was discovered. But he later dropped the term and called it his worst blunder, after the expansion of the universe was first demonstrated by Edwin Hubble over 80 years ago. It turns out Einstein was in fact right; his simple term explains the observed data and the Perlmutter, Riess, and Schmidt measurements indicate that ¾ of the mass-energy content of the universe is found in dark energy, with only ¼ in matter.

Our universe is slated to expand in an exponential fashion for trillions of years and more, unless some other physics that we don’t yet understand kicks in. This is rather like the ever-increasing pace of modern technology and modern life and the continuing inflation of prices.

We honor the achievements of Drs. Perlmutter, Riess, and Schmidt and of their research teams in increasing our understanding of our universe and its underlying physics. Interestingly, only a few weeks ago, a very important supernova in the nearby M101 galaxy was discovered, and it is also a Type Ia. Because it is so close, only 25 million light years away, it is yielding a lot of high quality data. Perhaps this celestial fireworks display was a harbinger of their Nobel Prize?

References: (Telephone interview with Adam Riess) (Supernova Cosmology Project)



History of the Universe - WMAP

Graphic for History of the Universe (Credit: NASA/WMAP Science Team)

The Big Bang theory found great success explaining the general features of the universe, including the approximate age, the expansion history after the first second, the relative atomic abundances from cosmic nucleosynthesis, and of course the cosmic microwave background radiation. And it required only general relativity, a smooth initial state, and some well-understood atomic and nuclear physics. It assumed matter, both seen and unseen, was dominating and slowing the expansion via gravity. In this model the universe could expand forever, or recollapse on itself, depending on whether the average density was less than or greater than a certain value determined only by the present value of the Hubble constant.

However, during the late 20th century there remained some limitations and concerns with the standard Big Bang. Why is today’s density so relatively close to this critical value, since it would have had to be within 1 part in 1000 trillion of the critical density at the time of the microwave background to yield that state? How did galaxies form, given only the tiny density fluctuations observed in the microwave background emitted when the universe was 380,000 years old? And why was the microwave background so uniform anyway? In the standard Big Bang model, regions only a few degrees apart on the sky would not be causally connected (no communication between the regions would be possible, even at the speed of light).

There are four known fundamental forces of nature: electromagnetism, gravity, and two types of nuclear forces, known as the strong force and the weak force. Physicists believe all the forces but gravity unify at energies around 10,000 trillion times the rest mass-energy (using E = mc^2) of the proton (1 giga-electronvolt, or GeV). At some point very early in the life of the universe, at even higher energies equal to the Planck energy of 10 million trillion times the proton mass, all four forces would have been unified as a single force or interaction. Gravity would separate from the others first as the universe’s expansion began and the effective temperature dropped, and next the strong force would decouple.
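The Planck energy mentioned above is not an arbitrary number; it follows from combining the fundamental constants as E_P = √(ħc⁵/G). A rough check, in round SI values:

```python
# Estimate the Planck energy from fundamental constants: E_P = sqrt(hbar * c^5 / G).
hbar = 1.0546e-34      # reduced Planck constant, J*s
c = 2.9979e8           # speed of light, m/s
G = 6.674e-11          # Newton's gravitational constant, m^3 kg^-1 s^-2
eV = 1.602e-19         # joules per electron volt

E_planck_J = (hbar * c**5 / G) ** 0.5
E_planck_GeV = E_planck_J / eV / 1e9
print(f"Planck energy ~ {E_planck_GeV:.1e} GeV")   # ~1.2e19 GeV

proton_GeV = 0.938     # proton rest mass-energy in GeV
print(f"~{E_planck_GeV / proton_GeV:.0e} times the proton rest energy")
```

The result, about 10¹⁹ GeV, is indeed roughly 10 million trillion times the proton's ~1 GeV rest energy.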

We must also consider the vacuum field, which represents the non-zero energy of empty space. Even empty space is filled with virtual particles, and thus energy. At very early times the energy density of the vacuum would have been very high. During the very earliest period of the universe’s development, it could have decayed to a lower energy state in conjunction with the decoupling of the strong force from the unified single force, and this would also have driven an enormous expansion of space and deposited a large amount of energy into the creation of matter.

In the inflationary Big Bang model postulated by Alan Guth and others, the decay of the vacuum field would release massive amounts of energy and drive an enormous inflation (hyperinflation really) during a very short period of time. The inflation might have started one trillionth of one trillionth of one trillionth of a second after the beginning. And it might have lasted until only the time of one billionth of one trillionth of one trillionth of a second. But it would have driven the size of the entire universe to grow from an extremely microscopic scale up to the macroscopic scale. At the end of the inflation, what was originally a tiny bubble of space-time would have grown to perhaps one meter in size. And at the end of the inflationary period, the universe would have been filled with radiation and matter in the form of a quark-gluon plasma. Quarks are the constituent particles of ordinary matter such as protons and neutrons and gluons carry the strong force.

The doubling time was extremely short, so during this one billionth of one trillionth of one trillionth of a second the universe doubled around 100 times. In each of the 3 spatial dimensions it grew by roughly one million times one trillion times one trillion in size! This is much greater than even Zimbabwe’s inflation and happens in a nearly infinitesimal time! The inflationary period drove the universe to be very flat topologically, which is observed. And it implies that the little corner of the universe we can observe, and think of as our own, is only one trillionth of one trillionth of the entire universe, or less. There is good observational support for the inflationary Big Bang model from the latest observations concerning the flatness of the universe, given that the mass-energy density is so close to the critical value, and also from the weight of the evidence concerning the growth of original density fluctuations to form stars and galaxies.