Tag Archives: Dark energy

Dark Gravity: Is Gravity Thermodynamic?

This is the first in a series of articles on ‘dark gravity’ that look at emergent gravity and modifications to general relativity. In my book Dark Matter, Dark Energy, Dark Gravity I explained that I had picked Dark Gravity to be part of the title because of the serious limitations in our understanding of gravity. It is not like the other 3 forces; we have no well accepted quantum description of gravity. And it is some 33 orders of magnitude weaker than those other forces.
I noted that:

The big question here is ~ why is gravity so relatively weak, as compared to the other 3 forces of nature? These 3 forces are the electromagnetic force, the strong nuclear force, and the weak nuclear force. Gravity is different ~ it has a dark or hidden side. It may very well operate in extra dimensions… http://amzn.to/2gKwErb

My major regret with the book is that I was not aware of, and did not include a summary of, Erik Verlinde’s work on emergent gravity. In emergent gravity, gravity is not one of the fundamental forces at all.

Erik Verlinde is a leading string theorist in the Netherlands who in 2009 proposed that gravity is an emergent phenomenon, resulting from the thermodynamic entropy of the microstates of quantum fields.

 In 2009, Verlinde showed that the laws of gravity may be derived by assuming a form of the holographic principle and the laws of thermodynamics. This may imply that gravity is not a true fundamental force of nature (like e.g. electromagnetism), but instead is a consequence of the universe striving to maximize entropy. – Wikipedia article “Erik Verlinde”

This year, Verlinde extended this work from an unrealistic anti-de Sitter model of the universe to a more realistic de Sitter model. Our runaway universe is approaching a dark energy dominated de Sitter solution.

He proposes that general relativity is modified at large scales in a way that mimics the phenomena that have generally been attributed to dark matter. This is in line with MOND, or Modified Newtonian Dynamics. MOND is a long standing proposal from Mordehai Milgrom, who argues that there is no dark matter, rather that gravity is stronger at large distances than predicted by general relativity and Newton’s laws.

In a recent article on cosmology and the nature of gravity, Dr. Thanu Padmanabhan lays out 6 issues with the canonical Lambda-CDM cosmology based on general relativity and a homogeneous, isotropic, expanding universe. Observations are highly supportive of such a canonical model, with a very early inflation phase and with about 2/3 of the mass-energy content in dark energy and 1/3 in matter, mostly dark matter.

And yet,

1. The equation of state (pressure vs. density) of the early universe is indeterminate in principle, as well as in practice.

2. The history of the universe can be modeled based on just 3 energy density parameters: i) the density during inflation, ii) the density at radiation–matter equality, and iii) the dark energy density at late epochs. The first and last are dark energy driven inflationary de Sitter solutions, apparently unconnected, one very rapid and one very long lived. (No mention of dark matter density here.)

3. One can construct a formula for the information content at the cosmic horizon from these 3 densities, and the value works out to be 4π to high accuracy.

4. There is an absolute reference frame, for which the cosmic microwave background is isotropic. There is an absolute reference scale for time, given by the temperature of the cosmic microwave background.

5. There is an arrow of time, indicated by the expansion of the universe and by the cooling of the cosmic microwave background.

6. The universe has, rather uniquely for physical systems, made a transition from quantum behavior to classical behavior.

“The evolution of spacetime itself can be described in a purely thermodynamic language in terms of suitably defined degrees of freedom in the bulk and boundary of a 3-volume.”

Now in fluid mechanics one observes:

“First, if we probe the fluid at scales comparable to the mean free path, you need to take into account the discreteness of molecules etc., and the fluid description breaks down. Second, a fluid simply might not have reached local thermodynamic equilibrium at the scales (which can be large compared to the mean free path) we are interested in.”

Now it is well known that general relativity as a classical theory must break down at very small scales (very high energies). But also with such a thermodynamic view of spacetime and gravity, one must consider the possibility that the universe has not reached a statistical equilibrium at the largest scales.

One could have reached equilibrium at macroscopic scales much less than the Hubble distance scale c/H (14 billion light-years, where H is the Hubble parameter) but not yet reached it at the Hubble scale itself. In such a case the standard equations of gravity (general relativity) would apply only for the equilibrium region and for accelerations greater than the characteristic Hubble acceleration scale c·H (about 2 centimeters per second per year).
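As a quick numerical check (my own back-of-envelope sketch, not from the referenced papers, assuming H ≈ 67.8 km/s/Mpc for the present-day Hubble parameter):

```python
# Back-of-envelope check: the Hubble acceleration scale c*H in cm/s per year,
# assuming H = 67.8 km/s/Mpc (a value not quoted in this post).
C_CM_S = 2.998e10          # speed of light, cm/s
KM_PER_MPC = 3.086e19      # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

H0_km_s_Mpc = 67.8
H0_per_sec = H0_km_s_Mpc / KM_PER_MPC     # Hubble parameter in 1/s

accel_cm_s2 = C_CM_S * H0_per_sec         # c * H, in cm/s^2
print(f"c*H = {accel_cm_s2:.2e} cm/s^2 "
      f"= {accel_cm_s2 * SECONDS_PER_YEAR:.1f} cm/s per year")
# ~ 2 cm/s per year, the figure quoted above
```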

This lack of statistical equilibrium implies the universe could behave similarly to non-equilibrium thermodynamics behavior observed in the laboratory.

The information content of the expanding universe reflects that of the quantum state before inflation, and this result is 4π in natural units by information theoretic arguments similar to those used to derive the entropy of a black hole.

The black hole entropy is S = A / (4 Lp²), where A is the area of the black hole's event horizon, computed from the Schwarzschild radius, and Lp is the Planck length, √(Għ/c³), where G is the gravitational constant, ħ is the reduced Planck constant, and c is the speed of light.

This beautiful Bekenstein-Hawking entropy formula connects thermodynamics, the quantum world  and gravity.
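As an illustration of the scales involved (my own sketch, not from the references), the formula can be evaluated directly for a one solar mass black hole, with the entropy expressed in units of Boltzmann's constant:

```python
# Bekenstein-Hawking entropy S = A / (4 Lp^2) for a solar-mass black hole,
# in units of Boltzmann's constant k_B.
import math

G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
HBAR  = 1.055e-34   # reduced Planck constant, J s
C     = 2.998e8     # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

L_P = math.sqrt(G * HBAR / C**3)        # Planck length, ~1.6e-35 m

def bekenstein_hawking_entropy(mass_kg):
    """Entropy (in units of k_B) from the horizon area and the Planck length."""
    r_s = 2 * G * mass_kg / C**2        # Schwarzschild radius
    area = 4 * math.pi * r_s**2         # horizon area
    return area / (4 * L_P**2)

print(f"Planck length: {L_P:.2e} m")
print(f"S(1 solar mass) ~ {bekenstein_hawking_entropy(M_SUN):.2e} k_B")
# of order 10^77 k_B for a solar-mass black hole
```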

This same value of the universe’s entropy can also be used to determine the number of e-foldings during inflation to be 6 π² or 59, consistent with the minimum value to enforce a sufficiently homogeneous universe at the epoch of the cosmic microwave background.

If inflation occurs at a reasonable ~10^{15} GeV, one can derive the observed value of the cosmological constant (dark energy) from the information content value as well, argues Dr. Padmanabhan.

This provides a connection between the two dark energy driven de Sitter phases, inflation and the present day runaway universe.

The table below summarizes the 4 major phases of the universe’s history, including the matter dominated phase, which may or may not have included dark matter. Erik Verlinde in his new work, and Milgrom for over 3 decades, question the need for dark matter.

Epoch / Dominated by / Ends at / a(t) scaling / Size at end

Inflation / Inflaton (dark energy) / 10^{-32} seconds / e^{Ht} (de Sitter) / 10 cm

Radiation / Radiation / 40,000 years / √t / 10 million light-years

Matter / Matter (baryons, dark matter?) / 9 billion years / t^{2/3} / > 100 billion light-years

Runaway / Dark energy (Cosmological constant) / "Infinity" / e^{Ht} (de Sitter) / "Infinite"

In the next article I will review the status of MOND – Modified Newtonian Dynamics, from the phenomenology and observational evidence.

References

E. Verlinde. “On the Origin of Gravity and the Laws of Newton”. JHEP. 2011 (04): 29 http://arXiv.org/abs/1001.0785

T. Padmanabhan, 2016. “Do We Really Understand the Cosmos?” http://arxiv.org/abs/1611.03505v1

S. Perrenod, 2011. https://darkmatterdarkenergy.com/2011/07/04/dark-energy-drives-a-runaway-universe/

S. Perrenod, 2011. Dark Matter, Dark Energy, Dark Gravity 2011  http://amzn.to/2gKwErb

S. Carroll and G. Remmen, 2016, http://www.preposterousuniverse.com/blog/2016/02/08/guest-post-grant-remmen-on-entropic-gravity/


Galaxy Clusters Probe Dark Energy

Rich (large) clusters of galaxies are significant celestial X-ray sources. In fact, large clusters of galaxies typically contain around 10 times as much mass in the form of very hot gas as is contained in their constituent galaxies.

Moreover, the dark matter content of clusters is even greater than the gas content; typically it amounts to 80% to 90% of the cluster mass. In fact, the first detection of dark matter’s gravitational effects was made by Fritz Zwicky in the 1930s. His measurements indicated that the galaxies were moving around much faster than expected from the known galaxy masses within the cluster.


Image credit: X-ray: NASA/CXC/Univ. of Alabama/A. Morandi et al; Optical: SDSS, NASA/STScI (X-ray emission is shown in purple)

The dark matter’s gravitational field controls the evolution of a cluster. As a cluster forms via gravitational collapse, ordinary matter falling into the strong gravitational field interacts via frictional processes and shocks, and thermalizes at a high temperature in the range of 10 to 100 million Kelvins. The gas is so hot that it emits X-rays due to thermal bremsstrahlung.
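For reference, X-ray astronomers usually quote these gas temperatures as energies, kT in kiloelectron-volts (keV). A quick conversion sketch (my own illustration, not from the paper):

```python
# Convert cluster gas temperatures between Kelvins and keV (the energy kT).
K_B_KEV_PER_KELVIN = 8.617e-8   # Boltzmann constant in keV per Kelvin

def kelvin_to_kev(temperature_kelvin):
    return temperature_kelvin * K_B_KEV_PER_KELVIN

for temperature in (1e7, 3.5e7, 1e8):   # 10 million, 35 million, 100 million K
    print(f"{temperature:.1e} K  ->  kT = {kelvin_to_kev(temperature):.1f} keV")
# 10-100 million K corresponds to roughly 1-9 keV;
# the 3 keV selection threshold mentioned below is about 35 million K
```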

Recently, Drs. Morandi and Sun at the University of Alabama have implemented a new test of dark energy using the observed X-ray emission profiles of clusters of galaxies. Since clusters are dominated by the infall of primordial gas (ordinary matter) into dark matter dominated gravitational wells, the X-ray emission profiles – especially in the outer regions of clusters – are expected to be similar, after correcting for temperature variations and the redshift distance. Their analysis also considers variation in gas fraction with redshift; this is found to be minimal.

Because of the self similar nature of the X-ray emission profiles, X-ray clusters of galaxies can serve as cosmological probes, a type of ‘standard candle’. In particular, they can be used to probe dark energy, and to look at the possibility of the variation of the strength of dark energy over multi-billion year cosmological time scales.

The reason this works is that cluster development and mass growth, and corresponding temperature increase due to stronger gravitational potential wells, are essentially a tradeoff of dark matter and dark energy. While dark matter causes a cluster to grow, dark energy inhibits further growth.

This varies with the redshift of a cluster, since dark energy is constant per unit volume as the universe expands, but dark matter was denser in the past in proportion to (1 + z)^3, where z is the cluster redshift. In the early universe, dark matter thus dominated, as it had a much higher density, but in the last several billion years, dark energy has come to dominate and impede further growth of clusters.

The table below shows the percentage of the mass-energy of the universe which is in the form of dark energy and in the form of matter (both dark and ordinary) at a given redshift, assuming constant dark energy per unit volume. This is based on the best estimate from Planck of 68% of the total mass-energy density due to dark energy at present (z = 0). Higher redshift means looking farther back in time. At z = 0.5, around 5 billion years ago, matter still dominated over dark energy, but by around z = 0.3 the two are about equal and since then (for smaller z) dark energy has dominated. It is only since after the Sun and Earth formed that the universe has entered the current dark energy dominated era.

Table: Total Matter & Dark Energy Percentages vs. z

Redshift / Dark Energy percent / Matter percent

0 / 68 / 32

0.25 / 52 / 48

0.5 / 39 / 61

0.75 / 28 / 72

1.0 / 21 / 79

1.5 / 12 / 88
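These percentages follow directly from holding the dark energy density fixed while scaling the matter density as (1 + z)³. A minimal sketch that reproduces the table (my own check, assuming the Planck value of 68% dark energy today):

```python
# Reproduce the table: dark energy vs. matter fraction as a function of redshift,
# assuming constant dark energy density and matter density scaling as (1 + z)^3.
OMEGA_LAMBDA_0 = 0.68   # present-day dark energy fraction (Planck)
OMEGA_MATTER_0 = 0.32   # present-day total matter fraction (dark + ordinary)

def fractions(z):
    matter = OMEGA_MATTER_0 * (1 + z) ** 3   # matter was denser in the past
    dark_energy = OMEGA_LAMBDA_0             # constant per unit volume
    total = matter + dark_energy
    return 100 * dark_energy / total, 100 * matter / total

for z in (0, 0.25, 0.5, 0.75, 1.0, 1.5):
    de, m = fractions(z)
    print(f"z = {z:4}:  dark energy {de:3.0f}%   matter {m:3.0f}%")
# matches the table: 68/32 at z = 0, roughly equal near z ~ 0.3, 12/88 at z = 1.5
```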

The authors analyzed data from a large sample consisting of 320 clusters of galaxies observed with the Chandra X-ray Observatory. The clusters ranged in redshifts from 0.056 up to 1.24 (almost 9 billion years ago), and all of the selected clusters had temperatures measured to be equal to or greater than 3 keV (above 35 million Kelvins). For such hot clusters, non-gravitational astrophysical effects are expected to be small.

Their analysis evaluated the equation of state parameter, w, of dark energy. If dark energy adheres to the simplest model, that of the cosmological constant (Λ) found in the equations of general relativity, then w = -1 is expected.

The equation of state governs the relationship between pressure and energy density; dark energy is observed to have a negative pressure, for which w < 0, unlike for matter.

Their resulting value for the equation of state parameter is

w = -1.02 +/- 0.058,

equal to -1 within the statistical errors.

The results from combining three other experiments, namely

  1. Planck satellite cosmic microwave background (CMB) measurements
  2. WMAP satellite CMB polarization measurements
  3. optical observations of Type 1a supernovae

yield a value

w = -1.09 +/- 0.19,

also consistent with a cosmological constant. And combining both the X-ray cluster results with the CMB and optical results yields a tight constraint of

w = -1.01 +/- 0.03.

Thus a simple cosmological constant explanation for dark energy appears to be a sufficient explanation to within a few percent accuracy.

The authors were also able to constrain the evolution in w and find, for a model with

w(z) = w(0) + wa * z / (1 + z), that the evolution parameter is zero within statistical errors:

wa = -0.12 +/- 0.4.
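This w(z) form is the widely used CPL (Chevallier-Polarski-Linder) parametrization. A minimal sketch of how little the equation of state evolves with the quoted best-fit values (my own illustration, simply plugging in w(0) = -1.02 and wa = -0.12):

```python
# Evaluate w(z) = w(0) + wa * z / (1 + z) with the best-fit values quoted above,
# to show how little the dark energy equation of state is allowed to evolve.
W0 = -1.02    # present-day equation of state (X-ray cluster result)
WA = -0.12    # evolution parameter (consistent with zero within errors)

def w_of_z(z, w0=W0, wa=WA):
    return w0 + wa * z / (1.0 + z)

for z in (0.0, 0.5, 1.0, 2.0):
    print(f"z = {z}:  w = {w_of_z(z):.3f}")
# w stays within a few percent of -1 (a cosmological constant) out to z ~ 2
```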

This is a powerful test of dark energy’s existence, equation of state, and evolution, using hundreds of X-ray clusters of galaxies. There is no evidence for evolution in dark energy with redshift back to around z = 1, and a simple cosmological constant model is supported by the data from this technique as well as from other methods.

References:

  1. A. Morandi and M. Sun, arXiv:1601.03741v3 [astro-ph.CO], 4 Feb 2016, “Probing dark energy via galaxy cluster outskirts”
  2. http://chandra.harvard.edu/photo/2016/clusters/

Dark Sector Experiments

A recent dark energy experiment searched for a so-called scalar “chameleon field”. Chameleon particles could be an explanation for dark energy. The chameleon field strength would have to become vanishingly small in regions of significant matter density, coupling to matter more weakly than gravity does. But in low-density regions, say between the galaxies, the chameleon particle would exert a long range force.

Chameleons can decay to photons, so that provides a way to detect them, if they actually exist.

Chameleon particles were originally suggested by Justin Khoury of the University of Pennsylvania and another physicist around 2003. Now Khoury and Holger Muller and collaborators at UC Berkeley have performed an experiment which pushed millions of cesium atoms toward an aluminum sphere in a vacuum chamber. By changing the orientation in which the experiment is performed, the researchers can correct for the effects of gravity and compare the putative chameleon field strength to gravity.

If there were a chameleon field, then the cesium atoms should accelerate at different rates depending on the orientation, but no difference was found. The level of precision of this experiment is such that only chameleons that interact very strongly with matter have been ruled out. The team is looking to increase the precision of the experiment by additional orders of magnitude.

For now the simplest explanation for dark energy is the cosmological constant (or energy of the vacuum) as Einstein proposed almost 100 years ago.


The Large Underground Xenon experiment to detect dark matter (CC BY 3.0)

Dark matter search broadens

“Dark radiation” has been hypothesized by some physicists for some time. In this scenario there would be a “dark electromagnetic” force, and dark matter particles could annihilate into dark photons or other dark sector particles when two of them collide. This would happen infrequently, since dark matter is much more diffusely distributed than ordinary matter.

Ordinary matter clumps because it undergoes frictional and ordinary radiation processes, emitting photons. This allows it to cool off and become more dense under mutual gravitational forces. Dark matter rarely decays or interacts, and does not interact electromagnetically, so no friction or ordinary radiation occurs. Essentially, dark matter helps ordinary matter clump together initially, since it dominates on large scales, but on small scales ordinary matter becomes dominant in certain regions. Thus the density of dark matter in the solar system is very low.

Earthbound dark matter detectors have focused on direct interaction of dark matter with atomic nuclei for the signal. John Cherry and co-authors have suggested that dark matter may not interact directly, but rather it first annihilates to light particles, which then scatter on the atomic nuclei used as targets in the direct detection experiments.

So in this scenario dark matter particles annihilate when they encounter each other, producing dark radiation, and then the dark radiation can be detected by currently existing direct detection experiments. If this is the main channel for detection, then much lower mass dark matter particles can be observed, down to of order 10 MeV (million electron-Volts), whereas current direct detection is focused on masses of several GeV (billion electron-Volts) to 100 GeV or more. (The proton rest mass is about 1 GeV.)

A Nobel Prize awaits, most likely, the first unambiguous direct detection of either dark matter, or dark energy, if it is even possible.

References

https://en.wikipedia.org/wiki/Chameleon_particle – Chameleon particle

http://news.sciencemag.org/physics/2015/08/tiny-fountain-atoms-sparks-big-insights-dark-energy?rss=1 – dark energy experiment

http://www.preposterousuniverse.com/blog/2008/10/29/dark-photons/ – dark photons

http://scitechdaily.com/physicists-work-on-new-approach-to-detect-dark-matter/ – article on detecting dark matter generated dark radiation

http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.231303 – Cherry et al. paper in Physical Review Letters


The Supervoid

The largest known structure in the universe goes by the name of the Supervoid. It is an enormously large under-dense region about 1.8 billion light-years in extent. Voids (actually low density regions) in galaxy and cluster density have been mapped over several decades.

The cosmic microwave background radiation map from the Planck satellite and earlier experiments is extremely uniform. The temperature is about 2.7 Kelvins everywhere in the universe at present. There are small microKelvin scale fluctuations due to primordial density perturbations. The over-dense regions grow over cosmic timescales to become galaxies, groups and clusters of galaxies, and superclusters made of multiple clusters. Under-dense regions have fewer galaxies and groups per unit volume than the average.

The largest inhomogeneous region detected in the cosmic microwave background map is known as the Cold Spot and has a very slightly lower temperature, by about 70 microKelvins (a microKelvin being only a millionth of a degree). It may be partly explained by a supervoid with a radius of 320 Megaparsecs, or around 1 billion light-years.

Superclusters heat cosmic microwave background photons slightly when they pass through, if there is significant dark energy in the universe. Supervoids cool the microwave background photons slightly. The reason is that, once dark energy becomes significant, during the second half of the universe’s expansion to date, it begins to smooth out superclusters and supervoids. It pushes the universe back towards greater uniformity while accelerating the overall expansion.

A photon will gain energy (blueshift) when it heads into a supercluster on its way to the Earth. This is an effect of general relativity. And as it leaves the other side of the supercluster as it continues its journey, it will lose energy (redshift) as it climbs out of the gravitational potential well. But while it is passing through the supercluster, that structure is spreading out due to the Big Bang overall expansion, and its gravitational potential is weakening. So the redshift or energy loss is smaller than the original energy gain or blueshift. So net-net, photons gain energy passing through a supercluster.

The opposite happens with a supervoid. Photons lose energy on the way in. They gain  energy on the way out, but less than they lost. Net-net photons lose energy, become colder, when passing through supervoids. Now all of this is relative to the overall redshift that all photons experience as they travel from the Big Bang last scattering surface to the Earth. During each period that the universe doubles in size, the Big Bang radiation doubles in wavelength, or halves in temperature.

In a newly published paper titled “Detection of a Supervoid aligned with the Cold Spot in the Cosmic Microwave Background”, astronomers looked at the distribution of galaxies in the direction of the well-established Cold Spot. The supervoid core redshift distance is in the range z = 0.15 to z = 0.25, corresponding to a distance of roughly 2 to 3 billion light-years from Earth.
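Those distances follow from integrating c/H(z) out to the quoted redshifts. A rough sketch (my own calculation, assuming a flat ΛCDM model with H0 = 68 km/s/Mpc and Ωm = 0.32; not taken from the paper):

```python
# Rough comoving distance to the supervoid core (z = 0.15 to 0.25), by numerically
# integrating c / H(z) in an assumed flat Lambda-CDM model.
import math

H0 = 68.0              # Hubble constant, km/s/Mpc (assumed)
OMEGA_M, OMEGA_L = 0.32, 0.68
C_KM_S = 2.998e5       # speed of light, km/s
MPC_TO_BILLION_LY = 3.262e-3   # 1 Mpc = 3.262 million light-years

def hubble(z):
    return H0 * math.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

def comoving_distance_mpc(z, steps=1000):
    dz = z / steps
    # midpoint-rule integration of c / H(z') from 0 to z
    return sum(C_KM_S / hubble((i + 0.5) * dz) * dz for i in range(steps))

for z in (0.15, 0.25):
    d_mpc = comoving_distance_mpc(z)
    print(f"z = {z}: ~{d_mpc:.0f} Mpc  (~{d_mpc * MPC_TO_BILLION_LY:.1f} billion light-years)")
# roughly 2 and 3 billion light-years, as quoted in the text
```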

They find a reduction in galaxy density of about 20%, and of dark matter around 14%, in the supervoid, relative to the overall average density values in the universe. The significance of the detection is high, around 5 standard deviations. The center of the low density region is well aligned with the position of the Cold Spot in the galactic Southern Hemisphere.

Both the existence of this supervoid and its alignment with the Cold Spot are highly significant. The chance of the two being closely aligned to this degree is calculated as just 1 chance in 20,000. The image below is Figure 2 from the authors’ paper and maps the density of galaxies in the left panel and the temperature differential of the microwave background radiation in the right panel. The white dot in the middle of each panel marks the center of the Cold Spot in the cosmic microwave background.


A lower density of galaxies is indicated by a blue color in the left panel. Red and orange colors denote a higher density of galaxies. The right panel shows slightly lower temperature of the cosmic microwave background in blue, and slightly higher temperature in red.

The authors have calculated the expected temperature reduction due to the supervoid; using a first-order model it is about 20 microKelvins. While this is not sufficient to explain the entire Cold Spot temperature decrease, it is a significant portion of the overall 70 microKelvin reduction.

Dark Energy is gradually smearing out the distinction between superclusters and supervoids. Dark Energy has come to dominate the universe’s mass-energy balance fairly recently, since about 5 billion years ago. If there is no change in the Dark Energy density, over many billions of years it will push all the galaxies so far apart from one another that no other galaxies will be detectable from our Milky Way.

References

I. Szapudi et al, 2015 M.N.R.A.S., Volume 450, Issue 1, p. 288, “Detection of a supervoid aligned with the cold spot of the cosmic microwave background” – http://mnras.oxfordjournals.org/content/450/1/288.full

S. Perrenod and M. Lesser, 1980, P.A.S.P. 91:764, “A Redshift Survey of a High-Multiplicity Supercluster” http://www.jstor.org/discover/10.2307/40677683?uid=2&uid=4&sid=21106121183081

  


Planck 2015 Constraints on Dark Energy and Inflation

The European Space Agency’s Planck satellite gathered data for over 4 years, and a series of 28 papers releasing the results and evaluating constraints on cosmological models have been recently released. In general, the Planck mission’s complete results confirm the canonical cosmological model, known as Lambda Cold Dark Matter, or ΛCDM. In approximate percentage terms the Planck 2015 results indicate 69% dark energy, 26% dark matter, and 5% ordinary matter as the mass-energy components of the universe (see this earlier blog:

https://darkmatterdarkenergy.com/2015/03/07/planck-mission-full-results-confirm-canonical-cosmology-model/)

Dark Energy

We know that dark energy is the dominant component of the universe, comprising 69% of the total energy content, and that it exerts a negative pressure causing the expansion to continuously speed up. The universe is not only expanding, but the expansion is even accelerating! What dark energy is we do not know, but the simplest explanation is that it is the energy of empty space, of the vacuum. Significant departures from this simple model are not supported by observations.

The dark energy equation of state is the relation between the pressure exerted by dark energy and its energy density. Planck satellite measurements are able to constrain the dark energy equation of state significantly. Consistent with earlier measurements of this parameter, which is usually denoted as w, the Planck Consortium has determined that w = -1 to within 4 or 5 percent (95% confidence).

According to the Planck Consortium, “By combining the Planck TT+lowP+lensing data with other astrophysical data, including the JLA supernovae, the equation of state for dark energy is constrained to w = −1.006 ± 0.045 and is therefore compatible with a cosmological constant, assumed in the base ΛCDM cosmology.”

A value of -1 for w corresponds to a simple cosmological constant model with a single parameter Λ that is the present-day energy density of empty space, the vacuum. The Λ value measured to be 0.69 is normalized to the critical mass-energy density. Since the vacuum is permeated by various fields, its energy density is non-zero. (The critical mass-energy density is that which results in a spatially flat space-time for the universe; it is the equivalent of 5.2 proton masses per cubic meter.)
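That figure of roughly 5 proton masses per cubic meter can be checked directly from the definition of the critical density, ρc = 3H0²/(8πG). A minimal sketch (my own check, assuming H0 = 67.8 km/s/Mpc):

```python
# Critical mass-energy density rho_c = 3 H0^2 / (8 pi G), expressed in proton
# masses per cubic meter, assuming H0 = 67.8 km/s/Mpc.
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_PROTON = 1.673e-27     # proton mass, kg
KM_PER_MPC = 3.086e19

H0 = 67.8 / KM_PER_MPC                    # Hubble constant in 1/s
rho_c = 3 * H0 ** 2 / (8 * math.pi * G)   # critical density, kg/m^3

print(f"rho_c = {rho_c:.2e} kg/m^3 "
      f"= {rho_c / M_PROTON:.1f} proton masses per cubic meter")
# ~ 5.2 proton masses per cubic meter, the figure quoted above
```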

Such a model has a negative pressure, which leads to the accelerated expansion that has been observed for the universe; this acceleration was first discovered in 1998 by two teams using certain supernovae as standard candle distance indicators, and measuring their luminosity as a function of redshift distance.

Modified gravity

The phrase modified gravity refers to models that depart from general relativity. To date, general relativity has passed every test thrown at it, on scales from the Earth to the universe as a whole. The Planck Consortium has also explored a number of modified gravity models with extensions to general relativity. They are able to tighten the restrictions on such models, and find that overall there is no need for modifications to general relativity to explain the data from the Planck satellite.

Primordial density fluctuations

The Planck data are consistent with a model of primordial density fluctuations that is close to, but not precisely, scale invariant. These are the fluctuations which gave rise to overdensities in dark matter and ordinary matter that eventually collapsed to form galaxies and the observed large scale structure of the universe.

The concept is that the spectrum of density fluctuations is a simple power law of the form

P(k) ∝ k^(ns−1),

where k is the wave number (the inverse of the wavelength scale). The Planck observations are well fit by such a power law assumption. The measured spectral index of the perturbations has a slight tilt away from 1, with the tilt detected at a significance of more than 5 standard deviations:

ns = 0.9677 ± 0.0060

The existence and amount of this tilt in the spectral index has implications for inflationary models.
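A short sketch of what the tilt means in practice (my own illustration, with an arbitrary normalization and a conventional pivot scale not quoted in this post): with ns slightly below 1, the primordial spectrum has slightly less power at small scales (large k) than at large scales.

```python
# Primordial power spectrum P(k) ~ k^(ns - 1) with the Planck 2015 tilt, shown
# relative to an exactly scale-invariant (ns = 1) spectrum. Amplitude is arbitrary;
# the pivot scale k = 0.05 / Mpc is a conventional choice.
NS = 0.9677
K_PIVOT = 0.05   # 1/Mpc

def primordial_power_ratio(k, ns=NS):
    """P(k) relative to its value at the pivot scale."""
    return (k / K_PIVOT) ** (ns - 1.0)

for k in (0.001, 0.05, 1.0):   # wave numbers in 1/Mpc
    print(f"k = {k:5} / Mpc :  P(k)/P(k_pivot) = {primordial_power_ratio(k):.3f}")
# the slow drift of P(k) with k is the 'tilt' detected at more than
# 5 standard deviations
```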

Inflation

The Planck Consortium authors have evaluated a wide range of potential inflationary models against the data products, including the following categories:

  • Power law
  • Hilltop
  • Natural
  • D-brane
  • Exponential
  • Spontaneously broken supersymmetry
  • Alpha attractors
  • Non-minimally coupled

Figure 12 from Planck 2015 results XX, Constraints on Inflation. The Planck 2015 data constraints are shown with the red and blue contours. Steeper models with V ~ φ³ or V ~ φ² appear ruled out, whereas R² inflation looks quite attractive.

Their results appear to rule out some of these, although many models remain consistent with the data. Power law models with indices greater than or equal to 2 appear to be ruled out. Simple slow roll models such as R² inflation, which was actually the first inflationary model proposed 35 years ago, appear more favored than others. Brane inflation and exponential inflation are also good fits to the data. Again, many other models still remain statistically consistent with the data.

Simple models with a few parameters characterizing the inflation suffice:

“Firstly, under the assumption that the inflaton* potential is smooth over the observable range, we showed that the simplest parametric forms (involving only three free parameters including the amplitude V(φ∗), no deviation from slow roll, and nearly power-law primordial spectra) are sufficient to explain the data. No high-order derivatives or deviations from slow roll are required.”

* The inflaton is the name cosmologists give to the inflation field

“Among the models considered using this approach, the R² inflationary model proposed by Starobinsky (1980) is the most preferred. Due to its high tensor-to-scalar ratio, the quadratic model is now strongly disfavoured with respect to R² inflation for Planck TT+lowP in combination with BAO data. By combining with the BKP likelihood, this trend is confirmed, and natural inflation is also disfavoured.”

Isocurvature and tensor components

They also evaluate whether the cosmological perturbations are purely adiabatic, or include an additional isocurvature component as well. They find that an isocurvature component would be small, less than 2% of the overall perturbation strength. A single scalar inflaton field with adiabatic perturbations is sufficient to explain the Planck data.

They find that the tensor-to-scalar ratio is less than 9%, which again rules out or constrains certain models of inflation.

Summary

The simplest LambdaCDM model continues to be quite robust, with the dark energy taking the form of a simple cosmological constant. It’s interesting that one of the oldest and simplest models for inflation, characterized by a power law relating the potential to the inflaton amplitude, and dating from 35 years ago, is favored by the latest Planck results. A value for the power law index of less than 2 is favored. All things being equal, Occam’s razor should lead us to prefer this sort of simple model for the universe’s early history. Models with slow-roll evolution throughout the inflationary epoch appear to be sufficient.

The universe started simply, but has become highly structured and complex through various evolutionary processes.

References

Planck Consortium 2015 papers are at http://www.cosmos.esa.int/web/planck/publications – This site links to the 28 papers for the 2015 results, as well as earlier publications. Especially relevant are these – XIII Cosmological parameters, XIV Dark energy and modified gravity, and XX Constraints on inflation.


Planck Mission Full Results Confirm Canonical Cosmology Model

Dark Matter, Dark Energy values refined

The Planck satellite, launched by the European Space Agency, made observations of the cosmic microwave background (CMB) for a little over 4 years, from August 2009 until October 2013.

Preliminary results based on only the data obtained over the first year and a quarter of operation, and released in 2013, established high confidence in the canonical cosmological model. This ΛCDM (Lambda-Cold Dark Matter) model is of a spatially flat universe, initiated in an inflationary Big Bang some 13.8 billion years ago and dominated by dark energy (the Λ component), and secondarily by cold dark matter (CDM). Ordinary matter, of which stars, planets and human beings are composed, is the third most important component from a mass-energy standpoint. The amount of dark energy is over twice the mass-energy equivalent of all matter combined, and the dark matter is well in excess of the ordinary matter component.

Image: The history of the Universe

This general model had been well-established by the Wilkinson Microwave Anisotropy Probe (WMAP), but the Planck results have provided much greater sensitivity and confidence in the results.

Now a series of 28 papers have been released by the Planck Consortium detailing results from the entire mission, with over three times as much data gathered. The first paper in the series, Planck 2015 Results I, provides an overview of these results. Papers XIII and XIV detail the cosmological parameters measured and the findings on dark energy, while several additional papers examine potential departures from a canonical cosmological model and constraints on inflationary models.

In particular they find that:

Ωb*h²  = .02226 to within 1%.

In this expression Ωb is the baryon (basically ordinary matter) mass-energy fraction (fraction of total-mass energy in ordinary matter) and h = H0/100. H0 is the Hubble constant which measures the expansion rate of the universe, and indirectly, its age. The best value for H0 is 67.8 kilometers/sec/Megaparsec  (millions of parsecs, where 1 parsec = 3.26 light-years). H0 has an uncertainty of about 1.3% (two standard deviations). In this case h = .678 and the expression above becomes:

Ωb = .048, with uncertainty around 3% of its value. Thus, just under 5% of the mass-energy density in the universe is in ordinary matter.

The cold matter density is measured to be:

Ωc*h²  = .1186 with uncertainty less than 2% and with the h value substituted we have Ωc = .258 with similar uncertainty.

Since the radiation density in the universe is known to be very low, the remainder of the mass-energy fraction is from dark energy,

Ωe = 1 – .048 – .258 = .694
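The arithmetic above is easy to reproduce (a sketch of the unit conversions only, not of the Planck likelihood analysis itself):

```python
# Recover the present-day density fractions from the Planck 2015 'physical'
# densities Omega_b*h^2 and Omega_c*h^2, with h = H0 / 100.
OMEGA_B_H2 = 0.02226   # baryon density times h^2
OMEGA_C_H2 = 0.1186    # cold dark matter density times h^2
H0 = 67.8              # km/s/Mpc
h = H0 / 100.0

omega_b = OMEGA_B_H2 / h ** 2          # ordinary matter fraction  (~0.048)
omega_c = OMEGA_C_H2 / h ** 2          # cold dark matter fraction (~0.258)
omega_de = 1.0 - omega_b - omega_c     # dark energy, assuming a flat universe
                                       # and neglecting radiation (~0.694)

print(f"Omega_b  = {omega_b:.3f}")
print(f"Omega_c  = {omega_c:.3f}")
print(f"Omega_DE = {omega_de:.3f}")
```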

So in approximate percentage terms the Planck 2015 results indicate 69% dark energy, 26% dark matter, and 5% ordinary matter as the mass-energy balance of the universe. These results are essentially the same as the ratios found from the preliminary results reported in 2013. It is to be emphasized that these are present-day values of the constituents. The components evolve differently as the universe expands. Dark energy is manifested with its current energy density in every new unit of volume as the universe continues to expand, while the average dark matter and ordinary matter densities decrease inversely as the volume grows. This implies that in the past, dark energy was less important, but it will dominate more and more as the universe continues to expand.

Why is dark energy produced as the universe expands? The simplest explanation is that it is the irreducible quantum energy of empty space, of the vacuum. Empty space – space with no particles whatsoever – still has fields (scalar fields, in particular) permeating it, and these fields have a minimum energy. It also has ‘virtual’ particles popping in and out of existence very briefly. This is the cosmological constant (Λ) model for the dark energy.

This is the ultimate free lunch in nature. The dark energy works as a negative gravity; it enters into the equations of general relativity as a negative pressure which causes space to expand. And as space expands, more dark energy is created! A wonderful self-reinforcing process is in place. Since the dark energy dominates over matter, the expansion of the universe is accelerating, and has been for the last 5 billion years or so. Why wonderful? Because it adds billions upon billions of years of life to our universe.

The Planck Consortium also find the universe is spatially flat to a very high degree, with an upper limit of 1/2 of 1% deviation from flatness at large scales. This is an impressive observational result.

One of the most interesting results is Planck’s ability to constrain inflationary models. While a massive inflation almost certainly happened during the first billionth of a trillionth of a trillionth of a second as the Universe began, as indicated by the very uniformity of the CMB signal, there are many possible models of the inflationary field’s energy potential.

We’ll take a look at this in a future blog entry.




More Dark Matter: First Planck Results


Credit: European Space Agency and Planck Collaboration 

Map of CMB temperature fluctuations with slightly colder areas in blue, and hotter areas in red.

 

The first results from the European Space Agency’s Planck satellite have provided excellent confirmation for the Lambda-CDM (Dark Energy and Cold Dark Matter) model. The results also indicate somewhat more dark matter, and somewhat less dark energy, than previously thought. These are the most sensitive and accurate measurements of fluctuations in the cosmic microwave background (CMB) radiation to date.

Results from Planck’s first 1 year and 3 months of observations were released in March, 2013. The new proportions for mass-energy density in the current universe are:

  • Ordinary matter 5%
  • Dark matter 27%
  • Dark energy 68%


Credit: European Space Agency and Planck Collaboration

The prior best estimate for dark matter, primarily from NASA’s WMAP satellite observations, was 23%. So the dark matter fraction is higher, and the dark energy fraction correspondingly lower, than the WMAP measurements had indicated.

Dark energy still dominates by a very considerable degree, although somewhat less than had been thought prior to the Planck results. This dark energy – Lambda – drives the universe’s expansion to speed up, which is known as the runaway universe. At one time dark matter dominated, but for the last 5 billion years, dark energy has been dominant, and it grows in importance as the universe continues to expand.

The Planck results also added a little bit to the age of the universe, which is measured to be about 13.8 billion years, about 3 times the age of the Earth. The CMB radiation itself was emitted when the universe was only 380,000 years old. It was originally in the infrared and optical portions of the spectrum, but has been massively red-shifted, by a factor of around 1,100, due to the expansion of the universe.

There are many other science results from the Planck Science team in cosmology and astrophysics. These include initial support indicated for relatively simple models of “slow roll” inflation in the extremely early universe. You can find details at the ESA web sites referenced below, and in the large collection of papers from the 47th ESlab Conference link.

References:

http://www.esa.int/Our_Activities/Space_Science/Planck/Planck_reveals_an_almost_perfect_Universe – news article at ESA site

https://darkmatterdarkenergy.com/2011/07/04/dark-energy-drives-a-runaway-universe/ – runaway universe blog

http://www.rssd.esa.int/index.php?project=planck – Planck Science Team site

http://www.sciops.esa.int/index.php?project=PLANCK&page=47_eslab – 47th ESlab Conference presentations on Planck science results


Looking for Dark Energy in the Lyman Alpha Forest

Baryon acoustic oscillations (BAO) are the acoustic (sound) waves that occur in the very early universe due to very small density inhomogeneities in the nearly uniform fluid. These primordial acoustic oscillations have left an imprint on the way in which galaxies are spatially distributed. The characteristic scale length for these oscillations is around 500 million light years (in the frame of the present-day universe). A spatial correlation function is used to measure the degree to which galaxies and clumps of matter in general, including dark matter, are separated from one another. The characteristic length scale serves as a standard ruler for very large-scale clustering, and is seen as a distinctive break (change in slope) in the power spectrum of the degree of spatial correlation vs. distance.

An international consortium of astronomers representing 29 institutions submitted a paper last month to the journal Astronomy and Astrophysics; the current version can be found here (http://arxiv.org/abs/1211.2616). They used a clever technique of detecting clouds of neutral hydrogen along the line of sight to a large number of quasars with high redshifts. Thus the matter clumps in this case are neutral hydrogen clouds or neutral hydrogen within intervening galaxies or proto-galaxies. These absorb light from the quasar and produce absorption lines in the spectrum at discrete locations corresponding to various redshifts. The authors are detecting a characteristic transition known as the Lyman alpha line, which is found well into the ultraviolet at 121.6 nanometers (for zero redshift).

For this work over 48,000 high-redshift quasars, with a mean redshift of 2.3, were taken from the 3rd Sloan Digital Sky Survey. A quasar may have many hydrogen clouds intervening along the line of sight from the Earth to the quasar. These clouds will be seen at different redshifts (less than the quasar redshift) reflecting their position along the line of sight. This is referred to as the Lyman alpha “forest”. This study is the first application of Lyman alpha forest measurements to the detection of the BAO feature. At the average redshift of 2.3, the wavelength of Lyman alpha radiation is shifted to 401.3 nm [calculated as (1+2.3)*121.6 nm], in the violet portion of the visible spectrum. The study incorporated redshifts from the absorbing clouds in the range of 1.96 to 3.38; these are found in front of (at lower redshift than) quasars with redshifts ranging from 2.1 to 3.5.
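The observed wavelength simply scales with (1 + z); a quick check (my own illustration) across the range of absorber redshifts used in the study:

```python
# Observed wavelength of the Lyman alpha line for absorbing clouds at the
# redshifts used in the study.
LYMAN_ALPHA_REST_NM = 121.6   # rest-frame wavelength, nanometers

def observed_wavelength_nm(z):
    return (1 + z) * LYMAN_ALPHA_REST_NM

for z in (1.96, 2.3, 3.38):
    print(f"z = {z}: {observed_wavelength_nm(z):.1f} nm")
# 401.3 nm at the mean absorber redshift of 2.3, in the violet part of the
# visible spectrum
```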

Speedup or slowdown versus age of universe. The Big Bang is on the left, 13.7 billion years ago.

The authors’ measurement of the expansion rate of the universe is shown as the red dot in this figure. The white line through the various data points is the rate of expansion of the universe expected versus time for the standard Lambda-Cold Dark Matter cosmological model. The expansion rate at early times was lessening due to gravity from matter (ordinary and dark), but it is now increasing, since dark energy has come to dominate during the last 5 billion years or so. The red data point is clearly on the slowing down portion of the curve. Image credit: http://sdss3.wordpress.com/2012/11/13/boss-detects-baryon-acoustic-oscillations-in-the-lyman-alpha-forest-at-z-of-2-3/  

The BAO feature has been measured a number of times, using galaxy spatial distributions, but always at lower redshifts, that is, at more recent times. This is not only the first Lyman alpha-based measurement, but the first measurement made at a high redshift for which the universe was still slowing down, i.e. the expansion was decelerating. At the redshift of 2.3, when the universe was only about 3 billion years old, the gravitational effect of dark plus ordinary matter was stronger than the repulsive effect of dark energy. It is only more recently, after the universe became about 9 billion years old (some 5 billion years ago), and corresponding to redshifts less than about z = 0.8, that dark energy began to dominate and cause an acceleration in the overall expansion of the universe.
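The deceleration-then-acceleration history follows from the Friedmann equation. A minimal flat ΛCDM sketch of the deceleration parameter q(z) (my own illustration, assuming Ωm = 0.3 and ΩΛ = 0.7, roughly the values in use at the time of this work):

```python
# Deceleration parameter q(z) in flat Lambda-CDM: q = Om(z)/2 - OL(z), where
# Om(z) and OL(z) are the matter and dark energy fractions at redshift z.
# q > 0 means the expansion is slowing down; q < 0 means it is accelerating.
OMEGA_M0, OMEGA_L0 = 0.3, 0.7   # assumed present-day fractions

def q_of_z(z):
    matter = OMEGA_M0 * (1 + z) ** 3   # matter density scales as (1+z)^3 looking back
    total = matter + OMEGA_L0
    return 0.5 * matter / total - OMEGA_L0 / total

for z in (2.3, 0.8, 0.3, 0.0):
    q = q_of_z(z)
    state = "decelerating" if q > 0 else "accelerating"
    print(f"z = {z:3}:  q = {q:+.2f}  ({state})")
# the expansion decelerates at z = 2.3 (the Lyman alpha measurement) and
# accelerates today, with the crossover near z ~ 0.7
```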

Since this observation shows a significantly higher rate of expansion than occurred at the minimum around 5 billion years ago, it is further evidence that dark energy in some form is real. As the authors state in their paper: “Combined with CMB constraints, we deduce the expansion rate at z = 2.3 and demonstrate directly the sequence of deceleration and acceleration expected in dark-energy dominated cosmologies.” This is an exciting result, providing additional confirmation that dark energy represents around three-quarters of the present-day energy balance of the universe.


Dark Energy Survey First Light!

Last month the Dark Energy Survey project achieved first light from its remote location in Chile’s Atacama Desert. The term first light is used by astronomers to refer to the first observation by a new instrument.

And what an instrument this is! It is in fact the world’s most powerful digital camera. This Dark Energy Camera, or DECam, is a 570 Megapixel optical survey camera with a very wide field of view. The field of view is over 2 degrees, which is rather unusual in optical astronomy. And the camera requires special CCDs that are sensitive in the red and infrared parts of the spectrum. This is because distant galaxies have their light shifted toward the red and the infrared by the cosmological expansion. If the galaxy redshift is one,  the light travels for about 8 billion years and the wavelength of light that the DECam detects is doubled, relative to what it was when it was originally emitted.

Dark Energy Camera

Image: DECam, near center of image, is deployed at the focus of the 4-meter Victor M. Blanco optical telescope in Chile (Credit: Dark Energy Survey Collaboration)

The DECam has been deployed to further our understanding of dark energy through not just one experimental method, but in fact four different methods. That’s how you solve tough problems – by attacking them on multiple fronts.

It’s taken 8 years to get to this point, and there have been some delays, as normal for large projects. But now this new instrument is mounted at the focal plane of the existing 4-meter telescope of the National Science Foundation’s Cerro Tololo Inter-American observatory in Chile. It will begin its program of planned measurements of several hundred million galaxies starting in December after several weeks of testing and calibration. Each image from the camera-telescope combination can capture up to 100,000 galaxies out to distances of up to 8 billion light years. This is over halfway back to the origin of the universe almost 14 billion years ago.

In a previous blog entry I talked about the DES and the 4 methods in some detail. In brief they are based on observations of:

  1. Type 1a supernovae (the method used to first detect dark energy)
  2. Very large scale spatial correlations of galaxies separated by 500 million light-years (this experiment is known as Baryon Acoustic Oscillations since the galaxy separations reflect the imprint of sound waves in the very early universe, prior to galaxy formation)
  3. The number of clusters of galaxies as a function of redshift (age of the universe)
  4. Gravitational lensing, i.e. distortion of background images by gravitational effects of foreground clusters in accordance with general relativity

NGC 1365

Image: NGC 1365, a barred spiral galaxy located in the Fornax cluster located 60 million light years from Earth (Credit: Dark Energy Survey Collaboration)

What does the Dark Energy Survey team, which has over 120 members from over 20 countries, hope to learn about dark energy? We already have a good handle on its magnitude, at around 73% presently of the universe’s total mass-energy density.

The big issue is does it behave as a cosmological constant or as something more complex? In other words, how does the dark energy vary over time and is there possibly some spatial variation as well? And what is its equation of state, or relationship between its pressure and density?

With a cosmological constant explanation the relationship is Pressure = – Energy_density, a negative pressure, which is necessary in any model of the dark energy, in order for it to drive the accelerated expansion seen for the universe. Current observations from other experiments, especially those measuring the cosmic microwave background, support an equation of state parameter within around 5% of the value -1, as represented in the equation in the previous sentence. This is consistent with the interpretation as a pressure resulting from the vacuum. Dark energy appears also to have a constant or nearly constant density per unit volume of space. It is unlike ordinary matter and dark matter, which both drop in mass density (and thus energy density) as the volume of the universe grows. Thus dark energy becomes ever more dominant over dark matter and ordinary matter as the universe continues to expand.

We can’t wait to see the first publication of results from research into the nature of dark energy using the DECam.

References:

http://www.noao.edu/news/2012/pr1204.php – Press release from National Optical Astronomical Observatory on DECam first light

www.darkenergysurvey.org

http://www.ctio.noao.edu/noao/ – Cerro Tololo Inter-American Observatory page

http://lambda.gsfc.nasa.gov/product/map/dr4/pub_papers/sevenyear/basic_results/wmap_7yr_basic_results.pdf – WMAP 7 year results on cosmic microwave background

https://darkmatterdarkenergy.com/2011/03/08/dark-energy-survey/