
Black Holes* as a possible source of Universal Dark Energy

Previously I have written about the possibility of primordial black holes as the explanation for dark matter, and on the observational constraints around such a possibility. 

But maybe it is dark energy, not dark matter, that black holes explain. More precisely, it would be dark energy stars (or gravastars, or GEODEs) that are observationally similar to black holes.

Dark energy

Dark energy's defining property is its negative pressure. There is a relationship known as an equation of state that relates pressure to energy density. For normal matter, or for dark matter, the coefficient of the relationship, w, is zero or slightly positive, and for radiation it is 1/3.

If w is non-zero and positive, then that fluid component loses energy density as the universe expands; for radiation, this appears as the cosmological redshift. Wavelengths stretch in proportion to the universe's linear scale factor, which, normalized to the present-day scale, can be written as the inverse of one plus the cosmological redshift. The redshift is a measure of epoch as well: currently z = 0, and the higher the redshift, the farther we look back into the past, into the earlier years of the universe. Light emitted at frequency ν is shifted to the lower frequency (longer wavelength) ν' = ν / (1 + z).
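These relations can be sketched in a few lines of Python (a minimal illustration of standard cosmology; the function names are chosen here for clarity, not taken from any library):

```python
# Scale factor and redshift relations: a = 1 / (1 + z), normalized so
# a = 1 today (z = 0), and an emitted frequency nu is observed at
# nu / (1 + z).

def scale_factor(z):
    """Linear scale factor of the universe at redshift z (a = 1 today)."""
    return 1.0 / (1.0 + z)

def observed_frequency(nu_emitted, z):
    """Frequency observed today for light emitted at redshift z."""
    return nu_emitted / (1.0 + z)

# At z = 1 the universe's linear scale was half its present value,
# and emitted light arrives at half its original frequency.
print(scale_factor(1))               # 0.5
print(observed_frequency(100.0, 1))  # 50.0
```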

Since 1998, we have known that we live in a universe dominated by dark energy and its associated negative ("dark") pressure. In the source term for gravity, the dark pressure outweighs the dark energy density by a factor of 3 because it appears 3 times, once for each spatial component in Einstein's stress-energy tensor of general relativity.

Thus dark energy contributes a negative gravity, or expansion acceleration, and we observe that our universe's expansion has been accelerating for the past 4 or 5 billion years, since dark energy now provides over 2/3 of the universal energy balance. Dark matter and ordinary matter together amount to just under 1/3 of the average rest-mass energy density.

If w is less than -1/3 for some pervasive cosmological component, then you have dark energy behavior for that component, and in our universe today over the past several billion years, measurements show w = -1 or very close to it. This is the cosmological constant case where dark energy’s negative pressure has the same magnitude but the opposite sign of the positive dark energy density. More precisely, the dark pressure is the negative of the energy density times the speed of light squared.
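The w < -1/3 threshold can be made concrete. In general relativity the effective gravitating density of a cosmological fluid is ρ + 3P/c² (pressure entering once per spatial dimension, as noted above), so with P = wρc² gravity becomes repulsive exactly when w < -1/3. A minimal Python sketch (the function name is ours, for illustration only):

```python
# Effective gravitating density rho + 3P/c^2 for a fluid with
# equation of state P = w * rho * c^2; this reduces to
# rho * (1 + 3w), which turns negative (repulsive gravity)
# exactly when w < -1/3.

def effective_gravitating_density(rho, w):
    """rho + 3P/c^2 for equation of state P = w rho c^2."""
    return rho * (1.0 + 3.0 * w)

print(effective_gravitating_density(1.0, 0.0))    # matter: 1.0
print(effective_gravitating_density(1.0, 1/3))    # radiation: 2.0
print(effective_gravitating_density(1.0, -1.0))   # cosmological constant: -2.0
```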

Non-singular black holes

There has been consideration for decades of other types of black holes that would not have a singularity at the center. In the standard solutions of general relativity, black holes harbor a central singularity: a point if their angular momentum is zero, or a ring if it is positive.

For example, a collapsing neutron star overwhelms all support from neutron degeneracy pressure once its mass exceeds the TOV limit at about 2.7 solar masses (depending on angular momentum), and forms a black hole that is often presumed to collapse to a singularity.

But when quantum gravity, and quantum physics generally, is considered, there should be some very exotic behavior at the center; we just don't know what. Vacuum energy is one possibility.

For decades various proposals have been made for alternatives to a singularity, but the problem has been observationally intractable. The Soviet physicist Erast Gliner, who was born just 100 years ago in Kyiv and who passed away only in 2021, proposed the basis for dark energy stars and a dark-energy-driven cosmology framework in 1965 (in English translation, 1966).

E. Gliner, early 1970s in St. Petersburg, courtesy Gliner family

He defended his Ph.D. thesis in general relativity including dark energy as a component of the stress-energy tensor in 1972. Gliner emigrated to the US in 1980.

The essential idea is that the equation of state for sufficiently compressed matter changes to that of a material (or "stuff") with fully negative pressure, w = -1, and thus that black hole collapse would naturally result in dark energy cores, creating dark energy stars or gravastars rather than traditional black holes. The cores could be surrounded by an intermediate transition zone and a skin or shell of normal matter (Beltracchi and Gondolo 2019).

Standard black hole solution is incomplete

Normally black hole physics is attacked with the Kerr (non-zero angular momentum) or Schwarzschild (zero angular momentum) solutions. But these are incomplete, in that they assume empty surroundings: there is no matching of the solution to the overall cosmological background. The universe tracks an isotropic and homogeneous (on the largest scales) Lambda cold dark matter (ΛCDM) solution of the equations of general relativity. Since dark energy now dominates, it is approaching a de Sitter exponential runaway, whereas traditional black hole solutions with singularities are quite the opposite, known as anti-de Sitter.

We have no general solution for black hole equations including the backdrop of an expanding universe. The local Kerr solution for rotating black holes that is widely used ignores the far field. Really one should match the two solution regimes, but there has been no analytical solution that does that; black hole computations are very difficult in general, even ignoring the far field.

In 2019, Croker and Weiner of the University of Hawaii found a way to match a model of dark energy cores to the standard ΛCDM cosmology, and demonstrated that for w = -1, dark energy stars (black holes with dark energy cores) would have masses growing in proportion to the cube of the universe's linear scale factor a, starting immediately from their initial formation at high redshift. In effect they are forced to grow in mass and expand (with radius proportional to mass, as for a black hole) by all of the other dark energy stars in the universe acting collectively. They call this effect cosmological coupling of the dark energy star gravity to the long-range, long-term cosmological gravitational field.

This can be considered a blueshift for mass, as distinguished from the energy or frequency redshift we see with radiation in the cosmos.

Their approach potentially addresses several problems: (1) an excess of larger galaxies and their supermassive black holes seen very early on in the recent James Webb Space Telescope data; (2) more intermediate-mass black holes than expected, as confirmed by gravitational wave observations of black hole mergers (see Croker, Zevin et al. 2021 for a possible explanation via cosmological coupling); and (3) possibly a natural explanation for all or a substantial portion of the dark energy in the universe, which has been assumed to be highly diffuse rather than composed primarily of a very large number of point sources.

Inside dark energy stars, the dark energy density would be many orders of magnitude higher than it is in the universe at large, and as we will see below, that might be enough to explain the entire dark energy budget of the ΛCDM cosmology.

M87* supermassive black hole (or dark energy star) imaged in polarized radio waves by the Event Horizon Telescope collaboration; signals are combined from a global collection of radio telescopes via aperture synthesis techniques. European Southern Observatory, licensed under a Creative Commons Attribution 4.0 International License

A revolutionary proposal

Here’s where it gets weird. A number of researchers have investigated the coupling of a black hole’s interior to an external expanding universe. In this case there is no singularity but instead a vacuum energy solution interior to the (growing) compact stellar remnant.

And one of the most favored possibilities is that the coupling causes the mass for all black holes to grow in proportion to the universe’s characteristic linear size a cubed, just as if it were a cosmological constant form of dark energy. This type of “stuff” retains equal energy density throughout all of space even as the space expands, as a result of its negative pressure with equation of state parameter w = -1.

Just this February a very interesting pair of papers has been published in The Astrophysical Journal (the most prestigious American journal for such work) by a team of astronomers from 9 countries (US, UK, Canada, Japan, Netherlands, Germany, Denmark, Portugal, and Cyprus), led by the University of Hawaii team mentioned above.

They have used observations of a large number of supermassive black holes and their companion galaxies out to redshift 2.5 (when the universe was less than 3 billion years old) to argue that there is observable cosmological coupling between the cosmological gravitational field at large and the SMBH masses, which they suppose are dominated by dark energy cores.

Figure 1 from Farrah, Croker, et al. shows their measured cosmological coupling parameter k, based on 3 catalogs (5 samples using different emission lines) of supermassive black holes contained in elliptical galaxies at high redshifts, 0.7 < z < 2.5. If k = 3, that corresponds to the cosmological constant case with equation of state parameter w = -1.

Their argument is that the black hole (or dark energy star) masses have grown much faster than can be explained by the usual mechanisms of accretion of nearby matter and mergers.

In Figure 1 from the second paper of the pair (Farrah, Croker, et al. 2023), they present their measurements of the strength of cosmological coupling for five different galaxy samples (three sets of galaxies, two of which were surveyed at two frequencies each). They observe strong increases in the measured SMBH masses from redshifts close to z = 1 extending to above z = 2, and derive a coupling strength parameter k that measures the power-law index of how fast the black hole masses grow with redshift.

Their reformulation of the black hole model to include the far field yields cosmological coupling of the dark energy cores. The mass of the dark energy core, coupled to the overall cosmological solution, increases as M ~ a^k, a power law of index k that depends on the equation of state of the dark energy. Here a is the cosmological linear scale factor of the universal expansion, equal to 1/(1+z), where z is the redshift at which a galaxy and its SMBH are observed. (The scale factor a is normalized to 1 at present, so that z = 0 now and z is positive in the past.)
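The M ~ a^k growth law is easy to evaluate. A small Python sketch (our own helper, not code from the papers), using a = 1/(1+z) and the measured k = 3 as the default:

```python
# Cosmologically coupled mass growth: M proportional to a^k, with
# a = 1/(1+z). Default k = 3 is the cosmological constant case.

def mass_growth_factor(z_formation, z_now=0.0, k=3.0):
    """Factor by which a coupled mass grows between two redshifts."""
    a_form = 1.0 / (1.0 + z_formation)
    a_now = 1.0 / (1.0 + z_now)
    return (a_now / a_form) ** k

# From z = 1 to today the scale factor doubles, so with k = 3
# a coupled black hole's mass grows 8-fold.
print(mass_growth_factor(1.0))   # 8.0
```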

And they claim that their sample of several hundred galaxies and supermassive black holes indicates k = 3, on average, more or less. So between z = 1 and z = 0, over the past roughly 8 billion years, they interpret their observations as an 8-fold growth in black hole masses, consistent with M growing as a^3 while the universe's linear scale doubled (a was 1/2 at z = 1). This implies they are measuring a different class of black holes than we normally envision, which do not increase in mass other than by accretion and mergers. Normal black holes would yield k > 0, but not by much, based on expected accretion and mergers. The k = 0 case, they state, is excluded by their observations with over 99.9% confidence.

The set of upper graphs in Figure 1 is for the various surveys, and the large lower graph combines all of the surveys as a single data set. They find a near-Gaussian distribution, and k is centered near 3, with an uncertainty close to 1. There is a 2/3 chance that the value lies between 2.33 and 3.85, based on their total sample of over 400 active galaxy nuclei.

And they also suggest this effect would be for all dark energy dominated “black holes”, including stellar class and intermediate BHs, not just SMBHs. So they claim fast evolution in all dark energy star masses, in proportion to the volume growth of the expanding universe, and consistent with dark energy cores having an equation of state just like the observed cosmological constant.

Now it gets really interesting.

We already know that the dark energy density of the universe, unlike the ever-thinning matter and radiation densities, is more or less constant in absolute terms. That is the cosmological constant (vacuum energy) interpretation of dark energy, for which the pressure is negative and causes acceleration of the universe's expansion. Each additional volume of the growth has its own associated vacuum energy (around 4 proton masses' worth of rest energy per cubic meter). This is the universe's biggest free lunch since its original creation.

The authors focus on dark energy stars created during the earliest bursts of star formation. These are the so-called Pop III stars, never observed because all or nearly all reached end of life long ago. When galaxy and star formation started, as early as about 200 million years after the Big Bang, the only atomic matter was hydrogen and helium; heavier elements had to be made in those first Pop III stars. As a result of that composition, the first stars, with zero 'metallicity', have higher stellar masses, and high-mass stars are the ones that evolve most rapidly, quickly ending up as white dwarfs or, more to the point here, as black holes or neutron stars in supernova events. Or they end their lives as dark energy stars.

The number of these compact post supernova remnant stars will decrease in density in inverse proportion to the increasing volume of the expanding universe. But the masses of all those that are dark energy stars would increase as the cube of the scale factor, in proportion to the increasing volume.

And the net effect would be just right to create a cosmological constant form of dark energy as the total contribution of billions upon billions of dark energy stars. And dark energy would be growing as a background field from very early on. Regular matter and dark matter thin out with time, but this cohort would have roughly constant energy density once most of the first early rounds of star formation completed, perhaps by redshift z = 8, well within the first billion years. Consequently, dark energy cores, collectively, would dominate the universe within the last 4 or 5 billion years or so, as the ordinary and dark matter density fell off. And now its dominance keeps growing with time.
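The cancellation described above is simple arithmetic: number density dilutes as a⁻³ while each coupled mass grows as a³, so the product stays flat. A schematic Python illustration (arbitrary units; the names are ours):

```python
# Collective energy density of a cohort of dark energy stars:
# number density falls as a^-3 with the expanding volume, while each
# coupled mass grows as a^+3, so the cohort's energy density is constant,
# mimicking a cosmological constant.

def cohort_energy_density(a, n0=1.0, m0=1.0):
    """Schematic cohort energy density at scale factor a
    (n0, m0: number density and mass at a = 1; arbitrary units)."""
    number_density = n0 * a ** -3    # dilution with expanding volume
    mass_per_star = m0 * a ** 3      # cosmological coupling with k = 3
    return number_density * mass_per_star

for a in (0.25, 0.5, 1.0, 2.0):
    print(cohort_energy_density(a))  # 1.0 at every scale factor
```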

But is there enough dark energy in cores?

But is it enough? How much dark energy is captured in these dark energy stars, and can it explain the dominant 69% of the universe’s energy balance that is inferred from observations of distant supernovae, and from other methods?

The dark energy cores are presumably formed from the infall and extreme compression of ordinary matter, from baryons captured into the progenitors of these black hole like stars and being compressed to such a high degree that they are converted into a rather uniform dark energy fluid. And that dark energy fluid has the unusual property of negative pressure that prevents further compression to a singularity.

It is possible they could consume some dark matter, but ordinary matter clumps much more easily since it can radiate away energy, which dark matter does not do. Any dark matter consumption would only strengthen their case here, but we know the overall dark matter ratio of 5:1 versus ordinary matter has not changed much since the cosmic microwave background emission after the first 380,000 years.

We know from cosmic microwave background measurements and other observations that the ordinary matter or baryon budget of the universe is just about 4.9%; call it 5% in round numbers. The rest is 69% dark energy and 26% dark matter.

So the question is, how much of the 5% must be locked up in dark energy stars to explain the 69% dark energy dominance that we currently see?

Remember that with dark energy stars the mass grows as the volume of the universe grows, that is, in proportion to (1 + z)³. Dark energy stars will form at different cosmological redshifts, but let's just ask what fraction of baryons we would need to convert, assuming all form at the same epoch. This will give us a rough feel for the range of when, and of how much.

Table 1 looks at some possibilities. It asks what fraction of baryons need to collapse into dark energy cores, and we see that only about 0.2% to 1% of baryons are required. Those baryons are just 5% of the mass-energy of the universe, and only 1% or less of those are needed, because the mass expansion factors range from about 1,000 to about 10,000 (3 to 4 orders of magnitude), depending on when the dark energy stars form.

Table 1. The first column has the redshift (epoch) of dark energy star formation. In actuality it will happen over a broad range of redshifts, but the earliest stars and galaxies seem to have formed from around 200 to 500 million years after the Big Bang started. The second column has the mass expansion factor (1+z)³; the DE star's gravitational mass grows by that factor from the formation z until now. The third column is the age of the universe at DE star formation. The fourth column tells us what fraction of all baryons need to be incorporated into dark energy cores in those stars (they could be somewhat more massive than that). The fifth column is the lower bound on their current mass if they never experience a merger or accretion of other matter. All in all it looks as if less than 1% of baryons convert to dark energy cores.
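The arithmetic behind a table like this can be reconstructed roughly as follows, under the simplifying assumptions stated in the text: a single formation redshift, mass growth by exactly (1+z)³, and the 69% / 5% dark energy and baryon budgets. The actual table in the post may differ in detail; this is only the plausibility estimate:

```python
# Rough reconstruction of the Table 1 plausibility estimate, assuming
# all dark energy stars form at one redshift z_form and must collectively
# supply the 69% dark energy share starting from the 5% baryon budget.

OMEGA_DE = 0.69   # dark energy fraction of critical density
OMEGA_B = 0.05    # baryon fraction of critical density

def growth_factor(z_form):
    """Mass expansion factor (1 + z)^3 from formation redshift to now."""
    return (1.0 + z_form) ** 3

def baryon_fraction_needed(z_form):
    """Fraction of all baryons that must collapse into dark energy cores."""
    return (OMEGA_DE / OMEGA_B) / growth_factor(z_form)

def present_mass(z_form, seed_mass_msun=3.0):
    """Present-day mass (solar masses) of a minimal 3 Msun dark energy star."""
    return seed_mass_msun * growth_factor(z_form)

for z_form in (10, 15, 20, 25):
    print(z_form, growth_factor(z_form),
          f"{baryon_fraction_needed(z_form):.2%}",
          f"{present_mass(z_form):.0f} Msun")
```

For formation redshifts in the z ~ 10 to 25 range this gives growth factors of roughly 1,000 to 18,000, baryon fractions around a percent or less, and present masses in the thousands to tens of thousands of solar masses, in line with the text's numbers.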

The fifth column shows the current mass of a minimal 3 solar mass dark energy star at present, noting that 3 solar masses is the lightest known black hole. There may be lighter dark energy stars, but not very much lighter than that, perhaps a little less than 2 solar masses. And the number density should be highest at the low end according to everything we know about star formation.

Now these are to some degree underestimates of the final mass, as shown in the fifth column, since there will be mergers and accretion of other matter into these stars; of the two effects, mergers are more important, and both support the general argument. If a dark energy star merges with a neutron star or another type of black hole, the dark energy core gains in relative terms. So this is a plausibility argument: if dark energy stars of a few solar masses form in the epoch from 200 to 500 million years after the Big Bang, less than 1% of all baryons are needed. And the final masses are well into the intermediate range of thousands or tens of thousands of solar masses, yet such objects can hide out in, or between, galaxies of hundreds of billions of solar masses, contributing only a few percent to the total mass.

Dark energy star cosmology 

Dark energy star cosmology needs to agree with the known set of cosmological observations. It has to provide all or a significant fraction of the total dark energy budget in order to be useful. It appears from simple arguments that it can meet the budget by conversion of a small percentage of the baryons in the universe to dark energy stars.

It should exhibit an equation of state w = -1, or nearly so, and it appears to do that. It should not contribute so much mass as to upset our galaxy mass estimates, and it does not; nor does it appear to explain dark matter in any direct way.

Dark energy stars collectively could potentially fill that role. In the model described above it is their collective effects that are being modeled as a dark energy background field that in turn drives dark energy star cores to higher masses over time. Dark energy (as a global field) feeds on itself (the dark energy cores)!

There are some differences from the normal ΛCDM cosmology assumption of a highly uniform dark energy background, not one composed of a very large number of point sources. In particular, ΛCDM cosmology has the dark energy background present from the very beginning, though it is not significant until the universe has expanded sufficiently.

In the dark energy star case it has to be built up, one dark energy core at a time. So the dark energy effects do not begin until redshifts less than, say, z = 20 to 30, and most of it may be built up by z = 8 to 10, within the first billion years.

In the dark energy star case we will have accretion of nearby matter including stars, and mergers with neutron stars, other dark energy stars, and other black hole types.

A merger with a neutron star or a non-dark-energy star only increases the mass in dark energy cores; it is a positive evolution in the aggregate dark energy core component. A merger of two dark energy stars will lose some of the collective mass to gravitational radiation, a negative contribution to the overall dark energy budget.

One way to distinguish between the two cosmological models is to push our measurement of the strength of dark energy as far back as we can and look for variations. Another is to identify as many individual intermediate scale black holes / dark energy stars as we can from gravitational wave surveys and from detailed studies of globular clusters and dwarf galaxies.

What about dark matter?

Dark matter's ratio to ordinary matter at the time of the cosmic microwave background emission is measured to be 5:1, and currently, in galaxy rotation curves and in the intracluster medium of galaxy clusters, it is also seen to be around 5:1 on average. Since the dark energy cores in the Croker et al. proposal are created hundreds of millions of years after the cosmic microwave background era, these dark energy stars cannot be a major contribution to dark matter per se.

The pair of papers just published by the team doesn't really discuss dark matter implications. But a previous paper by Croker, Runburg and Farrah (2020) explored the interaction between the bulk dark energy behavior of the global population of dark energy stars and cold dark matter, and found little or no effect.

Their process converts a rather small percentage of baryons (or even some dark matter particles) into dark energy and its negative pressure. Such material couples differently to the gravitational field than dark matter, which like ordinary matter is approximately dust-like with an equation of state parameter w = 0.

In the 2020 paper they find that GEODEs or dark energy stars can be spread out even more than dark matter that dominates galaxy halos, or the intracluster medium in rich clusters of galaxies.

Prizes ahead?

This concept of cosmological coupling is one of the most interesting areas of observational and theoretical cosmology in this century. If this work by Croker and collaborators is confirmed, the team will be in line for prizes in astrophysics and cosmology, since it could be a real breakthrough in both our understanding of the nature of dark energy and our understanding of black hole physics.

In any case, Dark Energy Star already has its own song. 

Glossary

Black Hole: A dense collection of matter that collapses inside a small radius, and in theory, to a singularity, and which has sufficiently strong gravity that nothing, not even light, is able to escape. Black holes are characterized by three numbers: mass, angular momentum, and charge.

Cosmological constant: Einstein added this term, Λ, on the left hand side of the equations of general relativity, in a search for a static universe solution. It corresponds to an equation of state parameter w = -1. If the term is moved to the right hand side it becomes a dark energy source term in the stress-energy tensor.

Cosmological coupling: The coupling of local properties to the overall cosmological model. For example, photons redshift to lower energies with the expansion of the universe. It is argued that dark energy stellar cores would collectively couple to the overall Friedmann cosmology that matches the bulk parameters of the universe. In this case it would be a ‘blueshift’ style increase in mass in proportion to the growing volume of the universe, or perhaps more slowly.

Dark Energy: Usually attributed to energy of the vacuum, dark energy has a negative pressure in proportion to its energy density. It was confirmed by Nobel prize winning teams that dark energy is the dominant component of the universe’s mass-energy balance, some 69% of the critical value, and is driving an accelerated expansion with an equation of state w = -1 to within small errors.

Dark Energy Star: A highly compact object that should look like a black hole externally but has no singularity at its core. Instead it has a core of dark energy. If one integrates over all dark energy stars, it may add up to a portion or all of the universe’s dark energy budget. It should have a crust of ‘normal’ matter with anisotropic stress at the boundary with the core, or an intermediate transition zone with varying equation of state between the crust and the core.

Dark Matter: An unknown substance thought to reside in galactic halos, with 5 times as much matter density on average as ordinary matter. Dark matter does not interact electromagnetically and is typically considered to be particulate in nature, although primordial mini black holes have been suggested as one possible explanation.

Equation of state: The relationship between pressure and energy density, P = w * ρ * c^2 where P is pressure and can be negative, and ρ the energy density is positive. If w < -1/3 there is dark pressure, if w = -1 it is the simplest cosmological constant form. Dark matter or a collection of stars or galaxies can be modeled as w ~ 0.

GEODEs: GEneric Objects of Dark Energy, dark energy stars. Formation is thought to occur from Pop III stars, the first stellar generation, at epochs 30 > z > 8.

Gravastar: A stellar model that has a dark energy core and a very thin outer shell. With normal matter added there is anisotropic stress at the boundary to maintain pressure continuity from the core to the shell.

Non-singular black holes: A black hole like object with no singularity.

Primordial black holes: Black holes that may have formed in the very early universe, within the first second. Primordial dark energy stars in large numbers would be problematic, because they would grow in mass by (1 + z)^3 where z >> 1000. 

Vacuum energy: The irreducible energy of the vacuum state. The vacuum state is not empty, it is pervaded by fields and virtual particles that pop in and out of existence on very short time scales.

References

https://scitechdaily.com/cosmological-coupling-new-evidence-points-to-black-holes-as-source-of-dark-energy/ – Popular article about the research from University of Hawaii lead authors and collaborators 

https://www.phys.hawaii.edu/~kcroker/ – Kevin Croker’s web site at University of Hawaii

Beltracchi, P. and Gondolo, P. 2019, https://arxiv.org/abs/1810.12400 “Formation of Dark Energy Stars”

Croker, K.S. and Weiner, J.L. 2019, https://doi.org/10.3847/1538-4357/ab32da "Implications of Symmetry and Pressure in Friedmann Cosmology. I. Formalism"

Croker, K.S., Nishimura, K.A., and Farrah D., 2020 https://arxiv.org/pdf/1904.03781.pdf, “Implications of Symmetry and Pressure in Friedmann Cosmology. II. Stellar Remnant Black Hole Mass Function”

Croker, K.S., Runburg, J., and Farrah D., 2020 https://doi.org/10.3847/1538-4357/abad2f “Implications of Symmetry and Pressure in Friedmann Cosmology. III. Point Sources of Dark Energy that tend toward Uniformity”

Croker, K.S., Zevin, M.J., Farrah, D., Nishimura, K.A., and Tarle, G. 2021, “Cosmologically coupled compact objects: a single parameter model for LIGO-Virgo mass and redshift distributions” https://arxiv.org/pdf/2109.08146.pdf

Farrah, D., Croker, K.S. et al. 2023 February, https://iopscience.iop.org/article/10.3847/2041-8213/acb704/pdf "Observational Evidence for Cosmological Coupling of Black Holes and its Implications for an Astrophysical Source of Dark Energy" (appeared in Ap.J. Letters 20 February, 2023)

Farrah, D., Petty S., Croker K.S. et al. 2023 February, https://doi.org/10.3847/1538-4357/acac2e “A Preferential Growth Channel for Supermassive Black Holes in Elliptical Galaxies at z <~ 2”

Ghezzi, C.R. 2011, https://arxiv.org/pdf/0908.0779.pdf “Anisotropic dark energy stars”

Gliner, E.B. 1965, Algebraic Properties of the Energy-momentum Tensor and Vacuum-like States of Matter. ZhTF 49, 542–548. English transl.: Sov. Phys. JETP 1966, 22, 378.

Harikane, Y., Ouchi, M., et al. arXiv:2208.01612v3, “A Comprehensive Study on Galaxies at z ~ 9 – 16 Found in the Early JWST Data: UV Luminosity Functions and Cosmic Star-Formation History at the Pre-Reionization Epoch”

Perrenod, S.C. 2017, “Dark Energy and the Cosmological Constant” https://darkmatterdarkenergy.com/2017/07/13/dark-energy-and-the-comological-constant/ 

Whalen, D.J., Even, W. et al. 2013, doi:10.1088/0004-637X/778/1/17, "Supermassive Population III Supernovae and the Birth of the First Quasars"

Yakovlev, D. and Kaminker, A. 2023, https://arxiv.org/pdf/2301.13150.pdf “Nearly Forgotten Cosmological Concept of E.B. Gliner”


Dark Catastrophe, a few Trillion Years away?

The Equation of State for Dark Energy

The canonical cosmological model, known as ΛCDM, has all matter including CDM (cold dark matter), at approximately 30% of critical density. And it has dark energy, denoted by Λ, at 70%. While the cosmological constant form of dark energy, first included in the equations of general relativity by Einstein himself, has a positive energy, its pressure is negative.

The negative pressure associated with dark energy, not the positive dark energy density itself, is what causes the universe’s expansion to accelerate.

The form of dark energy introduced by Einstein does not vary as the universe expands, and the pressure, although of opposite sign, is directly proportional to the dark energy density. The two are related by the formula

P = – ρ c²

where P is the pressure and ρ the energy density, while c is the speed of light.

More generally one can relate the pressure to the energy density as an equation of state with the parameter w:

P = w ρ c²

And in the cosmological constant form, w = -1 and is unvarying over the billions of years of cosmological time.
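A standard consequence of the equation of state P = wρc², worth making explicit here, is that a fluid component's energy density scales with the expansion as ρ ∝ a^(-3(1+w)). A brief Python check (the helper name is ours):

```python
# Energy density scaling rho ~ a^(-3(1+w)) for a fluid with equation of
# state P = w * rho * c^2. For w = 0 (matter) density falls as a^-3,
# for w = 1/3 (radiation) as a^-4, and for w = -1 (cosmological
# constant) it stays constant.

def density_scaling(a, w):
    """Energy density at scale factor a, relative to its value at a = 1."""
    return a ** (-3.0 * (1.0 + w))

print(density_scaling(0.5, 0.0))    # matter:    8.0  (a^-3)
print(density_scaling(0.5, 1/3))    # radiation: 16.0 (a^-4)
print(density_scaling(0.5, -1.0))   # cosmological constant: 1.0
```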

Does Dark Energy vary over long timescales?

String theory indicates that dark energy should evolve over time.

The present-day dark energy may be the same field that was originally much, much stronger and drove a very brief period of inflation, before decaying to the current low value of about 4 proton rest masses' worth of energy per cubic meter.

There are searches for variation in the equation of state parameter w; they are currently inconclusive.

How much variance could there be?

In string theory, the dark energy potential gradient with respect to the field strength yields a parameter c of order unity. For differing values of c, the equation of state parameter w varies with the age of the universe, more so as c is larger.

When we talk about cosmological timescales, it is convenient to speak in terms of the cosmological redshift, where z = 0 is the present and larger z indicates a larger lookback time. If the parameter c were zero, the value of w would be -1 at all redshifts. (For reference, z = 1 is when the universe was only about 6 billion years old, almost 8 billion years ago.)


This Figure 3 from an article referenced below by Cumrun Vafa of Harvard shows the expected variance with redshift z for the equation of state parameter w. The observationally allowed region is shaded gray. The colored lines represent different values of the parameter c from string theory (not the speed of light). APS/Alan Stonebraker

Observational evidence constraining w is gathered from the cosmic microwave background, from supernovae of Type Ia, and from the large scale galaxy distribution. That evidence from all three methods in combination restricts one to the lower part of the diagram, shaded gray, thus w could be -1 or somewhat less. There are four colored curves, labelled by their value of the string theory parameter c, and it appears that c > 0.65 could be ruled out by observations.

Hubble Constant tension: String theory explaining?

It's not the constant tension of the Hubble constant. Rather, it is the tension, or disagreement, between the cosmic microwave background value of the Hubble constant, at around 67 kilometers/sec/Megaparsec, and the value from supernovae observations, which yield 73 kilometers/sec/Megaparsec. And the respective error bars on each measurement are small enough that the difference may be real.

The cosmic microwave background observations imply a universe about a billion years older than the supernovae value does, and also fit better with the ages of the oldest stars.
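The rough scale of the discrepancy can be seen from the Hubble time, 1/H0, which sets the age scale of the universe (a minimal sketch; the true age also depends on the matter and dark energy densities):

```python
def hubble_time_gyr(H0):
    """Hubble time 1/H0 in Gyr, for H0 in km/s/Mpc (977.8 converts Gyr <-> km/s/Mpc)."""
    return 977.8 / H0

print(round(hubble_time_gyr(67.0), 1))  # ~14.6 Gyr for the CMB value
print(round(hubble_time_gyr(73.0), 1))  # ~13.4 Gyr for the supernova value: about a billion years younger
```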

It turns out a varying dark energy with redshift as described above could help to explain much of the discrepancy although perhaps not all of it.

Better observations of the early universe’s imprint on the large scale distribution of galaxies from ground-based optical telescope surveys and from the Euclid satellite’s high redshift gravitational lensing and spectroscopic redshift measurements in the next decade will help determine whether dark energy is constant or not. This could help to disprove string theory or enhance the likelihood that string theory has explanatory power in physics.

Tests of string theory have been very elusive since we cannot reach the extremely high energies required with our Earth-based particle accelerators. Cosmological tests may be our best hope: small effects from string theory might be detectable as they build up over large distances.

And this could help us to understand whether the "swampland conjecture" of string theory is likely. That conjecture predicts an end to the universe within the next few trillion years, as the dark energy field tunnels to an even lower energy state, or as all matter converts into a "tower of light states", meaning much less massive particles than the protons and neutrons of which we are composed.

Reference

“Cosmic Predictions from the String Swampland”, Cumrun Vafa, 2019. Physics 12, 115, physics.aps.org


Does Dark Energy Vary with Time?

Einstein introduced the concept of dark energy 100 years ago.

The Concordance Lambda-Cold Dark Matter cosmology appears to fit observations of the cosmic microwave background and other cosmological observations including surveys of large-scale galaxy grouping exceedingly well.

In this model, Lambda is shorthand for the dark energy in the universe. It was introduced as the greek letter Λ into the equations of general relativity, by Albert Einstein, as an unvarying cosmological constant.

Measurements of Λ indicate that dark energy accounts for about 70% of the total energy content of the universe. The remainder is found in dark matter and ordinary matter, and about 5/6 of that is in the form of dark matter. 

Alternative models of gravity, with extra gravity in very low acceleration environments, may replace apparent dark matter with this extra gravity, perhaps due to interaction between dark energy and ordinary matter.

The key point about dark energy is that while it has a positive energy, it rather strangely has a negative pressure. In the tensor equations of general relativity the pressure terms act as a negative gravity, driving an accelerated expansion of the universe.

In fact our universe is headed toward a state of doubling in scale in each dimension every 11 or 12 billion years. In the next trillion years we are looking at 80 or 90 such repeated doublings.

That assumes that dark energy is constant per volume over time, with a value equivalent to two proton-antiproton pair annihilations per cubic meter (4 GeV / m³).
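That figure can be checked from the critical density of the universe (a back-of-the-envelope Python sketch; H0 = 67.7 km/s/Mpc and the 70% dark energy fraction are the approximate values used elsewhere in this post):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
GeV = 1.602e-10   # joules per GeV
Mpc = 3.086e22    # meters per megaparsec

H0 = 67.7 * 1e3 / Mpc                       # Hubble constant in s^-1
rho_crit = 3 * H0**2 / (8 * math.pi * G)    # critical density, kg/m^3
rho_de = 0.7 * rho_crit                     # ~70% of the total is dark energy
energy_density = rho_de * c**2 / GeV        # convert to GeV per cubic meter
print(round(energy_density, 1))             # roughly 3-4 GeV/m^3
```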

But is it?

The Dark Energy Survey results seem to say so. This experiment examined 26 million galaxies for their clustering patterns, and also for gravitational lensing (Einstein taught us that mass bends light paths).

They determined the parameter w for dark energy and found it to be consistent with -1.0 as expected for the cosmological constant model of unvarying dark energy. See this blog for details:

https://darkmatterdarkenergy.com/2017/08/10/dark-energy-survey-first-results-canonical-cosmology-supported/

The pressure – energy density relation is:

P = w \cdot \rho \cdot c^2

The parameter w elucidates the relation between the energy density, given by ρ, and the pressure P. This is called the equation of state. Matter and radiation have w >= 0. In order to have a dominating dark energy with negative pressure, w must be < -1/3. And w = -1 gives us the cosmological constant form.
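As a minimal illustration in code (the density value is an assumed round number for dark energy, in SI units):

```python
def pressure(rho, w, c=2.998e8):
    """Equation of state P = w * rho * c^2; rho in kg/m^3, P in pascals."""
    return w * rho * c**2

rho_de = 6e-27                  # approximate dark energy density, kg/m^3 (assumed)
print(pressure(rho_de, -1.0))   # negative pressure for w = -1 (cosmological constant)
print(pressure(rho_de, 1/3))    # positive pressure for radiation, w = +1/3
```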

Image credit: www.scholarpedia.org

Cosmologists seek to determine w, and whether it varies over time scales of billions of years.

The Concordance model is not very well tested at high redshifts with z > 1 (corresponding to epochs when the universe was less than half its current age) other than with the cosmic microwave background data. Recently two Italian researchers, Risaliti and Lusso, have examined datasets of high-redshift quasars to investigate whether the Concordance model fits.

Typically supernovae are employed for the redshift-distance relation, and cosmological models are tested against the observed relationship, known as the Hubble diagram. The authors use X-ray and ultraviolet fluxes of quasars to extend the diagram to high redshifts (greater distances, earlier epochs), and calibrate observed quasar luminosities with the supernovae data sets.

Their analysis drew from a sample of 1600 quasars with redshifts up to 5 and including a new sample of 30 high redshift z ~ 3 quasars, observed with the European XMM-Newton satellite.

They claim a 4 standard deviation (4σ) departure from the Concordance model for z > 2, a reasonably high significance.

Models with a varying w include quintessence models, with time-varying scalar fields. If w decreases below -1, it is known as phantom energy. Their results are suggestive of a value of w < -1, corresponding to a dark or phantom energy increasing with time.

For convenience cosmologists introduce a second parameter for possible evolution in w, writing as:

w = w_0 + w_a \cdot (1 - a)

where the scale factor a = 1/(1+z), and a = 1 for the present day.
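This parameterization is a one-liner in code; the w0 = -1.4, wa ~ 1 values below are the quasar-analysis best-fit numbers discussed just below:

```python
def w_of_z(z, w0=-1.0, wa=0.0):
    """Evolving equation of state: w(a) = w0 + wa*(1 - a), with a = 1/(1+z)."""
    a = 1.0 / (1.0 + z)
    return w0 + wa * (1.0 - a)

print(w_of_z(0))                      # -1.0: a pure cosmological constant today
print(w_of_z(1, w0=-1.4, wa=1.0))     # -0.9 at z = 1 for the quasar best-fit values
```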

Figure 4 from Risaliti and Lusso (2018)

The best fit results for their analysis are with w0 = -1.4 and wa ~ 1, but these results have large errors, as shown in Figure 4 above, from their paper. Their results are within the red (2 standard deviation, or σ) and orange (3σ) contours. The outer 3σ contours almost touch the cosmological constant point that has w0 = -1 and wa = 0.

These are intriguing results that require further investigation. They are antithetical to quintessence models, and apparently in tension with a simple cosmological constant.

The researchers plan on further analysis in future work by including Baryon Acoustic Oscillation (large scale galaxy clustering) measurements at z > 2.

References

https://darkmatterdarkenergy.com/2017/08/10/dark-energy-survey-first-results-canonical-cosmology-supported/ – Results from Dark Energy Survey of galaxies

Risaliti, G. and Lusso, E. 2018 Cosmological constraints from the Hubble diagram of quasars at high redshifts https://arxiv.org/abs/1811.02590



Matter and Energy Tell Spacetime How to Be: Dark Gravity

Is gravity fundamental or emergent? Electromagnetism is one example of a fundamental force. Thermodynamics is an example of emergent, statistical behavior.

Newton saw gravity as a mysterious force acting at a distance between two objects, obeying the well-known inverse square law, and operating in a space and time that were rigid and absolute.

Einstein looked into the nature of space and time and realized they are flexible. Yet general relativity is still a classical theory, without quantum behavior. And it presupposes a continuous fabric for space.

As John Wheeler said, “spacetime tells matter how to move; matter tells spacetime how to curve”. Now Wheeler full well knew that not just matter, but also energy, curves spacetime.

A modest suggestion: invert Wheeler’s sentence, and then generalize it. Matter, and energy, tell spacetime how to be.

Which is more fundamental? Matter or spacetime?

Quantum theories of gravity seek to couple the known quantum fields with gravity, and it is expected that at the extremely small Planck scales, time and space both lose their continuous nature.

In physics, space and time are typically assumed as continuous backdrops.

But what if space is not fundamental at all? What if time is not fundamental? It is not difficult to conceive of time as merely an ordering of events. But space and time are to some extent interchangeable, as Einstein showed with special relativity.

So what about space? Is it just us placing rulers between objects, between masses?

Particle physicists are increasingly coming to the view that space, and time, are emergent. Not fundamental.

If emergent, from what? The concept is that particles, and quantum fields, for that matter, are entangled with one another. Their microscopic quantum states are correlated. The phenomenon of quantum entanglement has been studied in the laboratory and is well proven.

Chinese scientists have even, just last year, demonstrated quantum entanglement of photons over a satellite link with a total path exceeding 1200 kilometers.

Quantum entanglement thus becomes the thread Nature uses to stitch together the fabric of space. And as the degree of quantum entanglement changes the local curvature of the fabric changes. As the curvature changes, matter follows different paths. And that is gravity in action.

Newton’s laws are an approximation of general relativity for the case of small accelerations. But if space is not a continuous fabric and results from quantum entanglement, then for very small accelerations (in a sub-Newtonian range) both Newton dynamics and general relativity may be incomplete.

The connection between gravity and thermodynamics has been around for four decades, through research on black holes, and from string theory. Jacob Bekenstein and Stephen Hawking determined that a black hole possesses entropy proportional to its area divided by the gravitational constant G. This area law entropy approach can be used to derive general relativity as Ted Jacobson did in 1995.

But it may be that the supposed area law component is insufficient; according to Erik Verlinde’s new emergent gravity hypothesis, there is also a volume law component for entropy, that must be considered due to dark energy and when accelerations are very low.

We have had hints about this incomplete description of gravity in the velocity measurements made at the outskirts of galaxies during the past eight decades. Higher velocities than expected are seen, reflecting higher acceleration of stars and gas than Newton (or Einstein) would predict. We can call this dark gravity.

Now this dark gravity could be due to dark matter. Or it could just be modified gravity, with extra gravity over what we expected.

It has been understood since the work of Mordehai Milgrom in the 1980s that the excess velocities that are observed are better correlated with extra acceleration than with distance from the galactic center.

Stacey McGaugh and collaborators have demonstrated a very tight correlation between the observed accelerations and the expected Newtonian acceleration, as I discussed in a prior blog here. The extra acceleration kicks in below a few times 10^{-10} meters per second per second (m/s²).

This is suspiciously close to the speed of light divided by the age of the universe! Which is about 7 \cdot 10^{-10} m/s².
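That coincidence is easy to verify with round numbers (13.8 billion years for the age of the universe is an assumed input):

```python
c = 2.998e8                       # speed of light, m/s
age_universe = 13.8e9 * 3.156e7   # age of the universe in seconds (13.8 Gyr)
a0 = c / age_universe             # characteristic acceleration scale
print(f"{a0:.1e}")                # ~7e-10 m/s^2, matching the figure in the text
```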

Why should that be? The mass/energy density (both mass and energy contribute to gravity) of the universe is dominated today by dark energy.

The canonical cosmological model has 70% dark energy, 25% dark matter, and 5% ordinary matter. In fact if there is no dark matter, just dark gravity, or dark acceleration, then it could be more like a 95% and 5% split between dark energy and (ordinary) matter components.

A homogeneous universe composed only of dark energy in general relativity is known as a de Sitter (dS) universe. Our universe is, at present, basically a dS universe ‘salted’ with matter.

Then one needs to ask how does gravity behave in dark energy influenced domains? Now unlike ordinary matter, dark energy is highly uniformly distributed on the largest scales. It is driving an accelerated expansion of the universe (the fabric of spacetime!) and dragging the ordinary matter along with it.

But where the density of ordinary matter is high, dark energy is evacuated. An ironic thought, since dark energy is considered to be vacuum energy. But where there is lots of matter, the vacuum is pushed aside.

That general concept was what Erik Verlinde used to derive an extra acceleration formula in 2016. He modeled an emergent, entropic gravity due to ordinary matter and also due to the interplay between dark energy and ordinary matter.  He modeled the dark energy as responding like an elastic medium when it is displaced within the vicinity of matter. Using this analogy with elasticity, he derived an extra acceleration as proportional to the square root of the product of the usual Newtonian acceleration and a term related to the speed of light divided by the universe’s age. This leads to a 1/r force law for the extra component since Newtonian acceleration goes as 1/r².

g_D = \sqrt{a_0 \cdot g_B / 6}

Verlinde’s dark gravity term depends on the square root of the product of a characteristic acceleration a_0 and the ordinary Newtonian (baryonic) acceleration g_B.

The idea is that the elastic, dark energy medium relaxes over cosmological timescales. Matter displaces energy and entropy from this medium, and there is a back reaction of the dark energy on matter that is expressed as a volume law entropy. Verlinde is able to show that this interplay between matter and dark energy leads precisely to a characteristic acceleration a_0 / 6 = c \cdot H / 6, where H is the Hubble expansion parameter, equal to one over the age of the universe for a dS universe. This turns out to be the right value of just over 10^{-10} m/s² that matches observations.
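Verlinde's extra acceleration can be sketched as a toy calculation (the Newtonian accelerations fed in below are illustrative values, not data):

```python
import math

c = 2.998e8                        # m/s
H = 1.0 / (13.8e9 * 3.156e7)       # Hubble parameter ~ 1/age for a dS universe, s^-1
a0 = c * H                         # characteristic acceleration, ~7e-10 m/s^2

def g_dark(g_newton):
    """Verlinde's extra (dark) acceleration: g_D = sqrt(a0 * g_B / 6)."""
    return math.sqrt(a0 * g_newton / 6.0)

# At a galaxy's outskirts (tiny Newtonian acceleration) the dark term dominates;
# at Earth's surface it is an utterly negligible correction.
print(g_dark(1e-11))        # exceeds the Newtonian 1e-11 m/s^2 input
print(g_dark(9.8) / 9.8)    # a tiny fractional correction at Earth's surface
```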

In our solar system, and indeed in the central regions of galaxies, we see gravity as the interplay of ordinary matter and other ordinary matter. We are not used to this other dance.

Domains of gravity

Acceleration domain | Gravity vis-a-vis Newtonian formula | Examples
High (GM/R ~ c²) | Einstein, general relativity; higher | Black holes, neutron stars
Normal | Newtonian dynamics, 1/r² | Solar system, Sun’s orbit in the Milky Way
Very low (< c / age of universe) | Dark gravity; higher, additional 1/r term | Outer edges of galaxies, dwarf galaxies, clusters of galaxies

The table above summarizes three domains for gravity: general relativity, Newtonian, and dark gravity, the latter arising at very low accelerations. We are always calculating gravity incorrectly! Usually, such as in our solar system, it matters not at all. For example at the Earth’s surface gravity is 11 orders of magnitude greater than the very low acceleration domain where the extra term kicks in.

Recently, Alexander Peach, a Teaching Fellow in physics at Durham University, has taken a different angle based on Verlinde’s original, and much simpler, exposition of his emergent gravity theory in his 2010 paper. He derives an equivalent result to Verlinde’s in a way which I believe is easier to understand. He assumes that holography (the assumption that all of the entropy can be calculated as area law entropy on a spherical screen surrounding the mass) breaks down at a certain length scale. To mimic the effect of dark energy in Verlinde’s new hypothesis, Peach adds a volume law contribution to entropy which competes with the holographic area law at this certain length scale. And he ends up with the same result, an extra 1/r entropic force that should be added for correctness in very low acceleration domains.


In figure 2 (above) from Peach’s paper he discusses a test particle located beyond a critical radius r_c for which volume law entropy must also be considered. Well within r_c  (shown in b) the dark energy is fully displaced by the attracting mass located at the origin and the area law entropy calculation is accurate (indicated by the shaded surface). Beyond r_c the dark energy effect is important, the holographic screen approximation breaks down, and the volume entropy must be included in the contribution to the emergent gravitational force (shown in c). It is this volume entropy that provides an additional 1/r term for the gravitational force.

Peach makes the assumption that the bulk and boundary systems are in thermal equilibrium. The bulk is the source of volume entropy. In his thought experiment he models a single bit of information corresponding to the test particle being one Compton wavelength away from the screen, just as Verlinde initially did in his description of emergent Newtonian gravity in 2010. The Compton wavelength is equal to the wavelength a photon would have if its energy were equal to the rest mass energy of the test particle. It quantifies the limitation in measuring the position of a particle.

Then the change in boundary (screen) entropy can be related to the small displacement of the particle. Assuming thermal equilibrium and equipartition within each system and adopting the first law of thermodynamics, the extra entropic force can be determined as equal to the Newtonian formula, but replacing one of the r terms in the denominator by r_c .

To understand r_c , for a given system, it is the radius at which the extra gravity is equal to the Newtonian calculation, in other words, gravity is just twice as strong as would be expected at that location. In turn, this traces back to the fact that, by definition, it is the length scale beyond which the volume law term overwhelms the holographic area law.

It is thus the distance at which the Newtonian gravity alone drops to about 1.2 \cdot 10^{-10} m/s², i.e. c \cdot H / 6 , for a given system.
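For a rough sense of scale, one can solve GM/r² = cH/6 for r (a sketch; the 10^11 solar mass figure is an assumed, Milky-Way-like baryonic mass):

```python
import math

G = 6.674e-11                  # m^3 kg^-1 s^-2
c = 2.998e8                    # m/s
H = 1.0 / (13.8e9 * 3.156e7)   # ~1/age of the universe, s^-1
a_crit = c * H / 6.0           # ~1.2e-10 m/s^2

def r_critical(mass_kg):
    """Radius at which Newtonian gravity GM/r^2 drops to c*H/6."""
    return math.sqrt(G * mass_kg / a_crit)

M_sun = 1.989e30
kpc = 3.086e19                 # meters per kiloparsec
# For an assumed ~1e11 solar masses of baryons:
print(r_critical(1e11 * M_sun) / kpc)   # roughly 11 kpc, i.e. a galaxy's outer disk
```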

So Peach and Verlinde use two different methods but with consistent assumptions to model a dark gravity term which follows a 1/r force law. And this kicks in at around 10^{-10} m/s².

The ingredients introduced by Peach’s setup may be sufficient to derive a covariant theory, which would entail a modified version of general relativity that introduces new fields, which could have novel interactions with ordinary matter. This could add more detail to the story of covariant emergent gravity already considered by Hossenfelder (2017), and allow for further phenomenological testing of emergent dark gravity. Currently, it is not clear what the extra degrees of freedom in the covariant version of Peach’s model should look like. It may be that Verlinde’s introduction of elastic variables is the only sensible option, or it could be one of several consistent choices.

With Peach’s work, physicists have taken another step in understanding and modeling dark gravity in a fashion that obviates the need for dark matter to explain our universe.

We close with another of John Wheeler’s sayings:

“The only thing harder to understand than a law of statistical origin would be a law that is not of statistical origin, for then there would be no way for it—or its progenitor principles—to come into being. On the other hand, when we view each of the laws of physics—and no laws are more magnificent in scope or better tested—as at bottom statistical in character, then we are at last able to forego the idea of a law that endures from everlasting to everlasting.”

It is a pleasure to thank Alexander Peach for his comments on, and contributions to, this article.

References:

https://darkmatterdarkenergy.com/2018/08/02/dark-acceleration-the-acceleration-discrepancy/ blog “Dark Acceleration: The Acceleration Discrepancy”

https://arxiv.org/abs/gr-qc/9504004 “Thermodynamics of Spacetime: The Einstein Equation of State” 1995, Ted Jacobson

https://darkmatterdarkenergy.com/2017/07/13/dark-energy-and-the-comological-constant/ blog “Dark Energy and the Cosmological Constant”

https://darkmatterdarkenergy.com/2016/12/30/emergent-gravity-verlindes-proposal/ blog “Emergent Gravity: Verlinde’s Proposal”

https://arxiv.org/pdf/1806.10195.pdf “Emergent Dark Gravity from (Non) Holographic Screens” 2018, Alexander Peach

https://arxiv.org/pdf/1703.01415.pdf “A Covariant Version of Verlinde’s Emergent Gravity” 2017, Sabine Hossenfelder


Unified Physics including Dark Matter and Dark Energy

Dark matter keeps escaping direct detection, whether it might be in the form of WIMPs, or primordial black holes, or axions. Perhaps it is a phantom and general relativity is inaccurate for very low accelerations. Or perhaps we need a new framework for particle physics other than what the Standard Model and supersymmetry provide.

We are pleased to present a guest post from Dr. Thomas J. Buckholtz. He introduces us to a theoretical framework referred to as CUSP, that results in four dozen sets of elementary particles. Only one of these sets is ordinary matter, and the framework appears to reproduce the known fundamental particles. CUSP posits ensembles that we call dark matter and dark energy. In particular, it results in the approximate 5:1 ratio observed for the density of dark matter relative to ordinary matter at the scales of galaxies and clusters of galaxies. (If interested, after reading this post, you can read more at his blog linked to his name just below).

Thomas J. Buckholtz

My research suggests descriptions for dark matter, dark energy, and other phenomena. The work suggests explanations for ratios of dark matter density to ordinary matter density and for other observations. I would like to thank Stephen Perrenod for providing this opportunity to discuss the work. I use the term CUSP – concepts uniting some physics – to refer to the work. (A book, Some Physics United: With Predictions and Models for Much, provides details.)

CUSP suggests that the universe includes 48 sets of Standard Model elementary particles and composite particles. (Known composite particles include the proton and neutron.) The sets are essentially (for purposes of this blog) identical. I call each instance an ensemble. Each ensemble includes its own photon, Higgs boson, electron, proton, and so forth. Elementary particle masses do not vary by ensemble. (Weak interaction handedness might vary by ensemble.)

One ensemble correlates with ordinary matter, 5 ensembles correlate with dark matter, and 42 ensembles contribute to dark energy densities. CUSP suggests interactions via which people might be able to detect directly (as opposed to infer indirectly) dark matter ensemble elementary particles or composite particles. (One such interaction theoretically correlates directly with Larmor precession but not as directly with charge or nominal magnetic dipole moment. I welcome the prospect that people will estimate when, if not now, experimental techniques might have adequate sensitivity to make such detections.)

Buckholtztable

This explanation may describe (much of) dark matter and explain (at least approximately some) ratios of dark matter density to ordinary matter density. You may be curious as to how I arrive at suggestions CUSP makes. (In addition, there are some subtleties.)

Historically regarding astrophysics, the progression ‘motion to forces to objects’ pertains. For example, Kepler’s work replaced epicycles with ellipses before Newton suggested gravity. CUSP takes a somewhat reverse path. CUSP models elementary particles and forces before considering motion. The work regarding particles and forces matches known elementary particles and forces and extrapolates to predict other elementary particles and forces. (In case you are curious, the mathematics basis features solutions to equations featuring isotropic pairs of isotropic quantum harmonic oscillators.)

I (in effect) add motion by extending CUSP to embrace symmetries associated with special relativity. In traditional physics, each of conservation of angular momentum, conservation of momentum, and boost correlates with a spatial symmetry correlating with the mathematics group SU(2). (If you would like to learn more, search online for “conservation law symmetry,” “Noether’s theorem,” “special unitary group,” and “Poincare group.”) CUSP modeling principles point to a need to extend the temporal symmetry and, thereby, the symmetry correlating with conservation of energy, to correlate with the group SU(7). The number of generators of a group SU(n) is n² - 1, so SU(7) has 48 generators. CUSP suggests that each SU(7) generator correlates with a unique ensemble. (In case you are curious, the number 48 pertains also for modeling based on either Newtonian physics or general relativity.)
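The ensemble counting here is simple arithmetic and can be sketched directly (the 1/5/42 split and the 8 x 6 grouping are the numbers quoted in this post):

```python
def su_generators(n):
    """Number of generators of the group SU(n): n^2 - 1."""
    return n * n - 1

ensembles = su_generators(7)      # CUSP correlates one ensemble per SU(7) generator
print(ensembles)                  # 48
print(ensembles == 1 + 5 + 42)    # ordinary matter + dark matter + dark energy ensembles
print(ensembles == 8 * 6)         # 8 instances of gravity, each interacting with 6 ensembles
```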

CUSP math suggests that the universe includes 8 (not 1 and not 48) instances of traditional gravity. Each instance of gravity interacts with 6 ensembles.

The ensemble correlating with people (and with all things people see) connects, via our instance of gravity, with 5 other ensembles. CUSP proposes a definitive concept – stuff made from any of those 5 ensembles – for (much of) dark matter and explains (approximately) ratios of dark matter density to ordinary matter density for the universe and for galaxy clusters. (Let me not herein do more than allude to other inferably dark matter based on CUSP-predicted ordinary matter ensemble composite particles; to observations that suggest that, for some galaxies, the dark matter to ordinary matter ratio is about 4 to 1, not 5 to 1; and other related phenomena with which CUSP seems to comport.)

CUSP suggests that the interactions between dark matter plus ordinary matter and the seven peer combinations, each comprised of 1 instance of gravity and 6 ensembles, are non-zero but small. Inferred ratios of the density of dark energy to the density of dark matter plus ordinary matter ‘grow’ from zero for observations pertaining to somewhat after the big bang to 2+ for observations pertaining to approximately now. CUSP comports with such ‘growth.’ (In case you are curious, CUSP provides a nearly completely separate explanation for dark energy forces that govern the rate of expansion of the universe.)

Relationships between ensembles are reciprocal. For each of two different ensembles, the second ensemble is either part of the first ensemble’s dark matter or part of the first ensemble’s dark energy. Look around you. See what you see. Assuming that non-ordinary-matter ensembles include adequately physics-savvy beings, you are looking at someone else’s dark matter and yet someone else’s dark energy stuff. Assuming these aspects of CUSP comport with nature, people might say that dark matter and dark-energy stuff are, in effect, quite familiar.

Copyright © 2018 Thomas J. Buckholtz



Dark Energy Survey First Results: Canonical Cosmology Supported

The Dark Energy Survey (DES) first year results, and a series of papers, were released on August 4, 2017. This is a massive international collaboration with over 60 institutions represented and 200 authors on the paper summarizing initial results. Over 5 years the Dark Energy Survey team plans to survey some 300 million galaxies.

The instrument is the 570-megapixel Dark Energy Camera installed on the Cerro Tololo Inter-American Observatory 4-meter Blanco Telescope.


Image: DECam imager with CCDs (blue) in place. Credit: darkenergysurvey.org

Over 26 million source galaxy measurements from far, far away are included in these initial results. Typical distances are several billion light-years, up to 9 billion light-years. Also included is a sample of 650,000 luminous red galaxies, lenses for the gravitational lensing, and typically these are foreground elliptical galaxies. These are at redshifts < 0.9 corresponding to up to 7 billion light-years.

They use 3 main methods to make cosmological measurements with the sample:

1. The correlations of galaxy positions (galaxy-galaxy clustering)

2. The gravitational lensing of the large sample of background galaxies by the smaller foreground population (cosmic shear)

3. The gravitational lensing of the luminous red galaxies (galaxy-galaxy lensing)

Combining these three methods provides greater interpretive power, and is very effective in eliminating nuisance parameters and systematic errors. The signals being teased out from the large samples are at only the one to ten parts in a thousand level.

They determine 7 cosmological parameters including the overall mass density (including dark matter), the baryon mass density, the neutrino mass density, the Hubble constant, and the equation of state parameter for dark energy. They also determine the spectral index and characteristic amplitude of density fluctuations.

Their results indicate Ωm of 0.28 to a few percent precision, indicating that the universe is 28% matter (mostly dark matter) and 72% dark energy. They find a dark energy equation of state w = -0.80, but with error bars such that the result is consistent with either a cosmological constant interpretation of w = -1 or a somewhat softer equation of state.

They compare the DES results with those from the Planck satellite for the cosmic microwave background and find they are statistically consistent with each other and with the Λ-Cold Dark Matter (ΛCDM) model (Λ, or Lambda, stands for the cosmological constant). They also compare to other galaxy correlation measurements known as BAO, for Baryon Acoustic Oscillations (very large scale galaxy structure reflecting the characteristic scale of sound waves in the pre-cosmic microwave background plasma), and to Type Ia supernovae data.

This broad agreement with Planck results is a significant finding since the cosmic microwave background is at very early times, redshift z = 1100 and their galaxy sample is at more recent times, after the first five billion years had elapsed, with z < 1.4 and more typically when the universe was roughly ten billion years old.

Upon combining with Planck, BAO, and the supernovae data the best fit is Ωm of 0.30 with an error of less than 0.01, the most precise determination to date. Of this, about 0.25 is ascribed to dark matter and 0.05 to ordinary matter (baryons). And the implied dark energy fraction is 0.70.

Furthermore, the combined result for the equation of state parameter is precisely w = -1.00 with only one percent uncertainty.

The figure below is Figure 9 from the DES paper. The figure indicates, in the leftmost column the measures and error bars for the amplitude of primordial density fluctuations, in the center column the fraction of mass-energy density in matter, and in the right column the equation of state parameter w.


The DES year one results for all 3 methods are shown in the first row. The Planck plus BAO plus supernovae combined results are shown in the last row. And the middle row, the fifth row, shows all of the experiments combined, statistically. Note the values of 0.3 and – 1.0 for Ωm and w, respectively, and the extremely small error bars associated with these.

This represents continued strong support for the canonical Λ-Cold Dark Matter cosmology, with unvarying dark energy described by a cosmological constant.

They did not evaluate modifications to general relativity such as Emergent Gravity or MOND with respect to their data, but suggest they will evaluate such a possibility in the future.

References

https://arxiv.org/abs/1708.01530, “Dark Energy Survey Year 1 Results: Cosmological Constraints from Galaxy Clustering and Weak Lensing”, 2017, T. Abbott et al.

https://en.wikipedia.org/wiki/Weak_gravitational_lensing, Wikipedia article on weak gravitational lensing discusses galaxy-galaxy lensing and cosmic shear


Dark Energy and the Cosmological Constant

I am seeing a lot of confusion around dark energy and the cosmological constant. What are they? Is gravity always attractive? Or is there such a thing as negative gravity or anti-gravity?

First, what is gravity? Einstein taught us that it is the curvature of space. Or as famous relativist John Wheeler wrote “Matter tells space how to curve, and curved space tells matter how to move”.

Dark Energy has been recognized with the Nobel Prize for Physics, so its reality is accepted. There were two teams racing against one another and they found the same result in 1998: the expansion of the universe is accelerating!

Normally one would have thought it would be slowing down due to the matter within; both ordinary and dark matter would work to slow the expansion. But this is not observed for distant galaxies. One looks at Type Ia supernovae, which explode at a characteristic mass and thus have essentially the same absolute luminosity. So the apparent brightness can be used to determine the luminosity distance. This is compared with the redshift, which provides the velocity of recession, or velocity-determined distance, in accordance with Hubble’s law.

A comparison of the two types of distance measures, particularly for large distances, shows the unexpected acceleration. The most natural explanation is a dark energy component equal to twice the matter component, and that matter component would include any dark matter. Now do not confuse dark energy with dark matter. The latter contributes to gravity in the normal way in proportion to its mass. Like ordinary matter it appears to be non-relativistic and without pressure.
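To see how the acceleration shows up in the data, here is a minimal numerical sketch (not the survey teams' actual analysis) comparing luminosity distances in a flat dark-energy universe versus a matter-only one, assuming illustrative values of H0 = 70 km/s/Mpc and Ωm = 0.3:

```python
import math

def lum_dist_mpc(z, omega_m, omega_de, h0=70.0, n=10000):
    """Luminosity distance (Mpc) in a flat universe, from a midpoint-rule
    integration of the comoving distance D_C = c * integral dz' / H(z')."""
    c = 299792.458  # speed of light, km/s
    dz = z / n
    integral = 0.0
    for i in range(n):
        zp = (i + 0.5) * dz
        e = math.sqrt(omega_m * (1 + zp) ** 3 + omega_de)  # H(z) / H0
        integral += dz / e
    d_c = (c / h0) * integral   # comoving distance, Mpc
    return (1 + z) * d_c        # D_L = (1 + z) * D_C in flat space

z = 0.5
d_lcdm = lum_dist_mpc(z, 0.3, 0.7)  # dark-energy-dominated universe
d_eds = lum_dist_mpc(z, 1.0, 0.0)   # matter-only (Einstein-de Sitter)

# Supernovae at z ~ 0.5 sit farther away (appear dimmer) than a
# matter-only universe predicts: the signature of acceleration.
dimming_mag = 5 * math.log10(d_lcdm / d_eds)
```

The dimming works out to a few tenths of a magnitude at z ~ 0.5, which is the size of the effect the supernova teams measured.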

Einstein presaged dark energy when he added the cosmological constant term to his equations of general relativity in 1917. He was trying to build a static universe. It turns out that such a model is unstable, and he later called his insertion of the cosmological constant a blunder. A glorious blunder it was, as we learned eight decades later!

Here is the equation:

G_{ab}+\Lambda g_{ab} = {8\pi G \over c^{4}}T_{ab}

The cosmological constant is represented by the Λ term, and interestingly it is usually written on the left hand side with the metric terms, not on the right hand side with the stress-energy (and pressure and mass) tensor T.

If we move it to the right hand side and express it as an energy density, the term looks like this:

\rho  = {\Lambda \over8\pi G }

with \rho  as the vacuum energy density, or dark energy, in units where c = 1. Moved to the right hand side, the term picks up a negative sign, which is a hint as to why it is repulsive.
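As a quick numerical check, here is a sketch that restores the factors of c suppressed by the c = 1 convention and evaluates the vacuum mass density, for an illustrative Λ of the observed order of magnitude:

```python
import math

LAMBDA = 1.1e-52  # cosmological constant, 1/m^2 (observed order of magnitude)
G = 6.674e-11     # Newton's constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s

# Equivalent mass density of the vacuum: rho = Lambda * c^2 / (8 * pi * G)
rho_vac = LAMBDA * C**2 / (8 * math.pi * G)  # kg/m^3
# Around 6e-27 kg/m^3: a few proton masses per cubic meter of space.
```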

The dark energy observed in our current universe can be fit with this simple cosmological constant model, and Λ is found to be positive.

Now let us look at dark energy more generally. It satisfies an equation of state defined by the relationship of pressure to density, with P as pressure and ρ denoting density:

P = w \cdot \rho \cdot c^2

Matter, whether ordinary or dark, is to first order pressureless for our purposes, quantified by its rest mass, and thus takes w = 0. Radiation, it turns out, has w = 1/3. Dark energy has a negative w, which is why you have heard the phrase ‘negative pressure’. The simplest case is w = -1, which is the cosmological constant: a uniform energy density independent of location and age of the universe. Alternative models of dark energy known as quintessence can have a larger w, but it must be less than -1/3.
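The equation of state determines how each component dilutes as the universe expands, via ρ ∝ a^(-3(1+w)). A tiny sketch of that scaling:

```python
def density_scaling(w, a):
    """Density of a fluid with equation of state P = w * rho * c^2,
    relative to today (a = 1): rho / rho_0 = a ** (-3 * (1 + w))."""
    return a ** (-3 * (1 + w))

a = 0.5  # the universe at half its present linear size (z = 1)
matter = density_scaling(0.0, a)         # scales as a^-3: 8x denser
radiation = density_scaling(1.0 / 3, a)  # scales as a^-4: 16x denser
vacuum = density_scaling(-1.0, a)        # cosmological constant: unchanged
```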

275px-EquationofState.gif

Credit: http://www.scholarpedia.org/article/Cosmological_constant

Why less than -1/3? The equations of general relativity are a set of nonlinear differential equations that are notoriously difficult to solve and rarely admit analytical solutions. But our universe appears to be highly homogeneous and isotropic, so one can use the simple spherically symmetric FLRW metric, and in this case one ends up with the two Friedmann equations (simplified here by setting c = 1). The acceleration equation is:

\ddot a/a  = - {4 \pi  G \over 3} ({\rho + 3 p}) + {\Lambda \over 3 }

This is for a universe that is flat on large scales (k = 0), as observed. Here \ddot a is the acceleration (second time derivative) of the scale factor a. So if \ddot a is positive, the expansion of the universe is speeding up.

The \Lambda term can be rewritten using the dark energy density relation above. Now the equation needs to account for both matter (which is pressureless, whether it is ordinary or dark matter) and dark energy. Again the radiation term is negligible at present, by four orders of magnitude. So we end up with:

\ddot a/a  = - {4 \pi  G \over 3} ({\rho_m + \rho_{de} + 3 p_{de}})

Now the magic here was in the 3 before the p. The pressure gets 3 times the weighting in the stress-energy tensor T. Why? Because energy density enters as a single scalar, but pressure must be accounted for in each of the 3 spatial dimensions. And since p for dark energy is negative and equal in magnitude to the dark energy density (times the square of the speed of light), then

\rho_{de} + 3 p_{de} is always negative, provided w < -1/3. That unusual behavior is why we call it ‘dark energy’.
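A one-line check of the sign of the acceleration, using the Friedmann acceleration equation with illustrative present-day densities:

```python
def accel_indicator(omega_m, omega_de, w):
    """Sign of a_ddot/a from the acceleration equation
    a_ddot/a = -(4 pi G / 3)(rho_m + rho_de + 3 p_de),
    with densities in critical units and p_de = w * rho_de (c = 1).
    A positive return value means the expansion is accelerating."""
    return -(omega_m + omega_de * (1 + 3 * w))

today = accel_indicator(0.3, 0.7, -1.0)        # > 0: accelerating
matter_only = accel_indicator(1.0, 0.0, -1.0)  # < 0: decelerating
```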

Overall it is a battle between the matter and dark energy densities on the one side, and the dark energy pressure (being negative and working oppositely to how we ordinarily think of gravity) on the other. The matter contribution gets weaker over time, since as the universe expands the matter becomes less dense by a relative factor of (1+z)^3 ; that is, the matter was on average denser in the past by the cube of one plus the redshift for that era.

Dark energy eventually wins out because, unlike matter, it does not thin out with the expansion. Every cubic centimeter of space, including newly created space arising with the expansion, has its own dark energy, generally attributed to the vacuum. Due to the Heisenberg uncertainty principle of quantum mechanics, even the vacuum has fields and non-zero energy.

Now the actual observations at present for our universe show, in units of the critical density, that

\rho_m \approx 1/3

\rho_{de} \approx 2/3

and thus

3 p_{de} \approx - 2

And the sum of them all is around -1, just coincidentally. Since there is a minus sign in front of the whole thing, the acceleration of the universe is positive. This is all gravity, it is just that some terms take the opposite side. The idea that gravity can only be attractive is not correct.

If we go back in time, say to the epoch when matter still dominated with \rho_m \approx 2/3 and  \rho_{de} \approx 1/3 , then the total including pressure would be 2/3 + 1/3 - 1, or 0.

That would be the epoch when the universe changed from decelerating to accelerating, as dark energy came to dominate. With our present cosmological parameters, it corresponds to a redshift of z \approx 0.6, and almost 6 billion years ago.
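That transition redshift can be estimated directly: with w = -1, the acceleration vanishes when ρ_m = 2 ρ_de, and matter scales as (1+z)³ while the dark energy density stays fixed. A sketch with illustrative Ω values:

```python
# With w = -1, the acceleration changes sign when rho_m = 2 * rho_de
# (that makes rho_m + rho_de + 3 * p_de vanish). Matter scales as
# (1 + z)^3 while the dark energy density is constant, so:
omega_m, omega_de = 0.3, 0.7
z_acc = (2 * omega_de / omega_m) ** (1.0 / 3.0) - 1
# z_acc ~ 0.67, in line with the redshift quoted above
```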

Image: NASA/STScI, public domain


No Dark Energy?

Dark Energy is the dominant constituent of the universe, accounting for 2/3 of the mass-energy balance at present.

At least that is the canonical concordance cosmology, known as the ΛCDM or Lambda – Cold Dark Matter model. Here Λ is the symbol for the cosmological constant, the simplest, and apparently correct (according to most cosmologists), model for dark energy.

Models of galaxy formation and clustering use N-body simulations run on supercomputers to model the growth of structure (galaxy groups and clusters) in the universe. The cosmological parameters in these models are varied and then the models are compared to observed galaxy catalogs at various redshifts, representing different ages of the universe.

It all works pretty well except that the models assume a fully homogeneous universe on the large scale. While the universe is quite homogeneous for scales above a billion light-years, there is a great deal of filamentary web-like structure at scales above clusters, including superclusters and voids, as you can easily see in this map of our galactic neighborhood.

399px-2MASS_LSS_chart-NEW_Nasa

Galaxies and clusters in our neighborhood. IPAC/Caltech, by Thomas Jarrett. “Large Scale Structure in the Local Universe: The 2MASS Galaxy Catalog”, Jarrett, T.H. 2004, PASA, 21, 396

Well why not take that structure into account when doing the modeling? It has long been known that more local inhomogeneities such as those seen here might influence the observational parameters such as the Hubble expansion rate. Thus even at the same epoch, the Hubble parameter could vary from location to location.

Now a team from Hungary and Hawaii has modeled exactly that, in a paper entitled “Concordance cosmology without dark energy” https://arxiv.org/pdf/1607.08797.pdf . They simulate structure growth while estimating the local values of the expansion parameter in many regions as their model evolves.

Starting with a completely matter-dominated (Einstein-de Sitter) cosmology, they find that they can reasonably reproduce the average expansion history of the universe — the scale factor and the Hubble parameter — and do that somewhat better than the Planck-derived canonical cosmology.

Furthermore, they claim that they can explain the tension between the Type Ia supernovae value of the Hubble parameter (around 73 kilometers per second per Megaparsec) and that determined from the Planck satellite observations of the cosmic microwave background radiation (67 km/s/Mpc).

Future surveys of higher resolution should be able to distinguish between their model and ΛCDM, and they also acknowledge that their model needs more work to fully confirm consistency with the cosmic microwave background observations.

Meanwhile I’m not ready to give up on dark energy and the cosmological constant, since supernova observations, cosmic microwave background observations, and the large scale galactic distribution (labeled BAO in the figure below) collectively give a consistent result of about 70% dark energy and 30% matter. But their work is important, addressing something that has been a nagging issue for quite a while, and one looks forward to further developments.

 

Measurements of Dark Energy and Matter content of Universe


Emergent Gravity in the Solar System

In a prior post I outlined Erik Verlinde’s recent proposal for Emergent Gravity that may obviate the need for dark matter.

Emergent gravity is a statistical, thermodynamic phenomenon that emerges from the underlying quantum entanglement of micro states found in dark energy and in ordinary matter. Most of the entropy is in the dark energy, but the presence of ordinary baryonic matter can displace entropy in its neighborhood and the dark energy exerts a restoring force that is an additional contribution to gravity.

Emergent gravity yields both an area entropy term that reproduces general relativity (and Newtonian dynamics) and a volume entropy term that provides extra gravity. The interesting point is that this is coupled to the cosmological parameters, basically the dark energy term which now dominates our de Sitter-like universe and which acts like a cosmological constant Λ.

In a paper that appeared in arxiv.org last month, a trio of astronomers Hees, Famaey and Bertone claim that emergent gravity fails by seven orders of magnitude in the solar system. They look at the advance of the perihelion for six planets out through Saturn and claim that Verlinde’s formula predicts perihelion advances seven orders of magnitude larger than should be seen.

hst_saturn_nicmos

No emergent gravity needed here. Image credit: NASA GSFC

But his formula does not apply in the solar system.

“..the authors claiming that they have ruled out the model by seven orders of magnitude using solar system data. But they seem not to have taken into account that the equation they are using does not apply on solar system scales. Their conclusion, therefore, is invalid.” – Sabine Hossenfelder, theoretical physicist (quantum gravity) Forbes blog 

Why is this the case? Verlinde makes 3 main assumptions: (1) a spherically symmetric, isolated system, (2) a system that is quasi-static, and (3) a de Sitter spacetime. Well, check for (1) and check for (2) in the case of the Solar System. However, the Solar System is manifestly not a dark energy-dominated de Sitter space.

It is overwhelmingly dominated by ordinary matter. In our Milky Way galaxy the average density of ordinary matter is some 45,000 times larger than the dark energy density (which corresponds to only about 4 protons per cubic meter). In our Solar System the matter is concentrated in the Sun, but even averaged over a sphere extending to the orbit of Saturn, the density is a whopping 3.7 \cdot 10^{17} times the dark energy density.

The whole derivation of the Verlinde formula comes from looking at the incremental entropy (contained in the dark energy) that is displaced by ordinary matter. With over 17 orders of magnitude more energy density, one can be assured that all of the dark energy entropy was long ago displaced within the Solar System, and one is well outside the domain of Verlinde’s formula, which only becomes relevant when the acceleration drops near to or below c \cdot H. The Verlinde acceleration parameter takes the value of 1.1 \cdot 10^{-8}  centimeters/second/second for the observed value of the Hubble parameter. The Newtonian acceleration at Saturn is 0.006 centimeters/second/second, over 500,000 times larger.
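These numbers are easy to verify with a back-of-the-envelope sketch in cgs units (the constants are standard illustrative values):

```python
G = 6.674e-8        # Newton's constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33    # solar mass, g
R_SATURN = 1.43e14  # Saturn's orbital radius, cm
C = 2.998e10        # speed of light, cm/s
H0 = 2.2e-18        # Hubble parameter, 1/s

g_newton = G * M_SUN / R_SATURN**2  # ~0.006 cm/s^2 at Saturn
a_verlinde = C * H0 / 6.0           # ~1.1e-8 cm/s^2
ratio = g_newton / a_verlinde       # several hundred thousand
```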

The conditions where dark energy is being displaced only occur when the gravity has dropped to much smaller values; his approximation is not simply a second order term that can be applied in a domain where dark energy is of no consequence.

There is no entropy left to displace, and thus the Verlinde formula is irrelevant at the orbit of Saturn, or at the orbit of Pluto, for that matter. The authors have not disproven Verlinde’s proposal for emergent gravity.



Emergent Gravity: Verlinde’s Proposal

In a previous blog entry I give some background around Erik Verlinde’s proposal for an emergent, thermodynamic basis of gravity. Gravity remains mysterious 100 years after Einstein’s introduction of general relativity – because it is so weak relative to the other main forces, and because there is no quantum mechanical description within general relativity, which is a classical theory.

One reason that it may be so weak is that it is not fundamental at all, but rather represents a statistical, emergent phenomenon. There has been increasing research into the idea of emergent spacetime and emergent gravity, and the most interesting proposal was recently introduced by Erik Verlinde at the University of Amsterdam in a paper “Emergent Gravity and the Dark Universe”.

A lot of work has been done assuming anti-de Sitter (AdS) spaces with negative cosmological constant Λ – just because it is easier to work under that assumption. This year, Verlinde extended this work from the unrealistic AdS model of the universe to a more realistic de Sitter (dS) model. Our runaway universe is approaching a dark energy dominated dS solution with a positive cosmological constant Λ.

The background assumption is that quantum entanglement dictates the structure of spacetime, and its entropy and information content. Quantum states of entangled particles are coherent: observing a property of one, say the spin orientation, tells you about the other particle’s attributes. This has been observed in long distance experiments, with separations exceeding 100 kilometers.

400px-SPDC_figure.png

If space is defined by the connectivity of quantum entangled particles, then it becomes almost natural to consider gravity as an emergent statistical attribute of the spacetime. After all, we learned from general relativity that “matter tells space how to curve, curved space tells matter how to move” – John Wheeler.

What if entanglement tells space how to curve, and curved space tells matter how to move? What if gravity is due to the entropy of the entanglement? Actually, in Verlinde’s proposal, the entanglement entropy from particles is minor, it’s the entanglement of the vacuum state, of dark energy, that dominates, and by a very large factor.

One analogy is thermodynamics, which allows us to represent the bulk properties of the atmosphere that is nothing but a collection of a very large number of molecules and their micro-states. Verlinde posits that the information and entropy content of space are due to the excitations of the vacuum state, which is manifest as dark energy.

The connection between gravity and thermodynamics has been studied for over four decades, through research on black holes, and from string theory. Jacob Bekenstein and Stephen Hawking determined that a black hole possesses entropy proportional to its area divided by the gravitational constant G. String theory can derive the same formula for quantum entanglement in a vacuum. This is known as the AdS/CFT (conformal field theory) correspondence.

So in the AdS model, gravity is emergent and its strength, the acceleration at a surface, is determined by the mass density on that surface surrounding matter with mass M. This is just the inverse square law of Newton. In the more realistic dS model, the entropy in the volume, or bulk, must also be considered. (This is the Gibbs entropy relevant to excited states, not the Boltzmann entropy of a ground state configuration).

Newtonian dynamics and general relativity can be derived from the surface entropy alone, but do not reflect the volume contribution. The volume contribution adds an additional term to the equations, strengthening gravity over what is expected, and as a result, the existence of dark matter is ‘spoofed’. But there is no dark matter in this view, just stronger gravity than expected.

This is what the proponents of MOND have been saying all along. Mordehai Milgrom observed that galactic rotation curves go flat at a characteristic low acceleration scale of order 10^{-8} centimeters per second per second (a fraction of a centimeter per second per year). MOND is phenomenological; it observes a trend in galaxy rotation curves, but it does not have a theoretical foundation.

Verlinde’s proposal is not MOND, but it provides a theoretical basis for behavior along the lines of what MOND states.

Now the volume in question turns out to be of order the Hubble volume, which is defined as c/H, where H is the Hubble parameter denoting the rate at which galaxies expand away from one another. Reminder: Hubble’s law is v = H \cdot d where v is the recession velocity and the d the distance between two galaxies. The lifetime of the universe is approximately 1/H.

clusters_1280.abell1835.jpg

The value of c / H is over 4 billion parsecs (one parsec is 3.26 light-years), so it is in galaxies, clusters of galaxies, and the largest scale structures in the universe that departures from general relativity (GR) would be expected.

Dark energy in the universe takes the form of a cosmological constant Λ, whose value is measured to be 1.2 \cdot 10^{-56} cm^{-2} . Hubble’s parameter is 2.2 \cdot 10^{-18} sec^{-1} . A characteristic acceleration is thus H²/ sqrt(Λ) or 4 \cdot 10^{-8}  cm per sec per sec (cm = centimeters, sec = second).

One can also define a cosmological acceleration scale simply by c \cdot H , the value for this is about 6 \cdot 10^{-8} cm per sec per sec (around 2 cm per sec per year), and is about 15 billion times weaker than Earth’s gravity at its surface! Note that the two estimates are quite similar.
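Both characteristic accelerations can be checked against the quoted numbers (illustrative cgs values):

```python
import math

C = 2.998e10      # speed of light, cm/s
H0 = 2.2e-18      # Hubble parameter, 1/s
LAMBDA = 1.2e-56  # cosmological constant, 1/cm^2

a1 = H0**2 / math.sqrt(LAMBDA)  # ~4e-8 cm/s^2
a2 = C * H0                     # ~6.6e-8 cm/s^2
# In a pure de Sitter universe Lambda * c^2 = 3 * H^2, so the two
# estimates should differ by about sqrt(3); here the ratio is ~1.5.
ratio = a2 / a1
```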

This is no coincidence, since we live in an approximately dS universe with a measured dark energy fraction Ω_Λ ~ 0.7 of the critical density, assuming the canonical ΛCDM cosmology. That is, if there is actually dark matter responsible for about 1/4 of the universe’s mass-energy density; otherwise the dark energy fraction could be close to 0.95 of the critical density. In a fully dS universe, \Lambda \cdot c^2 = 3 \cdot H^2 , so the two estimates should be equal to within a factor of sqrt(3), which is approximately the ratio of the two estimates.

So from a string theoretic point of view, excitations of the dark energy field are fundamental. Matter particles are bound states of these excitations, particles move freely and have much lower entropy. Matter creation removes both energy and entropy from the dark energy medium. General relativity describes the response of area law entanglement of the vacuum to matter (but does not take into account volume entanglement).

Verlinde proposes that dark energy (Λ) and the accelerated expansion of the universe are due to the slow rate at which the emergent spacetime thermalizes. The time scale for the dynamics is 1/H and a distance scale of c/H is natural; we are measuring the time scale for thermalization when we measure H. High degeneracy and slow equilibration means the universe is not in a ground state, thus there should be a volume contribution to entropy.

When the surface mass density falls below c \cdot H / (8 \pi \cdot G) things change and Verlinde states the spacetime medium becomes elastic. The effective additional ‘dark’ gravity is proportional to the square root of the ordinary matter (baryon) density and also to the square root of the characteristic acceleration c \cdot H.

This dark gravity additional acceleration satisfies the equation g_D = \sqrt{a_0 \cdot g_B / 6} , where g_B is the usual Newtonian acceleration due to baryons and a_0 = c \cdot H is the dark gravity characteristic acceleration. The total gravity is g = g_B + g_D . For large accelerations this reduces to the usual g_B and for very low accelerations it reduces to \sqrt{a_0 \cdot g_B / 6} .

The value a_0/6 at 1 \cdot 10^{-8} cm per sec per sec derived from first principles by Verlinde is quite close to the MOND value of Milgrom, determined from galactic rotation curve observations, of 1.2 \cdot 10^{-8} cm per sec per sec.

So suppose we are in a region where g_B is only 1 \cdot 10^{-8} cm per sec per sec. Then g_D takes the same value and the gravity is just double what is expected. Since the square of the orbital velocity is proportional to the acceleration (v^2 = g \cdot r), the orbital velocity is observed to be sqrt(2) higher than expected.
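A minimal sketch of the total acceleration in Verlinde's picture, using the g_D formula above with a_0 = c·H (illustrative values):

```python
import math

A0 = 6.6e-8  # characteristic acceleration c * H, cm/s^2

def total_gravity(g_b, a0=A0):
    """Total acceleration g = g_B + g_D, with the dark gravity term
    g_D = sqrt(a0 * g_B / 6) from Verlinde's formula."""
    return g_b + math.sqrt(a0 * g_b / 6.0)

g_b = 1.1e-8                  # a region where g_B equals a0 / 6
g = total_gravity(g_b)        # gravity doubles: g ~ 2 * g_B
v_boost = math.sqrt(g / g_b)  # orbital speed enhanced by ~sqrt(2)
```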

In terms of gravitational potential, the usual Newtonian potential goes as 1/r, resulting in a 1/r^2 force law, whereas for very low accelerations the potential now goes as log(r) and the resultant force law is 1/r. We emphasize that while the appearance of dark matter is spoofed, there is no dark matter in this scenario, the reality is additional dark gravity due to the volume contribution to the entropy (that is displaced by ordinary baryonic matter).

M33_rotation_curve_HI.gif

Flat to rising rotation curve for the galaxy M33

Dark matter was first proposed by Swiss astronomer Fritz Zwicky when he observed the Coma Cluster and the high velocity dispersions of the constituent galaxies. He suggested the term dark matter (“dunkle Materie”). Horace Babcock in 1939 measured the rotation curve for the Andromeda galaxy and it turned out to be flat, also suggestive of dark matter (or dark gravity). Decades later, in the 1970s and 1980s, Vera Rubin (who just recently passed away) and others mapped many rotation curves for galaxies and saw the same behavior. She herself preferred the idea of a deviation from general relativity over an explanation based on exotic dark matter particles. One needs about 5 times more matter, or about 5 times more gravity, to explain these curves.

Verlinde is also able to derive the Tully-Fisher relation by modeling the entropy displacement of a dS space. The Tully-Fisher relation is the strong observed correlation between galaxy luminosity and rotational velocity (or emission line width) for spiral galaxies, L \propto v^4 . With Newtonian gravity alone one would expect M \propto v^2 . And since luminosity is essentially proportional to ordinary matter in a galaxy, there is a clear deviation by a ratio of v².
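That scaling can be traced directly from the dark gravity formula: in the regime where g_D dominates, circular orbits obey v²/r = sqrt(a_0·G·M/(6r²)), so the radius cancels and v⁴ = a_0·G·M/6, a flat rotation curve with the Tully-Fisher mass dependence. A sketch (the galaxy baryonic mass is purely illustrative):

```python
G = 6.674e-8  # Newton's constant, cgs
A0 = 6.6e-8   # c * H, cm/s^2

def flat_velocity(m_baryon):
    """Asymptotic circular speed (cm/s) when g ~ sqrt(a0 * g_B / 6):
    v^4 = a0 * G * M / 6, independent of radius."""
    return (A0 * G * m_baryon / 6.0) ** 0.25

v1 = flat_velocity(1.0e44)   # ~5e10 solar masses of baryons
v2 = flat_velocity(16.0e44)  # 16x the mass -> exactly 2x the speed
```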

massdistribution.jpeg

Apparent distribution of spoofed dark matter, for a given ordinary (baryonic) matter distribution

When one moves to the scale of clusters of galaxies, MOND is only partially successful, explaining a portion but coming up shy by a factor of 2, not accounting for all of the apparent mass discrepancy. Verlinde’s emergent gravity does better. By modeling a general mass distribution he can gain a factor of 2 to 3 relative to MOND, and basically it appears that he can explain the velocity distribution of galaxies in rich clusters without the need to resort to any dark matter whatsoever.

And, impressively, he is able to calculate what the apparent dark matter ratio should be in the universe as a whole. The value is \Omega_D^2 = (4/3) \Omega_B where \Omega_D is the apparent mass-energy fraction in dark matter and \Omega_B is the actual baryon mass density fraction. Both are expressed normalized to the critical density determined from the square of the Hubble parameter, 8 \pi G \rho_c = 3 H^2 .

Plugging in the observed \Omega_B \approx 0.05 one obtains \Omega_D \approx 0.26 , very close to the observed value from the cosmic microwave background observations. The Planck satellite results have the proportions for dark energy, dark matter, ordinary matter as .68, .27, and .05 respectively, assuming the canonical ΛCDM cosmology.
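The arithmetic of that prediction is two lines:

```python
# Verlinde's whole-universe prediction: Omega_D^2 = (4/3) * Omega_B
omega_b = 0.05
omega_d = (4.0 / 3.0 * omega_b) ** 0.5
# omega_d ~ 0.26, close to the ~0.27 dark matter fraction from Planck
```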

The main approximations Verlinde makes are a fully dS universe and an isolated, static (bound) system with a spherical geometry. He also does not address the issue of galaxy formation from the primordial density perturbations. At first guess, the fact that he can get the right universal \Omega_D suggests this may not be a great problem, but it requires study in detail.

Breaking News!

Margot Brouwer and co-researchers have just published a test of Verlinde’s emergent gravity with gravitational lensing. Using a sample of over 33,000 galaxies they find that general relativity and emergent gravity can provide an equally statistically good description of the observed weak gravitational lensing. However, emergent gravity does it with essentially no free parameters and thus is a more economical model.

“The observed phenomena that are currently attributed to dark matter are the consequence of the emergent nature of gravity and are caused by an elastic response due to the volume law contribution to the entanglement entropy in our universe.” – Erik Verlinde

References

Erik Verlinde 2011 “On the Origin of Gravity and the Laws of Newton” arXiv:1001.0785

Stephen Perrenod, 2013, 2nd edition, “Dark Matter, Dark Energy, Dark Gravity” Amazon, provides the traditional view with ΛCDM  (read Dark Matter chapter with skepticism!)

Erik Verlinde 2016 “Emergent Gravity and the Dark Universe” arXiv:1611.02269v1

Margot Brouwer et al. 2016 “First test of Verlinde’s theory of Emergent Gravity using Weak Gravitational Lensing Measurements” arXiv:1612.03034