
Dark Acceleration: The Acceleration Discrepancy

Are Newton and Einstein both wrong?

Maybe dark matter has been going by the wrong name all along, ever since the cantankerous Swiss astronomer Fritz Zwicky coined it, after his observations of the Coma cluster of galaxies showed velocities very much higher than expected.

He assumed Newton’s laws, and indeed general relativity, are correct. And that has been the canonical assumption ever since.

“Dark matter” has been studied on the scale of individual galaxies, clusters of galaxies, and the universe as a whole. The measurements of rotation velocity of spiral galaxies decades ago set the tone.

But the effects have been seen in the velocity dispersions of elliptical galaxies, in clusters of galaxies, and indeed in the cosmic microwave background temperature fluctuations.

What effects? Well for galaxies, whether for rotation or for dispersions within elliptical galaxies, what is actually observed is extra acceleration.

We all know F = ma, force equals mass times acceleration, for Newtonian dynamics.

In the case of galaxy rotation curves, the outer regions of the galaxies rotate faster than expected, where the expectation is set by the profile of visible matter and the modeling of the relationship between stellar luminosity and masses.

What is actually measured is that the rotational (centripetal) acceleration of the outer regions is higher than expected, sometimes very much so.

But is m the problem? Is there missing ‘dark matter’? Or is ‘a’ the problem: does the Newtonian formula fail in the outer regions, or more specifically in environments where the acceleration is very low, less than about one ten billionth of a meter per second per second (1 Angstrom per second per second)?

Now general relativity is not the explanation for the discrepancy, because we see departures from Newtonian behavior towards general relativistic formulae when acceleration is quite high, not when it is very low. So if Newton is wrong at very low accelerations, so is Einstein.

It turns out that the extra acceleration is best correlated not with the distance from the galaxy center, but with the amplitude of the expected Newtonian acceleration. When the expected acceleration is very low, the observed acceleration has the biggest discrepancy, always in the direction of more acceleration than expected.


Figure 3 from Lelli et al. 2016 “One Law to Rule Them All” This shows the observed gravitational acceleration on the y-axis (log scale) displayed vs. the expected Newtonian acceleration on the x-axis. Over 2000 data points drawn from 153 galaxy rotation curves would lie on the dotted line if there were no extra acceleration. There is very clear extra acceleration and it is correlated to the Newtonian acceleration, with a larger proportional effect at lower accelerations. At these very low accelerations the observed values are about an order of magnitude above the Newtonian value.
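This empirical trend is well captured by the one-parameter fitting function reported in the radial acceleration relation papers by McGaugh, Lelli, and Schombert, with a characteristic acceleration scale g† ≈ 1.2 × 10^{-10} m/s². A minimal numerical sketch in Python; the functional form and scale value are taken from those papers, while the sample input accelerations are illustrative:

```python
import math

G_DAGGER = 1.2e-10  # acceleration scale (m/s^2) quoted by McGaugh, Lelli & Schombert

def g_observed(g_newtonian):
    """Radial acceleration relation: observed acceleration as a function
    of the expected Newtonian (baryonic) acceleration, both in m/s^2."""
    return g_newtonian / (1.0 - math.exp(-math.sqrt(g_newtonian / G_DAGGER)))

# High Newtonian acceleration: essentially no discrepancy
print(g_observed(1e-8) / 1e-8)

# Very low Newtonian acceleration: roughly an order of magnitude of extra acceleration
print(g_observed(1e-12) / 1e-12)
```

At accelerations well above g† the formula reduces to the Newtonian value; two orders of magnitude below g†, the observed acceleration comes out roughly ten times the Newtonian expectation, matching the low end of the figure.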

From an Occam’s razor point of view it is actually simpler to think about modifying the laws of gravity in very low acceleration environments. It is only in these actual astrophysical laboratories that we are able to test how gravity behaves at very low accelerations.

Explanations such as emergent gravity and other modified Newtonian dynamics approaches need serious theoretical and experimental investigation. They have been playing a distant second fiddle to expensive dark matter searches for WIMPs and axions, which keep coming up short even as the experiments become more and more sensitive.


F. Lelli, S. McGaugh, J. Schombert, M. Pawlowski, 2016 “One Law to Rule Them All: The Radial Acceleration Relation of Galaxies”

WIMPZillas: The Biggest WIMPs


In the search for direct detection of dark matter, the experimental focus has been on WIMPs – weakly interacting massive particles. Large crystal detectors are placed deep underground to avoid contamination from cosmic rays and other stray particles.

WIMPs are often hypothesized to arise as supersymmetric partners of Standard Model particles. However, there are also WIMP candidates that arise due to non-supersymmetric extensions to the Standard Model.

The idea is that the least massive supersymmetric particle would be stable, and neutral. The (hypothetical) neutralino is the most often cited candidate.

The search technique is essentially to look for direct recoil of dark matter particles onto ordinary atomic nuclei.

The only problem is that we keep not seeing WIMPs. Not in the dark matter searches, and not at the Large Hadron Collider, whose main achievement has been the detection of the Higgs boson at a mass of 125 GeV. The mass of the Higgs is somewhat on the heavy side, and it constrains the likelihood of supersymmetry being a correct extension of the Standard Model.

The figure below shows WIMP interaction with ordinary nuclear matter cross-section limits from a range of experiments spanning from 1 to 1000 GeV masses for WIMP candidates. Typical supersymmetric (SUSY) models are disfavored by these results at higher masses above 40 GeV or so as the observational limits are well down into the yellow shaded regions.


Perhaps the problem is that the WIMPs are much heavier than where the experiments have been searching. Most of the direct detection experiments are sensitive to candidate masses in the range from around 1 GeV to 1000 GeV (1 GeV, or giga-electronVolt, is about 6% greater than the rest mass energy of a proton). The 10 to 100 GeV range has been the most thoroughly searched region, and multiple experiments place very strong constraints on interaction cross-sections with normal matter.

WIMPzillas is the moniker given to the most massive WIMPs, with masses from a billion GeV up to potentially as large as the GUT (Grand Unified Theory) scale of 10^{16} GeV.

The more general term is Superheavy Dark Matter, and this is proposed as a possibility for unexplained ultra high energy cosmic rays (UHECR). The WIMPzillas may decay to highly energetic gamma rays, or other particles, and these would be detected as the UHECR. 

UHECR have energies greater than a billion GeV (10^9 GeV), and the most energetic event ever seen (the so-called Oh My God Particle) was detected at 3 \cdot 10^{11} GeV. It had energy equivalent to a baseball traveling at 94 kilometers per hour, or 40 million times the energy of particles in the Large Hadron Collider.
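The baseball comparison is easy to check. A minimal sketch in Python; the 6.5 TeV LHC beam energy and 145 g baseball mass are assumed values for illustration:

```python
import math

GEV_TO_J = 1.602e-10   # 1 GeV expressed in joules
E_OMG = 3e11           # Oh My God particle energy, GeV
E_LHC = 6.5e3          # assumed LHC beam energy per proton, GeV (6.5 TeV)
M_BASEBALL = 0.145     # assumed regulation baseball mass, kg

energy_joules = E_OMG * GEV_TO_J                 # total kinetic energy in joules
v = math.sqrt(2 * energy_joules / M_BASEBALL)    # baseball speed with that energy, m/s

print(f"{energy_joules:.0f} J, baseball at {v * 3.6:.0f} km/h")
print(f"{E_OMG / E_LHC:.1e} times an LHC proton")
```

The energy comes out near 48 joules, a baseball at roughly the quoted 94 km/h, and a factor of a few times 10^7 relative to an LHC proton.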

It has taken decades of searching at multiple cosmic ray arrays to detect particles at or near that energy.

Most UHECR appear to be spatially correlated with external galaxy sources, in particular with nearby Active Galactic Nuclei that are powered by supermassive black holes accelerating material near, but outside of, their event horizons.

However, these sources are not expected to be able to produce cosmic rays with energies above around 10^{11} GeV, hence the WIMPzilla possibility. Again, WIMPzillas could span the range from 10^9 GeV up to 10^{16} GeV.

In a paper published last year, Kolb and Long calculated the production of WIMPzillas from Higgs boson pairs in the early universe. These Higgs pairs would have very high kinetic energies, much beyond their rest mass.

This production would occur during the “Reheating” period after inflation, as the inflaton (scalar energy field) dumped its energy into particles and radiation of the plasma.

There is another production mechanism, a gravitational mechanism, as the universe transitions from the accelerated expansion phase during cosmological inflation into the matter dominated (and then radiation-dominated) phases.

Thermal production from the Higgs portal, according to their results, is the dominant source of WIMPzillas for masses above 10^{14} GeV. It may also be the dominant source for masses less than about 10^{11} GeV.

They based their assumptions on chaotic inflation with a quadratic inflaton potential, followed by a typical model for reheating, but they do not expect that their conclusions would change strongly with different inflation models.

It will take decades to discriminate between Big Bang-produced WIMPzilla style cosmic rays and those from extragalactic sources, since many more UHECRs at 10^{11} GeV and above should be detected to build statistics on these rare events.

But it is possible that WIMPzillas have already been seen.

The density is tiny. The current dark matter density in the Solar neighborhood is measured at 0.4 GeV per cubic centimeter. Thus in a cubic meter there would be the equivalent of around 400,000 proton masses.

But if the WIMPzillas have masses of 10^{11} GeV (100 billion GeV) and above, a cubic kilometer would contain only about 4000 particles at any given time. Not easy to catch.
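These back-of-the-envelope numbers can be reproduced directly from the quoted local density; a minimal sketch, where the 10^{11} GeV WIMPzilla mass is an assumed example:

```python
RHO_DM = 0.4          # local dark matter density, GeV per cubic centimeter
M_PROTON = 0.938      # proton rest mass energy, GeV
M_WIMPZILLA = 1e11    # assumed WIMPzilla mass, GeV

CC_PER_M3 = 1e6       # cubic centimeters in a cubic meter
CC_PER_KM3 = 1e15     # cubic centimeters in a cubic kilometer

# Equivalent proton masses per cubic meter (~400,000)
protons_per_m3 = RHO_DM * CC_PER_M3 / M_PROTON

# WIMPzilla particles per cubic kilometer (~4000)
wimpzillas_per_km3 = RHO_DM * CC_PER_KM3 / M_WIMPZILLA

print(protons_per_m3, wimpzillas_per_km3)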

References
– SuperCDMS experiment led by UC Berkeley
– Dark matter review chapter from Lawrence Berkeley Lab (the figure above is from this review article)
– Ultra high energy cosmic rays
– E. Kolb and A. Long, 2017, “Superheavy Dark Matter through Higgs Portal Operators”

Dark Ages, Dark Matter

Cosmologists call the first couple of hundred million years of the universe’s history the Dark Ages. This is the period until the first stars formed. The Cosmic Dawn is the name given to the epoch during which these first stars formed.

Now there has been a stunning detection of the 21 centimeter line from neutral hydrogen gas in that era. Because the first stars are beginning to form, their radiation induces the hyperfine transition for electrons in the ground state orbitals of hydrogen. This radiation has been stretched by cosmological expansion by a factor of around 18 since the era of the Cosmic Dawn. By the time it reaches us, instead of being at the laboratory frequency of 1420 MHz, it is at around 78 MHz.

This is a difficult frequency at which to observe, since the region of spectrum is between the TV and FM bands in the U.S. and instrumentation itself is a source of radio noise. Very remote, radio quiet, sites are necessary to minimize interference from terrestrial sources, and the signal must be picked out from a much stronger cosmic background.



Image credit: CSIRO-Australia and EDGES collaboration, MIT and Arizona State University. EDGES is funded by the National Science Foundation.

This detection was made in Western Australia with a radio detector known as EDGES, which is sensitive in the 50 to 100 MHz range. It is surprisingly small, roughly the size of a large desk. The EDGES program is a collaboration between MIT and Arizona State University.

The researchers detected an absorption feature beginning at 78 MHz, corresponding to a redshift of 17.2 (1420/78 = 18.2 = 1 + z, where z is redshift); for the canonical cosmological model this corresponds to an age of the universe of 180 million years.

The absorption feature is much stronger than expected from models, implying a lower gas temperature than expected.

At that redshift the cosmic microwave background temperature is at 50 Kelvins (at the present era it is only 2.7 Kelvins). The neutral hydrogen feature is seen in absorption against the warmer cosmic microwave background, and is much cooler (both its ‘spin’ and ‘kinetic’ temperatures).
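Both the redshift and the roughly 50 Kelvin background temperature follow directly from the observed 78 MHz frequency; a quick check in Python, using the standard 21 cm rest frequency and the present-day CMB temperature:

```python
F_REST = 1420.4    # 21 cm hyperfine line rest frequency, MHz
F_OBS = 78.0       # observed frequency of the absorption feature, MHz
T_CMB_NOW = 2.725  # present-day CMB temperature, K

z = F_REST / F_OBS - 1              # redshift, ~17.2
t_cmb_then = T_CMB_NOW * (1 + z)    # CMB temperature at that redshift, ~50 K

print(f"z = {z:.1f}, T_CMB = {t_cmb_then:.0f} K")
```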

This neutral hydrogen appears to be at only 3 Kelvins. Existing models had the expectation that it would be at around 7 Kelvins or even higher. (A Kelvin degree equals a Celsius degree, but has its zero point at absolute zero rather than water’s freezing temperature).

In a companion paper, it has been proposed that interactions with dark matter kept the hydrogen gas cooler than expected. This would require a non-gravitational interaction cross section between dark matter and ordinary matter (perhaps due to the weak force), along with low velocities and low masses for the dark matter particles. The mass should be only a few GeV (a proton rest mass is 0.94 GeV). Most WIMP searches in Earth-based labs have been above 10 GeV.

These results need to be confirmed by other experiments. And the dark matter explanation is speculative. But the door has been opened for Cosmic Dawn observations of neutral hydrogen as a new way to hunt for dark matter.


“A Surprising Chill before the Cosmic Dawn”

EDGES science:

EDGES array and program:

R. Barkana 2018, “Possible Interactions between Baryons and Dark Matter Particles Revealed by the First Stars”

Unified Physics including Dark Matter and Dark Energy

Dark matter keeps escaping direct detection, whether it might be in the form of WIMPs, or primordial black holes, or axions. Perhaps it is a phantom and general relativity is inaccurate for very low accelerations. Or perhaps we need a new framework for particle physics other than what the Standard Model and supersymmetry provide.

We are pleased to present a guest post from Dr. Thomas J. Buckholtz. He introduces us to a theoretical framework referred to as CUSP, that results in four dozen sets of elementary particles. Only one of these sets is ordinary matter, and the framework appears to reproduce the known fundamental particles. CUSP posits ensembles that we call dark matter and dark energy. In particular, it results in the approximate 5:1 ratio observed for the density of dark matter relative to ordinary matter at the scales of galaxies and clusters of galaxies. (If interested, after reading this post, you can read more at his blog linked to his name just below).

Thomas J. Buckholtz

My research suggests descriptions for dark matter, dark energy, and other phenomena. The work suggests explanations for ratios of dark matter density to ordinary matter density and for other observations. I would like to thank Stephen Perrenod for providing this opportunity to discuss the work. I use the term CUSP – concepts uniting some physics – to refer to the work. (A book, Some Physics United: With Predictions and Models for Much, provides details.)

CUSP suggests that the universe includes 48 sets of Standard Model elementary particles and composite particles. (Known composite particles include the proton and neutron.) The sets are essentially (for purposes of this blog) identical. I call each instance an ensemble. Each ensemble includes its own photon, Higgs boson, electron, proton, and so forth. Elementary particle masses do not vary by ensemble. (Weak interaction handedness might vary by ensemble.)

One ensemble correlates with ordinary matter, 5 ensembles correlate with dark matter, and 42 ensembles contribute to dark energy densities. CUSP suggests interactions via which people might be able to detect directly (as opposed to infer indirectly) dark matter ensemble elementary particles or composite particles. (One such interaction theoretically correlates directly with Larmor precession but not as directly with charge or nominal magnetic dipole moment. I welcome the prospect that people will estimate when, if not now, experimental techniques might have adequate sensitivity to make such detections.)


This explanation may describe (much of) dark matter and explain (at least approximately some) ratios of dark matter density to ordinary matter density. You may be curious as to how I arrive at suggestions CUSP makes. (In addition, there are some subtleties.)

Historically regarding astrophysics, the progression ‘motion to forces to objects’ pertains. For example, Kepler’s work replaced epicycles with ellipses before Newton suggested gravity. CUSP takes a somewhat reverse path. CUSP models elementary particles and forces before considering motion. The work regarding particles and forces matches known elementary particles and forces and extrapolates to predict other elementary particles and forces. (In case you are curious, the mathematics basis features solutions to equations featuring isotropic pairs of isotropic quantum harmonic oscillators.)

I (in effect) add motion by extending CUSP to embrace symmetries associated with special relativity. In traditional physics, each of conservation of angular momentum, conservation of momentum, and boost correlates with a spatial symmetry correlating with the mathematics group SU(2). (If you would like to learn more, search online for “conservation law symmetry,” “Noether’s theorem,” “special unitary group,” and “Poincare group.”) CUSP modeling principles point to a need to add to temporal symmetry and, thereby, to extend a symmetry correlating with conservation of energy to correlate with the group SU(7). The number of generators of a group SU(n) is n^2 − 1; SU(7) has 48 generators. CUSP suggests that each SU(7) generator correlates with a unique ensemble. (In case you are curious, the number 48 pertains also for modeling based on either Newtonian physics or general relativity.)

CUSP math suggests that the universe includes 8 (not 1 and not 48) instances of traditional gravity. Each instance of gravity interacts with 6 ensembles.

The ensemble correlating with people (and with all things people see) connects, via our instance of gravity, with 5 other ensembles. CUSP proposes a definitive concept – stuff made from any of those 5 ensembles – for (much of) dark matter and explains (approximately) ratios of dark matter density to ordinary matter density for the universe and for galaxy clusters. (Let me not herein do more than allude to other inferably dark matter based on CUSP-predicted ordinary matter ensemble composite particles; to observations that suggest that, for some galaxies, the dark matter to ordinary matter ratio is about 4 to 1, not 5 to 1; and other related phenomena with which CUSP seems to comport.)

CUSP suggests that interactions between dark matter plus ordinary matter and the seven peer combinations, each comprised of 1 instance of gravity and 6 ensembles, are non-zero but small. Inferred ratios of density of dark energy to density of dark matter plus ordinary matter ‘grow’ from zero for observations pertaining to somewhat after the big bang to 2+ for observations pertaining to approximately now. CUSP comports with such ‘growth.’ (In case you are curious, CUSP provides a nearly completely separate explanation for dark energy forces that govern the rate of expansion of the universe.)

Relationships between ensembles are reciprocal. For each of two different ensembles, the second ensemble is either part of the first ensemble’s dark matter or part of the first ensemble’s dark energy. Look around you. See what you see. Assuming that non-ordinary-matter ensembles include adequately physics-savvy beings, you are looking at someone else’s dark matter and yet someone else’s dark energy stuff. Assuming these aspects of CUSP comport with nature, people might say that dark matter and dark-energy stuff are, in effect, quite familiar.

Copyright © 2018 Thomas J. Buckholtz


Primordial Black Holes and Dark Matter

Based on observed gravitational interactions in galactic halos (galaxy rotation curves) and in groups and clusters, there appears to be 5 times as much dark matter as ordinary matter in the universe. The alternative is no dark matter, but more gravity than expected at low accelerations, as discussed in this post on emergent gravity.

The main candidates for dark matter are exotic, undiscovered particles such as WIMPs (weakly interacting massive particles) and axions. Experiments attempting direct detection for these have repeatedly come up short.

The non-particle alternative category is MACHOs (massive compact halo objects) composed of ordinary matter.  Planets, dwarf stars and neutron stars have been ruled out by various observational signatures. The one ordinary matter possibility that has remained viable is that of black holes, and in particular black holes with much less than the mass of the Sun.

The only known possibility for such low mass black holes is that of primordial black holes (PBHs) formed in the earliest moments of the Big Bang.

Gravitational microlensing, or microlensing for short, seeks to detect PBHs by their general relativistic gravitational effect on starlight. MACHO and EROS were experiments that monitored stars in the Large Magellanic Cloud. These were able to place limits on the abundance of PBHs with masses from about one hundred millionth of the Sun’s mass up to 10 solar masses. PBHs in that mass range are not able to explain the total amount of dark matter determined from gravitational interactions.

LIGO has recently detected several merging black holes in the tens of solar mass range. However the frequency of LIGO detections appears too low by two orders of magnitude to explain the amount of gravitationally detected dark matter. PBHs in this mass range are also constrained by cosmic microwave background observations.

Extremely low mass PBHs, below 10 billion tons, cannot survive until the present epoch of the universe. This is due to Hawking radiation. Black holes evaporate due to their quantum nature. Solar mass black holes have an extremely long lifetime against evaporation. But very low mass black holes will evaporate in billions of years or much sooner, depending on mass.
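The mass dependence is steep: at leading order the evaporation time of a Schwarzschild black hole grows as the cube of its mass. A rough sketch of the standard leading-order formula, ignoring greybody factors and the extra particle emission channels that shorten real lifetimes, so this is order-of-magnitude only:

```python
import math

G = 6.674e-11           # gravitational constant, SI units
HBAR = 1.055e-34        # reduced Planck constant
C = 2.998e8             # speed of light
AGE_UNIVERSE = 4.35e17  # ~13.8 billion years, in seconds

def hawking_lifetime(mass_kg):
    """Leading-order evaporation time of a Schwarzschild black hole,
    t = 5120 * pi * G^2 * M^3 / (hbar * c^4), in seconds."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

# At this level of approximation, a hole of a few times 10^11 kg
# evaporates on a timescale comparable to the age of the universe.
print(hawking_lifetime(2e11) / AGE_UNIVERSE)
```

Because the lifetime scales as M^3, modestly heavier primordial black holes survive to the present epoch with an enormous margin, while lighter ones are long gone.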

The remaining mass window for PBHs, in sufficient abundance to explain dark matter, runs from objects of about 10 trillion tons up to those with ten millionths of the Sun’s mass.


Figure 5 from H. Niikura et al. “Microlensing constraints on primordial black holes with the Subaru/HSC Andromeda observation”,  

Here f is the fraction of dark matter which can be explained by PBHs. The red shaded area is excluded by the authors’ observations and analysis of Andromeda Galaxy data. This rules out masses above 100 trillion tons and below a hundred thousandth of the Sun’s mass. (Solar mass units are used in the text above, and grams are used in the figure below.)


Now, a team of Japanese astronomers has used the Subaru telescope on the Big Island of Hawaii (operated by Japan’s national observatory) to determine constraints on PBHs by observing millions of stars in the Andromeda Galaxy.

The idea is that a candidate PBH would pass in front of the line of sight to the star, acting as a lens, and magnifying the light from the star in question for a relatively brief period of time. The astronomers looked for stars exhibiting variability in their light intensity.
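A rough order-of-magnitude sketch shows why this strategy works: for point-mass lenses in this mass window the magnification events last only minutes. The lens distance, source distance, PBH mass, and transverse velocity below are all assumed purely for illustration:

```python
import math

G = 6.674e-11       # gravitational constant, SI units
C = 2.998e8         # speed of light, m/s
KPC = 3.086e19      # one kiloparsec in meters
M_SUN = 1.989e30    # solar mass, kg

def einstein_radius(mass_kg, d_lens, d_source):
    """Einstein radius in the lens plane for a point-mass lens, meters."""
    d_eff = d_lens * (d_source - d_lens) / d_source
    return math.sqrt(4 * G * mass_kg * d_eff) / C

# Assumed geometry: halo lens ~100 kpc away, source star in Andromeda (~770 kpc)
m_pbh = 1e-10 * M_SUN                                 # a PBH within the tested mass window
r_e = einstein_radius(m_pbh, 100 * KPC, 770 * KPC)    # tens of thousands of km
t_cross = 2 * r_e / 200e3                             # crossing time at an assumed 200 km/s

print(f"Einstein radius ~{r_e / 1e3:.0f} km, event lasts ~{t_cross / 60:.1f} minutes")
```

Minute-scale events are exactly what a single night of rapid repeated exposures can catch, which is why the cadence of the Subaru/HSC observations was chosen this way.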

With only a single night’s data of repeated short exposures, they were able to pick out over 15,000 stars in Andromeda exhibiting such variable light intensity. However, among these possible candidates, only a single one turned out to fit the characteristics expected for a PBH detection.

If PBHs in this mass range were sufficiently abundant to explain dark matter, then one would have expected of order one thousand events, and they saw nothing like this number. In summary, with 95% confidence, they are able to rule out PBHs as the main source of dark matter for the mass range from 100 trillion tons up to one hundred thousandth of the Sun’s mass.

The window for primordial black holes as the explanation for dark matter appears to be closing.





Dark Energy Survey First Results: Canonical Cosmology Supported

The Dark Energy Survey (DES) first year results, and a series of papers, were released on August 4, 2017. This is a massive international collaboration with over 60 institutions represented and 200 authors on the paper summarizing initial results. Over 5 years the Dark Energy Survey team plans to survey some 300 million galaxies.

The instrument is the 570-megapixel Dark Energy Camera installed on the Cerro Tololo Inter-American Observatory 4-meter Blanco Telescope.


Image: DECam imager with CCDs (blue) in place. Credit:

Over 26 million source galaxy measurements from far, far away are included in these initial results. Typical distances are several billion light-years, up to 9 billion light-years. Also included is a sample of 650,000 luminous red galaxies, lenses for the gravitational lensing, and typically these are foreground elliptical galaxies. These are at redshifts < 0.9 corresponding to up to 7 billion light-years.

They use 3 main methods to make cosmological measurements with the sample:

1. The correlations of galaxy positions (galaxy-galaxy clustering)

2. The gravitational lensing of the large sample of background galaxies by the smaller foreground population (cosmic shear)

3. The gravitational lensing of the luminous red galaxies (galaxy-galaxy lensing)

Combining these three methods provides greater interpretive power, and is very effective in eliminating nuisance parameters and systematic errors. The signals being teased out from the large samples are at only the one to ten parts in a thousand level.

They determine 7 cosmological parameters including the overall mass density (including dark matter), the baryon mass density, the neutrino mass density, the Hubble constant, and the equation of state parameter for dark energy. They also determine the spectral index and characteristic amplitude of density fluctuations.

Their results indicate Ωm of 0.28 to a few percent precision, meaning the universe is 28% matter (dark plus ordinary) and 72% dark energy. They find a dark energy equation of state w = -0.80, but with error bars such that the result is consistent with either a cosmological constant interpretation of w = -1 or a somewhat softer equation of state.

They compare the DES results with those from the Planck satellite for the cosmic microwave background and find they are statistically consistent with each other and with the Λ-Cold Dark Matter (ΛCDM) model (Λ, or Lambda, stands for the cosmological constant). They also compare to other galaxy correlation measurements known as BAO, for Baryon Acoustic Oscillations (very large scale galaxy structure reflecting the characteristic scale of sound waves in the pre-cosmic microwave background plasma), and to Type Ia supernovae data.

This broad agreement with Planck results is a significant finding, since the cosmic microwave background is from very early times, redshift z = 1100, while their galaxy sample is from more recent times, after the first five billion years had elapsed, with z < 1.4 and more typically when the universe was roughly ten billion years old.

Upon combining with Planck, BAO, and the supernovae data the best fit is Ωm of 0.30 with an error of less than 0.01, the most precise determination to date. Of this, about 0.25 is ascribed to dark matter and 0.05 to ordinary matter (baryons). And the implied dark energy fraction is 0.70.

Furthermore, the combined result for the equation of state parameter is precisely w = -1.00 with only one percent uncertainty.

The figure below is Figure 9 from the DES paper. It indicates, in the leftmost column, the measurements and error bars for the amplitude of primordial density fluctuations; in the center column, the fraction of mass-energy density in matter; and in the right column, the equation of state parameter w.


The DES year one results for all 3 methods are shown in the first row. The Planck plus BAO plus supernovae combined results are shown in the last row. And the middle row, the fifth row, shows all of the experiments combined, statistically. Note the values of 0.3 and – 1.0 for Ωm and w, respectively, and the extremely small error bars associated with these.

This represents continued strong support for the canonical Λ-Cold Dark Matter cosmology, with unvarying dark energy described by a cosmological constant.

They did not evaluate modifications to general relativity such as Emergent Gravity or MOND with respect to their data, but suggest they will evaluate such a possibility in the future.

References
– T. Abbott et al., 2017, “Dark Energy Survey Year 1 Results: Cosmological Constraints from Galaxy Clustering and Weak Lensing”
– Wikipedia article on weak gravitational lensing, which discusses galaxy-galaxy lensing and cosmic shear

Dark Energy and the Cosmological Constant

I am seeing a lot of confusion around dark energy and the cosmological constant. What are they? Is gravity always attractive? Or is there such a thing as negative gravity or anti-gravity?

First, what is gravity? Einstein taught us that it is the curvature of spacetime. Or as the famous relativist John Wheeler wrote, “Matter tells space how to curve, and curved space tells matter how to move”.

Dark Energy has been recognized with the Nobel Prize for Physics, so its reality is accepted. There were two teams racing against one another and they found the same result in 1998: the expansion of the universe is accelerating!

Normally one would have thought it would be slowing down due to the matter within; both ordinary and dark matter would work to slow the expansion. But this is not observed for distant galaxies. One looks at a certain type of supernova that always has a certain mass and thus the same absolute luminosity. So the apparent brightness can be used to determine the luminosity distance. This is compared with the redshift that provides the velocity of recession or velocity-determined distance in accordance with Hubble’s law.

A comparison of the two types of distance measures, particularly for large distances, shows the unexpected acceleration. The most natural explanation is a dark energy component equal to twice the matter component, and that matter component would include any dark matter. Now do not confuse dark energy with dark matter. The latter contributes to gravity in the normal way in proportion to its mass. Like ordinary matter it appears to be non-relativistic and without pressure.

Einstein presaged dark energy when he added the cosmological constant term to his equations of general relativity in 1917. He was trying to build a static universe. It turns out that such a model is unstable, and he later called his insertion of the cosmological constant a blunder. A glorious blunder it was, as we learned eight decades later!

Here is the equation:

G_{ab}+\Lambda g_{ab} = {8\pi G \over c^{4}}T_{ab}

The cosmological constant is represented by the Λ term, and interestingly it is usually written on the left hand side with the metric terms, not on the right hand side with the stress-energy (and pressure and mass) tensor T.

If we move it to the right hand side and express it as an energy density, the term looks like this:

\rho = {\Lambda \over 8 \pi G}

with \rho as the vacuum energy density, or dark energy; appearing on the right it also takes a negative sign. So this is a suggestion as to why it is repulsive.

The type of dark energy observed in our current universe can be fit with the simple cosmological constant model and it is found to be positive. So if you move \Lambda to the other side of the equation, it enters negatively.

Now let us look at dark energy more generally. It satisfies an equation of state defined by the relationship of pressure to density, with P as pressure and ρ denoting density:

P = w \cdot \rho \cdot c^2

Matter, whether ordinary or dark, is to first order pressureless for our purposes, quantified by its rest mass, and thus takes w = 0. Radiation, it turns out, has w = 1/3. Dark energy has a negative w, which is why you have heard the phrase ‘negative pressure’. The simplest case is w = -1, which is the cosmological constant: a uniform energy density independent of location and age of the universe. Alternative models of dark energy, known as quintessence, can have a larger w, but it must be less than -1/3.



Why less than -1/3? Well, the equations of general relativity are a set of nonlinear differential equations that are notoriously difficult to solve and do not generally admit analytical solutions. But our universe appears to be highly homogeneous and isotropic, so one can use the simple FLRW metric, and in this case one ends up with the two Friedmann equations (simplified by setting c = 1).

\ddot a/a  = - {4 \pi  G \over 3} ({\rho + 3 p}) + {\Lambda \over 3 }

This is for a universe that is flat on large scales (k = 0), as observed. Here \ddot a is the acceleration (second time derivative) of the scale factor a. So if \ddot a is positive, the expansion of the universe is speeding up.

The \Lambda term can be rewritten using the dark energy density relation above. Now the equation needs to account for both matter (which is pressureless, whether it is ordinary or dark matter) and dark energy. Again the radiation term is negligible at present, by four orders of magnitude. So we end up with:

\ddot a/a  = - {4 \pi  G \over 3} ({\rho_m + \rho_{de} + 3 p_{de}})

Now the magic here is in the 3 before the p. Pressure gets 3 times the weighting in the stress-energy tensor T. Why? Because energy density enters as a single scalar, while pressure must be accounted for in each of the 3 spatial dimensions. And since p for dark energy is negative, equal to minus the dark energy density (times the square of the speed of light), then

\rho + 3 p is always negative for the dark energy terms, provided w < -1/3. That unusual behavior is why we call it ‘dark energy’.
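A minimal numerical check of that sign condition:

```python
# Sign of the gravitational source term rho + 3p, with p = w * rho (units c = 1).
# Since rho + 3p = rho * (1 + 3w), the sign flips at exactly w = -1/3.
def source_term(w, rho=1.0):
    """Return rho + 3p for equation-of-state parameter w."""
    return rho + 3 * (w * rho)

print(source_term(0.0))    # matter: 1.0, positive -> decelerates the expansion
print(source_term(1/3))    # radiation: 2.0, positive
print(source_term(-1/3))   # boundary case: 0.0, no net effect
print(source_term(-1.0))   # cosmological constant: -2.0 -> accelerates
```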

Overall it is a battle between the matter and dark energy densities on one side, and the dark energy pressure (being negative and working oppositely to how we ordinarily think of gravity) on the other. The matter contribution weakens over time, since as the universe expands the matter becomes less dense by a relative factor of (1+z)^3; that is, the matter was on average denser in the past by the cube of one plus the redshift for that era.

Dark energy eventually wins out because, unlike matter, it does not thin out with the expansion. Every cubic centimeter of space, including space newly created by the expansion, carries its own dark energy, generally attributed to the vacuum. Due to the Heisenberg uncertainty principle, even the vacuum has fields and non-zero energy.

Now the actual observations at present for our universe show, in units of the critical density that

\rho_m \approx 1/3

\rho_{de} \approx 2/3

and thus

3 p_{de} \approx - 2

And the sum of them all is around -1, just coincidentally. Since there is a minus sign in front of the whole right hand side, the acceleration \ddot a comes out positive. This is all gravity; it is just that some terms take the opposite sign. The idea that gravity can only be attractive is not correct.

If we go back in time, say to the epoch when matter still dominated with \rho_m \approx 2/3 and \rho_{de} \approx 1/3, then the total including pressure would be 2/3 + 1/3 - 1, or 0.

That would be the epoch when the universe changed from decelerating to accelerating, as dark energy came to dominate. With our present cosmological parameters, it corresponds to a redshift of z \approx 0.6, and almost 6 billion years ago.
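The transition redshift quoted above can be recovered in a few lines. This is a sketch assuming the simple w = -1 cosmological constant model and the round-number density values used in this post:

```python
# Redshift at which the expansion switched from decelerating to accelerating.
# The acceleration vanishes when rho_m(z) + rho_de + 3 p_de = 0.  With w = -1
# (so p_de = -rho_de) and rho_m(z) = rho_m0 * (1 + z)**3, this gives
#     (1 + z)**3 = 2 * Omega_de / Omega_m
Omega_m = 1 / 3    # present matter density, in units of the critical density
Omega_de = 2 / 3   # present dark energy density

z_acc = (2 * Omega_de / Omega_m) ** (1 / 3) - 1
print(f"z_acc = {z_acc:.2f}")   # about 0.6, as quoted above
```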

Image: NASA/STScI, public domain

Yet Another Intermediate Black Hole Merger

Another merger of two intermediate mass black holes has been observed by the LIGO gravitational wave observatories.

There are now three confirmed black hole pair mergers, along with a previously announced fourth candidate that lacks sufficient statistical confidence.

These three mergers have all been detected in the past two years and are, so far, the only direct observations of gravitational waves ever made.

They are extremely powerful events. The latest event is known as GW170104 (the gravitational wave discovery of January 4, 2017).

It all happened in the blink of an eye. In a fifth of a second, a black hole of approximately 30 solar masses merged with a black hole of about 20 solar masses. It is estimated that the two orbited one another six times (!) during that 0.2 seconds of their final existence as independent objects.

The gravitational wave generation was so great that an entire solar mass worth of energy was liberated in the form of gravitational waves.

This works out to something like 2 \cdot 10^{47} Joules of energy, released in 0.2 seconds, or an average of 10^{48} Watts during that interval. You know, a Tera Tera Tera Terawatt.
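A quick check of that arithmetic:

```python
# GW170104 energetics: one solar rest mass of energy radiated in ~0.2 s.
M_sun = 1.989e30    # solar mass, kg
c = 2.998e8         # speed of light, m/s
duration = 0.2      # merger timescale, s

E = M_sun * c**2    # radiated energy, about 1.8e47 J
P = E / duration    # average luminosity, about 9e47 W, i.e. ~1e48 W

print(f"E = {E:.1e} J")
print(f"P = {P:.1e} W")
```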

Researchers have now identified a whole new class of black holes with masses ranging from about 10 solar masses (before merger) to 60 solar masses (after merger). If they keep finding these, we might have to give serious consideration to intermediate mass black holes as contributors to dark matter. See this prior blog for a discussion of primordial black holes as a possible dark matter contributor.


Image credit: LIGO/Caltech/MIT/Sonoma State (Aurore Simonnet)

No Dark Energy?

Dark Energy is the dominant constituent of the universe, accounting for 2/3 of the mass-energy balance at present.

At least that is the canonical concordance cosmology, known as the ΛCDM or Lambda – Cold Dark Matter model. Here Λ is the symbol for the cosmological constant, the simplest, and apparently correct (according to most cosmologists), model for dark energy.

Models of galaxy formation and clustering use N-body simulations run on supercomputers to model the growth of structure (galaxy groups and clusters) in the universe. The cosmological parameters in these models are varied and then the models are compared to observed galaxy catalogs at various redshifts, representing different ages of the universe.

It all works pretty well except that the models assume a fully homogeneous universe on the large scale. While the universe is quite homogeneous for scales above a billion light-years, there is a great deal of filamentary web-like structure at scales above clusters, including superclusters and voids, as you can easily see in this map of our galactic neighborhood.


Galaxies and clusters in our neighborhood. Image credit: IPAC/Caltech, by Thomas Jarrett. “Large Scale Structure in the Local Universe: The 2MASS Galaxy Catalog”, Jarrett, T.H. 2004, PASA, 21, 396

Well why not take that structure into account when doing the modeling? It has long been known that more local inhomogeneities such as those seen here might influence the observational parameters such as the Hubble expansion rate. Thus even at the same epoch, the Hubble parameter could vary from location to location.

Now a team from Hungary and Hawaii has modeled exactly that, in a paper entitled “Concordance cosmology without dark energy”. They simulate structure growth while estimating the local values of the expansion parameter in many regions as their model evolves.

Starting with a completely matter dominated (Einstein–de Sitter) cosmology, they find that they can reasonably reproduce the average expansion history of the universe (the scale factor and the Hubble parameter), and do so somewhat better than the Planck-derived canonical cosmology.

Furthermore, they claim that they can explain the tension between the Type Ia supernovae value of the Hubble parameter (around 73 kilometers per second per Megaparsec) and that determined from the Planck satellite observations of the cosmic microwave background radiation (67 km/s/Mpc).

Future surveys of higher resolution should be able to distinguish between their model and ΛCDM, and they also acknowledge that their model needs more work to fully confirm consistency with the cosmic microwave background observations.

Meanwhile I’m not ready to give up on dark energy and the cosmological constant, since supernova observations, cosmic microwave background observations, and the large scale galactic distribution (labeled BAO in the figure below) collectively give a consistent result of about 70% dark energy and 30% matter. But their work is important; it addresses something that has been a nagging issue for quite a while, and one looks forward to further developments.


Measurements of Dark Energy and Matter content of Universe


Distant Galaxy Rotation Curves Appear Newtonian

One of the main ways in which dark matter was postulated, primarily in the 1970s by Vera Rubin (recently deceased) and others, was by looking at the rotation curves of spiral galaxies in their outer regions. That was not, however, the first apparent dark matter discovery; that came from Fritz Zwicky's observations of galaxy motions in the Coma cluster of galaxies during the 1930s.

Most investigations of spiral galaxies and star-forming galaxies have been relatively nearby, at low redshift, because of the difficulty of measuring rotation accurately at high redshift. For what is now a very large sample of hundreds of nearby galaxies, there is a consistent pattern: galaxy rotation curves flatten out.


M64, image credit: NASA, ESA, and the Hubble Heritage Team (AURA/STScI)

If there were only ordinary matter, one would expect the velocities to drop off as one follows the curve far from a galaxy’s center. This is virtually never seen at low redshifts; the rotation curves consistently flatten out. There are only two possible explanations: dark matter, or a modification to the law of gravity at very low accelerations (dark gravity).
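To make the ordinary-matter expectation concrete, here is a minimal sketch of the Keplerian falloff one would predict from visible matter alone; the enclosed mass of 10^{11} solar masses and the radii chosen are illustrative assumptions, not values from the studies discussed here:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 1e11 * 1.989e30    # assumed enclosed mass: 1e11 solar masses (illustrative)
kpc = 3.086e19         # meters per kiloparsec

def v_circular(r):
    """Keplerian circular velocity if essentially all mass lies inside r."""
    return math.sqrt(G * M / r)

# If Newton holds and there is no extra mass at large radii, quadrupling the
# radius (10 -> 40 kpc) should halve the rotation velocity.
v_inner = v_circular(10 * kpc)
v_outer = v_circular(40 * kpc)
print(f"v(10 kpc) = {v_inner / 1000:.0f} km/s")
print(f"v(40 kpc) = {v_outer / 1000:.0f} km/s")  # half of v(10 kpc); observed curves stay flat
```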

Dark matter, unseen matter, would cause rotational velocities to be higher than otherwise expected. Dark, or modified, gravity (additional gravity beyond Newtonian or general relativistic gravity) would do the same.

Now a team of astronomers (Genzel et al. 2017) have measured the rotation curves of six individual galaxies at moderately high redshifts ranging from about 0.9 to 2.4.

Furthermore, as presented in a companion paper, they have stacked a sample of 97 galaxies with redshifts from 0.6 to 2.6  to derive an average high-redshift rotation curve (P. Lang et al. 2017). While individually they cannot produce sufficiently high quality rotation curves, they are able to produce a mean normalized curve for the sample as a whole with sufficiently good statistics.

In both cases the results show rotation curves that fall off with increasing distance from the galaxy center, and in a manner consistent with little or no dark matter contribution (Keplerian or Newtonian style behavior).

In the paper with rotation curves of 6 galaxies they go on to explain their falling rotation curves as due to “first, a large fraction of the massive high-redshift galaxy population was strongly baryon-dominated, with dark matter playing a smaller part than in the local Universe; and second, the large velocity dispersion in high-redshift disks introduces a substantial pressure term that leads to a decrease in rotation velocity with increasing radius.” 

So in essence they are saying that the central regions of galaxies were relatively more dominated by baryons (ordinary matter) in the past, and that, since they are measuring Hydrogen alpha emission from gas clouds in this study, they must also take into account turbulent gas cloud behavior, which is generally larger at higher redshifts.

Stacy McGaugh, a Modified Newtonian Dynamics (MOND) proponent, criticizes their work, saying that their rotation curves just don’t extend far enough from the galaxy centers to be meaningful. But his criticism of their submitting the first paper to Nature (sometimes considered ‘lightweight’ for astronomy research results) is unfounded, since the second paper, with the sample of 97 galaxies, has been sent to the Astrophysical Journal and is highly detailed in its observational analysis.

The father of MOND, Mordehai Milgrom, takes a more pragmatic view in his commentary. Milgrom calculates that the observed accelerations at the edge of these galaxies are several times higher than the value at which rotation curves should flatten. In addition to this criticism he notes that half of the galaxies have low inclinations, which makes the observations less certain, and that the velocity dispersion of gas in galaxies, which provides pressure support and allows for lower rotational velocities, is difficult to correct for.

As in MOND, in Erik Verlinde’s emergent gravity there is an extra acceleration, apparent only when the ordinary Newtonian acceleration is very low. This mimics the behavior of dark matter, but there is no dark matter. The extra ‘dark gravity’ is given by:

g_D = \sqrt{a_0 \cdot g_B / 6}

In this equation a_0 = c \cdot H, where H is the Hubble parameter, and g_B is the usual Newtonian acceleration from the ordinary matter (baryons). Fundamentally, though, Verlinde derives this as the interaction between dark energy, which is an elastic, unequilibrated medium, and baryonic matter.
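A rough numerical sketch of the formula; the Hubble parameter value and the baryonic acceleration g_B chosen below are illustrative assumptions, not measurements from the papers under discussion:

```python
import math

c = 2.998e8     # speed of light, m/s
H = 2.27e-18    # assumed Hubble parameter, ~70 km/s/Mpc expressed in s^-1
a0 = c * H      # acceleration scale a_0 = c * H, about 7e-10 m/s^2

def g_dark(g_B):
    """Verlinde's extra acceleration for Newtonian baryonic acceleration g_B."""
    return math.sqrt(a0 * g_B / 6)

# Illustrative baryonic acceleration at a spiral galaxy's outskirts (assumed)
g_B = 1e-11     # m/s^2
print(f"a0  = {a0:.1e} m/s^2")
print(f"g_D = {g_dark(g_B):.1e} m/s^2")  # same order as g_B itself
```

The point of the sketch is that when g_B falls well below a_0, the extra term becomes comparable to or larger than the Newtonian acceleration, which is exactly the regime where flat rotation curves appear.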

One could consider that this dark gravity effect might be weaker at high redshifts. One possibility is that density of dark energy evolves with time, although at present no such evolution is observed.

Verlinde assumes a dark energy dominated de Sitter model universe, for which the cosmological constant is much larger than the matter contribution and approaches unity, \Lambda = 1 in units of the critical density. Our universe does not yet fully meet that criterion, having \Lambda \approx 0.68, but it is a reasonable approximation.

At redshifts around z = 1 and 2 this approximation would be much less appropriate. We do not yet have a Verlindean cosmology, so it is not clear how to compute the expected dark gravity in such a case; it may be less than today, or greater. Verlinde’s extra acceleration goes as the square root of the Hubble parameter, which was greater in the past and would imply more dark gravity. But in reality the effect is due to dark energy, so it may instead go as the one-fourth power of an unvarying cosmological constant (there is a relationship H^2 \propto \Lambda in the de Sitter model) and thus not change with time, or change only very slowly.
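That one-fourth power follows in a line or two from the de Sitter relation, taking H^2 = \Lambda c^2 / 3 and holding \Lambda fixed:

```latex
H^2 = \frac{\Lambda c^2}{3}
\;\Rightarrow\;
a_0 = cH = c^2 \sqrt{\frac{\Lambda}{3}}
\;\Rightarrow\;
g_D = \sqrt{\frac{a_0\, g_B}{6}} \;\propto\; a_0^{1/2} \;\propto\; \Lambda^{1/4}
```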

At very large redshifts matter would completely dominate over the dark energy and the dark gravity effect might be of no consequence, unlike today. As usual we await more observations, both at higher redshifts, and further out from the galaxy centers at moderate redshifts.


R. Genzel et al. 2017, “Strongly baryon-dominated disk galaxies at the peak of galaxy formation ten billion years ago”, Nature 543, 397–401.

P. Lang et al. 2017, “Falling outer rotation curves of star-forming galaxies at 0.6 < z < 2.6 probed with KMOS^3D and SINS/ZC-SINF”

Stacy McGaugh 2017,

Mordehai Milgrom 2017, “High redshift rotation curves and MOND”

Erik Verlinde 2016, “Emergent Gravity and the Dark Universe”