# Tag Archives: gravitational lensing

## We don’t Need no Stinkin’ Dark Matter

### Extra Acceleration

You’ve heard of dark matter, right? Some sort of exotic particle that lurks in the outskirts of galaxies.

Maybe you know the story of elusive dark matter. The first apparent home for dark matter was in clusters of galaxies, as Fritz Zwicky postulated for the Coma Cluster in the 1930s, due to the excessive galaxy random motions that he measured.

There have been eight decades of discovery and measurement of the gravitational anomalies that dark matter is said to cause, and eight decades of notable failure to directly find any very faint ordinary matter, black holes, or exotic particle matter in sufficient quantities to explain the magnitude of the observed anomalies.

If dark matter is actually real and composed of particles or primordial black holes then there is five times as much mass per unit volume on average in that form as there is in the form of ordinary matter. Ordinary matter is principally in the form of protons and neutrons, primarily as hydrogen and helium atoms and ions.

Why do we call it dark? It gives off no light. Ordinary matter gives off light; it radiates. What else gives off no light? A gravitational field stronger than predicted by existing laws.

Gravitational anomalies are seen in the outer regions of galaxies by examining galaxy rotation curves, which flatten out unexpectedly with distance from the galactic center.  They are seen in galaxy groups and clusters from measuring galaxy velocity dispersions, from X-ray observations of intracluster gas, and from gravitational lensing measurements. A dark matter component is also deduced at the cosmic scale from the power spectrum of the cosmic microwave background spatial variations.

The excessive velocities due to extra acceleration are either caused by dark matter or by some departure of gravity over and above the predictions of general relativity.

Actually, in strong gravitational fields general relativity is the required model, while at low accelerations Newtonian dynamics is an accurate approximation. The discrepancies arise only at very low accelerations. The excess velocities, X-ray emission, and lensing are observed only in very low acceleration environments, so we are essentially talking about an alternative with extra gravity, over and above the Newtonian 1/r² law.

### Alternatives to General Relativity and Newtonian Dynamics

There are multiple proposed laws for modifying gravity at very low accelerations. To match observations, the effect should start to kick in for accelerations less than c * H, where H is the Hubble expansion parameter, whose inverse is nearly equal to the present age of the universe.

That is only around $7 \cdot 10^{-8}$ centimeters per second per second, about 1 part in 14 million of 1 cm/s². This is not something typically measurable in Earth-bound laboratories; scientists have trouble pinning down the value of the gravitational constant G to within 1 part in 10,000.
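Here is a quick back-of-the-envelope check of that acceleration scale in Python, using round values for c and for the Hubble parameter (the H value is the one quoted later in this post):

```python
# Characteristic acceleration scale a0 = c * H, below which
# modified-gravity effects are proposed to kick in.
c = 3.0e10   # speed of light, cm/s
H = 2.2e-18  # Hubble parameter, 1/s (its inverse is roughly the age of the universe)

a0 = c * H   # cm/s^2
print(a0)    # ~6.6e-8 cm/s^2, roughly 1 part in 14 million of 1 cm/s^2
```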

This is a rather profound coincidence, suggesting that something fundamental is at play in the nature of gravity itself, not necessarily the rather arbitrary creation of an exotic dark matter particle in the very early universe. It suggests instead that there is an additional component of gravity tied in some way to the age and state of our universe.

Do you think of general relativity as the last word on gravity? From an Occam’s razor point of view it is actually simpler to think about modifying the laws of gravity in very low acceleration environments, than to postulate an exotic never-seen-in-the-lab dark matter particle. And we already know that general relativity is incomplete, since it is not a quantum theory.

The emergent gravity concept neatly solves the quantum issue by saying gravity is not fundamental in the way that electromagnetism and the nuclear forces are. Rather it is described as an emergent property of a system due to quantum entanglement of fields and particles. In this view, the fabric of space also arises from this entanglement. Gravity is a statistical property of the system, the entropy (in thermodynamic terms) of entanglement at each point.

### Dark Energy

Now we have a necessary aside on dark energy. Do you know that dark energy is on a firmer footing now than dark matter? And do you know that dark energy is simply described by additional energy and pressure components in the stress-energy tensor, fully within general relativity?

We know that dark energy dominates over dark matter in the canonical cosmological model (Lambda-Cold Dark Matter) for the universe. The canonical model has about 2/3 dark energy and the solution for the universe’s expansion approximates a de Sitter model in general relativity with an exponential ‘runaway’ expansion.

### Dark Gravity

As we discuss this no-dark-matter alternative, we refer to it as dark gravity, or dark acceleration. Regardless of whether dark matter or dark gravity is the answer, the combination of ordinary gravity and dark gravity is still insufficient to halt the expansion of the universe. In this view, the dark gravity is sourced by ordinary matter; there is simply more gravity than we expect, again only in environments with accelerations at or below c * H.

Some of the proposed laws for modified gravity are:

1. MOND – Modified Newtonian Dynamics, from Milgrom
2. Emergent gravity, from Verlinde
3. Metric skew tensor gravity (MSTG), from Moffat, and also the more recent variant scalar-tensor-vector gravity (STVG), sometimes called MOG (Modified Gravity)

Think of the dark gravity as an additional term in the equations, beyond the gravity we are familiar with. Each of the models adds a term to Newtonian gravity that only becomes significant for accelerations less than c * H. The details vary between the proposed alternatives. All do a good job of matching galaxy rotation curves for spiral galaxies and the associated Tully-Fisher relation between luminosity and rotation velocity.

Things are trickier in clusters of galaxies, which are probed via galaxy velocity dispersions, X-ray emission from intracluster gas, and gravitational lensing. The MOND model appears to come up short, by a factor of about two, in explaining the total dark gravity implied.

Emergent gravity and modified gravity theories including MSTG claim to be able to match the observations in clusters.

### Clusters of Galaxies

Most galaxies are found in groups and clusters.

Clusters and groups form from the collapse of overdense regions of hydrogen and helium gas in the early universe. Collapsing under its own gravity, such a region will heat up via frictional processes and cooler sub-regions will collapse further to form galaxies within the cluster.

Rich clusters have hundreds, even thousands of galaxies, and their gravitational potential is so high that the gas is heated to millions of degrees via friction and shock waves and gives off X-rays. The X-ray emission from clusters has been actively studied since the 1970s, via satellite experiments.

What is found is that most matter is in the form of intracluster gas, not galaxies. Some of this is left over primordial gas that never formed galaxies and some is gas that was once in a galaxy but expelled via energetic processes, especially supernovae.

Observations indicate that around 90% of (ordinary) matter is in the form of intracluster gas, and only around 10% within the galaxies in the form of stars or interstellar gas and dust. Thus modeling the mass profile of a cluster is best done by looking at how the X-ray emission falls off as one moves away from the center of a cluster.
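To see where such mass profiles come from, here is a minimal sketch of the standard Newtonian hydrostatic mass estimate for a cluster, assuming an isothermal beta-model gas density profile. The temperature, slope, and core radius below are illustrative values for a rich cluster, not numbers taken from the Brownstein-Moffat paper:

```python
# Hydrostatic mass estimate for a galaxy cluster from its X-ray gas:
#   M(<r) = -(k*T*r / (G * mu * m_p)) * (dln n/dln r + dln T/dln r)
# We assume an isothermal beta model, n(r) ~ (1 + (r/rc)^2)^(-3*beta/2),
# so dln T/dln r = 0 and dln n/dln r = -3*beta*(r/rc)^2 / (1 + (r/rc)^2).

G     = 6.674e-8   # gravitational constant, cgs units
m_p   = 1.673e-24  # proton mass, g
M_sun = 1.989e33   # solar mass, g
kpc   = 3.086e21   # kiloparsec in cm

kT   = 7.0 * 1.602e-9  # 7 keV in erg (illustrative rich-cluster gas temperature)
mu   = 0.6             # mean molecular weight of an ionized H/He plasma
beta = 0.7             # illustrative beta-model slope
rc   = 250.0 * kpc     # illustrative core radius

def hydrostatic_mass(r):
    """Total mass (g) interior to radius r (cm) implied by hydrostatic equilibrium."""
    dln_n = -3.0 * beta * (r / rc) ** 2 / (1.0 + (r / rc) ** 2)
    return -(kT * r / (G * mu * m_p)) * dln_n

M = hydrostatic_mass(1000.0 * kpc) / M_sun
print(f"{M:.2e} solar masses")  # a few x 10^14, i.e. hundreds of trillions
```

This Newtonian estimate is precisely the "mass required" that the modified-gravity models aim to reduce to the observed baryonic mass.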

In their 2005 paper, Brownstein and Moffat compiled X-ray emission data and fit gas mass, radius, and temperature profiles for a sample of 106 galaxy clusters. They find that an MSTG model can reproduce the X-ray emission with a mass profile that does not require dark matter.

The figure below shows the average profile of cumulative mass interior to a given radius. The mass is in units of solar masses and runs into the hundreds of trillions. The average radius extends to over 1000 kiloparsecs, that is, over 1 megaparsec (a parsec is 3.26 light-years).

The bottom line is that emergent gravity and MSTG both claim to have explanatory power, without any dark matter, for observations of galaxy rotation curves, gravitational lensing in clusters (Brouwer et al. 2016), and cluster mass profiles deduced from the X-ray emission from hot gas.

Figure 2 from J.R. Brownstein and J.W. Moffat (2005), “Galaxy Cluster Masses without Non-Baryonic Dark Matter”. Shown is cumulative mass required as a function of radius. The red curve is the average of X-ray observations from a sample of 106 clusters. The black curve is the authors’ model assuming MSTG, a good match. The cyan curve is the MOND model, the blue curve is a Newtonian model, and both require dark matter. The point is that the authors can match observations with much less matter and there is no need to postulate additional exotic dark matter.

What we would very much like to see is a better explanation of the cosmic microwave background density perturbation spectrum for the cosmic scale, for either of these dark gravity models. The STVG variant of MSTG claims to address those observations as well, without the need for dark matter.

In future posts we may look at that issue and also the so called ‘silver bullet’ that dark matter proponents often promote, the Bullet Cluster, that consists of two galaxy clusters colliding and a claimed separation of dark matter and gas.

#### References

Brouwer, M. et al. 2016, “First test of Verlinde’s theory of Emergent Gravity using Weak Gravitational Lensing Measurements” https://arxiv.org/abs/1612.03034v2

Brownstein, J. and Moffat, J. 2005, “Galaxy Cluster Masses without Non-baryonic Dark Matter”, https://arxiv.org/abs/astro-ph/0507222

Perrenod, S. 1977 “The Evolution of Cluster X-ray Sources” http://adsabs.harvard.edu/abs/1978ApJ...226..566P, thesis.

https://darkmatterdarkenergy.com/2018/09/19/matter-and-energy-tell-spacetime-how-to-be-dark-gravity/

https://darkmatterdarkenergy.com/2016/12/30/emergent-gravity-verlindes-proposal/

https://darkmatterdarkenergy.com/2016/12/09/modified-newtonian-dynamics-is-there-something-to-it/

## Mini Black Holes as Dark Matter?

### Ancient Voyager Satellite Says No for the Smallest Possible

Black holes can come in all sizes from about a billion tons up to billions of solar masses.

Because isolated black holes are difficult to detect, especially smaller mass ones, they have long been considered as candidates for dark matter, invoked to explain the extra gravitational accelerations measured at the outskirts of galaxies.

Stephen Hawking showed that black holes radiate low energy particles very slowly due to quantum thermodynamic effects. So the very lowest mass black holes evaporate away due to Hawking radiation during the life of the universe.

### Voyager Satellites

The Voyager satellites were launched in 1977, and NASA has determined that Voyager 1 crossed the heliopause in 2012. This is the boundary of the solar wind, which holds back a large portion of galactic cosmic rays. Voyager 2 crossed the heliopause in 2018.

Forty-two years after launch, and having toured Jupiter, Saturn, Uranus, and Neptune, these remarkable satellites are still returning valuable data about the outer reaches of the Solar System.

What is the connection between black holes, dark matter, and Voyager 1?

In the early universe, large numbers of so-called primordial black holes (PBHs) of various sizes may have formed. The question arises, could these be the primary component of dark matter?

### Primordial Black Holes as Dark Matter Candidates

The detection of gravitational waves from half a dozen mergers of black holes of intermediate mass has given new energy to this idea. Also, there is the continued failure to detect exotic particle candidates for dark matter in Earth-bound laboratory experiments.

A team of Japanese astronomers, searching for microlensing effects on stars in the Andromeda galaxy, has ruled out small black holes in the range of $10^{20}$ grams up to about 3 times the Earth’s mass. See https://darkmatterdarkenergy.com/2017/12/07/primordial-black-holes-and-dark-matter for more detail.

Constraints from other lensing experiments (MACHO, EROS) and the cosmic microwave background appear to rule out more massive primordial black holes as the explanation for most dark matter.

What about the tiniest allowable black holes, from about $4 \cdot 10^{14}$ gm (smaller ones have evaporated already) up to $10^{20}$ gm?
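That parenthetical threshold follows from the strong mass dependence of Hawking evaporation: the lifetime scales as the cube of the mass. A rough sketch, calibrating that standard $t \propto M^3$ scaling to the post's quoted survival threshold of about $4 \cdot 10^{14}$ gm (the function name and calibration are ours, for illustration):

```python
# Hawking evaporation time scales roughly as the cube of the black hole mass.
# Calibrate to the text: a ~4e14 gm PBH barely survives one age of the universe.
t_universe = 1.4e10  # years
M_star = 4.0e14      # gm, mass that evaporates in about one universe age

def evaporation_time(m_gm):
    """Rough Hawking lifetime in years, t ~ t_universe * (M / M_star)^3."""
    return t_universe * (m_gm / M_star) ** 3

print(evaporation_time(1.0e14))  # a few hundred million years: long since gone
print(evaporation_time(1.0e17))  # vastly longer than the age of the universe
```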

### Voyager 1 Constraints

In a recent analysis, researchers at the Laboratoire de Physique Theorique et Hautes Energies (LPTHE) show that Voyager 1 now rules out primordial black holes with masses below $10^{17}$ gm as the source of most dark matter as well. And it is because of the Hawking radiation that we should, but do not, detect.

Although Hawking radiation has never been detected, there are very firm theoretical grounds for expecting it to exist. Everything, including strange objects like black holes, has a quantum nature.

The Hawking temperature of a black hole is inversely proportional to its mass: $T = 1.1 \, \mathrm{GeV} / (m / 10^{13} \, \mathrm{gm})$

Thus for an $m = 10^{16}$ gm black hole the Hawking temperature is about 1 MeV. (GeV or giga electron-Volt is a billion eV and around the rest mass energy of a proton, and an MeV or mega electron-Volt is a million eV and about twice the rest mass energy of an electron.)
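The relation above is easy to evaluate; a few lines of Python reproduce the quoted MeV-scale temperature:

```python
# Hawking temperature from the relation in the text:
# T = 1.1 GeV / (m / 10^13 gm), i.e. temperature falls inversely with mass.
def hawking_temperature_gev(m_gm):
    """Hawking temperature in GeV for a black hole of mass m_gm grams."""
    return 1.1 / (m_gm / 1.0e13)

T = hawking_temperature_gev(1.0e16)
print(T * 1000.0)  # ~1.1 MeV: only neutrinos, electrons, positrons can be emitted
```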

Since these temperatures are in the MeV range, only very light particles such as neutrinos, electrons, and positrons would be emitted by the PBHs.

Figure 1 from the Boudaud and Cirelli paper shows the observed combined electron and positron cosmic ray flux from Voyager 1 in the energy range from 3 MeV to 50 MeV. It also shows results in the 1 to 10 GeV range from the Alpha Magnetic Spectrometer 2 experiment on the International Space Station (located well inside the heliopause). Two different models of how the energetic particles propagate through the galaxy are used.

### Smallest Possible Black Holes Ruled Out

PBHs with $10^{15}$ or $10^{16}$ grams are clearly ruled out; they would inject far too many energetic electron and positron cosmic rays into the interstellar medium that Voyager 1 has entered.

The authors state that no more than 0.1% of dark matter can be due to PBHs of mass less than $10^{16}$ grams (10 billion tons).

In Figure 1, a monochromatic mass distribution was assumed (all PBHs have the same mass). The authors also consider various log-normal mass distributions, and similar constraints on the allowable PBH mass were found.

What about at $10^{17}$ grams and above? Most mass regions are ruled out.

The mass region above $5 \cdot 10^{17}$ grams and up to about $10^{20}$ grams has been excluded as a primary source of dark matter from PBHs by a 2012* result from Barnacka, Glicenstein, and Moderski. They searched for gravitational lensing effects upon gamma ray burst sources due to intervening black holes.

So vast ranges of possible PBH masses are ruled out. However the mass region from $3 \cdot 10^{16}$ up to $5 \cdot 10^{17}$ grams remains a possibility as a dark matter hideout for PBHs.

*The same year that Voyager 1 crossed the heliopause, coincidentally

#### References

Boudaud, M. and Cirelli, M. 2019 “Voyager 1 electrons and positrons further constrain primordial black holes as dark matter” https://arxiv.org/abs/1807.03075

https://darkmatterdarkenergy.com/2017/12/07/primordial-black-holes-and-dark-matter/

Barnacka, A., Glicenstein, J.-F., Moderski, R. 2012 “New constraints on primordial black holes abundance from femtolensing of gamma-ray bursts” http://arxiv.org/abs/1204.2056

## Emergent Gravity: Verlinde’s Proposal

In a previous blog entry I give some background around Erik Verlinde’s proposal for an emergent, thermodynamic basis of gravity. Gravity remains mysterious 100 years after Einstein’s introduction of general relativity – because it is so weak relative to the other main forces, and because there is no quantum mechanical description within general relativity, which is a classical theory.

One reason it may be so weak is that it is not fundamental at all, but instead represents a statistical, emergent phenomenon. There has been increasing research into the idea of emergent spacetime and emergent gravity, and the most interesting proposal was recently introduced by Erik Verlinde at the University of Amsterdam in a paper “Emergent Gravity and the Dark Universe”.

A lot of work has been done assuming anti-de Sitter (AdS) spaces with negative cosmological constant Λ – just because it is easier to work under that assumption. This year, Verlinde extended this work from the unrealistic AdS model of the universe to a more realistic de Sitter (dS) model. Our runaway universe is approaching a dark energy dominated dS solution with a positive cosmological constant Λ.

The background assumption is that quantum entanglement dictates the structure of spacetime, and its entropy and information content. Quantum states of entangled particles are coherent: observing a property of one, say the spin orientation, tells you about the other particle’s attributes. This has been observed in long distance experiments, with separations exceeding 100 kilometers.

If space is defined by the connectivity of quantum entangled particles, then it becomes almost natural to consider gravity as an emergent statistical attribute of the spacetime. After all, we learned from general relativity that “matter tells space how to curve, curved space tells matter how to move” – John Wheeler.

What if entanglement tells space how to curve, and curved space tells matter how to move? What if gravity is due to the entropy of the entanglement? Actually, in Verlinde’s proposal, the entanglement entropy from particles is minor; it is the entanglement of the vacuum state, of dark energy, that dominates, and by a very large factor.

One analogy is thermodynamics, which allows us to represent the bulk properties of the atmosphere, which is nothing but a collection of a very large number of molecules and their micro-states. Verlinde posits that the information and entropy content of space are due to the excitations of the vacuum state, which is manifest as dark energy.

The connection between gravity and thermodynamics has been around for 3 decades, through research on black holes, and from string theory. Jacob Bekenstein and Stephen Hawking determined that a black hole possesses entropy proportional to its area divided by the gravitational constant G. String theory can derive the same formula for quantum entanglement in a vacuum. This is known as the AdS/CFT (conformal field theory) correspondence.

So in the AdS model, gravity is emergent and its strength, the acceleration at a surface, is determined by the mass density on that surface surrounding matter with mass M. This is just the inverse square law of Newton. In the more realistic dS model, the entropy in the volume, or bulk, must also be considered. (This is the Gibbs entropy relevant to excited states, not the Boltzmann entropy of a ground state configuration).

Newtonian dynamics and general relativity can be derived from the surface entropy alone, but do not reflect the volume contribution. The volume contribution adds an additional term to the equations, strengthening gravity over what is expected, and as a result, the existence of dark matter is ‘spoofed’. But there is no dark matter in this view, just stronger gravity than expected.

This is what the proponents of MOND have been saying all along. Mordehai Milgrom observed that galactic rotation curves go flat below a characteristic low acceleration scale of order 2 centimeters per second per year. MOND is phenomenological: it captures a trend in galaxy rotation curves, but it does not have a theoretical foundation.

Verlinde’s proposal is not MOND, but it provides a theoretical basis for behavior along the lines of what MOND states.

Now the volume in question turns out to be of order the Hubble volume, whose radius is c/H, where H is the Hubble parameter denoting the rate at which galaxies expand away from one another. Reminder: Hubble’s law is $v = H \cdot d$, where v is the recession velocity and d the distance between two galaxies. The lifetime of the universe is approximately 1/H.

The value of c / H is over 4 billion parsecs (one parsec is 3.26 light-years), so it is in galaxies, in clusters of galaxies, and at the largest scales in the universe that departures from general relativity (GR) would be expected.

Dark energy in the universe takes the form of a cosmological constant Λ, whose value is measured to be $1.2 \cdot 10^{-56} cm^{-2}$. Hubble’s parameter is $2.2 \cdot 10^{-18} sec^{-1}$. A characteristic acceleration is thus $H^2 / \sqrt{\Lambda}$, or $4 \cdot 10^{-8}$ cm per sec per sec (cm = centimeters, sec = second).

One can also define a cosmological acceleration scale simply as $c \cdot H$; its value is about $6 \cdot 10^{-8}$ cm per sec per sec (around 2 cm per sec per year), which is about 15 billion times weaker than Earth’s gravity at its surface! Note that the two estimates are quite similar.

This is no coincidence, since we live in an approximately dS universe with a measured dark energy fraction $\Omega_\Lambda \approx 0.7$ of the critical density, assuming the canonical ΛCDM cosmology. That is, if there actually is dark matter responsible for about 1/4 of the universe’s mass-energy density. Otherwise the dark energy fraction could be close to 0.95 of the critical density. In a fully dS universe, $\Lambda \cdot c^2 = 3 \cdot H^2$, so the two estimates should agree to within $\sqrt{3}$, which is approximately the difference between them.
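The two acceleration estimates, and their ratio, can be checked directly with the measured values quoted above:

```python
import math

# Two estimates of the cosmological acceleration scale.
Lambda_ = 1.2e-56  # cosmological constant, cm^-2
H = 2.2e-18        # Hubble parameter, 1/s
c = 3.0e10         # speed of light, cm/s

a1 = H ** 2 / math.sqrt(Lambda_)  # ~4e-8 cm/s^2
a2 = c * H                        # ~6.6e-8 cm/s^2
print(a1, a2, a2 / a1)            # ratio ~1.5, versus sqrt(3) ~ 1.7 in pure de Sitter
```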

So from a string theoretic point of view, excitations of the dark energy field are fundamental. Matter particles are bound states of these excitations, particles move freely and have much lower entropy. Matter creation removes both energy and entropy from the dark energy medium. General relativity describes the response of area law entanglement of the vacuum to matter (but does not take into account volume entanglement).

Verlinde proposes that dark energy (Λ) and the accelerated expansion of the universe are due to the slow rate at which the emergent spacetime thermalizes. The time scale for the dynamics is 1/H, and a distance scale of c/H is natural; we are measuring the time scale for thermalization when we measure H. High degeneracy and slow equilibration mean the universe is not in a ground state, and thus there should be a volume contribution to entropy.

When the surface mass density falls below $c \cdot H / (8 \pi \cdot G)$, things change; Verlinde states that the spacetime medium becomes elastic. The effective additional ‘dark’ gravity is proportional to the square root of the ordinary matter (baryon) density and also to the square root of the characteristic acceleration $c \cdot H$.

This dark gravity additional acceleration satisfies the equation $g_D = \sqrt{a_0 \cdot g_B / 6}$, where $g_B$ is the usual Newtonian acceleration due to baryons and $a_0 = c \cdot H$ is the dark gravity characteristic acceleration. The total gravity is $g = g_B + g_D$. For large accelerations this reduces to the usual $g_B$, and for very low accelerations it reduces to $\sqrt{a_0 \cdot g_B / 6}$.
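A few lines of Python make the two limits concrete, using the dark gravity term quoted in the text and $a_0 \approx 6 \cdot 10^{-8}$ cm/s²:

```python
import math

# Verlinde's dark-gravity term, g_D = sqrt(a0 * g_B / 6), added to the
# Newtonian (baryonic) acceleration g_B. a0 = c*H is the characteristic scale.
a0 = 6.0e-8  # c * H, cm/s^2

def total_gravity(g_B):
    """Total acceleration g = g_B + g_D, in cm/s^2."""
    g_D = math.sqrt(a0 * g_B / 6.0)
    return g_B + g_D

# At high accelerations the dark term is negligible...
print(total_gravity(980.0) / 980.0)    # ~1.000003: ordinary Newtonian gravity
# ...while at g_B = a0/6 = 1e-8 cm/s^2 the dark term equals the Newtonian one.
print(total_gravity(1.0e-8) / 1.0e-8)  # ~2: gravity is double the expectation
```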

The value $a_0/6$ at $1 \cdot 10^{-8}$ cm per sec per sec derived from first principles by Verlinde is quite close to the MOND value of Milgrom, determined from galactic rotation curve observations, of $1.2 \cdot 10^{-8}$ cm per sec per sec.

So suppose we are in a region where $g_B$ is only $1 \cdot 10^{-8}$ cm per sec per sec. Then $g_D$ takes the same value, and the total gravity is double what is expected. Since the square of the orbital velocity is proportional to the acceleration, the orbital velocity is observed to be $\sqrt{2}$ times higher than expected.

In terms of gravitational potential, the usual Newtonian potential goes as 1/r, resulting in a $1/r^2$ force law, whereas for very low accelerations the potential now goes as $\log(r)$ and the resultant force law is 1/r. We emphasize that while the appearance of dark matter is spoofed, there is no dark matter in this scenario; the reality is additional dark gravity due to the volume contribution to the entropy (which is displaced by ordinary baryonic matter).

Flat to rising rotation curve for the galaxy M33

Dark matter was first proposed by Swiss astronomer Fritz Zwicky when he observed the Coma Cluster and the high velocity dispersions of its constituent galaxies. He suggested the term dark matter (“dunkle Materie”). Horace Babcock in 1939 measured the rotation curve for the Andromeda galaxy and found it to be flat, also suggestive of dark matter (or dark gravity). Decades later, in the 1970s and 1980s, Vera Rubin (who recently passed away) and others mapped many rotation curves for galaxies and saw the same behavior. She herself preferred the idea of a deviation from general relativity over an explanation based on exotic dark matter particles. One needs about 5 times more matter, or about 5 times more gravity, to explain these curves.

Verlinde is also able to derive the Tully-Fisher relation by modeling the entropy displacement of a dS space. The Tully-Fisher relation is the strong observed correlation between galaxy luminosity and rotation velocity (or emission line width) for spiral galaxies, $L \propto v^4$. With Newtonian gravity one would expect $M \propto v^2$. And since luminosity is essentially proportional to ordinary matter in a galaxy, there is a clear deviation by a ratio of v².

Apparent distribution of spoofed dark matter,  for a given ordinary (baryonic) matter distribution

When one moves to the scale of clusters of galaxies, MOND is only partially successful, coming up shy by a factor of about 2 in explaining the apparent mass discrepancy. Verlinde’s emergent gravity does better. By modeling a general mass distribution he gains a factor of 2 to 3 relative to MOND, and it appears that he can explain the velocity distribution of galaxies in rich clusters without the need to resort to any dark matter whatsoever.

And, impressively, he is able to calculate what the apparent dark matter ratio should be in the universe as a whole. The value is $\Omega_D^2 = (4/3) \Omega_B$ where $\Omega_D$ is the apparent mass-energy fraction in dark matter and $\Omega_B$ is the actual baryon mass density fraction. Both are expressed normalized to the critical density determined from the square of the Hubble parameter, $8 \pi G \rho_c = 3 H^2$.

Plugging in the observed $\Omega_B \approx 0.05$ one obtains $\Omega_D \approx 0.26$, very close to the observed value from the cosmic microwave background observations. The Planck satellite results have the proportions for dark energy, dark matter, ordinary matter as .68, .27, and .05 respectively, assuming the canonical ΛCDM cosmology.
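This headline result takes one line to verify:

```python
import math

# Verlinde's prediction for the apparent dark matter fraction:
# Omega_D^2 = (4/3) * Omega_B, with Omega_B the observed baryon fraction.
Omega_B = 0.05
Omega_D = math.sqrt(4.0 / 3.0 * Omega_B)
print(round(Omega_D, 2))  # ~0.26, close to the Planck value of 0.27
```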

The main approximations Verlinde makes are a fully dS universe and an isolated, static (bound) system with a spherical geometry. He also does not address the issue of galaxy formation from the primordial density perturbations. At first guess, the fact that he can get the right universal $\Omega_D$ suggests this may not be a great problem, but it requires study in detail.

### Breaking News!

Margot Brouwer and co-researchers have just published a test of Verlinde’s emergent gravity with gravitational lensing. Using a sample of over 33,000 galaxies they find that general relativity and emergent gravity can provide an equally statistically good description of the observed weak gravitational lensing. However, emergent gravity does it with essentially no free parameters and thus is a more economical model.

“The observed phenomena that are currently attributed to dark matter are the consequence of the emergent nature of gravity and are caused by an elastic response due to the volume law contribution to the entanglement entropy in our universe.” – Erik Verlinde

#### References

Erik Verlinde 2011 “On the Origin of Gravity and the Laws of Newton” arXiv:1001.0785

Stephen Perrenod, 2013, 2nd edition, “Dark Matter, Dark Energy, Dark Gravity” Amazon, provides the traditional view with ΛCDM  (read Dark Matter chapter with skepticism!)

Erik Verlinde 2016 “Emergent Gravity and the Dark Universe” arXiv:1611.02269v1

Margot Brouwer et al. 2016 “First test of Verlinde’s theory of Emergent Gravity using Weak Gravitational Lensing Measurements” arXiv:1612.03034v

## Modified Newtonian Dynamics – Is there something to it?

You are constantly accelerating. The Earth’s gravity is pulling you downward at g = 9.8 meters per second per second. It wants to take your velocity up to about 10 meters per second after only the first second of free fall. Normally you don’t fall, because the floor is solid due to electromagnetic forces; it is also electromagnetic forces that give your body structural integrity and power your muscles, resisting the pull of gravity.

You are also accelerating due to the Earth’s spin and its revolution about the Sun.

International Space Station, image credit: NASA

Our understanding of gravity comes primarily from these large accelerations, such as the Earth’s pull on ourselves and on satellites, the revolution of the Moon about the Earth, and the planetary orbits about the Sun. We also are able to measure the solar system’s velocity of revolution about the galactic center, but with much lower resolution, since the timescale is of order 1/4 billion years for a single revolution with an orbital radius of about 25,000 light-years!

It becomes more difficult to determine if Newtonian dynamics and general relativity still hold for very low accelerations, or at very large distance scales such as the Sun’s orbit about the galactic center and beyond.

Modified Newtonian Dynamics (MOND) was first proposed by Mordehai Milgrom in the early 1980s as an alternative explanation for flat galaxy rotation curves, which are normally attributed to dark matter. At that time the best evidence for dark matter came from spiral galaxy rotation curves, although the need for dark matter (or some deviation from Newton’s laws) was originally seen by Fritz Zwicky in the 1930s while studying clusters of galaxies.

NGC 3521. Image Credit: ESA/Hubble & NASA and S. Smartt (Queen’s University Belfast); Acknowledgement: Robert Gendler

Galaxy Rotation Curve for M33. Public Domain, By Stefania.deluca – Own work,  https://commons.wikimedia.org/w/index.php?curid=34962949

If general relativity is always correct, and Newton’s laws of gravity are correct for non-relativistic, weak gravity conditions, then one expects the orbital velocities of stars in the outer reaches of galaxies to drop in concert with the fall in light from stars and/or radio emission from interstellar gas, reflecting decreasing baryonic matter density. (Baryonic matter is ordinary matter, dominated by protons and neutrons). As seen in the image above for M33, the orbital velocity does not drop, it continues to rise well past the visible edge of the galaxy.

To first order, assuming a roughly spherical distribution of matter, the square of the velocity at a given distance from the center is proportional to the mass interior to that distance divided by the distance (signifying the gravitational potential), thus

$v^2 \sim G M / r$

where G is the gravitational constant, and M is the galactic mass within a spherical volume of radius r. This potential corresponds to the familiar 1/r² dependence of the force of gravity according to Newton’s laws.  In other words, at the outer edge of a galaxy the velocity of stars should fall as the square root of the increasing distance, for Newtonian dynamics.
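A short sketch makes the Newtonian expectation explicit. The galaxy mass here is an illustrative round number, not a fit to any particular galaxy:

```python
import math

# Newtonian expectation outside the visible galaxy: v = sqrt(G*M/r),
# so the circular velocity should fall as 1/sqrt(r) once essentially
# all of the mass is interior to the orbit.
G     = 6.674e-8   # gravitational constant, cgs units
M_sun = 1.989e33   # solar mass, g
kpc   = 3.086e21   # kiloparsec in cm

M = 1.0e10 * M_sun  # illustrative galaxy mass (not a fit to M33)

def v_newtonian(r_kpc):
    """Circular velocity in km/s at radius r (kpc), all mass assumed interior."""
    return math.sqrt(G * M / (r_kpc * kpc)) / 1.0e5

print(v_newtonian(5.0), v_newtonian(20.0))  # velocity halves from 5 to 20 kpc
```

Observed rotation curves stay flat (or rise) instead of showing this factor-of-two decline.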

Instead, for the vast majority of galaxies studied, it doesn’t – it flattens out, or falls off very slowly with increasing distance, or even continues to rise, as for M33 above. The behavior is roughly as if gravity followed an inverse distance law for the force (1/r) in the outer regions, rather than an inverse square law with distance (1/r²).

So either there is more matter at large distances from galactic centers than expected from the light distribution, or the gravitational law is modified somehow such that gravity is stronger than expected. If there is more matter, it gives off little or no light, and is called unseen, or dark, matter.

It must be emphasized that MOND is completely empirical and phenomenological. It is fit to the observed rotation curves, rather successfully, but is not based on a theoretical construct for gravity. It has a free parameter setting the scale of weak acceleration; below that scale, gravity is stronger than expected. It turns out that this free parameter, a₀, is of the same order as the ‘Hubble acceleration’ c·H. (The Hubble distance c/H is about 14 billion light-years; H has units of inverse time, and the age of the universe is 1/H to within a few percent.)

The Hubble acceleration is approximately 0.7 nanometers/sec/sec, or 2 centimeters/sec per year (a nanometer is a billionth of a meter; sec = second).

Milgrom’s fit to rotation curves found a best-fit value of 0.12 nanometers/sec/sec for a₀, about 1/6 of the Hubble acceleration c·H. This is very small compared to Earth’s surface gravity: the ratio is about 80 billion, the same as the ratio between roughly 2,600 years and one second. So you can imagine how such a small acceleration could have escaped detection for a long time, and why it requires measurements at the extragalactic scale.
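These magnitudes can be checked in a few lines. A sketch only; the round value H₀ = 70 km/s/Mpc is an assumption, and a₀ = 1.2 × 10⁻¹⁰ m/s² is Milgrom’s commonly quoted best-fit scale:

```python
c = 2.998e8              # speed of light, m/s
MPC = 3.086e22           # one megaparsec in meters
H0 = 70e3 / MPC          # assumed Hubble constant, 70 km/s/Mpc, in 1/s
YEAR = 3.156e7           # seconds per year

a_hubble = c * H0        # the 'Hubble acceleration' c*H
a0 = 1.2e-10             # Milgrom's best-fit acceleration scale, m/s^2

print(f"c*H = {a_hubble*1e9:.2f} nm/s^2")              # about 0.7
print(f"a0/(c*H) = {a0/a_hubble:.2f}")                 # about 1/6
print(f"c*H = {a_hubble*YEAR*100:.1f} cm/s per year")  # about 2
print(f"Earth surface gravity / a0 = {9.81/a0:.1e}")   # about 8e10
```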

TeVeS (tensor-vector-scalar) theory is a theoretical construct that modifies gravity relative to general relativity. General relativity is a tensor theory that reduces to Newtonian dynamics for weak gravity. TeVeS has more free parameters than general relativity, but can be constructed in a way that reproduces galaxy rotation curves and MOND-like behavior.

But MOND, and by implication, TeVeS, have a problem. They work well, surprisingly well, at the galactic scale, but come up short for galaxy clusters and for the very largest extragalactic scales as reflected in the spatial density perturbations of the cosmic microwave background radiation. So MOND as formulated doesn’t actually fully eliminate the requirement for dark matter.

Horseshoe shaped Einstein Ring

Image credit: ESA/Hubble and NASA

Any alternative to general relativity also must explain gravitational lensing, for which there are a large number of examples. Typically a background galaxy image is distorted and magnified as its light passes through a galaxy cluster, due to the large gravity of the cluster. MOND proponents do claim to reproduce gravitational lensing in a suitable manner.

Our conclusion about MOND is that it raises interesting questions about gravity at large scales and very low accelerations, but it does not eliminate the requirement for dark matter. It is also very ad hoc. TeVeS gravity is less ad hoc, but still fails to reproduce the observations at the scale of galaxy clusters and above.

Nevertheless the rotational curves of spirals and irregulars are correlated with the visible mass only, which is somewhat strange if there really is dark matter dominating the dynamics. Dark matter models for galaxies depend on dark matter being distributed more broadly than ordinary, baryonic, matter.

In the third article of this series we will take a look at Erik Verlinde’s emergent gravity concept, which can reproduce the Tully-Fisher relation and galaxy rotation curves. It also differs from MOND both in terms of being a theory, although incomplete, rather than empiricism, and apparently in being able to more successfully address the dark matter issues at the scale of galaxy clusters.

References

Wikipedia MOND entry: https://en.wikipedia.org/wiki/Modified_Newtonian_dynamics

M. Milgrom 2013, “Testing the MOND Paradigm of Modified Dynamics with Galaxy-Galaxy Gravitational Lensing” https://arxiv.org/abs/1305.3516

R. Reyes et al. 2010, “Confirmation of general relativity on large scales from weak lensing and galaxy velocities” https://arxiv.org/abs/1003.2185

“In rotating galaxies, distribution of normal matter precisely determines gravitational acceleration” https://www.sciencedaily.com/releases/2016/09/160921085052.htm

## WIMPs or MACHOs or Primordial Black Holes

A decade or more ago, the debate about dark matter was, is it due to WIMPs (weakly interacting massive particles) or MACHOs (massive compact halo objects)? WIMPs would be new exotic particles, while MACHOs are objects formed from ordinary matter but very hard to detect due to their limited electromagnetic radiation emission.

Schwarzenegger (MACHO), not Schwarzschild (Black Holes)

Image credit: Georges Biard, CC BY-SA 3.0

Candidates in the MACHO category such as white dwarf or brown dwarf stars have been ruled out by observational constraints. Black holes formed in the very early universe, dubbed primordial black holes, were thought by many to have been ruled out as well, at least across many mass ranges, such as between the mass of the Moon and the mass of the Sun.

The focus during recent years, and most of the experimental searches, has shifted to WIMPs or other exotic particles (axions or sterile neutrinos primarily). But the WIMPs, which were motivated by supersymmetric extensions to the Standard Model of particle physics, have remained elusive. Most experiments have only placed stricter and stricter limits on their possible abundance and interaction cross-sections. The Large Hadron Collider has not yet found any evidence for supersymmetric particles.

Have primordial black holes (PBHs) as the explanation for dark matter been given short shrift? The recent detections by the LIGO instruments of two gravitational wave events, well explained by black hole mergers, have sparked new interest. A previous blog entry addressed this possibility:

The black holes observed in these events have masses in a range from about 8 to about 36 solar masses, and they could well be primordial.

There are a number of mechanisms to create PBHs in the early universe, prior to the very first second and the beginning of Big Bang nucleosynthesis. At any era, if there is a total mass M confined within a radius R, such that

2GM/R > c²,

then a black hole will form. The above inequality defines the Schwarzschild limit (G is the gravitational constant and c the speed of light). A PBH doesn’t even have to form from matter, whether ordinary or exotic; if the energy and radiation density in a region is high enough, it can also collapse to a black hole.
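A quick check of the criterion, written as a Schwarzschild radius R = 2GM/c² (the masses below are chosen purely for illustration):

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Radius within which mass M must be confined to form a black hole: R = 2*G*M/c^2."""
    return 2 * G * mass_kg / c**2

print(f"1 solar mass: R = {schwarzschild_radius(M_SUN)/1e3:.1f} km")  # about 3 km
print(f"1e12 kg PBH : R = {schwarzschild_radius(1e12):.1e} m")        # roughly proton-sized
```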

Cosmic Strings

Image credit: David Daverio, Université de Genève, CSCS supercomputer simulation data

The mechanisms for PBH creation include:

1. Cosmic string loops – If string theory is correct the very early universe had very long strings and many short loops of strings. These topological defects intersect and form black holes due to the very high density at their intersection points. The black holes could have a broad range of masses.
2. Bubble collisions from symmetry breaking – As the very early universe expanded and cooled, the strong force, weak force and electromagnetic force separated out. Bubbles would nucleate at the time of symmetry breaking as the phase of the universe changed, just as bubbles form in boiling water. Collisions of bubbles could lead to high-density regions and black hole formation. Symmetry breaking at the GUT scale (the separation of the strong force) would yield BHs of mass around 100 kilograms. Symmetry breaking of the weak force from the electromagnetic force would yield BHs with masses around 10^25 kilograms (of order a hundred times the Moon’s mass).
3. Density perturbations – These would be a natural result of the mechanisms in #1 and #2, in any case. When observing the cosmic microwave background radiation, which dates from a time when the universe was only 380,000 years old, we see density perturbations at various scales, with amplitudes of only a few parts in a million. Nevertheless these serve as the seeds for the formation of the first galaxies when the universe was only a few hundred million years old. Some perturbations could be large enough on smaller distance scales to form PBHs ranging from above a solar mass to as high as 100,000 solar masses.

For a PBH to be an effective dark matter contributor, it must have a lifetime longer than the age of the universe. BHs radiate due to Hawking radiation, and thus have finite lifetimes. For stellar mass BHs, the lifetimes are incredibly long, but for smaller BHs the lifetimes are much shorter since the lifetime is proportional to the cube of the BH mass. Thus a minimum mass for PBHs surviving to the present epoch is around a trillion kilograms (a billion tons).
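That minimum surviving mass can be sketched from the standard Hawking evaporation estimate, t ≈ 5120πG²M³/(ħc⁴). This is my back-of-the-envelope check, not a calculation from the referenced papers:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34   # reduced Planck constant, J*s
c = 2.998e8        # speed of light, m/s
AGE_UNIVERSE_S = 13.8e9 * 3.156e7  # ~13.8 billion years in seconds

def hawking_lifetime_s(mass_kg):
    """Standard Hawking evaporation estimate: t = 5120*pi*G^2*M^3 / (hbar*c^4)."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * c**4)

# Mass whose evaporation time equals the age of the universe:
M_min = (AGE_UNIVERSE_S * HBAR * c**4 / (5120 * math.pi * G**2)) ** (1 / 3)
print(f"M_min ~ {M_min:.1e} kg")  # a few 1e11 kg: of order a trillion kg, as above
print(f"t(1e12 kg) ~ {hawking_lifetime_s(1e12):.1e} s, far exceeding the age of the universe")
```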

Carr et al. (paper referenced below) summarized the constraints on what fraction of the matter content of the universe could be in the form of black holes. Traditional black holes, of several solar masses, created by stellar collapse and detectable due to their accretion disks, do not provide enough matter density. Neither do supermassive black holes of over a million solar masses found at the centers of most galaxies. PBHs may be important in seeding the formation of the supermassive black holes, however.

Limits on the PBH abundance in our galaxy and its halo (which is primarily composed of dark matter) are obtained from:

1. Cosmic microwave background measurements
2. Microlensing measurements (gravitational lensing)
3. Gamma-ray background limits
4. Neutral hydrogen clouds in the early universe
5. Wide binaries (disruption limits)

Microlensing surveys such as MACHO and EROS have searched for objects in our galactic halo that act as gravitational lenses for light originating from background stars in the Magellanic Clouds or the Andromeda galaxy. The galactic halo is composed primarily of dark matter.

A couple of dozen objects with less than a solar mass have been detected. Based on these surveys, the fraction of dark matter that can be PBHs of less than a solar mass is at most 10%. The constraints from 1 solar mass up to 30 solar masses are weaker, and a PBH explanation for most of the galactic halo mass remains possible.

Similar studies conducted toward distant quasars and compact radio sources address the constraints in the supermassive black hole domain, apparently ruling out PBHs with masses from 1 million to 100 million solar masses as an explanation.

Lyman-alpha clouds are neutral hydrogen clouds (Lyman-alpha is an important ultraviolet absorption line for hydrogen) that are found in the early universe at redshifts above 4. Simulations of the effect of PBH number density fluctuations on the distribution of Lyman-alpha clouds appear to limit the PBH contribution to dark matter for a characteristic PBH mass above 10,000 solar masses.

Distortions in the cosmic microwave background are expected if PBHs above 10 solar masses contributed substantially to the dark matter component. However, these limits assume that PBH masses do not change. Merging and accretion after the recombination era, when the cosmic microwave background was emitted, can allow a population of PBHs that were initially less than a solar mass to evolve into one dominated by PBHs of tens, hundreds, and thousands of solar masses today. This could be a way around some of the limits apparently placed by the cosmic microwave background temperature fluctuations.

Thus it appears there could be a window in the region of 30 to several thousand solar masses for PBHs as an explanation of cold dark matter.

As the Advanced LIGO gravitational wave detectors come on line, we expect many more black hole merger discoveries that will help to elucidate the nature of primordial black holes and the possibility that they contribute substantially to the dark matter component of our Milky Way galaxy and the universe.

References

B. Carr, K. Kohri, Y. Sendouda, J. Yokoyama, 2010 arxiv.org/pdf/0912.5297v2 “New cosmological constraints on primordial black holes”

S. Clesse and J. Garcia-Bellido, 2015 arxiv.org/pdf/1501.07565v1.pdf “Massive Primordial Black Holes from Hybrid Inflation as Dark Matter and the Seeds of Galaxies”

P. Frampton, 2015 arxiv.org/pdf/1511.08801.pdf “The Primordial Black Hole Mass Range”

P. Frampton, 2016 arxiv.org/pdf/1510.00400v7.pdf “Searching for Dark Matter Constituents with Many Solar Masses”

Green, A., 2011 https://www.mpifr-bonn.mpg.de/1360865/3rd_WG_Green.pdf “Primordial Black Hole Formation”

P. Pani, and A. Loeb, 2014 http://xxx.lanl.gov/pdf/1401.3025v1.pdf “Exclusion of the remaining mass window for primordial black holes as the dominant constituent of dark matter”


## Dark Lenses Magnify Star Formation in Dusty Galaxies

Dusty star-forming galaxies (DSFGs) are found in abundance in the early universe. They are especially bright because they are experiencing a large burst of high-rate star formation. Since they are mainly at higher redshifts, we are seeing them well in the past; the high star formation rates occur typically during the early life of a galaxy.

The optical light from new and existing stars in such galaxies is heavily absorbed by interstellar dust interior to the galaxy. The dust is quite cold, normally well below 100 Kelvins. It reradiates the absorbed energy thermally at low temperatures. As a result the galaxy becomes bright in the infrared and far infrared portions of the spectrum.

Dark matter has two roles here. First of all, each dusty star-forming galaxy would have formed from a “halo” dominated by dark matter. Secondly, dark matter lenses magnify the DSFGs significantly, allowing us to observe them and get decent measurements in the first place.

An international team of 27 astronomers has observed half a dozen DSFGs at 3.6 micron and 4.5 micron infrared wavelengths with the space-borne Spitzer telescope. These objects were originally identified at far infrared wavelengths with the Herschel telescope. Combining the infrared and far infrared measurements allows the researchers to determine the galaxy stellar masses and the star formation rates.

The six DSFGs observed by the team have redshifts ranging from 1.0 to 3.3 (corresponding to lookback times of roughly 8 to 12 billion years). Each of the 6 DSFGs has been magnified by an “Einstein” lens. The lensing is due to intervening foreground galaxies, which are also dominated by dark matter and thus possess gravitational fields strong enough to significantly deflect and magnify the DSFG images. Each of the 6 DSFGs is therefore magnified by a lens that is mostly dark.

The lenses can make the images of the DSFGs appear ring-shaped or arc-shaped. Multiple images are also possible. The magnification factors are quite large, ranging from a factor of 4 to more than 16. (Without dark matter’s contribution the magnification would be very much less.)

It is a delicate process to subtract out the foreground galaxy, which is much brighter. The authors build a model for the foreground galaxy light profile and gravitational lensing effect in each case. They remove the light from the foreground galaxy computationally in order to reveal the residual light from the background DSFG. And they calculate the magnification factors so that they can determine the intrinsic luminosity of the DSFGs.

The stellar masses for these 6 DSFGs are found to be in the range of 80 to 400 billion solar masses, and their star formation rates are in the range of 100 to 500 solar masses per year.

One of the 6 galaxies, nicknamed HLock12, is shown in the Spitzer infrared image below, along with the foreground galaxy. The model of the foreground galaxy is subtracted out, such that in the rightmost panes, the DSFG image is more apparent. There are two rows of images, the top row shows measurements at 3.6 microns, and the bottom row is for observations at 4.5 microns.

This particular DSFG among the six was found to have a stellar mass of 300 billion solar masses and a total mass in dust of 3 billion solar masses. So the dust component is just about 1% of the stellar component. The estimated star formation rate is 500 solar masses per year, which is hundreds of times larger than the current star formation rate in our own Milky Way galaxy.

It is only because of the significant magnification through gravitational lensing (“dark lenses”) that researchers are able to obtain good measurements of these DSFGs. This lensing due to intervening dark matter allows astronomers to advance our understanding of galaxy formation and early evolution, much more quickly than would otherwise be possible.

Figure 6 is from the paper referenced below. The top row shows (a) a Hubble telescope image of the field in the near infrared at 1.1 microns, and (b) the field at 3.6 microns from the Spitzer telescope. The arc is quite visible in the Hubble image in the upper right quadrant, just adjacent to the foreground galaxy in the center. The model for the foreground galaxy is in column (c), and after subtraction the background galaxy image is in column (d), along with several other faint objects. The corresponding images in the bottom row are from Spitzer observations at 4.5 microns.

Reference

B. Ma et al. 2015, “Spitzer Imaging of Strongly-lensed Herschel-selected Dusty Star Forming Galaxies” http://arxiv.org/pdf/1504.05254v3.pdf

## Dusty Star-Forming Galaxies Brightened by Dark Matter

The first galaxies were formed within the first billion years of the Universe’s history. Our Milky Way galaxy contains very old stars with ages indicating formation around 500 or 600 million years after the Big Bang.

Astronomers are very eager to study galaxies in the early universe, in order to understand galaxy formation and evolution. They can do this by looking at the most distant galaxies. With the expanding universe of the Big Bang, the farther away a galaxy is, the farther back in time we are looking. Astronomers often use redshift to measure the distance, and hence age, of a galaxy. The larger the redshift, z, the farther back in time, and the closer to a galaxy’s birth and the universe’s birth.

The interstellar medium of a galaxy consists of gas and dust. The gas can be hot or cold, and in atomic or molecular form. Atomic gas may be ionized by ultraviolet starlight, or X-radiation from neutron stars or black holes (not the black holes themselves, but hot matter near the black hole), from cosmic rays or from other astrophysical mechanisms. Our Milky Way galaxy is rich in gas and dust, and contains thousands of molecular clouds. These are very cold clouds composed mainly of molecular hydrogen but also many other molecular species. Molecular clouds are the primary sites of new star formation. The Horsehead Nebula is an example of a molecular cloud in the constellation of Orion.

“Hubble Sees a Horsehead of a Different Color” by ESA/Hubble. Licensed under CC BY 3.0 via Wikimedia Commons

During their most active phase of star formation, a large galaxy might give birth to over 1000 solar masses worth of stars per year. By comparison, in the Milky Way galaxy, the new star formation rate is only of order 1 solar mass per year, the equivalent of 1 Sun, or, say, 2 stars with half the mass of our Sun, per annum. Over its entire 13 billion year life the Milky Way has formed many hundreds of billions of stars, so clearly the star formation rate was higher in the past.

Before the first stars and galaxies formed, the universe contained only hydrogen and helium (plus a trace of lithium), and no heavier elements. Those are produced by thermonuclear reactions in stellar interiors. This is a wonderful thing, because carbon, oxygen and other heavy elements are essential to life.

After a galaxy produces its first generation of massive stars, its interstellar medium will begin to contain carbon, nitrogen, oxygen and other heavy elements (heavy means anything above helium, in this context). Massive stars (above a few solar masses) evolve rapidly, with timescales in the millions of years, rather than billions, and explode as supernovae at the end of their lives. A large portion of their material, now containing heavy elements as well as hydrogen and helium, is expelled at high velocity and mixed into the interstellar medium. The carbon, nitrogen and oxygen which is then in the respective galaxy’s interstellar medium can be detected in atomic (including ionized) or molecular forms. The relative abundance of heavy elements grows with time as more stars are formed, evolve, and recycle matter into the interstellar medium.

High-redshift (z > 2) galaxies with active star formation are best observed in the infrared. The gas and dust in molecular clouds is quite cold, usually less than 100 K (100 degrees above absolute zero). And their radiation is shifted further toward the far infrared and sub-millimeter portions of the spectrum by the redshift factor of 1+z. So radiation emitted at 100 microns is detected at the Earth at 400 microns for a source at z = 3.
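The (1 + z) stretch in the example above amounts to a one-line calculation:

```python
def observed_wavelength_um(emitted_um, z):
    """Cosmological redshift stretches all wavelengths by a factor of (1 + z)."""
    return emitted_um * (1 + z)

# 100 microns emitted at z = 3 arrives at 400 microns, in the sub-millimeter band.
print(observed_wavelength_um(100, 3))
```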

These are difficult measurements to make, because if the galaxy is very distant, it is also very faint. However the possibility of getting good measurements is helped by two things. One is that galaxies with very active star formation are intrinsically brighter.

And the other reason is that intervening clusters of galaxies are massive and contain mostly dark matter. As we look far back through the universe toward an early galaxy, there is a good chance that the line of sight passes through a cluster of galaxies. Clusters of galaxies contain hundreds or even thousands of galaxies, and are dominated by dark matter. Most of the infrared radiation can pass through the intracluster medium – the space between galaxies – without being absorbed; it does not interact with dark matter. The clusters are sufficiently massive to bend the light, however, according to general relativity. As the background galaxy’s light passes through the cluster during its multi-billion year journey to the Earth and our telescopes, the cluster’s gravitational potential modifies the light ray’s path. Actually the intervening cluster of galaxies does more than displace the light, it acts as a lens, causing the image to brighten by as much as 10 times or more. This makes it much easier to gather enough photons from the target galaxy to obtain good quality results.

An international research team with participants from Germany, the U.S., Chile, the U.K. and Canada has identified 20 high redshift “dusty star forming galaxies” at very high redshift (DSFG is a technical term for galaxies with high star formation rates and lots of dust) from the South Pole Telescope infrared galaxy survey. They have been able to further elucidate the nature of 17 of these early galaxies by measuring C II emission from singly ionized atomic carbon, and CO emission from carbon monoxide molecules for 11 of those. They have also determined the total far infrared luminosity for these target galaxies. Their results allow them to place constraints on the nature of the interstellar medium and the properties of molecular clouds.

The galaxies’ high redshifts, ranging from z = 2.1 to 5.7, actually make it possible to take ground-based measurements in most cases. At lower redshifts the observations would not be possible from the ground, because the Earth’s atmosphere is highly opaque at the observation frequencies. But the atmosphere is much more transparent at longer wavelengths, so as the redshift exceeds z = 3 the emission is shifted into accessible bands, and Earth-based observations become possible from favorable locations, in this case the Chilean desert. For three sources with redshifts around 2, the atmosphere prohibits ground-based observations, and the team therefore made observations from the Herschel Space Telescope, designed for infrared work.

Figure 3 below is taken from their paper. It plots the redshift z on the x-axis (logarithmically) and the far infrared luminosity of each galaxy on the y-axis (also as a log). The 17 galaxies studied by the authors are indicated with red dots and labelled “SPT DSFGs”. Their very high luminosities are in the range of 10 to 100 trillion times the Sun’s luminosity. Note that the luminosities must be very high for detection at such a high redshift (distance from Earth). Also, these luminosities are uncorrected for the lensing magnification, so the true luminosities are around an order of magnitude lower.

The redshift range covered in this research corresponds to ages for the universe of around 1 billion years old (z = 5.7) to a little over 3 billion years old (z = 2.1). So the lookback time is roughly 11 to 13 billion years.

For those of us interested in dark matter, their findings regarding the degree of magnification by dark matter are also interesting. They find “strong lensing” or magnification in the range of 5 to 21 times for 4 sources that allowed for lens modeling. The other sources do not have magnifications measured, but they are presumed to be of the same order of magnitude of around 10 times or so, to within a factor of 2 either way.

It is only because the lensing is so substantial that they are able to measure these galaxies with sufficient fidelity to arrive at their results. So not only is dark matter key to galaxy formation and evolution, it is key to allowing us to study galaxies in the early universe. Dark matter forms galaxies and then helps us understand how they form!

Reference

B. Gullberg et al. 2015, ”The nature of the [CII] emission in dusty star-forming galaxies from the SPT survey” to be published, Monthly Notices of the Royal Astronomical Society, http://arxiv.org/pdf/1501.06909v2.pdf

C.M. Casey, D. Narayanan, A. Cooray 2015, “Dusty Star-Forming Galaxies at High Redshift”, http://arxiv.org/abs/1402.1456

## X-raying Dark Matter

I was at the dentist this week. Don’t ask, but they took 3 digital X-rays.

One of the most significant methods by which we detect the presence of dark matter is through the use of X-ray telescopes. The energy associated with these X-rays is typically around an order of magnitude less than those zapped into your mouth when you visit the dentist.

Around 50 years ago scientists at American Science and Engineering flew the first imaging X-ray telescope on a small rocket. Later, I worked part-time at AS&E, as we called it, while in graduate school. One major project was a solar X-ray telescope mounted on Skylab, America’s first space station. This gave me the wonderful opportunity to work in the control rooms at the NASA Johnson Space Center in Houston.

X-rays are absorbed in the Earth’s atmosphere, so today X-ray astronomy is performed from orbiting satellites. X-ray telescopes use the principle of grazing incidence reflection; the X-rays impinge at shallow angles onto gold or iridium-coated metallic surfaces and are reflected to the focal plane and the detector electronics.

Schematic of grazing incidence mirrors used in the Chandra X-ray Observatory. Credit NASA/CXC/SAO; obtained from chandra.harvard.edu.

How does dark matter result in X-rays being produced? Indirectly, as a consequence of its gravitational effects.

One of the main mechanisms for X-ray production in the universe is known as thermal bremsstrahlung. Bremsstrahlung is a German word meaning ‘braking radiation’. A gas hot enough to give off X-rays will be ionized. That is, the electrons will be stripped from the nuclei and move about freely. As electrons fly past ions (protons and helium nuclei primarily), the electromagnetic interaction between them transfers some of the electrons’ kinetic energy to radiation.

The speed at which the electrons are moving determines how energetic the produced photons will be. We talk about the temperature of such an ionized gas, which is proportional to the square of the average speed of the electrons. A gas with a temperature of 10 million degrees will give off approximately 1 kiloelectronvolt (hereafter keV) X-rays, and a gas with a temperature of 100 million degrees will radiate roughly 10 keV X-rays. One eV corresponds to 11,605 degrees Kelvin (or we can just say kelvins).
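The temperature-to-energy conversion is worth seeing explicitly; this sketch just applies the 11,605 K per eV factor from the text:

```python
K_PER_EV = 11605.0  # kelvins per electronvolt of thermal energy

def thermal_energy_keV(temperature_K):
    """Characteristic photon energy kT of a hot plasma, expressed in keV."""
    return temperature_K / K_PER_EV / 1000.0

print(f"{thermal_energy_keV(1e7):.2f} keV")  # 10 million K  -> ~0.9 keV
print(f"{thermal_energy_keV(1e8):.2f} keV")  # 100 million K -> ~8.6 keV
```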

Chandra X-ray Observatory prior to launch in the Space Shuttle Columbia in 1999. NASA image.

So how can we produce gas hot enough to give off X-rays by this mechanism? Gravity, and lots of it. The potential energy of the gravitational field is proportional to the amount of matter (total mass) coalesced into a region and inversely proportional to the characteristic scale of that region. GM/R, simple Newtonian mechanics, is sufficient; no general relativistic calculation is needed at this point. G is the gravitational constant, and M and R are the cluster mass and characteristic radius, respectively.

A lot of mass in a confined region – how about large groups and clusters of galaxies? It turns out a rich cluster of order 1000 galaxies will do the trick, but only because there is dark matter as well as ordinary matter. There are three main matter components to consider: galaxies, hot intracluster gas found between galaxies, and dark matter. The cluster forms by gravitational self-collapse from a region that was of above-average density in the early universe. All the overdense regions are subject to collapse.

The “Bullet Cluster” is actually two colliding clusters. The bluish color shows the distribution of dark matter as determined from the gravitational lensing effect on background galaxy images. The reddish color depicts the hot X-ray emitting gas measured by the Chandra X-ray Observatory.

(X-ray: NASA/CXC/CfA/M.Markevitch Optical: NASA/STScI; Magellan/U.Arizona/D.Clowe Lensing Map: NASA/STScI; ESO WFI; Magellan/U.Arizona/D.Clowe)

The optically visible galaxies are the least important contributor to the cluster mass, only around 1%! Galaxy clusters are made of dark matter much more than they are made out of galaxies. And secondarily, they are made out of hot gas. The ordinary matter contained within galaxies is only the third most important component. The table below gives the typical 90 / 9 / 1 proportions for dark matter, hot gas, and galaxies, respectively.

Three main components of a galaxy cluster (Table derived from Wikipedia article on galaxy clusters)

| Component | Mass fraction | Description |
| --- | --- | --- |
| Galaxies | 1% | Optical/infrared observations |
| Intergalactic gas | 9% | High-temperature ionized gas (thermal bremsstrahlung) |
| Dark matter | 90% | Dominates; inferred through gravitational interactions |

The intracluster gas has two sources. A major portion of it is primordial gas that never formed galaxies, but falls into the gravitational potential well of the cluster. As it falls in toward the cluster center, it heats. The kinetic energy of infall is converted to random motions of the ionized gas. An additional portion of the gas is recycled material expelled from galaxies. It mixes with the primordial gas and heats up as well through frictional processes. The gas is supported against further collapse by its own pressure as the density and temperature increase in the cluster core.

The temperature which characterizes the X-ray emission is a measure of gravitational potential strength, proportional to the ratio of the mass of the cluster to its size. Typical X-ray temperatures measured for rich clusters are around 3 to 12 keV, which corresponds to temperatures in the range of 30 to 130 million kelvins.

There is another way to measure the strength of the cluster’s gravitational potential well: measuring the speeds of galaxies as they move about in somewhat random fashion inside the cluster. The assumption, which is valid for well-formed clusters that have been around for billions of years, is that the galaxies are not just falling toward the center of the cluster, but that their motions are “virialized”. This is the method used by Fritz Zwicky in the 1930s for the original discovery of dark matter. He found that in a certain well-known cluster, the Coma cluster, the average speed of galaxies relative to the cluster centroid was of order 1000 kilometers/sec, much higher than the expected 300 km/sec based on the visible light from the cluster galaxies. This implied 10 times as much dark matter as galactic matter. This early, rather crude measurement was on the right track, but fell short of the actual ratio of dark matter to galactic matter, since we now know that galaxies themselves have large dark matter halos. The X-ray emission from clusters was discovered much later, starting in the 1970s.
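A Zwicky-style virial estimate can be sketched in a few lines. The velocity dispersion and radius below are illustrative Coma-like round numbers, not Zwicky’s actual measurements:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
MPC = 3.086e22     # one megaparsec in meters

def virial_mass_kg(sigma_m_s, radius_m):
    """Order-of-magnitude virial mass of a cluster: M ~ sigma^2 * R / G."""
    return sigma_m_s**2 * radius_m / G

# Coma-like illustrative values: sigma ~ 1000 km/s, radius ~ 2 Mpc
M = virial_mass_kg(1.0e6, 2 * MPC)
print(f"M ~ {M / M_SUN:.1e} solar masses")  # a few 1e14: a rich-cluster mass

# Since M scales as sigma^2, measuring ~1000 km/s where ~300 km/s was expected
# implies roughly (1000/300)^2 ~ 11 times more mass than the galaxies alone.
print(f"(1000/300)^2 = {(1000/300)**2:.0f}")
```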

The two methods of measuring the amount of dark matter in galaxy clusters generally agree. Both the galaxies and the hot intracluster gas act as tracers of the overall mass distribution, which is dominated by dark matter. Galaxy clusters play a major role in advancing our understanding of dark matter and how it affects the formation and evolution of galaxies.

In fact, if dark matter were not 5 times as abundant by mass as ordinary matter, most galaxy clusters would never have formed, and galaxies such as our own Milky Way would be much smaller.

References

Wikipedia article “galaxy clusters”.

“X-ray Temperatures of Distant Clusters of Galaxies”, S. C. Perrenod, J. P. Henry 1981, Astrophysical Journal Letters, vol. 247, p. L1-L4.

“The X-ray Luminosity – Velocity Dispersion Relation in the REFLEX Cluster Survey”, A. Ortiz-Gil, L. Guzzo, P. Schuecker, H. Boehringer, C. A. Collins 2004, Monthly Notices of the Royal Astronomical Society, vol. 348, p. 325; http://arxiv.org/abs/astro-ph/0311120v1

## Super Colliders in Space: Dark Matter not Colliding

What’s bigger and more powerful than the Large Hadron Collider at CERN? Why, colliding galaxy clusters, of course.

A cluster of galaxies consists of hundreds or even thousands of galaxies bound together by their mutual gravitation. Both dark matter and ordinary matter, in and between galaxies, are responsible for the gravitational field of a cluster. And typically there is about 5 times as much dark matter as ordinary matter. The main component of ordinary matter is hot intracluster gas; only a small percentage of the mass is locked up in stars.

One stunning example of dark matter detection is the Bullet Cluster. This is the canonical example revealing dark matter separating from ordinary matter in a pair of clusters colliding and merging. The dark matter just passes right through, apparently unaffected by the collision. The hot gas (ordinary matter) is seen through its X-ray emission, since the gas is heated by collisions to of order 100 million degrees. The Chandra X-ray Observatory satellite provided these measurements.

Bullet Cluster. The blue color shows the distribution of dark matter, which passed through the collision without slowing down. The purple color shows the hot X-ray emitting gas. Image courtesy of Chandra X-ray Observatory

The overall distribution of matter in the Bullet Cluster and other clusters is traced by gravitational lensing effects; general relativity tells us that background galaxies will have their images displaced, distorted, and magnified as their light passes through a cluster on its way to Earth. The magnitude of these effects can be used to “weigh” the dark matter. These measurements are made with the Hubble Space Telescope.
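As a rough illustration of how lensing “weighs” a cluster, the Einstein radius of a simple point-mass lens grows as the square root of the lens mass, so the size of the lensing distortions encodes the mass. The 10¹⁵ solar-mass cluster and the gigaparsec distances below are assumed, illustrative values, not measurements from the text:

```python
import math

# Einstein radius of a point-mass lens (a crude stand-in for a cluster core):
# theta_E = sqrt( (4 G M / c^2) * D_ds / (D_d * D_s) )
G, C = 6.674e-11, 2.998e8
M_SUN, GPC = 1.989e30, 3.086e25   # kg; metres per gigaparsec
ARCSEC_PER_RAD = 206265.0

def einstein_radius_arcsec(mass_msun, d_d_gpc, d_s_gpc, d_ds_gpc):
    """Einstein radius in arcseconds for a point lens with the given distances."""
    m = mass_msun * M_SUN
    ratio = (d_ds_gpc * GPC) / ((d_d_gpc * GPC) * (d_s_gpc * GPC))
    return math.sqrt(4 * G * m / C**2 * ratio) * ARCSEC_PER_RAD

# A 1e15 solar-mass cluster at 1 Gpc lensing a background source at 2 Gpc:
print(f"Einstein radius: about {einstein_radius_arcsec(1e15, 1.0, 2.0, 1.0):.0f} arcsec")
```

Tens of arcseconds is indeed the scale of the lensing arcs seen around massive clusters.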

In the Bullet Cluster the dark matter is displaced from the ordinary matter. The interpretation is that the ordinary matter from the two clusters, principally in the form of hot gas, is slowed by frictional, collisional processes as the clusters interact and form a larger single cluster of galaxies. Another six or so examples of galaxy clusters showing the displacement between the dark matter and the ordinary matter in gas and stars have been found to date.

Now, a team of astrophysicists based in the U.K. and Switzerland has examined 30 additional galaxy clusters, with data from both Chandra and Hubble, at redshifts typically 0.2 to 0.6. In aggregate there are 72 collisions in the 30 systems, since some have more than two subclusters. The offsets between the gas and the dark matter are quite substantial, and in aggregate they indicate the existence of dark matter in these clusters at over 7 standard deviations of statistical significance (the probability of the null hypothesis of no dark matter is about 1 in 30 trillion).
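The quoted odds follow from the Gaussian tail probability for an n-sigma result; a minimal sketch, assuming the conventional one-tailed detection significance:

```python
import math

# One-tailed Gaussian tail probability for an n-sigma detection.
def sigma_to_p(n_sigma):
    """P(Z > n_sigma) for a standard normal variable Z."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

for n in (7.0, 7.5):
    p = sigma_to_p(n)
    print(f"{n} sigma corresponds to p of about {p:.1e}, i.e. 1 in {1/p:.0e}")
```

At about 7.5 sigma the tail probability is a few times 10⁻¹⁴, i.e. roughly 1 in 30 trillion, matching the figure above.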

They then consider a possible drag force on the dark matter due to dark matter particles colliding with other dark matter particles. There are already much more severe constraints on ordinary matter – dark matter interactions from Earth-based laboratory measurements, but the dark matter mutual collision cross-section could potentially be large enough to produce a drag. They measure the relative positions of the hot gas, the galaxies, and the dark matter for all 72 subclusters.

From the paper “The non-gravitational interactions of dark matter in colliding galaxy clusters”, D. Harvey et al. 2015

The gas should lag the most relative to the direction of motion of the galaxies in a collision, and it does. If there were a dark matter drag, the dark matter should lag behind the positions of the stars. They find no lag of the dark matter’s average position, which allows them to place a new, tighter constraint on the mutual interaction cross-section for dark matter.

Their constraint is σ(DM)/m < 0.47 cm²/g at the 95% confidence level, where σ (sigma) is the cross-section and m is the mass of a single dark matter particle. This limit is more than twice as tight as that previously obtained from the Bullet Cluster. And some dark matter models predict a cross-section per unit mass of 0.6 cm²/g, so these models are potentially ruled out by the new measurements.

In summary, using Nature’s massive particle colliders, the authors have found further highly significant evidence for the existence of dark matter in clusters of galaxies, and they have placed useful constraints on the dark matter self-interaction cross-section. Dark matter continues to be highly elusive.

Reference:

D. Harvey et al. 2015 “The non-gravitational interactions of dark matter in colliding galaxy clusters” http://arxiv.org/pdf/1503.07675v1.pdf

## Caught in the Cosmic Web – Dark Matter Structure Revealed

NASA/ESA Hubblecast 58

This video reports on a very impressive research effort: the first 3-D mapping of dark matter for a galaxy cluster. A massive galaxy cluster over 5 billion light-years from Earth is the first to have a full 3-dimensional map of its dark matter distribution, the dominant component of the cluster’s mass. The cluster, known as MACS J0717, is still in the formation stage. The Hubble Space Telescope and a number of ground-based telescopes on Mauna Kea in Hawaii were used to determine the spatial distribution. The longest filament of dark matter discovered by the international team of astronomers stretches across 60 million light-years. Gravitational lensing of galaxy images (as Einstein predicted) and redshift measurements for a large number of galaxies were required to uncover the 3-D shape and characteristics of the filament.