
Emergent Gravity: Verlinde’s Proposal

In a previous blog entry I gave some background on Erik Verlinde’s proposal for an emergent, thermodynamic basis of gravity. Gravity remains mysterious 100 years after Einstein’s introduction of general relativity: it is extremely weak relative to the other main forces, and general relativity, being a classical theory, offers no quantum mechanical description of it.

One reason it may be so weak is that it is not fundamental at all, but instead represents a statistical, emergent phenomenon. There has been increasing research into the ideas of emergent spacetime and emergent gravity, and the most interesting proposal was recently introduced by Erik Verlinde of the University of Amsterdam in the paper “Emergent Gravity and the Dark Universe”.

A lot of work has been done assuming anti-de Sitter (AdS) spaces with negative cosmological constant Λ – just because it is easier to work under that assumption. This year, Verlinde extended this work from the unrealistic AdS model of the universe to a more realistic de Sitter (dS) model. Our runaway universe is approaching a dark energy dominated dS solution with a positive cosmological constant Λ.

The background assumption is that quantum entanglement dictates the structure of spacetime, and its entropy and information content. Quantum states of entangled particles are coherent: observing a property of one, say its spin orientation, tells you about the other particle’s attributes. This has been confirmed in long-distance experiments with separations exceeding 100 kilometers.

If space is defined by the connectivity of quantum entangled particles, then it becomes almost natural to consider gravity as an emergent statistical attribute of the spacetime. After all, we learned from general relativity that “matter tells space how to curve, curved space tells matter how to move” – John Wheeler.

What if entanglement tells space how to curve, and curved space tells matter how to move? What if gravity is due to the entropy of the entanglement? Actually, in Verlinde’s proposal, the entanglement entropy from particles is minor, it’s the entanglement of the vacuum state, of dark energy, that dominates, and by a very large factor.

One analogy is thermodynamics, which allows us to represent the bulk properties of the atmosphere that is nothing but a collection of a very large number of molecules and their micro-states. Verlinde posits that the information and entropy content of space are due to the excitations of the vacuum state, which is manifest as dark energy.

The connection between gravity and thermodynamics has been explored for three decades, through research on black holes and through string theory. Jacob Bekenstein and Stephen Hawking determined that a black hole possesses entropy proportional to its surface area divided by the gravitational constant G. String theory derives the same formula for quantum entanglement in a vacuum; this is known as the AdS/CFT (conformal field theory) correspondence.

So in the AdS model, gravity is emergent and its strength, the acceleration at a surface, is determined by the mass density on that surface surrounding matter with mass M. This is just the inverse square law of Newton. In the more realistic dS model, the entropy in the volume, or bulk, must also be considered. (This is the Gibbs entropy relevant to excited states, not the Boltzmann entropy of a ground state configuration).

Newtonian dynamics and general relativity can be derived from the surface entropy alone, but do not reflect the volume contribution. The volume contribution adds an additional term to the equations, strengthening gravity over what is expected, and as a result, the existence of dark matter is ‘spoofed’. But there is no dark matter in this view, just stronger gravity than expected.

This is what the proponents of MOND have been saying all along. Mordehai Milgrom observed that galactic rotation curves go flat at a characteristic low acceleration scale of order 2 centimeters per second per year. MOND is phenomenological: it fits a trend in galaxy rotation curves, but it has no theoretical foundation.

Verlinde’s proposal is not MOND, but it provides a theoretical basis for behavior along the lines of what MOND states.

Now the volume in question turns out to be of order the Hubble volume, the volume of a sphere of radius c/H, where H is the Hubble parameter denoting the rate at which galaxies recede from one another. Reminder: Hubble’s law is v = H \cdot d, where v is the recession velocity and d the distance between two galaxies. The lifetime of the universe is approximately 1/H.
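A quick numerical sketch of these scales, using the value of H quoted later in this post (treated here as an input assumption):

```python
# Hubble's law and the Hubble time, using H ~ 2.2e-18 1/s (the value
# quoted later in this post; treated here as an input assumption).
H = 2.2e-18              # Hubble parameter [1/s]
SEC_PER_YEAR = 3.156e7   # seconds in a year
CM_PER_MPC = 3.086e24    # centimeters in a megaparsec

# The characteristic lifetime of the universe, 1/H, in years:
hubble_time_yr = 1.0 / H / SEC_PER_YEAR
print(f"1/H ~ {hubble_time_yr:.1e} years")   # ~1.4e10, i.e. ~14 billion years

# Hubble's law v = H * d: recession velocity of a galaxy 100 Mpc away.
d_cm = 100 * CM_PER_MPC
v_km_s = H * d_cm / 1e5
print(f"v(100 Mpc) ~ {v_km_s:.0f} km/s")     # ~6800 km/s
```

The 100 Mpc distance is just an illustrative choice; the recession speed it gives corresponds to the familiar H ≈ 68 km/s per megaparsec.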


The value of c / H is over 4 billion parsecs (one parsec is 3.26 light-years), so it is in galaxies, clusters of galaxies, and at the largest scales in the universe that departures from general relativity (GR) would be expected.

Dark energy in the universe takes the form of a cosmological constant Λ, whose value is measured to be 1.2 \cdot 10^{-56} cm^{-2} . Hubble’s parameter is 2.2 \cdot 10^{-18} sec^{-1} . A characteristic acceleration is thus H^2 / \sqrt{\Lambda} or 4 \cdot 10^{-8}  cm per sec per sec (cm = centimeters, sec = second).

One can also define a cosmological acceleration scale simply by c \cdot H , the value for this is about 6 \cdot 10^{-8} cm per sec per sec (around 2 cm per sec per year), and is about 15 billion times weaker than Earth’s gravity at its surface! Note that the two estimates are quite similar.

This is no coincidence, since we live in an approximately dS universe with a measured dark energy fraction Ω_Λ ~ 0.7 of the critical density, assuming the canonical ΛCDM cosmology. That’s if there is actually dark matter responsible for about 1/4 of the universe’s mass-energy density. Otherwise Ω_Λ could be close to 0.95 times the critical density. In a fully dS universe, \Lambda \cdot c^2 = 3 \cdot H^2 , so the two estimates should agree to within a factor of \sqrt{3} , which is approximately the ratio of the two estimates.
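A check of the two acceleration estimates, using the dimensionally consistent combination H²/√Λ together with c·H (values from the text; a sketch, not a precision calculation):

```python
import math

# The two characteristic acceleration estimates, in cgs units,
# using the Lambda and H values quoted in the text.
LAMBDA = 1.2e-56   # cosmological constant [1/cm^2]
H = 2.2e-18        # Hubble parameter [1/s]
C = 3.0e10         # speed of light [cm/s]

a_lambda = H**2 / math.sqrt(LAMBDA)   # H^2 / sqrt(Lambda), units of cm/s^2
a_hubble = C * H                      # c * H, units of cm/s^2

print(f"H^2/sqrt(Lambda) ~ {a_lambda:.1e} cm/s^2")   # ~4.4e-8
print(f"c*H              ~ {a_hubble:.1e} cm/s^2")   # ~6.6e-8
# In a fully de Sitter universe the ratio would be exactly sqrt(3);
# with the measured values it comes out close to 1.5.
print(f"ratio = {a_hubble / a_lambda:.2f}, sqrt(3) = {math.sqrt(3):.2f}")
```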

So from a string theoretic point of view, excitations of the dark energy field are fundamental. Matter particles are bound states of these excitations, particles move freely and have much lower entropy. Matter creation removes both energy and entropy from the dark energy medium. General relativity describes the response of area law entanglement of the vacuum to matter (but does not take into account volume entanglement).

Verlinde proposes that dark energy (Λ) and the accelerated expansion of the universe are due to the slow rate at which the emergent spacetime thermalizes. The time scale for the dynamics is 1/H and a distance scale of c/H is natural; we are measuring the time scale for thermalization when we measure H. High degeneracy and slow equilibration means the universe is not in a ground state, thus there should be a volume contribution to entropy.

When the surface mass density falls below c \cdot H / (8 \pi \cdot G) things change and Verlinde states the spacetime medium becomes elastic. The effective additional ‘dark’ gravity is proportional to the square root of the ordinary matter (baryon) density and also to the square root of the characteristic acceleration c \cdot H.

This dark gravity additional acceleration satisfies the equation g_D = \sqrt{a_0 \cdot g_B / 6} , where g_B is the usual Newtonian acceleration due to baryons and a_0 = c \cdot H is the dark gravity characteristic acceleration. The total gravity is g = g_B + g_D . For large accelerations this reduces to the usual g_B and for very low accelerations it reduces to \sqrt{a_0 \cdot g_B / 6} .

The value a_0/6 at 1 \cdot 10^{-8} cm per sec per sec derived from first principles by Verlinde is quite close to the MOND value of Milgrom, determined from galactic rotation curve observations, of 1.2 \cdot 10^{-8} cm per sec per sec.

So suppose we are in a region where g_B is only 1 \cdot 10^{-8} cm per sec per sec. Then g_D takes about the same value, and the gravity is just double what is expected. Since the square of the orbital velocity is proportional to the acceleration ( v^2 = g \cdot r for a circular orbit), the orbital velocity is observed to be \sqrt{2} times higher than expected.
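The interpolation between the two regimes can be sketched numerically (cgs units; the g_B values are illustrative):

```python
import math

# Sketch of the total gravity g = g_B + g_D described above, with
# g_D = sqrt(a0 * g_B / 6) and a0 = c*H (cgs units, values from the text).
a0 = 6.6e-8   # c*H [cm/s^2]

def total_gravity(g_B):
    """Baryonic acceleration plus the 'dark gravity' correction."""
    g_D = math.sqrt(a0 * g_B / 6.0)
    return g_B + g_D

# High-acceleration regime: the correction is utterly negligible.
print(total_gravity(980.0) / 980.0)    # ~1.000003 (Earth surface gravity)

# Low-acceleration regime, g_B = 1e-8 cm/s^2: gravity roughly doubles,
# so the circular orbital velocity (v^2 = g*r) rises by ~sqrt(2).
g_B = 1.0e-8
g = total_gravity(g_B)
print(g / g_B)               # ~2.05
print(math.sqrt(g / g_B))    # ~1.43, close to sqrt(2) = 1.414
```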

In terms of gravitational potential, the usual Newtonian potential goes as 1/r, resulting in a 1/r^2 force law, whereas for very low accelerations the potential now goes as log(r) and the resultant force law is 1/r. We emphasize that while the appearance of dark matter is spoofed, there is no dark matter in this scenario, the reality is additional dark gravity due to the volume contribution to the entropy (that is displaced by ordinary baryonic matter).


Flat to rising rotation curve for the galaxy M33

Dark matter was first proposed by the Swiss astronomer Fritz Zwicky when he observed the Coma Cluster and the high velocity dispersions of its constituent galaxies. He suggested the term dark matter (“dunkle Materie”). Horace Babcock in 1939 measured the rotation curve for the Andromeda galaxy and found it to be flat, also suggestive of dark matter (or dark gravity). Decades later, in the 1970s and 1980s, Vera Rubin (who recently passed away) and others mapped rotation curves for many galaxies and saw the same behavior. She herself preferred the idea of a deviation from general relativity over an explanation based on exotic dark matter particles. One needs about 5 times more matter, or about 5 times more gravity, to explain these curves.

Verlinde is also able to derive the Tully-Fisher relation by modeling the entropy displacement of a dS space. The Tully-Fisher relation is the strong observed correlation between galaxy luminosity and rotation velocity (or emission line width) for spiral galaxies, L \propto v^4 . With Newtonian gravity one would expect M \propto v^2 . And since luminosity is essentially proportional to ordinary matter in a galaxy, there is a clear deviation by a ratio of v².
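In the deep low-acceleration regime the v⁴ scaling follows directly from the dark gravity formula: with g_D = √(a₀·G·M/(6r²)) dominating and v² = g_D·r, the r dependence cancels and v⁴ = a₀·G·M/6. A sketch (cgs units; the baryonic mass is an illustrative choice):

```python
import math

# Deep low-acceleration regime: v^2 = g_D * r = sqrt(a0 * G * M / 6),
# so the flat rotation velocity is independent of radius and M scales
# as v^4 -- the Tully-Fisher form. Sketch in cgs units.
G = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
a0 = 6.6e-8       # c*H [cm/s^2]
M_SUN = 1.989e33  # solar mass [g]

def flat_velocity(M_baryon):
    """Asymptotic rotation velocity, v = (a0 * G * M / 6)^(1/4)."""
    return (a0 * G * M_baryon / 6.0) ** 0.25

# An illustrative Milky-Way-like baryonic mass of ~6e10 solar masses:
M = 6e10 * M_SUN
v = flat_velocity(M)
print(f"v ~ {v / 1e5:.0f} km/s")   # ~170 km/s, a realistic flat speed

# Multiplying the mass by 16 doubles v, i.e. M is proportional to v^4.
print(flat_velocity(16 * M) / v)   # 2.0
```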


 Apparent distribution of spoofed dark matter,  for a given ordinary (baryonic) matter distribution

When one moves to the scale of clusters of galaxies, MOND is only partially successful, explaining a portion of the apparent mass discrepancy but coming up short by about a factor of 2. Verlinde’s emergent gravity does better. By modeling a general mass distribution he gains a factor of 2 to 3 relative to MOND, and it appears that he can explain the velocity distribution of galaxies in rich clusters without resorting to any dark matter whatsoever.

And, impressively, he is able to calculate what the apparent dark matter ratio should be in the universe as a whole. The value is \Omega_D^2 = (4/3) \Omega_B where \Omega_D is the apparent mass-energy fraction in dark matter and \Omega_B is the actual baryon mass density fraction. Both are expressed normalized to the critical density determined from the square of the Hubble parameter, 8 \pi G \rho_c = 3 H^2 .

Plugging in the observed \Omega_B \approx 0.05 one obtains \Omega_D \approx 0.26 , very close to the observed value from cosmic microwave background observations. The Planck satellite results put the proportions of dark energy, dark matter, and ordinary matter at 0.68, 0.27, and 0.05 respectively, assuming the canonical ΛCDM cosmology.
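The calculation is a one-liner:

```python
import math

# Verlinde's predicted apparent dark matter fraction:
# Omega_D^2 = (4/3) * Omega_B, with the observed baryon fraction.
omega_B = 0.05
omega_D = math.sqrt((4.0 / 3.0) * omega_B)
print(f"Omega_D ~ {omega_D:.2f}")   # ~0.26, vs. ~0.27 measured by Planck
```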

The main approximations Verlinde makes are a fully dS universe and an isolated, static (bound) system with a spherical geometry. He also does not address the issue of galaxy formation from the primordial density perturbations. At first guess, the fact that he can get the right universal \Omega_D suggests this may not be a great problem, but it requires study in detail.

Breaking News!

Margot Brouwer and co-researchers have just published a test of Verlinde’s emergent gravity with gravitational lensing. Using a sample of over 33,000 galaxies they find that general relativity and emergent gravity can provide an equally statistically good description of the observed weak gravitational lensing. However, emergent gravity does it with essentially no free parameters and thus is a more economical model.

“The observed phenomena that are currently attributed to dark matter are the consequence of the emergent nature of gravity and are caused by an elastic response due to the volume law contribution to the entanglement entropy in our universe.” – Erik Verlinde


Erik Verlinde 2011 “On the Origin of Gravity and the Laws of Newton” arXiv:1001.0785

Stephen Perrenod, 2013, 2nd edition, “Dark Matter, Dark Energy, Dark Gravity” Amazon, provides the traditional view with ΛCDM  (read Dark Matter chapter with skepticism!)

Erik Verlinde 2016 “Emergent Gravity and the Dark Universe” arXiv:1611.02269v1

Margot Brouwer et al. 2016 “First test of Verlinde’s theory of Emergent Gravity using Weak Gravitational Lensing Measurements” arXiv:1612.03034


Modified Newtonian Dynamics – Is there something to it?

You are constantly accelerating. The Earth’s gravity is pulling you downward at g = 9.8 meters per second per second. It wants to take your velocity up to about 10 meters per second after only the first second of free fall. Normally you don’t fall, because the floor is solid due to electromagnetic forces and also it is electromagnetic forces that give your body structural integrity and power your muscles, resisting the pull of gravity.

You are also accelerating due to the Earth’s spin and its revolution about the Sun.


International Space Station, image credit: NASA

Our understanding of gravity comes primarily from these large accelerations, such as the Earth’s pull on ourselves and on satellites, the revolution of the Moon about the Earth, and the planetary orbits about the Sun. We also are able to measure the solar system’s velocity of revolution about the galactic center, but with much lower resolution, since the timescale is of order 1/4 billion years for a single revolution with an orbital radius of about 25,000 light-years!

It becomes more difficult to determine if Newtonian dynamics and general relativity still hold for very low accelerations, or at very large distance scales such as the Sun’s orbit about the galactic center and beyond.

Modified Newtonian Dynamics (MOND) was first proposed by Mordehai Milgrom in the early 1980s as an alternative explanation for flat galaxy rotation curves, which are normally attributed to dark matter. At that time the best evidence for dark matter came from spiral galaxy rotation curves, although the need for dark matter (or some deviation from Newton’s laws) was originally seen by Fritz Zwicky in the 1930s while studying clusters of galaxies.


NGC 3521. Image Credit: ESA/Hubble & NASA and S. Smartt (Queen’s University Belfast); Acknowledgement: Robert Gendler 


Galaxy Rotation Curve for M33. Public Domain, By Stefania.deluca – Own work,  https://commons.wikimedia.org/w/index.php?curid=34962949

If general relativity is always correct, and Newton’s laws of gravity are correct for non-relativistic, weak gravity conditions, then one expects the orbital velocities of stars in the outer reaches of galaxies to drop in concert with the fall in light from stars and/or radio emission from interstellar gas, reflecting decreasing baryonic matter density. (Baryonic matter is ordinary matter, dominated by protons and neutrons). As seen in the image above for M33, the orbital velocity does not drop, it continues to rise well past the visible edge of the galaxy.

To first order, assuming a roughly spherical distribution of matter, the square of the velocity at a given distance from the center is proportional to the mass interior to that distance divided by the distance (signifying the gravitational potential), thus

   v² ~ G M / r

where G is the gravitational constant, and M is the galactic mass within a spherical volume of radius r. This potential corresponds to the familiar 1/r² dependence of the force of gravity according to Newton’s laws.  In other words, at the outer edge of a galaxy the velocity of stars should fall as the square root of the increasing distance, for Newtonian dynamics.

Instead, for the vast majority of galaxies studied, it doesn’t – it flattens out, or falls off very slowly with increasing distance, or even continues to rise, as for M33 above. The behavior is roughly as if gravity followed an inverse distance law for the force (1/r) in the outer regions, rather than an inverse square law with distance (1/r²).
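The contrast between the Newtonian expectation and the observed behavior can be sketched numerically (cgs units; the enclosed mass is an illustrative choice):

```python
import math

# Newtonian expectation vs. observed flat rotation curves. Once
# essentially all the luminous mass is enclosed, v = sqrt(G*M/r)
# falls as 1/sqrt(r); observed curves instead stay roughly flat.
G = 6.674e-8   # gravitational constant [cm^3 g^-1 s^-2]
M = 1.2e44     # ~6e10 solar masses of enclosed luminous matter [g]
KPC = 3.086e21 # centimeters per kiloparsec

def v_kepler(r_cm):
    """Circular velocity for a fixed enclosed mass, v = sqrt(G*M/r)."""
    return math.sqrt(G * M / r_cm)

v10 = v_kepler(10 * KPC)
v40 = v_kepler(40 * KPC)
print(f"v(10 kpc) ~ {v10/1e5:.0f} km/s")   # ~160 km/s
print(f"v(40 kpc) ~ {v40/1e5:.0f} km/s")   # ~80 km/s under Newton, yet
                                           # observed curves stay near 160
print(v10 / v40)   # 2.0: v falls as 1/sqrt(r), halving over 4x the radius
```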

So either there is more matter at large distances from galactic centers than expected from the light distribution, or the gravitational law is modified somehow such that gravity is stronger than expected. If there is more matter, it gives off little or no light, and is called unseen, or dark, matter.

It must be emphasized that MOND is completely empirical and phenomenological. It is curve fitted to the existing rotational curves, rather successfully, but not based on a theoretical construct for gravity. It has a free parameter for weak acceleration, and for very small accelerations, gravity is stronger than expected. It turns out that this free parameter, a_0 , is of the same order as the ‘Hubble acceleration’ c \cdot H. (The Hubble distance is c / H and is 14 billion light-years; H has units of inverse time and the age of the universe is 1/H to within a few percent).

The Hubble acceleration is approximately 0.7 nanometers / sec / sec, or 2 centimeters / sec / year (a nanometer is a billionth of a meter, sec = second).

Milgrom’s fit to rotation curves found a best-fit value of 0.12 nanometers/sec/sec, or about 1/6 of a_0 . This is very small compared to the Earth’s surface gravity: the ratio is about 80 billion, which is the ratio between roughly 2,600 years and one second. So you can imagine how such a small deviation could have escaped detection for a long time, and would require measurements at the extragalactic scale.

The TeVeS – tensor, vector, scalar theory is a theoretical construct that modifies gravity from general relativity. General relativity is a tensor theory that reduces to Newtonian dynamics for weak gravity. TeVeS has more free parameters than general relativity, but can be constructed in a way that will reproduce galaxy rotation curves and MOND-like behavior.

But MOND, and by implication, TeVeS, have a problem. They work well, surprisingly well, at the galactic scale, but come up short for galaxy clusters and for the very largest extragalactic scales as reflected in the spatial density perturbations of the cosmic microwave background radiation. So MOND as formulated doesn’t actually fully eliminate the requirement for dark matter.


Horseshoe shaped Einstein Ring

Image credit: ESA/Hubble and NASA

Any alternative to general relativity also must explain gravitational lensing, for which there are a large number of examples. Typically a background galaxy image is distorted and magnified as its light passes through a galaxy cluster, due to the large gravity of the cluster. MOND proponents do claim to reproduce gravitational lensing in a suitable manner.

Our conclusion about MOND is that it raises interesting questions about gravity at large scales and very low accelerations, but it does not eliminate the requirement for dark matter. It is also very ad hoc. TeVeS gravity is less ad hoc, but still fails to reproduce the observations at the scale of galaxy clusters and above.

Nevertheless the rotational curves of spirals and irregulars are correlated with the visible mass only, which is somewhat strange if there really is dark matter dominating the dynamics. Dark matter models for galaxies depend on dark matter being distributed more broadly than ordinary, baryonic, matter.

In the third article of this series we will take a look at Erik Verlinde’s emergent gravity concept, which can reproduce the Tully-Fisher relation and galaxy rotation curves. It also differs from MOND both in terms of being a theory, although incomplete, rather than empiricism, and apparently in being able to more successfully address the dark matter issues at the scale of galaxy clusters.


Wikipedia MOND entry: https://en.wikipedia.org/wiki/Modified_Newtonian_dynamics

M. Milgrom 2013, “Testing the MOND Paradigm of Modified Dynamics with Galaxy-Galaxy Gravitational Lensing” https://arxiv.org/abs/1305.3516

R. Reyes et al. 2010, “Confirmation of general relativity on large scales from weak lensing and galaxy velocities” https://arxiv.org/abs/1003.2185

“In rotating galaxies, distribution of normal matter precisely determines gravitational acceleration” https://www.sciencedaily.com/releases/2016/09/160921085052.htm

WIMPs or MACHOs or Primordial Black Holes

A decade or more ago, the debate about dark matter was, is it due to WIMPs (weakly interacting massive particles) or MACHOs (massive compact halo objects)? WIMPs would be new exotic particles, while MACHOs are objects formed from ordinary matter but very hard to detect due to their limited electromagnetic radiation emission.


Schwarzenegger (MACHO), not Schwarzschild (Black Holes)

Image credit: Georges Biard, CC BY-SA 3.0

Candidates in the MACHO category such as white dwarf or brown dwarf stars have been ruled out by observational constraints. Black holes formed in the very early universe, dubbed primordial black holes, were thought by many to have been ruled out as well, at least across many mass ranges, such as between the mass of the Moon and the mass of the Sun.

The focus during recent years, and most of the experimental searches, has shifted to WIMPs or other exotic particles (axions or sterile neutrinos primarily). But the WIMPs, which were motivated by supersymmetric extensions to the Standard Model of particle physics, have remained elusive. Most experiments have only placed stricter and stricter limits on their possible abundance and interaction cross-sections. The Large Hadron Collider has not yet found any evidence for supersymmetric particles.

Have primordial black holes (PBHs) as the explanation for dark matter been given short shrift? The recent detections by the LIGO instruments of two gravitational wave events, well explained by black hole mergers, have sparked new interest. A previous blog entry addressed this possibility:


The black holes observed in these events have masses in a range from about 8 to about 36 solar masses, and they could well be primordial.

There are a number of mechanisms to create PBHs in the early universe, prior to the very first second and the beginning of Big Bang nucleosynthesis. At any era, if there is a total mass M confined within a radius R, such that

2 \cdot G M / R > c^2 ,

then a black hole will form. The above equation defines the Schwarzschild limit (G is the gravitational constant and c the speed of light). A PBH doesn’t even have to be formed from matter whether ordinary or exotic; if the energy and radiation density is high enough in a region, it can also result in collapse to a black hole.
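The condition is equivalent to confinement within the Schwarzschild radius, R_s = 2GM/c². A sketch in SI units (the test masses and radii are illustrative):

```python
# The Schwarzschild condition 2GM/R > c^2 decides whether a mass M
# confined within radius R collapses to a black hole (SI units).
G = 6.674e-11   # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8     # speed of light [m/s]

def schwarzschild_radius(M_kg):
    """R_s = 2GM/c^2: confinement within this radius forms a black hole."""
    return 2.0 * G * M_kg / C**2

def forms_black_hole(M_kg, R_m):
    return R_m <= schwarzschild_radius(M_kg)

M_SUN = 1.989e30
r_s = schwarzschild_radius(M_SUN)
print(f"R_s(Sun) ~ {r_s/1e3:.1f} km")    # ~3 km
print(forms_black_hole(M_SUN, 6.96e8))   # False: the Sun is far too large
print(forms_black_hole(M_SUN, 1.0e3))    # True: squeezed inside ~3 km
```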


Cosmic Strings

Image credit: David Daverio, Université de Genève, CSCS supercomputer simulation data

The mechanisms for PBH creation include:

  1. Cosmic string loops – If string theory is correct the very early universe had very long strings and many short loops of strings. These topological defects intersect and form black holes due to the very high density at their intersection points. The black holes could have a broad range of masses.
  2. Bubble collisions from symmetry breaking – As the very early universe expanded and cooled, the strong force, weak force, and electromagnetic force separated out. Bubbles would nucleate at the time of symmetry breaking as the phase of the universe changed, just as bubbles nucleate in boiling water. Collisions of bubbles could lead to high density regions and black hole formation. Symmetry breaking at the GUT scale (for the strong force separation) would yield BHs of mass around 100 kilograms. Symmetry breaking of the weak force from the electromagnetic force would yield BHs with a mass of around our Moon’s mass ~ 10^25 kilograms.
  3. Density perturbations – These would be a natural result of the mechanisms in #1 and #2, in any case. When observing the cosmic microwave background radiation, which dates from a time when the universe was only 380,000 years old, we see density perturbations at various scales, with amplitudes of only a few parts in a million. Nevertheless these serve as the seeds for the formation of the first galaxies when the universe was only a few hundred million years old. Some perturbations could be large enough on smaller distance scales to form PBHs ranging from above a solar mass to as high as 100,000 solar masses.
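In all of these mechanisms, the PBH mass is set roughly by the mass contained within the horizon at the formation time, M ~ c³t/G. A sketch in SI units (the formation times below are illustrative assumptions for each epoch, not values from the text):

```python
# A PBH formed at cosmic time t has a mass of order the horizon mass,
# M ~ c^3 * t / G (SI units). The times used below are illustrative
# assumptions for the epochs discussed, not values from the text.
G = 6.674e-11   # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8     # speed of light [m/s]
M_SUN = 1.989e30

def horizon_mass(t_sec):
    """Order-of-magnitude mass within the horizon at time t."""
    return C**3 * t_sec / G

print(f"{horizon_mass(1e-23):.0e} kg")        # ~4e12 kg (very early epoch)
print(f"{horizon_mass(1e-11):.0e} kg")        # ~4e24 kg, electroweak epoch
print(f"{horizon_mass(1.0)/M_SUN:.0e} Msun")  # ~2e5 solar masses at t = 1 s
```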

For a PBH to be an effective dark matter contributor, it must have a lifetime longer than the age of the universe. BHs radiate due to Hawking radiation, and thus have finite lifetimes. For stellar mass BHs, the lifetimes are incredibly long, but for smaller BHs the lifetimes are much shorter since the lifetime is proportional to the cube of the BH mass. Thus a minimum mass for PBHs surviving to the present epoch is around a trillion kilograms (a billion tons).
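The minimum surviving mass can be estimated from the standard Hawking lifetime formula, t ≈ 5120π G²M³/(ħc⁴); this sketch ignores greybody factors and the number of emitted particle species, so it is good only to order of magnitude:

```python
import math

# Minimum mass of a PBH surviving to today, inverting the Hawking
# lifetime t ~ 5120*pi*G^2*M^3 / (hbar*c^4) (SI units; a sketch that
# ignores greybody factors and the emitted particle species).
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8        # speed of light [m/s]
HBAR = 1.0546e-34  # reduced Planck constant [J s]
T_UNIVERSE = 4.35e17   # ~13.8 billion years, in seconds

# The cube-root dependence on lifetime gives the minimum surviving mass:
M_min = (T_UNIVERSE * HBAR * C**4 / (5120 * math.pi * G**2)) ** (1.0 / 3.0)
print(f"M_min ~ {M_min:.1e} kg")   # a few times 1e11 kg, of order the
                                   # trillion-kilogram scale quoted above
```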

Carr et al. (paper referenced below) summarized the constraints on what fraction of the matter content of the universe could be in the form of black holes. Traditional black holes, of several solar masses, created by stellar collapse and detectable due to their accretion disks, do not provide enough matter density. Neither do supermassive black holes of over a million solar masses found at the centers of most galaxies. PBHs may be important in seeding the formation of the supermassive black holes, however.

Limits on the PBH abundance in our galaxy and its halo (which is primarily composed of dark matter) are obtained from:

  1. Cosmic microwave background measurements
  2. Microlensing measurements (gravitational lensing)
  3. Gamma-ray background limits
  4. Neutral hydrogen clouds in the early universe
  5. Wide binaries (disruption limits)

Microlensing surveys such as MACHO and EROS have searched for objects in our galactic halo that act as gravitational lenses for light originating from background stars in the Magellanic Clouds or the Andromeda galaxy. The galactic halo is composed primarily of dark matter.

A couple dozen objects of less than a solar mass have been detected. Based on these surveys, the fraction of dark matter which can be PBHs of less than a solar mass is 10% at most. The constraints from 1 solar mass up to 30 solar masses are weaker, and a PBH explanation for most of the galactic halo mass remains possible.

Similar studies conducted toward distant quasars and compact radio sources address the constraint in the supermassive black hole domain, apparently ruling out an explanation due to PBHs with from 1 million to 100 million solar masses.

Lyman-alpha clouds are neutral hydrogen clouds (Lyman-alpha is an important ultraviolet absorption line for hydrogen) that are found in the early universe at redshifts above 4. Simulations of the effect of PBH number density fluctuations on the distribution of Lyman-alpha clouds appear to limit the PBH contribution to dark matter for a characteristic PBH mass above 10,000 solar masses.

Distortions in the cosmic microwave background are expected if PBHs above 10 solar masses contributed substantially to the dark matter component. However, these limits assume that PBH masses do not change. Merging and accretion events after the recombination era, when the cosmic microwave background was emitted, can allow a spectrum of PBH masses that was initially below a solar mass before recombination to evolve into one dominated by PBHs of tens, hundreds, and thousands of solar masses today. This could be a way around some of the limits that appear to be placed by the cosmic microwave background temperature fluctuations.

Thus it appears there could be a window in the region of 30 to several thousand solar masses for PBHs as an explanation of cold dark matter.

As the Advanced LIGO gravitational wave detectors come on line, we expect many more black hole merger discoveries that will help to elucidate the nature of primordial black holes and the possibility that they contribute substantially to the dark matter component of our Milky Way galaxy and the universe.


B. Carr, K. Kohri, Y. Sendouda, J. Yokoyama, 2010 arxiv.org/pdf/0912.5297v2 “New cosmological constraints on primordial black holes”

S. Clesse and J. Garcia-Bellido, 2015 arxiv.org/pdf/1501.07565v1.pdf “Massive Primordial Black Holes from Hybrid Inflation as Dark Matter and the Seeds of Galaxies”

P. Frampton, 2015 arxiv.org/pdf/1511.08801.pdf “The Primordial Black Hole Mass Range”

P. Frampton, 2016 arxiv.org/pdf/1510.00400v7.pdf “Searching for Dark Matter Constituents with Many Solar Masses”

Green, A., 2011 https://www.mpifr-bonn.mpg.de/1360865/3rd_WG_Green.pdf “Primordial Black Hole Formation”

P. Pani, and A. Loeb, 2014 http://xxx.lanl.gov/pdf/1401.3025v1.pdf “Exclusion of the remaining mass window for primordial black holes as the dominant constituent of dark matter”

S. Perrenod, 2016 https://darkmatterdarkenergy.com/2016/06/17/primordial-black-holes-as-dark-matter/

NEW BOOK just released:

S. Perrenod, 2016, 72 Beautiful Galaxies (especially designed for iPad, iOS; ages 12 and up)


Dark Lenses Magnify Star Formation in Dusty Galaxies

Dusty star-forming galaxies (DSFGs) are found in abundance in the early universe. They are especially bright because they are experiencing a large burst of high-rate star formation. Since they are mainly at higher redshifts, we are seeing them well in the past; the high star formation rates occur typically during the early life of a galaxy.

The optical light from new and existing stars in such galaxies is heavily absorbed by interstellar dust interior to the galaxy. The dust is quite cold, normally well below 100 Kelvins. It reradiates the absorbed energy thermally at low temperatures. As a result the galaxy becomes bright in the infrared and far infrared portions of the spectrum.
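Wien’s displacement law makes the far-infrared shift concrete: the peak emission wavelength is λ = b/T, with b ≈ 2898 μm·K. A sketch (the dust temperatures are illustrative):

```python
# Wien's displacement law shows why cold dust reradiates in the far
# infrared: the thermal emission peak is at lambda = b / T.
WIEN_B = 2898.0   # Wien displacement constant [micron * Kelvin]

for T in (30.0, 100.0):   # illustrative dust temperatures
    lam = WIEN_B / T      # peak wavelength in microns
    print(f"T = {T:3.0f} K -> peak at ~{lam:.0f} microns")
# 30 K dust peaks near 97 microns (far infrared); even 100 K dust
# peaks near 29 microns, far longward of optical wavelengths.
```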

Dark matter has two roles here. First of all, each dusty star-forming galaxy would have formed from a “halo” dominated by dark matter. Secondly, dark matter lenses magnify the DSFGs significantly, allowing us to observe them and get decent measurements in the first place.

An international team of 27 astronomers has observed half a dozen DSFGs at 3.6 micron and 4.5 micron infrared wavelengths with the space-borne Spitzer telescope. These objects were originally identified at far infrared wavelengths with the Herschel telescope. Combining the infrared and far infrared measurements allows the researchers to determine the galaxy stellar masses and the star formation rates.

The six DSFGs observed by the team have redshifts ranging from 1.0 to 3.3 (corresponding to look-back times of roughly 8 to 12 billion years). Each of the six DSFGs has been magnified by “Einstein” lenses. The lensing effect is due to intervening foreground galaxies, which are also dominated by dark matter and thus possess gravitational fields strong enough to significantly deflect and magnify the DSFG images. Each of the six DSFGs is therefore magnified by a lens that is mostly dark.

The lenses can make the images of the DSFGs appear ring-shaped or arc-shaped. Multiple images are also possible. The magnification factors are quite large, ranging from a factor of 4 to more than 16. (Without dark matter’s contribution the magnification would be very much less).

It is a delicate process to subtract out the foreground galaxy, which is much brighter. The authors build a model for the foreground galaxy light profile and gravitational lensing effect in each case. They remove the light from the foreground galaxy computationally in order to reveal the residual light from the background DSFG. And they calculate the magnification factors so that they can determine the intrinsic luminosity of the DSFGs.

The stellar masses for these 6 DSFGs are found to be in the range of 80 to 400 billion solar masses, and their star formation rates are in the range of 100 to 500 solar masses per year.

One of the 6 galaxies, nicknamed HLock12, is shown in the Spitzer infrared image below, along with the foreground galaxy. The model of the foreground galaxy is subtracted out, so that in the rightmost panels the DSFG image is more apparent. There are two rows of images: the top row shows measurements at 3.6 microns, and the bottom row shows observations at 4.5 microns.

This particular DSFG among the six was found to have a stellar mass of 300 billion solar masses and a total mass in dust of 3 billion solar masses. So the dust component is just about 1% of the stellar component. The estimated star formation rate is 500 solar masses per year, which is hundreds of times larger than the current star formation rate in our own Milky Way galaxy.

It is only because of the significant magnification through gravitational lensing (“dark lenses”) that researchers are able to obtain good measurements of these DSFGs. This lensing due to intervening dark matter allows astronomers to advance our understanding of galaxy formation and early evolution, much more quickly than would otherwise be possible.


Figure 6 is from the paper referenced below. The top row shows (a) a Hubble telescope image of the field in the near infrared at 1.1 microns, and (b) the field at 3.6 microns from the Spitzer telescope. The arc is quite visible in the Hubble image in the upper right quadrant, just adjacent to the foreground galaxy in the center. The model for the foreground galaxy is in column (c), and after subtraction the background galaxy image is in column (d), along with several other faint objects. The corresponding images in the bottom row are from Spitzer observations at 4.5 microns.


B. Ma et al. 2015, “Spitzer Imaging of Strongly-lensed Herschel-selected Dusty Star Forming Galaxies” http://arxiv.org/pdf/1504.05254v3.pdf

Dusty Star-Forming Galaxies Brightened by Dark Matter

The first galaxies were formed within the first billion years of the Universe’s history. Our Milky Way galaxy contains very old stars with ages indicating formation around 500 or 600 million years after the Big Bang.

Astronomers are very eager to study galaxies in the early universe, in order to understand galaxy formation and evolution. They can do this by looking at the most distant galaxies. With the expanding universe of the Big Bang, the farther away a galaxy is, the farther back in time we are looking. Astronomers often use redshift to measure the distance, and hence age, of a galaxy. The larger the redshift, z, the farther back in time, and the closer to a galaxy’s birth and the universe’s birth.

The interstellar medium of a galaxy consists of gas and dust. The gas can be hot or cold, and in atomic or molecular form. Atomic gas may be ionized by ultraviolet starlight, by X-rays from neutron stars or black holes (not the black holes themselves, but hot matter near the black hole), by cosmic rays, or by other astrophysical mechanisms. Our Milky Way galaxy is rich in gas and dust, and contains thousands of molecular clouds. These are very cold clouds composed mainly of molecular hydrogen but also many other molecular species. Molecular clouds are the primary sites of new star formation. The Horsehead Nebula is an example of a molecular cloud in the constellation of Orion.


“Hubble Sees a Horsehead of a Different Color” by ESA/Hubble. Licensed under CC BY 3.0 via Wikimedia Commons 

During their most active phase of star formation, a large galaxy might give birth to over 1000 solar masses worth of stars per year. By comparison, in the Milky Way galaxy, the new star formation rate is only of order 1 solar mass per year, the equivalent of 1 Sun, or, say, 2 stars with half the mass of our Sun, per annum. Over its entire 13 billion year life the Milky Way has formed many hundreds of billions of stars, so clearly the star formation rate was higher in the past.

Before the first stars and galaxies formed, the universe contained only hydrogen and helium, and no heavier elements. Those are produced by thermonuclear reactions in stellar interiors. This is a wonderful thing, because carbon, oxygen and other heavy elements are essential to life.

After a galaxy produces its first generation of massive stars, its interstellar medium will begin to contain carbon, nitrogen, oxygen and other heavy elements (heavy means anything above helium, in this context). Massive stars (above a few solar masses) evolve rapidly, with timescales in the millions of years, rather than billions, and explode as supernovae at the end of their lives. A large portion of their material, now containing heavy elements as well as hydrogen and helium, is expelled at high velocity and mixed into the interstellar medium. The carbon, nitrogen and oxygen then present in the galaxy’s interstellar medium can be detected in atomic (including ionized) or molecular form. The relative abundance of heavy elements grows with time as more stars are formed, evolve, and recycle matter into the interstellar medium.

High-redshift (z > 2) galaxies with active star formation are best observed in the infrared. The gas and dust in molecular clouds is quite cold, usually less than 100 K (100 degrees above absolute zero). And their radiation is shifted further toward the far infrared and sub-millimeter portions of the spectrum by the redshift factor of 1+z. So radiation emitted at 100 microns is detected at the Earth at 400 microns for a source at z = 3.
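The (1 + z) stretch is simple enough to sketch in code (an illustrative helper, not from any paper discussed here):

```python
def observed_wavelength_um(emitted_um, z):
    """Cosmological redshift stretches wavelengths: lambda_obs = lambda_emit * (1 + z)."""
    return emitted_um * (1 + z)

# A 100-micron far-infrared photon emitted by a z = 3 galaxy arrives at 400 microns:
print(observed_wavelength_um(100, 3))  # 400
```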

These are difficult measurements to make, because if the galaxy is very distant, it is also very faint. However the possibility of getting good measurements is helped by two things. One is that galaxies with very active star formation are intrinsically brighter.

And the other reason is that intervening clusters of galaxies are massive and contain mostly dark matter. As we look far back through the universe toward an early galaxy, there is a good chance that the line of sight passes through a cluster of galaxies. Clusters of galaxies contain hundreds or even thousands of galaxies, and are dominated by dark matter. Most of the infrared radiation can pass through the intracluster medium – the space between galaxies – without being absorbed; it does not interact with dark matter. The clusters are sufficiently massive to bend the light, however, according to general relativity. As the background galaxy’s light passes through the cluster during its multi-billion year journey to the Earth and our telescopes, the cluster’s gravitational potential modifies the light ray’s path. Actually the intervening cluster does more than displace the light; it acts as a lens, causing the image to brighten by as much as 10 times or more. This makes it much easier to gather enough photons from the target galaxy to obtain good quality results.

An international research team with participants from Germany, the U.S., Chile, the U.K. and Canada has identified 20 high redshift “dusty star forming galaxies” at very high redshift (DSFG is a technical term for galaxies with high star formation rates and lots of dust) from the South Pole Telescope infrared galaxy survey. They have been able to further elucidate the nature of 17 of these early galaxies by measuring C II emission from singly ionized atomic carbon, and CO emission from carbon monoxide molecules for 11 of those. They have also determined the total far infrared luminosity for these target galaxies. Their results allow them to place constraints on the nature of the interstellar medium and the properties of molecular clouds.

The galaxies’ high redshifts, ranging from z = 2.1 to 5.7, actually make it possible to take ground-based measurements in most cases. At lower redshifts the observations would not be possible from the ground, because the Earth’s atmosphere is highly opaque at the observation frequencies. But it is much more transparent at longer wavelengths, so as the redshift exceeds z = 3, ground-based observations are possible from favorable locations, in this case the Chilean desert. For three sources with redshifts around 2 the atmosphere prohibits ground-based observations, and the team therefore used observations from the Herschel Space Observatory, designed for infrared work.

Figure 3 below is taken from their paper. It plots the redshift z on the x-axis and the far infrared luminosity of each galaxy on the y-axis, both logarithmically. The 17 galaxies studied by the authors are indicated with red dots and labelled “SPT DSFGs”. Their very high luminosities are in the range of 10 to 100 trillion times the Sun’s luminosity. Note that the luminosities must be very high for detection at such a high redshift (distance from Earth). Also, these luminosities are uncorrected for the lensing magnification, so the true luminosities are around an order of magnitude lower.


The redshift range covered in this research corresponds to ages of the universe from around 1 billion years (z = 5.7) to a little over 3 billion years (z = 2.1). So the lookback time is roughly 11 to 13 billion years.
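These redshift-to-age numbers can be reproduced with a short numerical integration, assuming a flat ΛCDM cosmology with illustrative parameters (H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7; the survey papers may use slightly different values):

```python
import math

H0_GYR = 70.0 / 977.8        # assumed H0 = 70 km/s/Mpc, converted to 1/Gyr
OMEGA_M, OMEGA_L = 0.3, 0.7  # assumed matter / dark-energy fractions (flat LCDM)

def age_at_redshift(z, steps=100_000):
    """Age of the universe at redshift z, in Gyr.

    Integrates t(z) = integral_0^{1/(1+z)} da / (H0 * sqrt(Omega_m/a + Omega_L*a^2))
    with a simple midpoint rule (a is the cosmic scale factor).
    """
    a_max = 1.0 / (1.0 + z)
    da = a_max / steps
    total = 0.0
    for i in range(steps):
        a = (i + 0.5) * da
        total += da / math.sqrt(OMEGA_M / a + OMEGA_L * a * a)
    return total / H0_GYR

print(round(age_at_redshift(5.7), 1))  # ~1.0 Gyr
print(round(age_at_redshift(2.1), 1))  # ~3.1 Gyr
```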

For those of us interested in dark matter, their findings regarding the degree of magnification by dark matter are also interesting. They find “strong lensing” magnification in the range of 5 to 21 times for the 4 sources that allowed for lens modeling. The other sources do not have measured magnifications, but they are presumed to be of the same order of magnitude, around 10 times, to within a factor of 2 either way.

It is only because the lensing is so substantial that they are able to measure these galaxies with sufficient fidelity to arrive at their results. So not only is dark matter key to galaxy formation and evolution, it is key to allowing us to study galaxies in the early universe. Dark matter forms galaxies and then helps us understand how they form!


B. Gullberg et al. 2015, “The nature of the [CII] emission in dusty star-forming galaxies from the SPT survey”, to be published in Monthly Notices of the Royal Astronomical Society, http://arxiv.org/pdf/1501.06909v2.pdf

C.M. Casey, D. Narayanan, A. Cooray 2015, “Dusty Star-Forming Galaxies at High Redshift”, http://arxiv.org/abs/1402.1456

X-raying Dark Matter

I was at the dentist this week. Don’t ask, but they took 3 digital X-Rays.

One of the most significant methods by which we detect the presence of dark matter is through the use of X-ray telescopes. The energy associated with these X-rays is typically around an order of magnitude less than those zapped into your mouth when you visit the dentist.

Around 50 years ago scientists at American Science and Engineering flew the first imaging X-ray telescope on a small rocket. At a later date, I worked part-time at AS&E, as we called it, while in graduate school. One major project was a solar X-ray telescope mounted in Skylab, America’s first space station. This gave me the wonderful opportunity to work in the control rooms at the NASA Johnson Space Center in Houston.

X-rays are absorbed in the Earth’s atmosphere, so today X-ray astronomy is performed from orbiting satellites. X-ray telescopes use the principle of grazing incidence reflection; the X-rays impinge at shallow angles onto gold or iridium-coated metallic surfaces and are reflected to the focal plane and the detector electronics.


Schematic of grazing incidence mirrors used in the Chandra X-ray Observatory. Credit NASA/CXC/SAO; obtained from chandra.harvard.edu.

How does dark matter result in X-rays being produced? Indirectly, as a consequence of its gravitational effects.

One of the main mechanisms for X-ray production in the universe is known as thermal bremsstrahlung. Bremsstrahlung is a German word meaning ‘braking radiation’. A gas which is hot enough to give off X-rays will be ionized. That is, the electrons will be stripped from the nuclei and move about freely. As electrons fly near ions (protons and helium nuclei primarily), the electromagnetic interaction deflects and decelerates them, and some of the electrons’ kinetic energy is converted to radiation.

The speed at which the electrons are moving determines how energetic the produced photons will be. We talk about the temperature of such an ionized gas, which is proportional to the square of the average speed of the electrons. A gas with a temperature of 10 million degrees will give off approximately 1 kilo-electronVolt X-rays (hereafter we use the keV abbreviation), and a gas with a temperature of 100 million degrees will radiate 10 keV X-rays. One eV corresponds to 11,605 degrees Kelvin (or we can just say kelvins).
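The conversion is simple enough to sketch (values rounded; the 11,605 kelvins per eV factor is the one quoted above):

```python
KELVIN_PER_EV = 11605.0  # 1 eV of thermal energy corresponds to 11,605 K

def kT_in_keV(temperature_kelvin):
    """Characteristic thermal energy kT, in keV, of a plasma at the given temperature."""
    return temperature_kelvin / KELVIN_PER_EV / 1000.0

print(round(kT_in_keV(10e6), 2))   # ~0.86 keV: a 10-million-degree gas emits ~1 keV X-rays
print(round(kT_in_keV(100e6), 1))  # ~8.6 keV, i.e. roughly 10 keV
```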


Chandra X-ray Observatory prior to launch in the Space Shuttle Columbia in 1999. NASA image.

So how can we produce gas hot enough to give off X-Rays by this mechanism? Gravity, and lots of it. The potential energy of the gravitational field is proportional to the amount of matter (total mass) coalesced into a region and inversely proportional to the characteristic scale of that region. GM/R, simple Newtonian mechanics, is sufficient; no general relativistic calculation is needed at this point. G is the gravitational constant and M and R are the cluster mass and characteristic radius, respectively.
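As a rough illustration (my own order-of-magnitude sketch, not a calculation from the text), equating the gas thermal energy to a fraction of the GM/R potential well depth gives temperatures in the observed cluster range. The mass, radius, and mean molecular weight below are assumed, typical-cluster values:

```python
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30      # solar mass, kg
MPC = 3.086e22        # megaparsec, m
M_PROTON = 1.673e-27  # proton mass, kg
JOULES_PER_KEV = 1.602e-16

def cluster_kT_keV(mass_msun, radius_mpc, mu=0.6):
    """Rough virial gas temperature: kT ~ mu * m_p * (G*M/R) / 2, in keV."""
    potential = G * mass_msun * M_SUN / (radius_mpc * MPC)  # GM/R, in m^2/s^2
    return mu * M_PROTON * potential / (2.0 * JOULES_PER_KEV)

# An assumed rich cluster: ~1e15 solar masses within ~2 Mpc
print(round(cluster_kT_keV(1e15, 2.0), 1))  # a few keV, within the observed 3-12 keV range
```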

A lot of mass in a confined region – how about large groups and clusters of galaxies? It turns out a rich cluster, with of order 1000 galaxies, will do the trick, but only because there is dark matter as well as ordinary matter. There are three main matter components to consider: galaxies, hot intracluster gas found between galaxies, and dark matter. The cluster forms by gravitational self-collapse from a region that was of above-average density in the early universe. All such overdense regions are subject to collapse.


The “Bullet Cluster” is actually two colliding clusters. The bluish color shows the distribution of dark matter as determined from the gravitational lensing effect on background galaxy images. The reddish color depicts the hot X-ray emitting gas measured by the Chandra X-ray Observatory.

(X-ray: NASA/CXC/CfA/M.Markevitch Optical: NASA/STScI; Magellan/U.Arizona/D.Clowe Lensing Map: NASA/STScI; ESO WFI; Magellan/U.Arizona/D.Clowe)

The optically visible galaxies are the least important contributor to the cluster mass, only around 1%! Galaxy clusters are made of dark matter much more than they are made out of galaxies. And secondarily, they are made out of hot gas. The ordinary matter contained within galaxies is only the third most important component. The table below gives the typical 90 / 9 / 1 proportions for dark matter, hot gas, and galaxies, respectively.

Three main components of a galaxy cluster (Table derived from Wikipedia article on galaxy clusters)

Component           Mass fraction      Description

Galaxies            1%                 Optical/infrared observations

Intracluster gas    9%                 High-temperature ionized gas – thermal bremsstrahlung

Dark matter         90%                Dominates; inferred through gravitational interactions

The intracluster gas has two sources. A major portion of it is primordial gas that never formed galaxies, but falls into the gravitational potential well of the cluster. As it falls in toward the cluster center, it heats. The kinetic energy of infall is converted to random motions of the ionized gas. An additional portion of the gas is recycled material expelled from galaxies. It mixes with the primordial gas and heats up as well through frictional processes. The gas is supported against further collapse by its own pressure as the density and temperature increase in the cluster core.

The temperature which characterizes the X-ray emission is a measure of the gravitational potential strength, proportional to the ratio of the mass of the cluster to its size. Typical X-ray temperatures measured for rich clusters are around 3 to 12 keV, which corresponds to temperatures in the range of 30 to 130 million kelvins.

There is another way to measure the strength of the cluster’s gravitational potential well: by measuring the speed of galaxies as they move around in somewhat random fashion inside the cluster. The assumption, which is valid for well-formed clusters after they have been around for billions of years, is that the galaxies are not just falling into the center of the cluster, but that their motions are “virialized”. This is the method used by Fritz Zwicky in the 1930s for the original discovery of dark matter. He found that in a certain well-known cluster, the Coma cluster, the average speed of galaxies relative to the cluster centroid was of order 1000 kilometers/sec, much higher than the expected 300 km/sec based on the visible light from the cluster galaxies. This implied 10 times as much dark matter as galactic matter. This early, rather crude measurement was on the right track, but fell short of the actual ratio of dark matter to galactic matter, since we now know that galaxies themselves have large dark matter halos. The X-ray emission from clusters was discovered much later, starting in the 1970s.
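Since the virial mass inferred from galaxy motions scales as the velocity dispersion squared, Zwicky’s factor of roughly 10 follows directly from his numbers:

```python
def virial_mass_ratio(sigma_observed_km_s, sigma_expected_km_s):
    """Virial mass scales as velocity dispersion squared (M ~ sigma^2 * R / G),
    so the implied mass ratio is the squared ratio of the two dispersions."""
    return (sigma_observed_km_s / sigma_expected_km_s) ** 2

# Zwicky's Coma numbers: ~1000 km/s observed vs ~300 km/s expected from visible light
print(round(virial_mass_ratio(1000, 300)))  # ~11, i.e. roughly 10x more matter than seen
```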

The two methods of measuring the amount of dark matter in galaxy clusters generally agree. Both the galaxies and the hot intracluster gas act as tracers of the overall mass distribution, which is dominated by dark matter. Galaxy clusters play a major role in increasing our understanding of dark matter and how it affects the formation and evolution of galaxies.

In fact, if dark matter were not 5 times as abundant by mass as ordinary matter, most galaxy clusters would never have formed, and galaxies such as our own Milky Way would be much smaller.


Wikipedia article “galaxy clusters”.

“X-ray Temperatures of Distant Clusters of Galaxies”, S. C. Perrenod,  J. P. Henry 1981, Astrophysical Journal, Letters to the Editor, vol. 247, p. L1-L4.

“The X-ray Luminosity – Velocity Dispersion Relation in the REFLEX Cluster Survey”, A. Ortiz-Gil, L. Guzzo, P. Schuecker, H. Boehringer, C.A. Collins 2004, Mon.Not.Roy.Astron.Soc. 348, 325; http://arxiv.org/abs/astro-ph/0311120v1

Super Colliders in Space: Dark Matter not Colliding

What’s bigger and more powerful than the Large Hadron Collider at CERN? Why colliding galaxy clusters of course.

A cluster of galaxies consists of hundreds or even thousands of galaxies bound together by their mutual gravitation. Both dark matter and ordinary matter, in and between galaxies, are responsible for the gravitational field of a cluster. And typically there is about 5 times as much dark matter as ordinary matter. The main component of ordinary matter is hot intracluster gas; only a small percentage of the mass is locked up in stars.

One stunning example of dark matter detection is the Bullet Cluster, the canonical example revealing the separation of dark matter from ordinary matter as a pair of clusters collides and merges. The dark matter just passes right through, apparently unaffected by the collision. The hot gas (ordinary matter) is seen through its X-ray emission, since the gas is heated by the collision to of order 100 million degrees. The Chandra X-ray Observatory (satellite) provided these measurements.


Bullet Cluster. The blue color shows the distribution of dark matter, which passed through the collision without slowing down. The purple color shows the hot X-ray emitting gas. Image courtesy of Chandra X-ray Observatory

The distribution of matter overall in the Bullet Cluster or other clusters is traced by gravitational lensing effects; general relativity tells us that background galaxies will have their images displaced, distorted, and magnified as their light passes through a cluster on its way to Earth. The magnitude of these effects can be used to “weigh” the dark matter. These measurements are made with the Hubble Space Telescope.

In the Bullet Cluster the dark matter is displaced from the ordinary matter. The interpretation is that the ordinary matter from the two clusters, principally in the form of hot gas, is slowed by frictional, collisional processes as the clusters interact and form a larger single cluster of galaxies. Another six or so examples of galaxy clusters showing the displacement between the dark matter and the ordinary matter in gas and stars have been found to date.

Now a team of astrophysicists based in the U.K. and Switzerland has examined 30 additional galaxy clusters with data from both Chandra and Hubble, with redshifts typically 0.2 to 0.6. In aggregate there are 72 collisions in the 30 systems, since some have more than two subclusters. The offsets between the gas and dark matter are quite substantial, and in aggregate indicate the existence of dark matter in these clusters with over 7 standard deviations of statistical significance (the probability of the null hypothesis of no dark matter is 1 in 30 trillion).

They then look at the possible drag force on the dark matter due to dark matter particles colliding with other dark matter particles. There are already much more severe constraints on ordinary matter – dark matter interactions from Earth-based laboratory measurements. But the dark matter mutual collision cross section could potentially be large enough to result in a drag. They measure the relative positions of hot gas, galaxies, and dark matter for all of the 72 subclusters.


From paper “The non-gravitational interactions of dark matter in colliding galaxy clusters” D. Harvey et al. 2015

The gas should and does lag the most, relative to the direction of the galaxies in a collision. If there is a dark matter drag, then dark matter should lag behind the positions of the stars. They find no lag of the dark matter average position, which allows them to place a new, tighter constraint on the mutual interaction cross-section for dark matter.

Their constraint is σ(DM)/m < 0.47 cm^2/g at 95% confidence level, where σ (sigma) is the cross-section and m is the mass of a single dark matter particle. This limit is over twice as tight as that previously obtained from the Bullet Cluster. And some dark matter models predict a cross section per unit mass of 0.6 cm^2/g, so these models are potentially ruled out by these new measurements.
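To get a feel for how large this limit is in particle-physics units, one can convert σ/m into a per-particle cross-section for an assumed, purely hypothetical dark matter particle mass. The conversion below is my own illustration, not from the paper:

```python
GRAMS_PER_GEV = 1.783e-24  # mass of a 1 GeV/c^2 particle, in grams
CM2_PER_BARN = 1e-24       # 1 barn = 1e-24 cm^2

def cross_section_barns(sigma_over_m_cm2_per_g, particle_mass_gev=1.0):
    """Convert a sigma/m limit (cm^2/g) into a per-particle cross-section in barns,
    for an assumed (hypothetical) dark matter particle mass."""
    sigma_cm2 = sigma_over_m_cm2_per_g * particle_mass_gev * GRAMS_PER_GEV
    return sigma_cm2 / CM2_PER_BARN

print(round(cross_section_barns(0.47), 2))  # ~0.84 barns -- enormous by particle-physics standards
```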

In summary, using Nature’s massive particle colliders, the authors have found further highly significant evidence for the existence of dark matter in clusters of galaxies, and they have placed useful constraints on the dark matter self-interaction cross-section. Dark matter continues to be highly elusive.


D. Harvey et al. 2015 “The non-gravitational interactions of dark matter in colliding galaxy clusters” http://arxiv.org/pdf/1503.07675v1.pdf

Caught in the Cosmic Web – Dark Matter Structure Revealed

NASA/ESA Hubblecast 58

This video reports on a very impressive research effort resulting in the first 3-D mapping of dark matter for a galaxy cluster. A massive galaxy cluster over 5 billion light-years from Earth is the first to have such a full 3-dimensional map of its dark matter distribution. The dark matter is the dominant component of the cluster’s mass. The cluster, known as MACS J0717, is still in the formation stage. The Hubble Space Telescope and a number of ground-based telescopes on Mauna Kea in Hawaii were used to determine the spatial distribution. The longest filament of dark matter discovered by the international team of astronomers stretches across 60 million light-years. Gravitational lensing of galaxy images (as Einstein predicted) and redshift measurements for a large number of galaxies were required in order to uncover the 3-D shape and characteristics of the filament.

Dark Energy Survey First Light!

Last month the Dark Energy Survey project achieved first light from its remote location in Chile’s Atacama Desert. The term first light is used by astronomers to refer to the first observation by a new instrument.

And what an instrument this is! It is in fact the world’s most powerful digital camera. This Dark Energy Camera, or DECam, is a 570 Megapixel optical survey camera with a very wide field of view. The field of view is over 2 degrees, which is rather unusual in optical astronomy. And the camera requires special CCDs that are sensitive in the red and infrared parts of the spectrum. This is because distant galaxies have their light shifted toward the red and the infrared by the cosmological expansion. If the galaxy redshift is one, the light travels for about 8 billion years, and the wavelength the DECam detects is double what it was when originally emitted.

Dark Energy Camera

Image: DECam, near center of image, is deployed at the focus of the 4-meter Victor M. Blanco optical telescope in Chile (Credit: Dark Energy Survey Collaboration)

The DECam has been deployed to further our understanding of dark energy through not just one experimental method, but in fact four different methods. That’s how you solve tough problems – by attacking them on multiple fronts.

It’s taken 8 years to get to this point, and there have been some delays, as is normal for large projects. But now this new instrument is mounted at the focal plane of the existing 4-meter telescope of the National Science Foundation’s Cerro Tololo Inter-American Observatory in Chile. It will begin its program of planned measurements of several hundred million galaxies starting in December, after several weeks of testing and calibration. Each image from the camera-telescope combination can capture up to 100,000 galaxies out to distances of up to 8 billion light-years. This is over halfway back to the origin of the universe almost 14 billion years ago.

In a previous blog entry I talked about the DES and the 4 methods in some detail. In brief they are based on observations of:

  1. Type Ia supernovae (the method used to first detect dark energy)
  2. Very large scale spatial correlations of galaxies separated by 500 million light-years (this experiment is known as Baryon Acoustic Oscillations since the galaxy separations reflect the imprint of sound waves in the very early universe, prior to galaxy formation)
  3. The number of clusters of galaxies as a function of redshift (age of the universe)
  4. Gravitational lensing, i.e. distortion of background images by gravitational effects of foreground clusters in accordance with general relativity

NGC 1365

Image: NGC 1365, a barred spiral galaxy located in the Fornax cluster located 60 million light years from Earth (Credit: Dark Energy Survey Collaboration)

What does the Dark Energy Survey team, which has over 120 members from over 20 countries, hope to learn about dark energy? We already have a good handle on its magnitude, presently around 73% of the universe’s total mass-energy density.

The big issue is whether it behaves as a cosmological constant or as something more complex. In other words, how does the dark energy vary over time, and is there possibly some spatial variation as well? And what is its equation of state, the relationship between its pressure and density?

With a cosmological constant explanation the relationship is pressure = –energy density, a negative pressure, which is necessary in any model of dark energy in order for it to drive the accelerated expansion seen for the universe. Current observations from other experiments, especially those measuring the cosmic microwave background, support an equation of state parameter within around 5% of the value -1 implied by that relationship. This is consistent with the interpretation as a pressure resulting from the vacuum. Dark energy also appears to have a constant, or nearly constant, density per unit volume of space. It is unlike ordinary matter and dark matter, which both drop in mass density (and thus energy density) as the volume of the universe grows. Thus dark energy becomes ever more dominant over dark matter and ordinary matter as the universe continues to expand.
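The contrast between the two scaling behaviors is easy to sketch. This toy calculation starts from today’s approximate 27%/73% matter/dark-energy split (the function names and scenario are mine, for illustration):

```python
def matter_density(rho_now, scale_factor):
    """Matter (ordinary + dark) dilutes with the expanding volume: rho ~ a^-3."""
    return rho_now / scale_factor**3

def dark_energy_density(rho_now, scale_factor):
    """A cosmological constant keeps a fixed energy density per unit volume."""
    return rho_now

# Let the universe double in linear size (scale factor a: 1 -> 2):
m = matter_density(0.27, 2.0)        # matter share drops by a factor of 8
de = dark_energy_density(0.73, 2.0)  # dark energy share is unchanged
print(round(de / (m + de), 2))       # ~0.96: dark energy's fraction grows as space expands
```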

We can’t wait to see the first publication of results from research into the nature of dark energy using the DECam.


http://www.noao.edu/news/2012/pr1204.php – Press release from National Optical Astronomical Observatory on DECam first light


http://www.ctio.noao.edu/noao/ – Cerro Tololo Inter-American Observatory page

http://lambda.gsfc.nasa.gov/product/map/dr4/pub_papers/sevenyear/basic_results/wmap_7yr_basic_results.pdf – WMAP 7 year results on cosmic microwave background


Dark Matter Bridge Discovered

A team of astronomers claims to have detected an enormous bridge or filament of dark matter, with a mass estimated to be of order 100 trillion solar masses, and connecting two clusters of galaxies. The two clusters, known as Abell 222 and Abell 223, are about 2.8 billion light-years away and separated from one another by 400 million light-years. Each cluster has around 150 galaxies; actually one of the pair is itself a double cluster.

Clusters of galaxies are gravitationally bound collections of hundreds to a thousand or more galaxies. Often a cluster will be found in the vicinity of other clusters to which it is also gravitationally bound. The universe as a whole is gravitationally unbound: the matter, including the dark matter, is insufficient to stop the continued expansion, which is in fact driven to accelerate by dark energy.

Dark matter bridge

Figure: Subaru telescope optical photo with mass density shown in blue and statistical significance contours superimposed. In the filament area found near the center of the image, the contours indicate four standard deviations of significance in the detection of dark matter. The cluster Abell 222 is in the south, and Abell 223 is the double cluster in the north of the image. The distance between the two clusters is about 14 arc-minutes, or about ½ the apparent size of the Moon.

Dark matter was originally called “missing matter”, and was first posited by Fritz Zwicky (http://en.wikipedia.org/wiki/Fritz_Zwicky) in the 1930s because of his studies of the kinematics of galaxies and galaxy clusters. He measured the velocities of galaxies moving around inside a cluster and found they were significantly greater than expected from the amount of ordinary matter seen in the galaxies themselves. Since the velocities of the galaxies are determined by the total gravitational field of the cluster, this implied there was more matter than seen in the galaxies, and the questions have been where, and what, is the “missing matter” inferred from the gravitational effects. X-ray emission has been detected from most clusters of galaxies, due to an additional component of matter outside of galaxies, namely hot gas between them. But this gas is still insufficient to explain the total mass of clusters as revealed by both the galaxy velocities and the temperature of the hot gas itself, since both reflect the gravitational field of the cluster.

Dark matter is ubiquitous, found on all scales and is generally less clumped than ordinary matter, so it is not surprising that significant dark matter would be found between two associated galaxy clusters. In fact the researchers in this study point out that “It is a firm prediction of the concordance Cold Dark Matter cosmological model that galaxy clusters live at the intersection of large-scale structure filaments.”

The technique used to map the dark matter is gravitational lensing, which is a result of general relativity. The gravitational lensing effect is well established; it has been seen in many clusters of galaxies to date. In gravitational lensing, light is deflected away from a straight-line path by matter in its vicinity.

In this case the gravitational field of the dark matter filament and the galaxy clusters deflects light passing nearby. The image of a background galaxy located behind the cluster will be distorted as its light passes through or near the foreground cluster. The amount of distortion depends on the mass of the cluster (or dark matter bridge) and on how close the line of sight passes to its center.
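To get a feel for the size of the effect, here is a minimal sketch (not the paper's analysis) of the general-relativistic deflection for a point mass: a ray passing at impact parameter b is bent by alpha = 4GM/(c²b). The mass and impact parameter below are generic cluster-scale values chosen for illustration only.

```python
# Toy illustration of gravitational lensing deflection by a point mass.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
PC = 3.086e16        # one parsec, m

def deflection_angle(mass_kg, impact_param_m):
    """Point-mass deflection angle in radians (weak-field limit)."""
    return 4.0 * G * mass_kg / (c**2 * impact_param_m)

# Illustrative numbers: a ~10^14 solar-mass cluster-scale lens,
# with the light ray passing ~1 Mpc from its center.
mass = 1e14 * M_SUN
b = 1.0e6 * PC

alpha_rad = deflection_angle(mass, b)
alpha_arcsec = alpha_rad * (180.0 / 3.141592653589793) * 3600.0
print(f"deflection ~ {alpha_arcsec:.1f} arcseconds")
```

The deflection comes out at a few arcseconds, which is why cluster lensing distorts background galaxy images measurably, while being far too small to see by eye in the raw images.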

There is also a well-detected bridge of ordinary matter, in the form of hot X-ray emitting gas, connecting the two clusters at the same location as the newly discovered dark matter bridge. The scientists used observations from the XMM-Newton satellite to map the X-ray emission from the two clusters Abell 222 and Abell 223 and the hot gas bridge connecting them. Because of the strong gravitational fields of galaxy clusters, gas falling into a cluster (outside of the individual galaxies within it) is shock-heated to temperatures of tens of millions of degrees, resulting in thermal X-ray emission from the clusters.
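A quick order-of-magnitude check of why cluster gas emits X-rays: gas settling into a potential well of mass M and radius R reaches a virial temperature of roughly kT ~ GMμm_p/(2R). The sketch below uses generic rich-cluster values (5×10¹⁴ solar masses within 1.5 Mpc), which are assumptions for illustration, not numbers from the paper.

```python
# Rough virial-temperature estimate for gas in a cluster's potential well.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
MPC = 3.086e22         # one megaparsec, m
M_P = 1.673e-27        # proton mass, kg
K_B = 1.381e-23        # Boltzmann constant, J/K
MU = 0.6               # mean molecular weight of fully ionized cluster plasma

def virial_temperature_K(mass_kg, radius_m):
    """Order-of-magnitude virial temperature in kelvin: kT ~ G*M*mu*m_p/(2R)."""
    return G * mass_kg * MU * M_P / (2.0 * radius_m * K_B)

T = virial_temperature_K(5e14 * M_SUN, 1.5 * MPC)   # generic rich cluster
print(f"T ~ {T:.1e} K")
```

The result lands in the tens of millions of kelvin, exactly the regime where thermal emission peaks in X-rays, which is why satellites like XMM-Newton are the right instruments for mapping this gas.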

The research team, led by Jörg Dietrich at the University of Michigan, then performed a gravitational lensing analysis, focusing on the location of the bridge as determined from the X-ray observations. The gravitational lensing work is based on optical observations obtained from the Subaru telescope (operated by the National Astronomical Observatory of Japan, but located on the Big Island of Hawaii) to map the total matter density profile around and between the two clusters. This method detects the sum of dark matter and ordinary matter.

They analyzed the detailed orientations and shapes of over forty thousand background galaxies visible behind the two clusters and the bridge, which allowed them to map out the contours of the dark matter distribution. They state a 98% confidence in the existence of a bridge or filament dominated by dark matter.
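Why does this take tens of thousands of galaxies? Each galaxy has a large random intrinsic ellipticity, while lensing adds only a small coherent distortion (shear); averaging over many galaxies beats down the random part as 1/√N. The toy simulation below illustrates the statistical idea only (it is not the authors' pipeline), with an assumed shear of 0.03 and intrinsic scatter of 0.25.

```python
# Toy sketch of weak-lensing shear recovery by averaging noisy ellipticities.
import random
random.seed(42)

TRUE_SHEAR = 0.03        # assumed small coherent distortion from the lens
SIGMA_INTRINSIC = 0.25   # typical scatter of intrinsic galaxy ellipticities
N_GALAXIES = 40_000      # comparable to the number of galaxies in the study

# Each measured ellipticity = lensing shear + random intrinsic shape noise.
samples = [TRUE_SHEAR + random.gauss(0.0, SIGMA_INTRINSIC)
           for _ in range(N_GALAXIES)]
shear_estimate = sum(samples) / len(samples)

# The statistical error on the mean shrinks as sigma / sqrt(N).
noise = SIGMA_INTRINSIC / N_GALAXIES ** 0.5
print(f"recovered shear ~ {shear_estimate:.4f} +/- {noise:.4f}")
```

With 40,000 galaxies the per-galaxy noise of 0.25 is averaged down to about 0.001, small enough to detect a percent-level shear signal with high significance.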

The amount of dark matter is shown to be much larger than that of ordinary matter, representing over 90% of the total in the filament region, so the gravitational lensing effects are primarily due to the dark matter. Less than 9% of the mass in the filament is in the form of hot gas (ordinary matter). The estimated total mass in the filament is about 1/3 of the mass of either of the galaxy clusters, each of which is also dominated by dark matter.

Observations of galaxy distributions show that galaxies are found in groups, clusters, and filaments connecting regions of galaxy concentration. Supercomputer simulations of the evolution of the universe indicate that the distribution of dark matter should have a filamentary structure as well. So although the result is in many ways not surprising, it represents the first direct detection of such a dark matter structure.



http://ns.umich.edu/new/releases/20623-dark-matter-scaffolding-of-universe-detected-for-the-first-time – press release from the University of Michigan

http://www.gizmag.com/dark-matter-filaments-found/23281/ – “Dark matter filaments detected for the first time”

J. Dietrich et al. 2012, “A filament of dark matter between two clusters of galaxies” – http://arxiv.org/abs/1207.0809