Distant Galaxy Rotation Curves Appear Newtonian

One of the main lines of evidence for dark matter came in the 1970s, when Vera Rubin (recently deceased) and others measured the rotation curves of spiral galaxies in their outer regions. That was not the first apparent dark matter discovery, however; Fritz Zwicky inferred unseen mass from the motions of galaxies in the Coma cluster during the 1930s.

Most investigations of spiral and star-forming galaxies have targeted relatively nearby systems at low redshift, because accurate rotation curves are difficult to measure at high redshift. For what is now a very large sample of hundreds of nearby galaxies there is a consistent pattern: galaxy rotation curves flatten out.

M64, image credit: NASA, ESA, and the Hubble Heritage Team (AURA/STScI)

If there were only ordinary matter, one would expect the velocities to drop off far from a galaxy’s center. This is virtually never seen at low redshifts; the rotation curves consistently flatten out. There are only two possible explanations: dark matter, or a modification to the law of gravity at very low accelerations (dark gravity).

Dark matter, unseen matter, would cause rotational velocities to be higher than otherwise expected. Dark or modified gravity, additional gravity beyond Newtonian dynamics (or general relativity), would do the same.

Now a team of astronomers (Genzel et al. 2017) has measured the rotation curves of six individual galaxies at moderately high redshifts, ranging from about 0.9 to 2.4.

Furthermore, as presented in a companion paper, they have stacked a sample of 97 galaxies with redshifts from 0.6 to 2.6 to derive an average high-redshift rotation curve (P. Lang et al. 2017). While the individual galaxies cannot yield high-quality rotation curves, the stack produces a mean normalized curve for the sample as a whole with good statistics.

In both cases the results show rotation curves that fall off with increasing distance from the galaxy center, and in a manner consistent with little or no dark matter contribution (Keplerian or Newtonian style behavior).

In the paper with rotation curves of six galaxies, they explain the falling curves as due to: “first, a large fraction of the massive high-redshift galaxy population was strongly baryon-dominated, with dark matter playing a smaller part than in the local Universe; and second, the large velocity dispersion in high-redshift disks introduces a substantial pressure term that leads to a decrease in rotation velocity with increasing radius.”

So in essence they are saying that the central regions of galaxies were relatively more baryon-dominated (ordinary matter) in the past, and that because they are measuring hydrogen alpha emission from gas in this study, they must also account for turbulent gas motions, which are generally larger at higher redshifts.

Stacy McGaugh, a Modified Newtonian Dynamics (MOND) proponent, criticizes their work, saying that their rotation curves simply do not extend far enough from the galaxy centers to be meaningful. But his criticism that the first paper was submitted to Nature (sometimes considered ‘lightweight’ for astronomy research results) carries little weight, since the second paper, with the sample of 97 galaxies, was sent to the Astrophysical Journal and is highly detailed in its observational analysis.

The father of MOND, Mordehai Milgrom, takes a more pragmatic view in his commentary. Milgrom calculates that the observed accelerations at the edges of these galaxies are several times higher than the value at which rotation curves should flatten. He also notes that half of the galaxies have low inclinations, which make the observations less certain, and that the velocity dispersion of the gas, which provides pressure support and allows for lower rotational velocities, is difficult to correct for.

As in MOND, in Erik Verlinde’s emergent gravity there is an extra acceleration that becomes apparent only when the ordinary Newtonian acceleration is very low. This spoofs the behavior of dark matter, but there is no dark matter. The extra ‘dark gravity’ is given by:

g_D = \sqrt{a_0 \cdot g_B / 6}

In this equation a_0 = c \cdot H, where H is the Hubble parameter, and g_B is the usual Newtonian acceleration from the ordinary matter (baryons). Fundamentally, though, Verlinde derives this as the interaction between dark energy, which is an elastic, unequilibrated medium, and baryonic matter.
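
To make the scale of this effect concrete, here is a minimal Python sketch of the formula above; the value chosen for the baryonic acceleration g_B is purely illustrative, and the Hubble parameter is taken as roughly 70 km/s/Mpc.

```python
import math

# Physical inputs in CGS units
c = 3.0e10            # speed of light, cm/s
H = 2.27e-18          # Hubble parameter, 1/s (~70 km/s/Mpc)

a0 = c * H            # characteristic acceleration a_0 = c*H, ~6.8e-8 cm/s^2

def dark_gravity(g_B):
    """Verlinde's extra acceleration g_D = sqrt(a_0 * g_B / 6)."""
    return math.sqrt(a0 * g_B / 6.0)

# Purely illustrative Newtonian (baryonic) acceleration, well below a_0
g_B = 1.0e-9          # cm/s^2
g_D = dark_gravity(g_B)

print(f"a_0       = {a0:.2e} cm/s^2")
print(f"g_B       = {g_B:.2e} cm/s^2")
print(f"g_D       = {g_D:.2e} cm/s^2 (dominates over g_B here)")
print(f"g_B + g_D = {g_B + g_D:.2e} cm/s^2")
```

For accelerations well above a_0 the extra term is negligible, which is why the effect only shows up in the outskirts of galaxies and beyond.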

One could consider that this dark gravity effect might be weaker at high redshifts. One possibility is that the density of dark energy evolves with time, although at present no such evolution is observed.

Verlinde assumes a dark energy dominated de Sitter model universe, for which the cosmological constant is much larger than the matter contribution and approaches unity, Λ = 1 in units of the critical density. Our universe does not yet fully meet that criterion, but has Λ of about 0.68, so it is a reasonable approximation.

At redshifts around z = 1 and 2 this approximation would be much less appropriate. We do not yet have a Verlindean cosmology, so it is not clear how to compute the expected dark gravity in such a case; it may be less than today, or greater. Verlinde’s extra acceleration goes as the square root of the Hubble parameter, which was greater in the past and would imply more dark gravity. But in reality the effect is due to dark energy, so it may instead go as the one-fourth power of an unvarying cosmological constant (there is a relationship H² ∝ Λ in the de Sitter model) and not change with time, or change only very slowly.

At very large redshifts matter would completely dominate over the dark energy and the dark gravity effect might be of no consequence, unlike today. As usual we await more observations, both at higher redshifts, and further out from the galaxy centers at moderate redshifts.

References:

R. Genzel et al. 2017, “Strongly baryon-dominated disk galaxies at the peak of galaxy formation ten billion years ago”, Nature 543, 397–401, http://www.nature.com/nature/journal/v543/n7645/full/nature21685.html

P. Lang et al. 2017, “Falling outer rotation curves of star-forming galaxies at 0.6 < z < 2.6 probed with KMOS^3D and SINS/ZC-SINF” https://arxiv.org/abs/1703.05491

Stacy McGaugh 2017, https://tritonstation.wordpress.com/2017/03/19/declining-rotation-curves-at-high-redshift/

Mordehai Milgrom 2017, “High redshift rotation curves and MOND” https://arxiv.org/abs/1703.06110v2

Erik Verlinde 2016, “Emergent Gravity and the Dark Universe” https://arxiv.org/abs/1611.02269v1


Emergent Gravity in the Solar System

In a prior post I outlined Erik Verlinde’s recent proposal for Emergent Gravity that may obviate the need for dark matter.

Emergent gravity is a statistical, thermodynamic phenomenon that emerges from the underlying quantum entanglement of micro states found in dark energy and in ordinary matter. Most of the entropy is in the dark energy, but the presence of ordinary baryonic matter can displace entropy in its neighborhood and the dark energy exerts a restoring force that is an additional contribution to gravity.

Emergent gravity yields both an area entropy term that reproduces general relativity (and Newtonian dynamics) and a volume entropy term that provides extra gravity. The interesting point is that this is coupled to the cosmological parameters, basically the dark energy term which now dominates our de Sitter-like universe and which acts like a cosmological constant Λ.

In a paper that appeared on arxiv.org last month, a trio of astronomers, Hees, Famaey, and Bertone, claims that emergent gravity fails by seven orders of magnitude in the solar system. They look at the advance of the perihelion for six planets out through Saturn and claim that Verlinde’s formula predicts perihelion advances seven orders of magnitude larger than should be seen.

No emergent gravity needed here. Image credit: NASA GSFC

But Verlinde’s formula does not apply in the solar system.

“..the authors claiming that they have ruled out the model by seven orders of magnitude using solar system data. But they seem not to have taken into account that the equation they are using does not apply on solar system scales. Their conclusion, therefore, is invalid.” – Sabine Hossenfelder, theoretical physicist (quantum gravity) Forbes blog 

Why is this the case? Verlinde makes 3 main assumptions: (1) a spherically symmetric, isolated system, (2) a system that is quasi-static, and (3) a de Sitter spacetime. Well, check for (1) and check for (2) in the case of the Solar System. However, the Solar System is manifestly not a dark energy-dominated de Sitter space.

It is overwhelmingly dominated by ordinary matter. In our Milky Way galaxy the average density of ordinary matter is some 45,000 times larger than the dark energy density (which corresponds to only about 4 protons per cubic meter). And in our Solar System the matter is concentrated in the Sun, but even averaged over a sphere extending to the orbit of Saturn the density is a whopping 3.7 \cdot 10^{17} times the dark energy density.

The whole derivation of the Verlinde formula comes from looking at the incremental entropy (contained in the dark energy) that is displaced by ordinary matter. With over 17 orders of magnitude more energy density, one can be assured that all of the dark energy entropy was long ago displaced within the Solar System, and one is well outside the domain of Verlinde’s formula, which only becomes relevant when the acceleration drops near to or below c \cdot H. The Verlinde acceleration parameter takes the value of 1.1 \cdot 10^{-8} centimeters/second/second for the observed value of the Hubble parameter. The Newtonian acceleration at Saturn is .006 centimeters/second/second, roughly half a million times larger.
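
As a rough numerical check of those figures, the following Python sketch (using an assumed mean orbital radius of 9.58 AU for Saturn) compares the Sun’s Newtonian pull at Saturn with the Verlinde acceleration scale quoted above.

```python
G     = 6.674e-8      # gravitational constant, cm^3 g^-1 s^-2
M_sun = 1.989e33      # solar mass, g
AU    = 1.496e13      # astronomical unit, cm

r_saturn   = 9.58 * AU                    # assumed mean orbital radius of Saturn
g_saturn   = G * M_sun / r_saturn**2      # Newtonian solar gravity at Saturn
a_verlinde = 1.1e-8                       # c*H/6 in cm/s^2, the scale quoted above

print(f"g at Saturn    = {g_saturn:.1e} cm/s^2")        # ~6e-3 cm/s^2
print(f"Verlinde scale = {a_verlinde:.1e} cm/s^2")
print(f"ratio          = {g_saturn / a_verlinde:.1e}")  # ~5e5
```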

The conditions where dark energy entropy is being displaced occur only when the gravitational acceleration has dropped to much smaller values; Verlinde’s formula is not simply a second-order term that can be applied in a domain where dark energy is of no consequence.

There is no entropy left to displace, and thus the Verlinde formula is irrelevant at the orbit of Saturn, or at the orbit of Pluto, for that matter. The authors have not disproven Verlinde’s proposal for emergent gravity.


Leo II Dwarf Orbits Milky Way: Dark Matter or Emerging Gravity

In a prior blog, “The Curiously Tangential Dwarf Galaxies”, I reported on results from Cautun and Frenk that indicate that a set of 10 dwarf satellite galaxies near the Milky Way with measured proper motions have much more tangential velocity than expected by random. Formally, there is a 5 standard deviation negative velocity anisotropy with over 80% of the kinetic energy in tangential motion.

While in no way definitive, this result appears inconsistent with the canonical cold dark matter assumptions. So one speculation is that the tangential motions reflect the theory of emergent gravity, in which dark matter is not required but the gravitational force changes (strengthens) at very low accelerations, of order c \cdot H, where H is the Hubble parameter; the value at which the force begins to strengthen works out to be about 2 centimeters per second per year.

One of the 10 dwarf galaxies in the sample is Leo II. The study of its proper motion has been reported by Piatek, Pryor, and Olszewski. They find that the galactocentric radial and tangential velocity components are 22 and 127 kilometers per second, respectively. While there is a rather large uncertainty in the tangential component, for their measured values some 97% of the kinetic energy is in the tangential motion.

Artist’s rendering of the Local Group of galaxies. This representation is centered on the Milky Way; you can see a large number of dwarf galaxies near the Milky Way and many near the Andromeda Galaxy as well. Leo II is in the swarm around our Milky Way. Image credit: Antonio Ciccolella. This file is licensed under the Creative Commons Attribution-Share Alike 4.0 International license.

So let’s look at the implications for this dwarf galaxy, assuming that it is in a low-eccentricity, nearly circular orbit about the Milky Way, which seems possible. We can compare calculations for Newtonian gravity with the implications from Verlinde’s emergent gravity framework.

Under the assumption of a near circular orbit, either there is a lot of dark matter in the Milky Way explaining the high tangential orbital velocity of Leo II, or there is excess gravity. So what do the two alternatives look like?

Let’s look at the dark matter case first. The ordinary matter mass of the Milky Way is measured to be 60 billion solar masses, mostly in stars, but considering gas as well. The distance to the Leo II dwarf galaxy is 236 kiloparsecs (770,000 light-years), well beyond the Milky Way’s outer radius.

So to first order, for a roughly spherical Milky Way, including a dark matter halo, we can evaluate what the total mass including dark matter would be required to hold Leo II in a circular orbit. This is determined by equating the centripetal acceleration v²/R to the gravitational acceleration inward GM/R². So the gravitational mass under Newtonian physics required for velocity v at distance R for a circular orbit is M = R v² / G. Using the tangential velocity and the distance measures above yields a required mass of 870 billion solar masses.
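
A minimal Python sketch of that circular-orbit estimate, plugging in the distance and tangential velocity quoted above:

```python
G     = 6.674e-11     # m^3 kg^-1 s^-2
M_sun = 1.989e30      # kg
kpc   = 3.086e19      # m

R = 236 * kpc         # galactocentric distance of Leo II
v = 127e3             # tangential velocity, m/s

M = R * v**2 / G      # mass needed to hold Leo II in a circular orbit
print(f"Required mass = {M / M_sun:.2e} solar masses")   # roughly 9e11, i.e. ~870 billion
```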

This is 14 times larger than the Milky Way’s known ordinary matter mass from stars and gas. Now there are some other dwarf galaxies, such as the Magellanic Clouds, within the sphere of influence, but they are very much smaller, so this estimate of the total mass required is reasonable to first order. The assumption of circularity is a larger uncertainty. But what this says is that something like 13 times as much dark matter as ordinary matter would be required.

Now let’s look at the emergent gravity situation. In this case there is no dark matter, but there is extra acceleration over and above the acceleration due to Newtonian gravity.  To be clear, emergent gravity predicts both general relativity and an extra acceleration term. When the acceleration is modest general relativity reduces to Newtonian dynamics. And when it is very low the total acceleration in the emergent gravity model includes both a Newtonian term and an extra term related to the volume entropy contribution.

In other words, g_T = g_N + g_E is the total acceleration, with g_N = GM/R² the Newtonian term and g_E the extra term in the emergent gravity formulation. The g_N term is calculated using the ordinary mass of 60 billion solar masses, and one gets a tiny acceleration of g_N = 1.5 \cdot 10^{-11} centimeters / second / second (cm/s/s).

The extra, or emergent gravity, acceleration is given by the formula g_E = \sqrt{g_N \cdot c \cdot H / 6}, where H is the Hubble parameter (here we use 70 kilometers/second/Megaparsec). The value of c \cdot H / 6 turns out to be 1.1 \cdot 10^{-8} cm/s/s. This is just a third of a centimeter per second per year.

The extra emergent gravity term from Verlinde’s paper is the square root of the product of 1.1 \cdot 10^{-8} and the Newtonian term of 1.5 \cdot 10^{-11}. Thus the extra gravity is 4.1 \cdot 10^{-10} cm/s/s, which is 27 times larger than the Newtonian acceleration. The total gravity is about 28 times the Newtonian value, or 4.3 \cdot 10^{-10} cm/s/s. A 28 times larger gravitational acceleration leads to tangential orbital velocities over 5 times greater than expected in the Newtonian case.

Setting v²/R = 4.3 \cdot 10^{-10} cm/s/s and using the distance to Leo II results in an orbital velocity of 177 kilometers/second. With the Newtonian gravity and ordinary matter mass of the Milky Way, one would expect only 33 km/s, a velocity over 5 times lower.
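
Here is a short Python sketch of the emergent gravity calculation just described; it reproduces the quoted accelerations and velocities to within rounding.

```python
import math

# CGS units throughout
G     = 6.674e-8      # cm^3 g^-1 s^-2
M_sun = 1.989e33      # g
kpc   = 3.086e21      # cm

M_baryon = 60e9 * M_sun    # ordinary-matter mass of the Milky Way
R        = 236 * kpc       # distance to Leo II
a0_over6 = 1.1e-8          # c*H/6, cm/s^2

g_N = G * M_baryon / R**2               # Newtonian term, ~1.5e-11 cm/s^2
g_E = math.sqrt(a0_over6 * g_N)         # emergent (dark gravity) term, ~4.1e-10
g_T = g_N + g_E                         # total acceleration, ~4.2e-10

v_emergent  = math.sqrt(g_T * R) / 1e5  # circular velocity in km/s, ~175-177
v_newtonian = math.sqrt(g_N * R) / 1e5  # ~33 km/s

print(f"g_N = {g_N:.1e}, g_E = {g_E:.1e}, g_T = {g_T:.1e} cm/s^2")
print(f"v (emergent gravity) = {v_emergent:.0f} km/s")
print(f"v (Newtonian only)   = {v_newtonian:.0f} km/s")
```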

Now the observed tangential velocity is 127 km/s, so the calculated number with emergent gravity is a bit high, but there is no guarantee of a circular orbit. Also, Verlinde’s model assumes quasi-static conditions, and this assumption may break down for a dynamically young system. The time to traverse the distance to Leo II using its radial velocity is of order 10 billion years, so the system may not have settled down sufficiently. There could also be tidal effects from neighbors, or possibly from Andromeda.

This is not a clear argument demonstrating that the Leo II dwarf galaxy’s observed tangential velocity is explained by emergent gravity. But it is a plausible alternative explanation, and made here to show how the calculations work out in this sample case.

So the main alternatives are a Milky Way dominated by dark matter and with a mass close to a trillion solar masses, or a Milky Way of ordinary matter only amounting to 60 billion solar masses. But in that latter case, the Milky Way exerts an extra gravitational force due to emergent gravity that only becomes apparent at very small accelerations less than about 10^{-8} cm/s/s.

Future work with the Hubble and future telescopes is expected to determine many more proper motions in the Local Group so that a fuller dynamical picture of the system can be developed. This will help to discriminate between the emergent gravity and dark matter alternatives.


Emergent Gravity: Verlinde’s Proposal

In a previous blog entry I give some background around Erik Verlinde’s proposal for an emergent, thermodynamic basis of gravity. Gravity remains mysterious 100 years after Einstein’s introduction of general relativity – because it is so weak relative to the other main forces, and because there is no quantum mechanical description within general relativity, which is a classical theory.

One reason it may be so weak is that it is not fundamental at all, but rather a statistical, emergent phenomenon. There has been increasing research into the idea of emergent spacetime and emergent gravity, and the most interesting proposal was recently introduced by Erik Verlinde at the University of Amsterdam in the paper “Emergent Gravity and the Dark Universe”.

A lot of work has been done assuming anti-de Sitter (AdS) spaces with negative cosmological constant Λ – just because it is easier to work under that assumption. This year, Verlinde extended this work from the unrealistic AdS model of the universe to a more realistic de Sitter (dS) model. Our runaway universe is approaching a dark energy dominated dS solution with a positive cosmological constant Λ.

The background assumption is that quantum entanglement dictates the structure of spacetime, and its entropy and information content. Quantum states of entangled particles are coherent: observing a property of one, say the spin orientation, tells you about the other particle’s attributes. This has been observed in long-distance experiments, with separations exceeding 100 kilometers.

If space is defined by the connectivity of quantum entangled particles, then it becomes almost natural to consider gravity as an emergent statistical attribute of the spacetime. After all, we learned from general relativity that “matter tells space how to curve, curved space tells matter how to move” – John Wheeler.

What if entanglement tells space how to curve, and curved space tells matter how to move? What if gravity is due to the entropy of the entanglement? Actually, in Verlinde’s proposal, the entanglement entropy from particles is minor; it is the entanglement of the vacuum state, of dark energy, that dominates, and by a very large factor.

One analogy is thermodynamics, which lets us describe the bulk properties of the atmosphere even though it is nothing but a collection of a very large number of molecules and their micro-states. Verlinde posits that the information and entropy content of space are due to the excitations of the vacuum state, which is manifest as dark energy.

The connection between gravity and thermodynamics has been studied for more than four decades, through research on black holes, and from string theory. Jacob Bekenstein and Stephen Hawking determined that a black hole possesses entropy proportional to its area divided by the gravitational constant G. String theory can derive the same formula for quantum entanglement in a vacuum. This is known as the AdS/CFT (conformal field theory) correspondence.

So in the AdS model, gravity is emergent and its strength, the acceleration at a surface, is determined by the mass density on that surface surrounding matter with mass M. This is just the inverse square law of Newton. In the more realistic dS model, the entropy in the volume, or bulk, must also be considered. (This is the Gibbs entropy relevant to excited states, not the Boltzmann entropy of a ground state configuration).

Newtonian dynamics and general relativity can be derived from the surface entropy alone, but do not reflect the volume contribution. The volume contribution adds an additional term to the equations, strengthening gravity over what is expected, and as a result, the existence of dark matter is ‘spoofed’. But there is no dark matter in this view, just stronger gravity than expected.

This is what the proponents of MOND have been saying all along. Mordehai Milgrom observed that galactic rotation curves go flat at a characteristic low acceleration scale of order 2 centimeters per second per year. MOND is phenomenological: it captures a trend in galaxy rotation curves, but it does not have a theoretical foundation.

Verlinde’s proposal is not MOND, but it provides a theoretical basis for behavior along the lines of what MOND states.

Now the volume in question turns out to be of order the Hubble volume, whose radius is the Hubble distance c/H, where H is the Hubble parameter denoting the rate at which galaxies expand away from one another. Reminder: Hubble’s law is v = H \cdot d, where v is the recession velocity and d the distance between two galaxies. The lifetime of the universe is approximately 1/H.

Image: the galaxy cluster Abell 1835

The value of c / H is over 4 billion parsecs (one parsec is 3.26 light-years), so it is in galaxies, clusters of galaxies, and at the largest scales in the universe that departures from general relativity (GR) would be expected.

Dark energy in the universe takes the form of a cosmological constant Λ, whose value is measured to be 1.2 \cdot 10^{-56} cm^{-2}. The Hubble parameter is 2.2 \cdot 10^{-18} sec^{-1}. A characteristic acceleration is thus H²/√Λ, or 4 \cdot 10^{-8} cm per sec per sec (cm = centimeters, sec = second).

One can also define a cosmological acceleration scale simply by c \cdot H , the value for this is about 6 \cdot 10^{-8} cm per sec per sec (around 2 cm per sec per year), and is about 15 billion times weaker than Earth’s gravity at its surface! Note that the two estimates are quite similar.

This is no coincidence, since we live in an approximately dS universe, with a measured Λ ~ 0.7 when cast in terms of the critical density for the universe, assuming the canonical ΛCDM cosmology. That is, if there is actually dark matter responsible for about 1/4 of the universe’s mass-energy density; otherwise Λ could be close to 0.95 times the critical density. In a fully dS universe, \Lambda \cdot c^2 = 3 \cdot H^2, so the two estimates should agree to within a factor of sqrt(3), which is approximately the difference between them.
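
A quick numerical comparison of the two acceleration estimates, using the Λ and H values quoted above:

```python
import math

c   = 3.0e10          # cm/s
H   = 2.2e-18         # 1/s
Lam = 1.2e-56         # cosmological constant, cm^-2

a_cH = c * H                       # ~6.6e-8 cm/s^2
a_HL = H**2 / math.sqrt(Lam)       # ~4.4e-8 cm/s^2

print(f"c*H              = {a_cH:.1e} cm/s^2")
print(f"H^2/sqrt(Lambda) = {a_HL:.1e} cm/s^2")
print(f"ratio = {a_cH / a_HL:.2f}  (sqrt(3) = {math.sqrt(3):.2f})")
```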

So from a string theoretic point of view, excitations of the dark energy field are fundamental. Matter particles are bound states of these excitations, particles move freely and have much lower entropy. Matter creation removes both energy and entropy from the dark energy medium. General relativity describes the response of area law entanglement of the vacuum to matter (but does not take into account volume entanglement).

Verlinde proposes that dark energy (Λ) and the accelerated expansion of the universe are due to the slow rate at which the emergent spacetime thermalizes. The time scale for the dynamics is 1/H and a distance scale of c/H is natural; we are measuring the time scale for thermalization when we measure H. High degeneracy and slow equilibration means the universe is not in a ground state, thus there should be a volume contribution to entropy.

When the surface mass density falls below c \cdot H / (8 \pi \cdot G) things change and Verlinde states the spacetime medium becomes elastic. The effective additional ‘dark’ gravity is proportional to the square root of the ordinary matter (baryon) density and also to the square root of the characteristic acceleration c \cdot H.

This dark gravity additional acceleration satisfies the equation g_D = \sqrt{a_0 \cdot g_B / 6}, where g_B is the usual Newtonian acceleration due to baryons and a_0 = c \cdot H is the dark gravity characteristic acceleration. The total gravity is g = g_B + g_D. For large accelerations this reduces to the usual g_B, and for very low accelerations it reduces to \sqrt{a_0 \cdot g_B / 6}.

The value a_0/6 at 1 \cdot 10^{-8} cm per sec per sec derived from first principles by Verlinde is quite close to the MOND value of Milgrom, determined from galactic rotation curve observations, of 1.2 \cdot 10^{-8} cm per sec per sec.

So suppose we are in a region where g_B is only 1 \cdot 10^{-8} cm per sec per sec. Then g_D takes about the same value and the total gravity is roughly double what is expected. Since orbital velocities go as the square root of the acceleration (at a fixed radius), the orbital velocity is observed to be sqrt(2) higher than expected.

In terms of gravitational potential, the usual Newtonian potential goes as 1/r, resulting in a 1/r^2 force law, whereas for very low accelerations the potential now goes as log(r) and the resultant force law is 1/r. We emphasize that while the appearance of dark matter is spoofed, there is no dark matter in this scenario, the reality is additional dark gravity due to the volume contribution to the entropy (that is displaced by ordinary baryonic matter).

Flat to rising rotation curve for the galaxy M33

Dark matter was first proposed by Swiss astronomer Fritz Zwicky when he observed the Coma Cluster and the high velocity dispersions of the constituent galaxies. He suggested the term dark matter (“dunkle Materie”). Horace Babcock in 1939 measured the rotation curve for the Andromeda galaxy and it turned out to be flat, also suggestive of dark matter (or dark gravity). Decades later, in the 1970s and 1980s, Vera Rubin (who just recently passed away) and others mapped many rotation curves for galaxies and saw the same behavior. She herself preferred the idea of a deviation from general relativity over an explanation based on exotic dark matter particles. One needs about 5 times more matter, or about 5 times more gravity, to explain these curves.

Verlinde is also able to derive the Tully-Fisher relation by modeling the entropy displacement of a dS space. The Tully-Fisher relation is the strong observed correlation between galaxy luminosity and rotation velocity (or emission line width) for spiral galaxies, L \propto v^4 . With Newtonian gravity one would expect M \propto v^2 . And since luminosity is essentially proportional to the ordinary matter in a galaxy, there is a clear deviation by a ratio of v².
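
For readers who want to see where the v⁴ scaling comes from, here is a sketch of the standard low-acceleration argument using the dark gravity formula quoted earlier in this post; it is not Verlinde’s full entropy-displacement derivation, just the asymptotic limit in which g_D dominates.

```latex
% Low-acceleration limit: total gravity g \approx g_D = \sqrt{a_0 g_B / 6}
\begin{align*}
  \frac{v^2}{r} \;\approx\; g_D
      &= \sqrt{\frac{a_0\, g_B}{6}}
       = \sqrt{\frac{a_0\, G M}{6\, r^2}}
       = \frac{1}{r}\sqrt{\frac{a_0\, G M}{6}} \\[4pt]
  \Rightarrow\quad v^2 &= \sqrt{\frac{a_0\, G M}{6}}
  \quad\Rightarrow\quad v^4 = \frac{a_0\, G M}{6}
  \quad\Rightarrow\quad L \propto M \propto v^4 .
\end{align*}
```

Note that the radius cancels, so the rotation curve is flat in this limit and the fourth power of the velocity tracks the baryonic mass, and hence the luminosity.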

Apparent distribution of spoofed dark matter, for a given ordinary (baryonic) matter distribution

When one moves to the scale of clusters of galaxies, MOND is only partially successful, explaining a portion of the apparent mass discrepancy but coming up short by about a factor of 2. Verlinde’s emergent gravity does better. By modeling a general mass distribution he can gain a factor of 2 to 3 relative to MOND, and basically it appears that he can explain the velocity distribution of galaxies in rich clusters without the need to resort to any dark matter whatsoever.

And, impressively, he is able to calculate what the apparent dark matter ratio should be in the universe as a whole. The value is \Omega_D^2 = (4/3) \Omega_B where \Omega_D is the apparent mass-energy fraction in dark matter and \Omega_B is the actual baryon mass density fraction. Both are expressed normalized to the critical density determined from the square of the Hubble parameter, 8 \pi G \rho_c = 3 H^2 .

Plugging in the observed \Omega_B \approx 0.05 one obtains \Omega_D \approx 0.26 , very close to the observed value from the cosmic microwave background observations. The Planck satellite results have the proportions for dark energy, dark matter, ordinary matter as .68, .27, and .05 respectively, assuming the canonical ΛCDM cosmology.
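
As a quick check of that arithmetic:

```python
import math

Omega_B = 0.05
Omega_D = math.sqrt((4.0 / 3.0) * Omega_B)   # Verlinde: Omega_D^2 = (4/3) Omega_B
print(f"Omega_D = {Omega_D:.3f}")             # ~0.26, vs ~0.27 from Planck
```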

The main approximations Verlinde makes are a fully dS universe and an isolated, static (bound) system with a spherical geometry. He also does not address the issue of galaxy formation from the primordial density perturbations. At first guess, the fact that he can get the right universal \Omega_D suggests this may not be a great problem, but it requires study in detail.

Breaking News!

Margot Brouwer and co-researchers have just published a test of Verlinde’s emergent gravity with gravitational lensing. Using a sample of over 33,000 galaxies they find that general relativity and emergent gravity can provide an equally statistically good description of the observed weak gravitational lensing. However, emergent gravity does it with essentially no free parameters and thus is a more economical model.

“The observed phenomena that are currently attributed to dark matter are the consequence of the emergent nature of gravity and are caused by an elastic response due to the volume law contribution to the entanglement entropy in our universe.” – Erik Verlinde

References

Erik Verlinde 2011 “On the Origin of Gravity and the Laws of Newton” arXiv:1001.0785

Stephen Perrenod, 2013, 2nd edition, “Dark Matter, Dark Energy, Dark Gravity” Amazon, provides the traditional view with ΛCDM  (read Dark Matter chapter with skepticism!)

Erik Verlinde 2016 “Emergent Gravity and the Dark Universe” arXiv:1611.02269v1

Margot Brouwer et al. 2016 “First test of Verlinde’s theory of Emergent Gravity using Weak Gravitational Lensing Measurements” arXiv:1612.03034


The Curiously Tangential Dwarf Galaxies

There are some 50 or so satellite galaxies around the Milky Way, the most famous of which are the Magellanic Clouds. Somewhat incredibly, half of these have been discovered within the last 2 years, since they are small, faint, and have low surface brightness. The image below shows only the well known ‘classical’ satellites. The satellites are categorized primarily as dwarf spheroidals, and most are low in gas content.

Image credit: Wikipedia, Richard Powell, Creative Commons Attribution-Share Alike 2.5 Generic

“Satellite galaxies that orbit from 1,000 ly (310 pc) of the edge of the disc of the Milky Way Galaxy to the edge of the dark matter halo of the Milky Way at 980×10³ ly (300 kpc) from the center of the galaxy, are generally depleted in hydrogen gas compared to those that orbit more distantly. The reason is the dense hot gas halo of the Milky Way, which strips cold gas from the satellites. Satellites beyond that region still retain copious quantities of gas.” – Wikipedia article

In a recent paper, “The tangential velocity excess of the Milky Way satellites”, Marius Cautun and Carlos Frenk find that a sample of satellites (drawn from those known for more than a few years) deviates from the predictions of the canonical Λ – Cold Dark Matter (ΛCDM) cosmology. (Λ refers to the cosmological constant, or dark energy).

“We estimate the systemic orbital kinematics of the Milky Way classical satellites and compare them with predictions from the Λ cold dark matter (ΛCDM) model derived from a semi-analytical galaxy formation model applied to high resolution cosmological N-body simulations. We find that the Galactic satellite system is atypical of ΛCDM systems. The subset of 10 Galactic satellites with proper motion measurements has a velocity anisotropy, β = −2.2 ± 0.4, that lies in the 2.9% tail of the ΛCDM distribution. Individually, the Milky Way satellites have radial velocities that are lower than expected for their proper motions, with 9 out of the 10 having at most 20% of their orbital kinetic energy invested in radial motion. Such extreme values are expected in only 1.5% of ΛCDM satellites systems. This tangential motion excess is unrelated to the existence of a Galactic ‘disc of satellites’. We present theoretical predictions for larger satellite samples that may become available as more proper motion measurements are obtained.”

Radial velocities are easy; we get those from redshifts. Tangential velocities are much tougher, but can be obtained for relatively nearby objects by measuring their proper motions, that is, how much their apparent positions change on the sky after many years have passed. It’s all the more difficult when your object is not a point source, but a fuzzy galaxy!

For a ‘random’ distribution of velocities in accordance with ΛCDM cosmology, one would expect the two components of tangential velocity to be each roughly equal on average to the radial component, and thus 2/3 of the kinetic energy would be tangential and 1/3 would be radial. But rather than 33% of the kinetic energy being in radial motion, they find that the Galactic satellites have only about 1/2 that amount in radial, and over 80% of their kinetic energy in tangential motion.

Formally, they find a negative velocity anisotropy, β, which, as it is defined in practice, should be around zero for a ΛCDM distribution. They find that β differs from zero by 5 standard deviations.
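
To connect β to the kinetic energy fractions mentioned above, here is a small illustrative calculation. It assumes the standard definition β = 1 − σ_t²/(2σ_r²), with σ_t² the total tangential velocity dispersion; that definition is conventional but is not spelled out in the abstract quoted above.

```python
# Fraction of kinetic energy in radial motion for a given velocity anisotropy beta,
# assuming the standard definition beta = 1 - sigma_t^2 / (2 sigma_r^2):
#   sigma_t^2 = 2 (1 - beta) sigma_r^2
#   f_radial  = sigma_r^2 / (sigma_r^2 + sigma_t^2) = 1 / (3 - 2*beta)
def radial_fraction(beta):
    return 1.0 / (3.0 - 2.0 * beta)

print(f"beta =  0.0 : radial fraction = {radial_fraction(0.0):.2f}")   # 1/3, the isotropic case
print(f"beta = -2.2 : radial fraction = {radial_fraction(-2.2):.2f}")  # ~0.14, i.e. >80% tangential
```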

One possible explanation is that the dwarf galaxies are mainly at their perigee or apogee points of their orbits. But why should this be the case? Another explanation: “alternatively indicate that the Galactic satellites have orbits that are, on average, closer to circular than is typical in ΛCDM. This would mean that MW halo mass estimates based on satellite orbits (e.g. Barber et al. 2014) are biased low.” Perhaps the Milky Way halo mass estimate is too low. Or, they also posit, without elaborating, do the excess tangential motions “indicate new physics in the dark sector”?

So one speculation is that the tangential motions are reflective of the emergent gravity class of theories, for which dark matter is not required, but for which the gravitational force changes (strengthens) at low accelerations, of order c \cdot H, where H is the Hubble parameter; the value works out to be around 2 centimeters per second per year. And it does this in a way that ‘spoofs’ the existence and gravitational effect of dark matter. This is also what is argued for in Modified Newtonian Dynamics, which is an empirical observation about galaxy rotation curves.

In the next article of this series we will look at Erik Verlinde’s emergent gravity proposal, which he has just enhanced, and will attempt to explain it as best we can. If you want to prepare yourself for this challenging adventure, first read his 2011 paper, “On the Origin of Gravity and the Laws of Newton”.


Modified Newtonian Dynamics – Is there something to it?

You are constantly accelerating. The Earth’s gravity is pulling you downward at g = 9.8 meters per second per second. It wants to take your velocity up to about 10 meters per second after only the first second of free fall. Normally you don’t fall, because the floor is solid due to electromagnetic forces and also it is electromagnetic forces that give your body structural integrity and power your muscles, resisting the pull of gravity.

You are also accelerating due to the Earth’s spin and its revolution about the Sun.

International Space Station, image credit: NASA

Our understanding of gravity comes primarily from these large accelerations, such as the Earth’s pull on ourselves and on satellites, the revolution of the Moon about the Earth, and the planetary orbits about the Sun. We also are able to measure the solar system’s velocity of revolution about the galactic center, but with much lower resolution, since the timescale is of order 1/4 billion years for a single revolution with an orbital radius of about 25,000 light-years!

It becomes more difficult to determine if Newtonian dynamics and general relativity still hold for very low accelerations, or at very large distance scales such as the Sun’s orbit about the galactic center and beyond.

Modified Newtonian Dynamics (MOND) was first proposed by Mordehai Milgrom in the early 1980s as an alternative explanation for flat galaxy rotation curves, which are normally attributed to dark matter. At that time the best evidence for dark matter came from spiral galaxy rotation curves, although the need for dark matter (or some deviation from Newton’s laws) was originally seen by Fritz Zwicky in the 1930s while studying clusters of galaxies.

NGC 3521. Image Credit: ESA/Hubble & NASA and S. Smartt (Queen’s University Belfast); Acknowledgement: Robert Gendler

Galaxy Rotation Curve for M33. Public Domain, By Stefania.deluca – Own work, https://commons.wikimedia.org/w/index.php?curid=34962949

If general relativity is always correct, and Newton’s laws of gravity are correct for non-relativistic, weak gravity conditions, then one expects the orbital velocities of stars in the outer reaches of galaxies to drop in concert with the fall in light from stars and/or radio emission from interstellar gas, reflecting decreasing baryonic matter density. (Baryonic matter is ordinary matter, dominated by protons and neutrons). As seen in the image above for M33, the orbital velocity does not drop, it continues to rise well past the visible edge of the galaxy.

To first order, assuming a roughly spherical distribution of matter, the square of the velocity at a given distance from the center is proportional to the mass interior to that distance divided by the distance (signifying the gravitational potential), thus

   v² ~ G M / r

where G is the gravitational constant, and M is the galactic mass within a spherical volume of radius r. This potential corresponds to the familiar 1/r² dependence of the force of gravity according to Newton’s laws. In other words, at the outer edge of a galaxy the velocity of stars should fall off as one over the square root of the increasing distance, for Newtonian dynamics.
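
To illustrate the expected Newtonian decline, the following Python sketch evaluates the circular velocity at several radii for a hypothetical galaxy whose baryonic mass (assumed here to be 6 × 10^10 solar masses) lies almost entirely inside the innermost radius sampled; as discussed next, real measured curves do not behave this way.

```python
import math

G     = 6.674e-11     # m^3 kg^-1 s^-2
M_sun = 1.989e30      # kg
kpc   = 3.086e19      # m

M_gal = 6e10 * M_sun  # hypothetical enclosed baryonic mass

def v_circ(r_kpc):
    """Newtonian circular velocity (km/s), treating all mass as enclosed within r."""
    r = r_kpc * kpc
    return math.sqrt(G * M_gal / r) / 1e3

for r in (10, 20, 40, 80):
    print(f"r = {r:3d} kpc : v = {v_circ(r):5.0f} km/s")   # falls off as 1/sqrt(r)
```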

Instead, for the vast majority of galaxies studied, it doesn’t – it flattens out, or falls off very slowly with increasing distance, or even continues to rise, as for M33 above. The behavior is roughly as if gravity followed an inverse distance law for the force (1/r) in the outer regions, rather than an inverse square law with distance (1/r²).

So either there is more matter at large distances from galactic centers than expected from the light distribution, or the gravitational law is modified somehow such that gravity is stronger than expected. If there is more matter, it gives off little or no light, and is called unseen, or dark, matter.

It must be emphasized that MOND is completely empirical and phenomenological. It is curve fitted to the existing rotational curves, rather successfully, but not based on a theoretical construct for gravity. It has a free parameter for weak acceleration, and for very small accelerations, gravity is stronger than expected. It turns out that this free parameter, a_0 , is of the same order as the ‘Hubble acceleration’ c \cdot H. (The Hubble distance is c / H and is 14 billion light-years; H has units of inverse time and the age of the universe is 1/H to within a few percent).

The Hubble acceleration is approximately .7 nanometers / sec / sec or 2 centimeters / sec / year  (a nanometer is a billionth of a meter, sec = second).

Milgrom’s fit to rotation curves found a best fit at .12 nanometers/sec/sec, or about 1/6 of a_0 . This is very small compared to the Earth’s surface gravity, smaller by a factor of about 80 billion. So you can imagine how such a variation could have escaped detection for a long time, and why it requires measurements at the extragalactic scale.
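
A short numerical check of these characteristic accelerations, again taking H as roughly 70 km/s/Mpc:

```python
c       = 3.0e8        # m/s
H       = 2.27e-18     # 1/s (~70 km/s/Mpc)
year    = 3.156e7      # s
g_earth = 9.8          # m/s^2

a_hubble  = c * H      # ~0.7 nm/s^2
a_milgrom = 1.2e-10    # m/s^2, Milgrom's best-fit value

print(f"c*H        = {a_hubble*1e9:.2f} nm/s^2 = {a_hubble*year*100:.1f} cm/s per year")
print(f"Milgrom a0 = {a_milgrom*1e9:.2f} nm/s^2 (about 1/6 of c*H)")
print(f"g_earth / a0 = {g_earth / a_milgrom:.1e}")   # ~8e10, i.e. ~80 billion
```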

The TeVeS – tensor, vector, scalar theory is a theoretical construct that modifies gravity from general relativity. General relativity is a tensor theory that reduces to Newtonian dynamics for weak gravity. TeVeS has more free parameters than general relativity, but can be constructed in a way that will reproduce galaxy rotation curves and MOND-like behavior.

But MOND, and by implication, TeVeS, have a problem. They work well, surprisingly well, at the galactic scale, but come up short for galaxy clusters and for the very largest extragalactic scales as reflected in the spatial density perturbations of the cosmic microwave background radiation. So MOND as formulated doesn’t actually fully eliminate the requirement for dark matter.

Horseshoe shaped Einstein Ring

Image credit: ESA/Hubble and NASA

Any alternative to general relativity also must explain gravitational lensing, for which there are a large number of examples. Typically a background galaxy image is distorted and magnified as its light passes through a galaxy cluster, due to the large gravity of the cluster. MOND proponents do claim to reproduce gravitational lensing in a suitable manner.

Our conclusion about MOND is that it raises interesting questions about gravity at large scales and very low accelerations, but it does not eliminate the requirement for dark matter. It is also very ad hoc. TeVeS gravity is less ad hoc, but still fails to reproduce the observations at the scale of galaxy clusters and above.

Nevertheless the rotational curves of spirals and irregulars are correlated with the visible mass only, which is somewhat strange if there really is dark matter dominating the dynamics. Dark matter models for galaxies depend on dark matter being distributed more broadly than ordinary, baryonic, matter.

In the third article of this series we will take a look at Erik Verlinde’s emergent gravity concept, which can reproduce the Tully-Fisher relation and galaxy rotation curves. It also differs from MOND both in terms of being a theory, although incomplete, rather than empiricism, and apparently in being able to more successfully address the dark matter issues at the scale of galaxy clusters.

References

Wikipedia MOND entry: https://en.wikipedia.org/wiki/Modified_Newtonian_dynamics

M. Milgrom 2013, “Testing the MOND Paradigm of Modified Dynamics with Galaxy-Galaxy Gravitational Lensing” https://arxiv.org/abs/1305.3516

R. Reyes et al. 2010, “Confirmation of general relativity on large scales from weak lensing and galaxy velocities” https://arxiv.org/abs/1003.2185

“In rotating galaxies, distribution of normal matter precisely determines gravitational acceleration” https://www.sciencedaily.com/releases/2016/09/160921085052.htm


Dark Gravity: Is Gravity Thermodynamic?

This is the first in a series of articles on ‘dark gravity’ that look at emergent gravity and modifications to general relativity. In my book Dark Matter, Dark Energy, Dark Gravity I explained that I had picked Dark Gravity to be part of the title because of the serious limitations in our understanding of gravity. It is not like the other 3 forces; we have no well accepted quantum description of gravity. And it is some 33 orders of magnitude weaker than those other forces.
I noted that:

The big question here is ~ why is gravity so relatively weak, as compared to the other 3 forces of nature? These 3 forces are the electromagnetic force, the strong nuclear force, and the weak nuclear force. Gravity is different ~ it has a dark or hidden side. It may very well operate in extra dimensions… http://amzn.to/2gKwErb

My major regret with the book is that I was not aware of, and did not include a summary of, Erik Verlinde’s work on emergent gravity. In emergent gravity, gravity is not one of the fundamental forces at all.

Erik Verlinde is a leading string theorist in the Netherlands who in 2009 proposed that gravity is an emergent phenomenon, resulting from the thermodynamic entropy of the microstates of quantum fields.

 In 2009, Verlinde showed that the laws of gravity may be derived by assuming a form of the holographic principle and the laws of thermodynamics. This may imply that gravity is not a true fundamental force of nature (like e.g. electromagnetism), but instead is a consequence of the universe striving to maximize entropy. – Wikipedia article “Erik Verlinde”

This year, Verlinde extended this work from an unrealistic anti-de Sitter model of the universe to a more realistic de Sitter model. Our runaway universe is approaching a dark energy dominated de Sitter solution.

He proposes that general relativity is modified at large scales in a way that mimics the phenomena that have generally been attributed to dark matter. This is in line with MOND, or Modified Newtonian Dynamics. MOND is a long standing proposal from Mordehai Milgrom, who argues that there is no dark matter, rather that gravity is stronger at large distances than predicted by general relativity and Newton’s laws.

In a recent article on cosmology and the nature of gravity, Dr. Thanu Padmanabhan lays out 6 issues with the canonical Lambda-CDM cosmology based on general relativity and a homogeneous, isotropic, expanding universe. Observations are highly supportive of such a canonical model, with a very early inflation phase and with about 2/3 of the mass-energy content in dark energy and 1/3 in matter, mostly dark matter.

And yet,

1. The equation of state (pressure vs. density) of the early universe is indeterminate in principle, as well as in practice.

2. The history of the universe can be modeled based on just 3 energy density parameters: i) the density during inflation, ii) the density at radiation-matter equality, and iii) the dark energy density at late epochs. Both the first and last are dark energy driven inflationary de Sitter solutions, apparently unconnected, one very rapid and one very long lived. (No mention of dark matter density here).

3. One can construct a formula for the information content at the cosmic horizon from these 3 densities, and the value works out to be 4π to high accuracy.

4. There is an absolute reference frame, for which the cosmic microwave background is isotropic. There is an absolute reference scale for time, given by the temperature of the cosmic microwave background.

5. There is an arrow of time, indicated by the expansion of the universe and by the cooling of the cosmic microwave background.

6. The universe has, rather uniquely for physical systems, made a transition from quantum behavior to classical behavior.

“The evolution of spacetime itself can be described in a purely thermodynamic language in terms of suitably defined degrees of freedom in the bulk and boundary of a 3-volume.”

Now in fluid mechanics one observes:

“First, if we probe the fluid at scales comparable to the mean free path, you need to take into account the discreteness of molecules etc., and the fluid description breaks down. Second, a fluid simply might not have reached local thermodynamic equilibrium at the scales (which can be large compared to the mean free path) we are interested in.”

Now it is well known that general relativity as a classical theory must break down at very small scales (very high energies). But also with such a thermodynamic view of spacetime and gravity, one must consider the possibility that the universe has not reached a statistical equilibrium at the largest scales.

One could have reached equilibrium at macroscopic scales much less than the Hubble distance scale c/H (14 billion light-years, H is the Hubble parameter) but not yet reached it at the Hubble scale. In such a case the standard equations of gravity (general relativity) would apply only for the equilibrium region and for accelerations greater than the characteristic Hubble acceleration scale of  c \cdot H (2 centimeters per second / year).

This lack of statistical equilibrium implies the universe could behave similarly to non-equilibrium thermodynamics behavior observed in the laboratory.

The information content of the expanding universe reflects that of the quantum state before inflation, and this result is 4π in natural units by information theoretic arguments similar to those used to derive the entropy of a black hole.

The black hole entropy is S = A / (4 \cdot L_p^2), where A is the area of the black hole horizon given by the Schwarzschild radius formula and L_p is the Planck length, \sqrt{G \hbar / c^3}, where G is the gravitational constant and \hbar is the reduced Planck constant.
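
As an illustration of the formula, this Python sketch evaluates the Bekenstein-Hawking entropy for a one-solar-mass black hole, using standard SI constants.

```python
import math

G     = 6.674e-11     # m^3 kg^-1 s^-2
c     = 2.998e8       # m/s
hbar  = 1.055e-34     # J s
M_sun = 1.989e30      # kg

Lp2 = G * hbar / c**3            # Planck length squared, ~2.6e-70 m^2
r_s = 2 * G * M_sun / c**2       # Schwarzschild radius, ~3 km
A   = 4 * math.pi * r_s**2       # horizon area

S = A / (4 * Lp2)                # entropy in units of Boltzmann's constant
print(f"Schwarzschild radius = {r_s / 1e3:.2f} km")
print(f"S / k_B = {S:.1e}")      # ~1e77 for one solar mass
```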

This beautiful Bekenstein-Hawking entropy formula connects thermodynamics, the quantum world  and gravity.

This same value of the universe’s entropy can also be used to determine the number of e-foldings during inflation to be 6 π² or 59, consistent with the minimum value to enforce a sufficiently homogeneous universe at the epoch of the cosmic microwave background.

If inflation occurs at a reasonable ~ 10^{15} GeV, one can derive the observed value of the cosmological constant (dark energy) from the information content value as well, argues Dr. Padmanabhan.

This provides a connection between the two dark energy driven de Sitter phases, inflation and the present day runaway universe.

The table below summarizes the 4 major phases of the universe’s history, including the matter dominated phase, which may or may not have included dark matter. Erik Verlinde in his new work, and Milgrom for over 3 decades, question the need for dark matter.

Epoch  /  Dominated  /   Ends at  /   a-t scaling  /   Size at end

Inflation /  Inflaton (dark energy) / 10^{-32} seconds / e^{Ht} (de Sitter) / 10 cm

Radiation / Radiation / 40,000 years / \sqrt t /  10 million light-years

Matter / Matter (baryons), dark matter? /  9 billion years / t^{2/3} /  > 100 billion light-years

Runaway /  Dark energy (Cosmological constant) /  “Infinity” /  e^{Ht} (de Sitter) / “Infinite”

In the next article I will review the status of MOND – Modified Newtonian Dynamics, from the phenomenology and observational evidence.

References

E. Verlinde. “On the Origin of Gravity and the Laws of Newton”. JHEP. 2011 (04): 29 http://arXiv.org/abs/1001.0785

T. Padmanabhan, 2016. “Do We Really Understand the Cosmos?” http://arxiv.org/abs/1611.03505v1

S. Perrenod, 2011. https://darkmatterdarkenergy.com/2011/07/04/dark-energy-drives-a-runaway-universe/

S. Perrenod, 2011. Dark Matter, Dark Energy, Dark Gravity 2011  http://amzn.to/2gKwErb

S. Carroll and G. Remmen, 2016, http://www.preposterousuniverse.com/blog/2016/02/08/guest-post-grant-remmen-on-entropic-gravity/


Axions, Inflation and Baryogenesis: It’s a SMASH (pi)

Searches for direct detection of dark matter have focused primarily on WIMPs (weakly interacting massive particles), and more precisely on LSPs (lightest supersymmetric particles). These are hypothetical particles, such as neutralinos, that are the least massive members of the hypothesized family of supersymmetric partner particles.

But supersymmetry may be dead. There have been no supersymmetric particles detected at the Large Hadron Collider at CERN as of yet, leading many to say that this is a crisis in physics.

At the same time as CERN has not been finding evidence for supersymmetry, WIMP dark matter searches have been coming up empty as well. These searches keep increasing in sensitivity with larger and better detectors and the parameter space for supersymmetric WIMPs is becoming increasingly constrained. Enthusiasm unabated, the WIMP dark matter searchers continue to refine their experiments.

LUX dark matter detector in a mine in Lead, South Dakota is not yet detecting WIMPs. Credit: Matt Kapust/Sanford Underground Research Facility

What if there is no supersymmetry? Supersymmetry adds a huge number of particles to the particle zoo. Is there a simpler explanation for dark matter?

Alternative candidates for dark matter, including sterile neutrinos, axions, and primordial black holes, are now getting more attention.

From a prior blog I wrote about axions as dark matter candidates:

Axions do not require the existence of supersymmetry. They have a strong theoretical basis in the Standard Model as an outgrowth of the necessity to have charge conjugation plus parity conserved in the strong nuclear force (quantum chromodynamics of quarks, gluons). This conservation property is known as CP-invariance. (While CP-invariance holds for the strong force, the weak force is CP violating).

In addition to the dark matter problem, there are two more outstanding problems at the intersection of cosmology and particle physics. These are baryogenesis, the mechanism by which matter won out over antimatter (as a result of CP violation of Charge and Parity), and inflation. A period of inflation very early on in the universe’s history is necessary to explain the high degree of homogeneity (uniformity) we see on large scales and the near flatness of the universe’s topology. The cosmic microwave background is at a uniform temperature of 2.73 Kelvins to better than one part in a hundred thousand across the sky, and yet, without inflation, those different regions could never have been in causal contact.

A team of European physicists have proposed a model SMASH that does not require supersymmetry and instead adds a few particles to the Standard Model zoo, one of which is the axion and is already highly motivated from observed CP violation. SMASH (Standard Model Axion Seesaw Higgs portal inflation) also adds three right-handed heavy neutrinos (the three known light neutrinos are all left-handed). And it adds a complex singlet scalar field which is the primary driver of inflation although the Higgs field can play a role as well.

The SMASH model is of interest for new physics at around 10^11 GeV or 100 billion times the rest mass of the proton. For comparison, the Planck scale is near 10^19 GeV and the LHC is exploring up to around 10^4 GeV (the proton rest mass is just under 1 GeV and in this context GeV is short hand for GeV divided by the speed of light squared).

Figure 1 from Ballesteros G. et al. 2016. The colored contours represent observational limits from the Planck satellite and other sources regarding the tensor-to-scalar power ratio of primordial density fluctuations (r, y-axis) and the spectral index of these fluctuations (ns, x-axis). These constraints on primordial density fluctuations in turn constrain the inflation models. The dashed lines ξ = 1, .1, .01, .001 represent a key parameter in the assumed slow-roll inflation potential function. The near vertical lines labelled 50, 60, 70, 80 indicate the number N of e-folds to the end of inflation, i.e. the universe inflates by a factor of e^N in each of 3 spatial dimensions during the inflation phase.

So with a single model, with a few extensions to the Standard Model, including heavy right-handed (sterile) neutrinos, an inflation field, and an axion, the dark matter, baryogenesis and inflation issues are all addressed. There is no need for supersymmetry in the SMASH model and the axion and heavy neutrinos are already well motivated from particle physics considerations and should be detectable at low energies. Baryogenesis in the SMASH model is a result of decay of the massive right-handed neutrinos.

Now the mass of the axion is extremely low, of order 50 to 200 μeV (millionths of an eV) in their model (by comparison, neutrino mass limits are of order 1 eV), and detection is a difficult undertaking.

There is currently only one active terrestrial axion experiment for direct detection, ADMX. It has its primary detection region at lower masses than the SMASH model is suggesting, and has placed interesting limits in the 1 to 10 μeV range. It is expected to push its range up to around 30 μeV in a couple of years. But other experiments such as MADMAX and ORPHEUS are coming on line in the next few years that will explore the region around 100 μeV, which is more interesting for the SMASH model.

Not sure why the researchers didn’t call this the SMASHpie model (Standard Model Axion Seesaw Higgs portal inflation), because it’s a pie in the face to Supersymmetry!


It would be wonderfully economical to explain baryogenesis, inflation, and dark matter with a handful of new particles, and to finally detect dark matter particles directly.

Reference

Ballesteros, G., Redondo, J., Ringwald, A., and Tamarit, C. 2016 “Unifying inflation with the axion, dark matter, baryogenesis and the seesaw mechanism” https://arxiv.org/abs/1608.05414


Supernovae Destroy Dwarf Galaxies: Dark Matter is Safe

The existence of dark matter has not exactly been under threat – the ratio of dark matter to ordinary matter in the universe is well established, at about 5:1 in favor of dark matter. Consistent results are found between observations of the cosmic microwave background, observations of clusters of galaxies, and observations of the rotation curves of galaxies. (The MOND theory as an alternative to dark matter does not do well at scales greater than that of individual galaxy rotation curves.)

But there has been an issue around galaxy formation. It has been expected that many more dwarf galaxies should be seen in our Local Group, which is dominated by the Andromeda Galaxy (#1) and our Milky Way Galaxy (#2, sorry folks), along with the aptly named Triangulum Galaxy (#3).

Where are the Dwarfs?

Our Milky Way has only around 30 known dwarf galaxy companions, the best known of which are the Large and Small Magellanic Clouds. A few more have been discovered only recently, but simulations of galaxy formation had previously suggested this number ought to be more than 1000! This posed a problem for both our understanding of dark matter and our understanding of galaxy formation.

Now, from Caltech comes a much more detailed simulation of how galaxies similar to the Milky Way form. The researchers used over 700,000 CPU hours of supercomputer time to create the most detailed simulation yet of galaxy formation and evolution.

“In a galaxy, you have 100 billion stars, all pulling on each other, not to mention other components we don’t see like dark matter. To simulate this, we give a supercomputer equations describing those interactions and then let it crank through those equations repeatedly and see what comes out at the end.”  – Caltech’s Phil Hopkins, associate professor of theoretical astrophysics.
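
The quote captures the basic N-body idea: specify the mutual gravitational interactions and integrate them forward in time. As a toy illustration only (the actual Caltech simulations use far more sophisticated gravity solvers plus gas dynamics, star formation, and supernova feedback), a minimal direct-summation leapfrog integrator might look like this:

```python
import numpy as np

# Toy direct-summation N-body sketch (illustration only; real galaxy-formation
# codes use tree or particle-mesh gravity plus hydrodynamics and feedback).
G = 1.0  # gravitational constant in code units

def accelerations(pos, mass, soft=0.05):
    """Pairwise Newtonian accelerations with a softening length to avoid singularities."""
    dx = pos[None, :, :] - pos[:, None, :]          # displacement vectors r_j - r_i
    r2 = (dx ** 2).sum(-1) + soft ** 2              # softened squared separations
    np.fill_diagonal(r2, np.inf)                    # no self-force
    return (G * mass[None, :, None] * dx / r2[:, :, None] ** 1.5).sum(axis=1)

def leapfrog(pos, vel, mass, dt=1e-3, steps=1000):
    """Kick-drift-kick leapfrog integration of the N-body system."""
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc          # half kick
        pos += dt * vel                # drift
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc          # half kick
    return pos, vel

# Example: 200 equal-mass particles with random initial positions
rng = np.random.default_rng(0)
pos = rng.normal(size=(200, 3))
vel = np.zeros((200, 3))
mass = np.full(200, 1.0 / 200)
pos, vel = leapfrog(pos, vel, mass)
```

Production codes replace the O(N^2) force sum with approximate solvers and add the baryonic physics that, as described below, turns out to be decisive for dwarf galaxies.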

Death by Supernova

Postdoc Andrew Wetzel and Prof. Hopkins paid special attention to the effects of supernovae. When supernovae explode, they release tremendous amounts of kinetic energy and generate powerful winds that reach speeds of over a thousand kilometers per second.

In a dwarf galaxy, an individual supernova can have a substantial effect. The researchers’ simulations indicate that dwarf galaxies can actually be destroyed by the effect of even a single supernova during their early history: both stars and the gas that would form future stars can be blown out of the galaxy. In addition, the simulations show that many dwarf galaxies in the Milky Way’s neighborhood would have been destroyed by the gravitational tidal forces of the Milky Way.
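
A rough way to see why dwarfs are so vulnerable is to compare that wind speed with a galaxy’s escape velocity, v_esc = sqrt(2GM/r). The masses and radii below are illustrative order-of-magnitude assumptions, not values from the Caltech simulations:

```python
import math

# Rough comparison of supernova wind speeds with galactic escape velocities.
# The masses and radii are illustrative order-of-magnitude assumptions.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
KPC = 3.086e19         # m

def escape_velocity_km_s(mass_msun, radius_kpc):
    """Escape velocity v_esc = sqrt(2GM/r), in km/s."""
    return math.sqrt(2 * G * mass_msun * M_SUN / (radius_kpc * KPC)) / 1e3

print("dwarf galaxy (~1e8 Msun within 1 kpc):", round(escape_velocity_km_s(1e8, 1.0)), "km/s")
print("Milky Way-like (~1e12 Msun within 30 kpc):", round(escape_velocity_km_s(1e12, 30.0)), "km/s")
print("supernova-driven wind: ~1000+ km/s")
```

A dwarf’s escape velocity of a few tens of km/s is far below the supernova wind speed, while a Milky Way-sized galaxy, with an escape velocity of several hundred km/s, can hold on to much more of its gas.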

These advanced galaxy-evolution simulations appear to resolve the dark matter and dwarf galaxy problem. The authors plan to refine their results and deepen our understanding of galaxy formation with still more powerful simulations in the future.

simgalaxy-face-on-hr

Simulated View of Milky Way Galaxy
The formation and evolution of the galaxy were computed on a supercomputer. Credit: Hopkins Research Group/Caltech

References:

https://www.caltech.edu/news/recreating-our-galaxy-supercomputer-51995

https://youtu.be/e7KuwjGGxBw


Dark Matter Clumps Tear up Clusters

What is destroying globular clusters?

Globular clusters formed early in our Milky Way’s history; the 150 or so globular clusters in our galaxy contain many of its oldest stars. Globular clusters are round (hence their name), dense, gravitationally bound collections of stars and can contain hundreds of thousands of stars.

What’s older than globular clusters? Dark matter subhalos! Our galaxy is dominated by dark matter distributed in a halo. Massive supercomputer simulations have shown that regions of higher dark matter density, known as subhalos, were the seeds of galaxy formation. These subhalos, containing millions of solar masses, formed first and supplied the gravity necessary for galaxies to subsequently begin forming.

Palomar 5 is smaller than most globulars. It was not detected until 1950, in part because of its low mass of only 16,000 solar masses. Palomar 5 lies far above the Milky Way’s disk, in the dark matter dominated halo, and has been heavily influenced by tides over the past 11 billion years. In its next encounter with the disk, some 100-plus million years in the future, it may even be completely torn apart by tidal interactions.

Palomar 5 shows significant tidal disruption, with a very long stream of stars trailing out of the cluster, pulled out by tidal forces. The stream stretches several tens of degrees across the sky, corresponding to some 30,000 light-years in extent, greater than the distance from the Sun to the center of the Milky Way. The stream’s mass is about 5,000 times that of the Sun.
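
As a rough consistency check on those numbers (assuming a distance to Palomar 5 of roughly 23 kpc, about 75,000 light-years, a value not given in the text above), an arc of a few tens of degrees at that distance does indeed span on the order of 30,000 light-years:

```python
import math

# Rough consistency check of the stream's angular versus physical extent.
# The distance to Palomar 5 (~23 kpc) is an assumed value for illustration.
LY_PER_KPC = 3261.6
distance_kpc = 23.0

for angle_deg in (20, 25, 30):
    arc_kpc = distance_kpc * math.radians(angle_deg)   # small-angle arc length
    print(f"{angle_deg} deg  ->  ~{arc_kpc * LY_PER_KPC:,.0f} light-years")
```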

Of great significance are two well defined gaps in the stream. These gaps are very intriguing to astrophysicists, because they may be probes of the nature of the dark matter in our galaxy’s halo.

perturbationsubhaloespal5

Upper portion of Figure 9 from Erkal et al. (referenced below). The two gaps are centered on the dotted lines.

Recently, three astrophysicists from the Institute of Astronomy at Cambridge University modeled the stream and these two gaps and described three main possible causes: the Milky Way’s bar (our galaxy has spiral arms leading into a central bar), giant molecular clouds, or dark matter subhalos (Erkal et al., referenced below).

They find that gravitational interaction with giant molecular clouds might explain the smaller gap, but not the larger one. Interaction with the Milky Way’s bar is another possibility, but it may not be the best explanation for such clean gaps, which on the face of it seem to be due to discrete encounters with smaller structures.

Because of the well-defined nature of the gaps, the researchers’ preliminary conclusion is that dark matter subhalos caused both, especially the larger gap; the smaller gap could instead have been caused by a giant molecular cloud, but probably not the larger one.

The leading tail of the star stream (shown on the left side of the figure) has a two-degree gap, consistent with an interaction with a dark matter subhalo of 1 to 10 million solar masses. The trailing tail has a nine-degree gap, consistent with a perturbation of the stream by a dark matter subhalo of 10 to 100 million solar masses.

Additional data from several planned experiments should allow better discrimination between the possible causes of the gaps. It is very interesting to note that if the smaller gap is due to a subhalo of a few million solar masses, that knowledge in turn can be used to constrain the mass of the dark matter particle to be greater than about 2% of the electron rest mass, i.e. roughly 10 keV. This would rule out axions as the dominant contributor to dark matter; the axion mass is expected to be much less than 1 electron-volt (eV), whereas the electron mass is 511,000 eV.

References:

Erkal, D., Koposov S., and Belokurov V. 2016 “A sharper view of Pal 5’s tails” http://arxiv.org/pdf/1609.01282v1.pdf

Kupper, A. et al. 2015 “Globular Cluster Streams as Galactic High-Precision Scales” https://arxiv.org/abs/1502.02658

Kuzma, P. et al. 2014 “Palomar 5 and its Tidal Tails” https://arxiv.org/abs/1411.0776

http://www.dailygalaxy.com/my_weblog/2016/09/-gigantic-compared-to-our-solar-system-two-massive-holes-caused-by-dark-matter-observed-just-outside.html

https://darkmatterdarkenergy.com/2014/02/09/axions-as-cold-dark-matter/