2011 Nobel Prize for Dark Energy Discovery


Dark Energy and Matter content of the Universe: The intersection of the supernova (SNe), cosmic microwave background (CMB), and baryon acoustic oscillation (BAO) confidence ellipses indicates a topologically flat universe composed of 74% dark energy (y-axis) and 26% dark matter plus normal matter (x-axis).

The 2011 Nobel Prize in Physics, the most prestigious award in the field, was announced on October 4. The winners are astronomers and astrophysicists who produced the first clear evidence of an accelerating universe. Not only is our universe as a whole expanding, it is in fact speeding up! It is not often that astronomers win the Nobel Prize, since there is no separate award for their discipline. The discovery of the acceleration in the universe’s expansion was made more or less simultaneously by two competing teams of astronomers in 1998, so the leaders of both teams share this Nobel Prize.

The new Nobel laureates, Drs. Saul Perlmutter, Adam Riess, and Brian Schmidt, were the leaders of the two teams studying distant supernovae, in remote galaxies, as cosmological indicators. Cosmology is the study of the properties of the universe on the largest scales of space and time. Supernovae are exploding stars at the ends of their lives. They occur only about once every fifty to one hundred years in a given galaxy, so one must survey a very large number of galaxies in an automated fashion to find enough of them to be useful. The two teams introduced new automated search techniques to find sufficient supernovae and achieve their results.

During the explosion, driven by runaway nuclear fusion of carbon and oxygen, a supernova can temporarily become as bright as the entire galaxy in which it resides. The astrophysicists studied a particular type of supernova known as Type Ia. These are due to white dwarf stellar remnants exceeding a critical mass. Typically these white dwarfs are found in binary stellar systems with another, more normal, star as a companion. If the white dwarf grabs enough material from its companion via gravitational tidal effects, that matter can “push it over the edge” and cause it to go supernova. Since every occurrence of this type of supernova has the same mass for the exploding star (about 1.4 times the Sun’s mass), the resulting supernova has a consistent brightness, or luminosity, from one event to the next.

This makes them very useful as so-called standard candles. We know the absolute brightness, which we can calibrate for this class of supernova, and thus we can calculate the distance (called the luminosity distance) by comparing the observed brightness to the absolute brightness. An alternative measure of the distance can be obtained by measuring the redshift of the host galaxy. The redshift is due to the overall expansion of the universe: the light from galaxies is stretched out to longer, or “redder”, wavelengths by the time it reaches us. The amount of the shift provides what we call the redshift distance.
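To make the standard-candle arithmetic concrete, here is a minimal sketch in Python of the distance-modulus relation. The apparent magnitude is an illustrative assumption, not a number from these measurements; M ≈ −19.3 is the commonly quoted peak absolute magnitude for this class of supernova.

```python
def luminosity_distance_pc(m_apparent, m_absolute):
    """Distance modulus m - M = 5*log10(d / 10 pc), solved for d in parsecs."""
    return 10 ** ((m_apparent - m_absolute + 5) / 5)

# Illustrative values: Type Ia supernovae peak near absolute magnitude M ~ -19.3.
m_obs = 23.0    # hypothetical observed (apparent) magnitude of a distant supernova
M_cal = -19.3   # calibrated "standard candle" absolute magnitude
d_pc = luminosity_distance_pc(m_obs, M_cal)
print(f"{d_pc:.2e} parsecs = {d_pc * 3.26 / 1e9:.1f} billion light years")
```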

Comparing these two different distance measures provides a cosmological test of the overall properties of the universe: the expansion rate, the shape or topology, and whether the expansion is slowing down, as was expected, or not. The big surprise is that the expansion from the original Big Bang has stopped slowing down due to gravity and has instead been accelerating in recent cosmic epochs! The Nobel winners did not expect such a result; they suspected errors in their analyses and checked and rechecked, but the acceleration did not go away. And when they compared the results between the two teams, they realized they had confirmed each other’s profound discovery of a dark energy driven acceleration.

The acceleration result is now well founded since it can be seen in the high spatial resolution measurements of the cosmic microwave background radiation as well. This is the radiation left over from the Big Bang event associated with the origin of our universe.

The acceleration is now increasingly important, dominating during the past 5 billion years of the 14 billion year history of the universe. Coincidentally, this is about how long our Earth and Sun have been in existence. The acceleration has to overcome the self-gravitational attraction of all the matter of the universe upon itself, and is believed to be due to a nonzero energy field known as dark energy that pervades all of space. As the universe expands to create more volume, more dark energy is also created! Empty space is not empty, due to the underlying quantum physics realities. The details, and why dark energy has the observed strength, are not yet understood.

Amazingly, Einstein had added a cosmological constant term, which acts as a dark energy, to his equations of General Relativity even before the Big Bang itself was discovered. But he later dropped the term and called it his greatest blunder, after the expansion of the universe was first demonstrated by Edwin Hubble over 80 years ago. It turns out Einstein was in fact right: his simple term explains the observed data, and the Perlmutter, Riess, and Schmidt measurements indicate that about 3/4 of the mass-energy content of the universe is found in dark energy, with only about 1/4 in matter.

Our universe is slated to expand in an exponential fashion for trillions of years or more, unless some other physics that we don’t yet understand kicks in. This is rather like the ever-increasing pace of modern technology and modern life, or the continuing inflation of prices.

We honor the achievements of Drs. Perlmutter, Riess, and Schmidt and of their research teams in increasing our understanding of our universe and its underlying physics. Interestingly, only a few weeks ago a very important supernova was discovered in the nearby M101 galaxy, and it is also a Type Ia. Because it is so close, only 25 million light years away, it is yielding a lot of high quality data. Perhaps this celestial fireworks display was a harbinger of their Nobel Prize?

References:

http://www.nytimes.com/aponline/2011/10/04/science/AP-EU-SCI-Nobel-Physics.html?_r=2&hp

http://www.nobelprize.org/nobel_prizes/physics/laureates/2011/press.html

http://www.nobelprize.org/mediaplayer/index.php?id=1633 (Telephone interview with Adam Riess)

http://supernova.lbl.gov/ (Supernova Cosmology Project)

https://darkmatterdarkenergy.wordpress.com/2011/08/31/m101-supernova-and-the-cosmic-distance-ladder/

https://darkmatterdarkenergy.wordpress.com/2011/07/04/dark-energy-drives-a-runaway-universe/

 


M101 Supernova and the Cosmic Distance Ladder

Last week, on August 24, UC Berkeley and Lawrence Berkeley National Lab astronomers made a fortuitous and major discovery of a nearby Type 1a supernova, named PTF 11kly, in the Pinwheel Galaxy. This galaxy in Ursa Major is also known as M101 (the 101st member of the Messier catalog). Type 1a supernovae are key to measuring the cosmological distance scale since they act as “standard candles”, that is, they all have more or less the same absolute brightness. Dark energy was first discovered through Type 1a supernova measurements. These supernovae result from the runaway thermonuclear explosion of a white dwarf.


Supernova in M101 (Credit: Lawrence Berkeley National Laboratory, Palomar Transient Factory team)

Three photos on three successive nights: the supernova is not detectable on 22 August (left image), detectable (green arrow) on 23 August (middle image), and brighter on 24 August (right image).

A white dwarf is the evolutionary end state of most stars, including eventually our Sun, reached after a star exhausts the hydrogen and helium in its core via thermonuclear fusion. Some of the star’s outer envelope is ejected during a nova phase, but the remaining portion collapses dramatically, until it is only about the size of the Earth. This is due to the loss of the pressure support previously generated by nuclear fusion at high temperatures. It is not a supernova event; it is the prior phase that forms the white dwarf. The white dwarf core is usually composed primarily of carbon and oxygen. The collapse is halted by electron degeneracy pressure: the electron degenerate matter of which a white dwarf is composed has its pressure determined by quantum rules requiring that no two electrons occupy the same state of position and momentum.

A Type 1a supernova occurs when a white dwarf near a particular limiting mass explodes. It was shown in the 1930s by Chandrasekhar that the maximum mass supportable in the white dwarf state is about 1.38 solar masses (1.38 times our Sun’s mass). In essence, at this limit the electrons are pushed as close together as possible. If a white dwarf is near this limit and sufficient mass is added to it, it will ignite thermonuclear burning of carbon and oxygen nuclei during a very rapid interval of a few seconds and explode as a supernova. The explosion is catastrophic, with all or nearly all of the star’s matter thrown out into space. At maximum the supernova is very bright, for a while perhaps as bright as an entire galaxy. The additional mass that triggers the supernova is typically supplied by tidal accretion from a companion star in a binary system with the white dwarf.
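For readers who want to see where that limiting mass comes from, here is the standard textbook expression (not derived in the original post): the Chandrasekhar mass is set by fundamental constants and by μ_e, the mean number of nucleons per electron in the star,

$$ M_{\rm Ch} \;\sim\; \left(\frac{\hbar c}{G}\right)^{3/2} \frac{1}{(\mu_e m_H)^2} \;\approx\; \frac{5.8}{\mu_e^2}\, M_\odot $$

For a carbon-oxygen white dwarf, μ_e ≈ 2 (two nucleons per electron), giving roughly 1.45 solar masses, consistent with the 1.38 figure quoted above once detailed corrections are included.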

Because they all explode with the same mass, Type 1a supernovae all have more or less the same absolute brightness. This is key to their usefulness as standard candles.

In 1998 two teams of astronomers used these Type 1a supernovae to make the most significant observational discovery in cosmology since the detection of the cosmic microwave background radiation over 30 years earlier. They searched for these standard candle supernovae in very distant galaxies in order to measure the evolution and topology of the universe. Both teams determined the need for a non-zero cosmological constant, or dark energy term, in the equations of general relativity; with their initial results and others gathered later, its strength is seen to be nearly 3/4 of the total mass-energy density of the universe. These results have been confirmed by other techniques, including detailed studies of the cosmic microwave background.

One needs two measurements for each galaxy to perform this test: the redshift distance and the luminosity distance. The redshift distance is determined by the amount of the shift toward the red of major identifying lines in the host galaxy’s spectrum, due to the expansion of the universe (the host galaxy is the galaxy in which a given supernova is contained). The apparent brightness of the supernova relative to its absolute brightness provides the luminosity distance. Basically, the two teams found that the distant galaxies were farther away than expected, implying a greater rate of continuing expansion – indeed an acceleration – of the universe during the past several billion years, compared to what would occur without dark energy.
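For the redshift side of the comparison, here is a minimal sketch of the low-redshift arithmetic (Hubble’s law), assuming a round value of the Hubble constant. At the high redshifts these surveys actually probe, the full cosmological model replaces this linear relation, and that departure is precisely what makes the test sensitive to dark energy.

```python
# Low-redshift approximation: recession velocity v ~ c*z, distance d ~ v / H0.
C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # assumed Hubble constant, km/s per megaparsec

def redshift_distance_mpc(z):
    """Hubble-law distance in megaparsecs; valid only for z << 1."""
    return C_KM_S * z / H0

print(redshift_distance_mpc(0.05))   # ~214 Mpc for an illustrative z = 0.05
```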

What is exciting about the M101 supernova discovered last week is that it is so nearby, so easy to measure, and was caught very soon after the initial explosion. Studying how bright it is each day as the explosion progresses, first brightening and then fading (this is known as the light curve), will help pin down the distance determination more tightly. This in turn provides further precision and confidence in the measurement of the strength of dark energy.

References:

http://www.dailycal.org/2011/08/28/uc-berkeley-researchers-find-brightest-closest-supernova-in-years/

http://arxiv.org/abs/astro-ph/9812133  Perlmutter et al. 1999 “Measurements of Omega and Lambda from 42 High-Redshift Supernovae” Astrophys.J.517:565-586

http://arxiv.org/abs/astro-ph/9805201  Riess et al. 1998 “Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant” Astron.J.116:1009-1038

http://en.wikipedia.org/wiki/White_dwarf

http://en.wikipedia.org/wiki/Type_1a_supernova

http://en.wikipedia.org/wiki/Cosmic_distance_ladder


Dark Energy Drives Runaway Universe


Accelerating universe graphic. Credit: NASA/STScI/Ann Field

Dark energy was first introduced as a possibility as a result of the formulation of Einstein’s equations of general relativity. When Einstein considered how the universe as a whole would behave under the general relativity description of gravity, he added a term to his equations, known as the cosmological constant. At the time the prevailing view was that the universe was static, neither expanding nor contracting. The term was intended to balance the self-gravitational energy of the universe, and it thus acts as a repulsive force rather than an attractive one. His basis for introducing the cosmological constant was erroneous in two respects. The first problem is that the static solution was unstable, as if balanced on a knife edge: nudge the matter density in some region slightly upward and that region would collapse; lower the density ever so slightly and that region would expand indefinitely. The second problem is that by 1929 Edwin Hubble had demonstrated that the universe is actually expanding at a significant rate overall.

Subsequently, Einstein called the introduction of the cosmological constant his “greatest blunder”. After the realization that we live in an expanding universe, the possibility of the cosmological constant having a non-zero value was sometimes entertained in cosmological theory, but it was mostly ignored (set to zero). Over the next several decades, attention turned to better measuring the expansion rate of the universe and the inventory of matter, both ordinary matter and dark matter, the amount of the latter implied by long range gravitational effects seen both within and between galaxies. Was there enough matter of both types to halt the expansion? It seemed not; rather, there was only about 1/4 of the required density of matter, and that mostly in the form of dark, not ordinary, matter. Matter of either type would slow the expansion of the universe through its gravitational effects.

After 1980, the inflationary version of the Big Bang gained acceptance due to its ability to explain the flat topology of the universe and the homogeneity of the cosmic microwave background radiation, the relic light from the Big Bang itself. The inflationary model strongly indicated that the total energy density should be about 4 times greater than seen from the matter components alone. It is the total of energy and matter (the energy content of matter) which determines the universe’s fate, since E = mc^2.

In 1998 the astounding discovery was made that the universe’s expansion rate is accelerating! This was determined by two different teams, each of which was making measurements of distant supernovae (exploding stars). And it was confirmed by measurements of tiny fluctuations in the intensity of the microwave background radiation. The two techniques are consistent, and a third technique based on X-ray emission from clusters of galaxies, as well as a fourth based on very large scale measurements of relative galaxy positions, give results consistent with the first two. The inflationary predictions are satisfied, with dark energy presently about three times more dominant than the rest mass energy equivalent of dark matter plus ordinary matter. Further measurements have refined our understanding of the relative strength of dark energy in comparison to dark matter and ordinary matter. The best estimates are that, today, dark energy is 74% of the universe’s total mass-energy balance.

In the cosmological constant formulation, the dark energy density is constant in time, while the matter density drops as the universe expands, in inverse proportion to the cube of the scale factor. So in the universe’s early days the energy contained in matter would have dominated over dark energy, since the mass density was much greater than today. The crossover from matter dominated to dark energy dominated came when the universe was about 9 billion years old, around 5 billion years ago. This emergence of dark energy as the dominant component, due to its nature as a repulsive property of “empty” space-time, results in an accelerating expansion of the universe, which has been called the “runaway universe”. Our universe is apparently slated to become hugely larger than its current enormous size.
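The crossover epoch follows directly from that scaling. A minimal sketch, assuming the 74% / 26% split quoted elsewhere in these posts: setting the matter density, which grows as a⁻³ looking back in time, equal to the constant dark energy density gives the scale factor and redshift of matter/dark energy equality.

```python
# Matter density scales as a^-3; dark energy density stays constant.
# Equality: omega_m / a^3 = omega_de  =>  a = (omega_m / omega_de)^(1/3).
omega_m, omega_de = 0.26, 0.74     # present-day fractions, as quoted in the text

a_cross = (omega_m / omega_de) ** (1.0 / 3.0)
z_cross = 1.0 / a_cross - 1.0
print(f"a = {a_cross:.2f}, z = {z_cross:.2f}")   # a ~ 0.71, z ~ 0.42
```

A redshift of about 0.4 corresponds to a lookback time of roughly 4 to 5 billion years, consistent with the figure quoted above.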

Why is dark energy important, then? Since five billion years ago, and on into the indefinite future, it has dominated the mass-energy content of the universe. It drives the acceleration of the universe’s expansion. It inhibits the re-collapse (“Big Crunch”) of our entire universe, or even of substantial portions of it. Thus it naturally extends the life of the entire universe to trillions of years or much more – far beyond what would occur were the universe dominated by matter only, with density at the critical value or above. Dark energy thus works to maximize the available time and space for life to develop and evolve on planets throughout the universe.


Do we have a CoGeNT direct detection of Dark Matter?


CoGeNT detector during installation (Credit: Pacific Northwest National Laboratory)

(cogent = clear, logical, convincing)

The race to demonstrate direct detection of WIMP (weakly interacting massive particle) dark matter is heating up with this month’s release of results from the CoGeNT experiment, located in a mine in northern Minnesota*. They have just published results collected during the first 15 months of data taking. CoGeNT, as the Ge in the name indicates, uses a detector made of germanium.

There are quite a few such experiments that seek to measure the impact of WIMP dark matter as it directly strikes nucleons, that is, protons and neutrons, in some target material. The cross sections expected for such direct impacts are extremely low, so the experiments require relatively large detectors, high sensitivity, and long runs to gather sufficient statistical evidence of impacts and to separate good events from background events due to other causes. The most favored candidate is a WIMP with mass somewhere from a bit under 10 GeV to perhaps as high as 200 GeV (the proton rest mass is 0.938 GeV; a GeV is a billion electron volts, with the mass stated in energy-equivalent units).
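To make those units concrete, here is a trivial sketch converting a mass quoted in GeV/c² into kilograms; the 100 GeV value is just an illustrative WIMP mass within the range above.

```python
# m = E / c^2, with 1 GeV = 1.602176634e-10 joules.
GEV_IN_JOULES = 1.602176634e-10
C_M_S = 2.99792458e8              # speed of light, m/s

def gev_to_kg(mass_gev):
    return mass_gev * GEV_IN_JOULES / C_M_S**2

print(gev_to_kg(0.938))   # proton: ~1.67e-27 kg
print(gev_to_kg(100.0))   # illustrative 100 GeV WIMP: ~1.78e-25 kg
```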

While XENON, CDMS (located in the same Soudan laboratory in Minnesota) and other direct experiments have not detected dark matter, for a number of years the DAMA/LIBRA project in Italy has been claiming the detection of an annual modulation of a dark matter signal. The modulation is attributed to the Earth’s orbital velocity adding to and then subtracting from the Sun’s motion through the galaxy’s dark matter halo over the course of each year, with the signal peaking in the second quarter of the year.

The CoGeNT experiment is also now claiming detection of an annual modulation, with about 2.7 or 2.8 sigma (standard deviations) of statistical significance, which is at the margin of a good detection. DAMA/LIBRA, which uses a thallium-doped sodium iodide crystal (salt) detector, claims a very high statistical significance of 8.9 sigma. Generally, 3 sigma of significance is considered sufficient for a good detection, and 5 sigma would be considered a solid detection. The DAMA/LIBRA events have until now been unconfirmed, and have appeared to be in conflict with limits from other experiments, including XENON and CDMS.
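For intuition about those sigma figures, here is a small sketch translating a Gaussian significance into a one-sided p-value. This is one common convention; the experiments’ own statistical treatments are more involved.

```python
from math import erfc, sqrt

def p_value_one_sided(sigma):
    """Chance of a background fluctuation at least this large, assuming a normal distribution."""
    return 0.5 * erfc(sigma / sqrt(2))

for s in (2.8, 3.0, 5.0, 8.9):
    print(f"{s} sigma -> p = {p_value_one_sided(s):.2e}")
# 2.8 sigma -> ~2.6e-03, 3 sigma -> ~1.3e-03, 5 sigma -> ~2.9e-07
```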

The CoGeNT results are consistent with DAMA/LIBRA in two respects. First, they together imply a relatively low mass of 5 to 12 GeV for the dark matter WIMP. Second, both the CoGeNT and DAMA experiments are consistent with an annual modulation peak occurring sometime between late April and the end of May, as is expected based on the Earth’s orbit combined with the Sun’s movement relative to the galactic center.

While the CDMS results appear to set limits which contradict both the CoGeNT and DAMA results, there are a number of uncertainties in the actual sensitivity of the respective experiments that may allow resolution of the apparent discrepancy.

We eagerly await further results from CoGeNT and from other experiments including CRESST and COUPP that are well suited to measurement of a relatively low mass WIMP particle such as CoGeNT is claiming to have detected.

*The mine is located in a state park, and tours down into the mine run during the summer months. It is also near the beautiful Boundary Waters Canoe Area that crosses into Canada, where I took a ten day canoe excursion as a Boy Scout, decades ago.

References:


http://www.spacedaily.com/reports/New_data_still_have_scientists_in_dark_over_dark_matter_999.html


http://cogent.pnnl.gov/

C. Aalseth et al. 2011, “Search for an Annual Modulation in a P-type Point Contact Germanium Dark Matter Detector” http://arXiv.org/pdf/1106.0650

D. Hooper and C. Kelso 2011 “Implications of CoGeNT’s New Results for Dark Matter” http://arXiv.org/pdf/1106.1066v1


Dark Matter Powered Stars


GRB (gamma ray burster) 070125. Credit: B. Cenko, et al. and the W. M. Keck Observatory.

So what is a “dark star”? It is not the Newtonian black hole suggested by John Michell in the 18th century, who used the term while postulating that gravity could prevent light from escaping a very massive, compact star. It is not a “dark energy star”, an object related to a black hole in which, rather than a singularity forming at the center, quantum effects convert infalling matter to vacuum state energy, dark energy. And it is not a comic book, science fiction comedy film, or Grateful Dead song.

In this blog entry we are writing about dark matter powered stars. These would be the very first stars, formed within the first few hundred million years of the universe’s existence. The working assumption is that dark matter consists of WIMPs – weakly interacting massive particles – with the favored candidate being the neutralino, the lightest particle among the postulated supersymmetric companions to the Standard Model suite of particles. As such it would be stable, would not ordinarily decay, and is being searched for with XENON, CDMS, DAMA, AMS-02 and many other experiments.

The first stars are thought by astrophysicists to have formed from clouds of ordinary hydrogen and helium as well as dark matter, with dark matter accounting for 5/6 of the total mass. These clouds, called “dark matter halos”, are considered to have contained from about one million to 100 million times the Sun’s mass. The ordinary matter would settle toward the center as it cooled via radiative processes, while the dark matter (which does not radiate) would remain more diffuse. The stars forming at the center would be overwhelmingly composed of ordinary matter (hydrogen and helium nuclei and electrons).

Without any dark matter at all, ordinary matter stars of up to about 120 to 150 solar masses could form; above this limit they would have very hot surfaces and their own radiation would inhibit further growth by infall of matter from the halo. But if as little as one part in a thousand of the protostar’s mass were in the form of dark matter, this limitation would go away. The reason is that the neutralino WIMPs will, from time to time, meet one another inside the star and mutually annihilate, since the neutralino is its own anti-particle. The major fraction of the energy produced in the annihilation remains inside the star, but some escapes in the form of neutrinos (not neutralinos).

Annihilation of these neutralinos is a very efficient heating mechanism throughout the volume of the star, creating a great amount of heat and pressure support, basically puffing up the star to a very large size. The stellar surface is, as a result, much cooler than in the no dark matter case, radiation pressure is insignificant, and accretion of significantly more material onto the star can occur. Stars could grow to be 1000 solar masses, or 10,000 solar masses, potentially even up to one million solar masses. Their sizes would be enormous while they were in the dark matter powered phase. Even the relatively small 1000 solar mass star, if placed at the Sun’s location, would extend through much of our Solar System, beyond the orbit of Saturn.

We have mentioned the neutralino-on-neutralino annihilation mechanism. A second mechanism for heating the interior of the star would be direct impact of neutralinos onto protons and helium nuclei. This second mechanism could sustain the dark matter powered phase longer, potentially even beyond a billion years.

Eventually the dark matter fuel would be exhausted, and the heat and pressure support from this source lost. The star would then collapse until the core was hot enough for nuclear fusion burning. Stars of 1000 solar masses would burn hydrogen, and later helium, and evolve extremely rapidly because of the high density and temperature in their cores. After their hydrogen and helium fusion cycles completed there would be no source of sufficient pressure support and they would collapse to black holes (or maybe dark energy stars).

Detailed simulations indicate that the dark star mechanism allows for much more massive stars than could be formed otherwise, and this provides a potentially natural explanation for the creation of massive black holes. Our own Milky Way has a black hole of roughly four million solar masses at its center, and it appears a majority of galaxies have large central black holes. The image at the top of this post is of a gamma ray burst detection that may have come from a large black hole formation event.

References:

en.wikipedia.org/wiki/Dark_star_(dark_matter)

Freese, K. et al. 2008, “Dark Stars: the First Stars in the Universe may be powered by Dark Matter Heating”, http://arxiv.org/pdf/0812.4844v1

Freese, K. et al. 2010, “Supermassive Dark Stars”, http://arxiv.org/abs/1002.2233

http://news.discovery.com/space/did-dark-stars-spawn-supermassive-black-holes.html


Direct Search for Dark Matter: XENON100

The direct detection of putative dark matter particles, as opposed to measuring their collective gravitational effects, remains a significant challenge. A number of experiments are actively searching for WIMPs (Weakly Interacting Massive Particles) as the currently favored candidates for dark matter. Particle physics models with supersymmetric extensions to the Standard Model suggest that the most abundant dark matter particle would have a mass significantly greater than the proton’s. The mass is expected to lie somewhere in the range from under 10 times the proton mass to possibly as much as 10,000 times it (around 10 to 10,000 GeV/c^2, where a GeV is a billion electron volts of energy and we divide by the square of the speed of light to convert to mass). The WIMP name reflects that these particles would interact with other matter only via the weak nuclear force and gravity; they do not interact via either the strong nuclear force or electromagnetism.

It is believed that WIMPs were produced in the Big Bang as a decay mode of the massive release of energy during the inflation phase. The currently most favored WIMP candidate is the proposed lightest supersymmetric particle (LSP), which is expected to be stable. Supersymmetric particles are considered to have large masses and would have the same quantum numbers (properties) as corresponding Standard Model particles, except for their spins, which would differ by 1/2 from their partners’. The local density of dark matter is estimated to be about 0.3 GeV/cc (GeV per cubic centimeter). If the WIMP mass is 100 GeV/c^2, there would be about 3 particles per liter.
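That last figure is a one-line calculation worth checking; a minimal sketch using the numbers quoted above:

```python
# Number density = energy density / particle mass.
rho_local = 0.3      # local dark matter density, GeV per cubic centimeter (from the text)
m_wimp = 100.0       # assumed WIMP mass, GeV/c^2

n_per_cc = rho_local / m_wimp    # 0.003 particles per cubic centimeter
print(n_per_cc * 1000)           # 3.0 particles per liter, as stated
```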

Two major techniques are being employed to search for cosmic WIMPs. One is to detect the direct impact of WIMPs on atomic nuclei (via elastic scattering) in underground laboratories here on Earth. These would be very rare events, so large detectors are required and experiments must gather data for a long time. Such an impact leaves products from the interaction, and it is these products that are actually detected in an experiment. A second technique is to look for gamma rays produced when dark matter (WIMP) collisions with ordinary matter occur in the galactic halo of the Milky Way or in the Sun’s interior. Gamma rays produced in this way can in principle be detected with satellites in Earth orbit.

Beyond these two general techniques to detect WIMPs there is the hope of actually creating these dark matter particles via high energy collisions at the Large Hadron Collider.

One recent set of results is from the XENON collaboration, which is funded by the US government and 6 European nations. The XENON100 experiment is located underground in Italy, in the Gran Sasso National Laboratory. The heart of the detector consists of 65 kilograms of cooled xenon, with the target in both the liquid and gas phases. When a WIMP strikes a xenon atom directly, electrons are either knocked out of the atom or boosted to higher energy orbital levels. Both scintillation light, due to the subsequent decay of the excited orbital, and ionization electrons are thus generated. The 100 days of XENON100 exposure analyzed to date have yielded 3 events, but one expects about 2 events from background neutrons producing similar signatures, so there is statistically no detection. This result does allow the placement of upper limits on the WIMP interaction cross-section as a function of mass.
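A quick sketch of why 3 events over an expected background of 2 is “statistically no detection”: under simple Poisson counting statistics, such an outcome is quite probable from background alone. (This is a simplified treatment; the collaboration’s actual analysis is more sophisticated.)

```python
from math import exp, factorial

def poisson_p_at_least(k, mu):
    """P(N >= k) for Poisson-distributed counts with mean mu."""
    return 1.0 - sum(exp(-mu) * mu**n / factorial(n) for n in range(k))

print(poisson_p_at_least(3, 2.0))   # ~0.32: seen about 1 time in 3 from background alone
```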

The result appears to be in conflict with another experiment, also located at the Gran Sasso laboratory, run by the DAMA team. The DAMA/LIBRA experiment claims a statistically significant detection of an annually modulated “WIMP wind”, which reflects the variation in the Earth’s orbital velocity with respect to the diffuse background of WIMP particles. The claimed intensity is well above XENON100 limits for certain possible mass ranges of the major WIMP constituent particle.

The race is on to secure the direct detection of dark matter particles, beyond their extensive apparent gravitational effects. Rapid progress in enhancing the sensitivity of detection methods, typically including the use of larger detectors, will increase the probability of better WIMP detection and mass determination in the future.

References:

M. Drees, G. Gerbier and the Particle Data Group, 2010. “Dark Matter” Journal of Physics G37(7A) pp 255-260

J. Feng, 2010. Ann. Rev. Astron. Astrophys. 48: 495, “Dark Matter Candidates from Particle Physics and Methods of Detection” (also available at: arxiv.org/pdf/1003.0904)

S. Perrenod, 2011. Dark Matter, Dark Energy, Dark Gravity, chapter 4, BookBrewer Publishing

http://www.wikipedia.org/wiki/Dark_matter

XENON Dark Matter Project


Dark Matter

Dark matter is like the hidden part of an iceberg found below the water line. The hidden part is the dominant portion of the mass and supports the structure apparent from the visible portion above. Dark matter couples to ordinary matter through the gravitational force. The ordinary visible matter, which we detect through light from galaxies and stars, is analogous to the portion of an iceberg above the water line.

Why is dark matter important? It dominated the mass-energy density of the universe during the early part of its lifetime. Just after the epoch of the cosmic microwave background (CMB), the universe was composed mostly of dark matter (dark energy came to dominate much later, during the most recent 5 billion years). But there was also the ordinary matter, at that time a highly uniform gas of hydrogen and helium atoms, with slightly overdense and slightly underdense regions.

The existence of a large amount of dark matter promotes much more efficient gravitational collapse of the overdense regions. This is a self-gravitational process in which regions slightly denser than the critical mass density (which is also the average mass density of the universe) at a given time collapse away from the overall expansion that continues around them. Both the dark and ordinary matter in such a region collapse together, but it is the ordinary matter that forms the first stars, since it interacts to a much greater degree via various physical processes (think friction, radiation, etc.), allowing collapse to proceed. The dark matter, interacting only via gravity and perhaps the weak nuclear force, but not through electromagnetism, remains more spread out, more diffuse. The dark matter nevertheless promotes the collapse process by increasing the self-gravity of a given region, and this results in more efficient formation of stars and galaxies. Many more stars and galaxies formed at early times than would have been the case in the absence of substantial dark matter.

From cosmological observations, including the CMB and high redshift (distant) supernovae, we find dark matter is about 1/4 of the mass-energy density of the universe. Dark matter is composed of either faint ordinary matter or, more likely, exotic matter that interacts only through the weak and gravitational forces. Dark matter is clearly detected by its gravitational effects on galaxy rotation curves, and is inferred from the kinematics of clusters of galaxies and from temperature measurements of the X-ray emission from very hot gas between the galaxies in these same clusters. Dark matter is also detected through gravitational lensing effects, in our galactic halo and in very large-scale cosmological structures. The abundance of deuterium produced in Big Bang nucleosynthesis, in conjunction with the universe’s now well-known expansion rate, severely constrains the density of baryons (the amount of ordinary nucleonic matter, i.e. protons and neutrons) in the universe, and leads to the conclusion that over 80% of matter is nonbaryonic.

The MACHO (MAssive Compact Halo Objects) alternative, which refers to potential ordinary matter candidates, is thus limited, and the WIMP (Weakly Interacting Massive Particles) contribution is dominant. Hot WIMPs (e.g. neutrinos) are ruled out because they inhibit clumpiness and galaxy formation in the early universe. Cold nonbaryonic dark matter is the best candidate, with the primary candidate being a hypothesized, undiscovered particle. Neutralinos are thought by many particle physicists to be the best candidate for this, with an expected mass of order 50 to 250 times the mass of the proton. It must be emphasized that no neutralino or similar particle (known as supersymmetric particles) has ever been detected directly. It is hoped that the Large Hadron Collider, newly operational at CERN near Geneva, may do this.

There may be a direct detection of an annual modulation of the WIMP wind in a large scintillation array. There is also a possible indirect detection, manifesting as an excess of 1 GeV gamma rays in our galactic halo. Significantly more sensitive detectors are needed to find these elusive particles and to provide a stronger foundation for supersymmetric physics, which postulates many new and heavy particles. An important experiment, AMS-02 (Alpha Magnetic Spectrometer, 2nd generation), is to be carried to the International Space Station on the last Endeavour Shuttle mission, scheduled for April 19, 2011. See http://www.ams02.org and http://cosmiclog.msnbc.msn.com/_news/2011/04/04/6403905-will-space-jam-delay-shuttle-launch

The next decade should allow us to shed new light on dark matter. Whatever it is made of, without the existence of substantial amounts of dark matter, there wouldn’t be nearly as many stars and galaxies in the universe, and we very likely wouldn’t be here.


Inflation


Graphic for History of the Universe (Credit: NASA/WMAP Science Team)

The Big Bang theory found great success explaining the general features of the universe, including the approximate age, the expansion history after the first second, the relative atomic abundances from cosmic nucleosynthesis, and of course the cosmic microwave background radiation. And it required only general relativity, a smooth initial state, and some well-understood atomic and nuclear physics. It assumed matter, both seen and unseen, dominated and was slowing the expansion via gravity. In this model the universe could expand forever, or recollapse on itself, depending on whether the average density was less than or greater than a certain critical value determined only by the present value of the Hubble constant.

However, during the late 20th century there remained some limitations and concerns with the standard Big Bang. Why is today’s density so relatively close to this critical value for recollapse, given that it would have had to be within 1 part in 1000 trillion of the critical density at the time of the microwave background to yield that state? How did galaxies form, given only the tiny density fluctuations observed in the microwave background, emitted when the universe was 380,000 years old? And why was the microwave background so uniform anyway? In the standard Big Bang model, regions only a few degrees apart on the sky would not be causally connected (no communication between the regions, even at light speed, would have been possible).

There are four known fundamental forces of nature: electromagnetism and gravity, plus two types of nuclear forces, known as the strong force and the weak force. Physicists believe all the forces but gravity unify at energies around 10,000 trillion times the rest mass-energy (using E = mc^2) of the proton (1 Giga-electron-Volt). At some point very early in the life of the universe, at even higher energies equal to the Planck energy of 10 million trillion times the proton mass, all four forces would have been unified as a single force or interaction. Gravity would separate from the others first as the universe’s expansion began and the effective temperature dropped, and next the strong force would decouple.

We must also consider the vacuum field, which represents the non-zero energy of empty space. Even empty space is filled with virtual particles, and thus energy. At very early times the energy density of the vacuum would be expected to be very high. During the very earliest period of the development of the universe, the vacuum could have decayed to a lower energy state in conjunction with the decoupling of the strong force from the unified single force, and this would have driven an enormous expansion of space and deposited a large amount of energy into the creation of matter.

In the inflationary Big Bang model postulated by Alan Guth and others, the decay of the vacuum field would release massive amounts of energy and drive an enormous inflation (hyperinflation, really) during a very short period of time. The inflation might have started one trillionth of one trillionth of one trillionth of a second after the beginning, and it might have lasted only until one billionth of one trillionth of one trillionth of a second. But it would have driven the size of the entire universe from an extremely microscopic scale up to a macroscopic scale: at the end of the inflation, what was originally a tiny bubble of space-time would have grown to perhaps one meter in size. At the end of the inflationary period, the universe would have been filled with radiation and matter in the form of a quark-gluon plasma. Quarks are the constituent particles of ordinary matter such as protons and neutrons, and gluons carry the strong force.

The doubling time was extremely short, so during this one billionth of one trillionth of one trillionth of a second the universe doubled in size around 100 times. In each of the 3 spatial dimensions it grew by roughly one million times one trillion times one trillion! This is much greater than even Zimbabwe’s inflation, and it happened in a nearly infinitesimal time. The inflationary period drove the universe to be very flat topologically, which is what is observed. And it implies that the little corner of the universe we can observe, and think of as our own, is only one trillionth of one trillionth of the entire universe, or less. There is good observational support for the inflationary Big Bang model, both from the latest observations concerning the flatness of the universe, with the mass-energy density found to be so close to the critical value, and from the weight of the evidence concerning the growth of the original density fluctuations into stars and galaxies.
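Since inflationary growth is just repeated doubling, the headline number is easy to check:

```python
# ~100 doublings during inflation: growth per spatial dimension = 2^100.
doublings = 100
growth = 2 ** doublings
print(f"{growth:.2e}")   # ~1.27e30, i.e. about a million times a trillion times a trillion
```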


Dark Energy Survey


Dark Energy Survey logo

The Dark Energy Survey (DES) is a ground-based cosmology experiment led by astronomers from the US, Brazil and Europe. Its instrument has begun the trip to Chile, where observations are scheduled to begin in November 2011 using the 4 meter Victor M. Blanco telescope in the Atacama desert. It uses a new, highly sensitive camera design called DECam, with resolution totaling 570 megapixels, employing very large pixels and emphasizing sensitivity in the red and infrared portions of the spectrum in order to measure galaxies out to redshifts of 1 and beyond. Galaxies in the early universe are far away from us and have high redshifts: light they originally emitted in the blue or yellow portions of the optical spectrum has shifted toward the red or infrared, hence the emphasis on detecting infrared photons for this work.

The DES uses a 4-pronged attack to improve the measurement of the dark energy and other cosmological parameters. These 4 tests are:

  1. Supernovae – Type 1a supernovae are thought to occur when a white dwarf in a binary stellar system accretes mass from its companion. Once enough mass is accreted, the white dwarf is pushed over the Chandrasekhar limit of about 1.4 solar masses, the star’s gravity overwhelms the pressure support from its ‘degenerate electron’ matter, and the white dwarf ignites runaway thermonuclear burning and explodes as a supernova. It is temporarily as bright as an entire galaxy. Such a supernova can be detected at large distances (high redshifts), and, very importantly, since the mass of the exploding star is always essentially the same, the absolute brightness of this type of supernova is expected to be essentially the same as well. This allows us to use them as standard candles for distance measurement and thus for cosmological tests.
  2. Baryon acoustic oscillations – This test looks at the statistics of galaxy separations on very large scales. In the early universe, sound waves were set up in the hot dense plasma, reflecting pressure generated by the interaction of photons and ordinary matter. Dark matter does not participate except gravitationally. A “sound horizon” is expected, with a present size of about 500 million light years, and this acts as a standard ruler as the universe expands. A bump in the correlation function, which measures the probability of one galaxy being near another, is expected at this characteristic separation (a rough numerical sketch of the standard-ruler geometry follows this list).
  3. Galaxy cluster counts – This test of how many galaxy clusters are detectable versus redshift was apparently first proposed by me in 1980, in the context of X-ray emission from the very hot diffuse gas found between the galaxies in galaxy clusters. The approach offers certain advantages over simple galaxy counts versus redshift. In this case it will be performed in the infrared and red, observing the galaxies themselves. Galaxy clusters can contain up to 1000 or more galaxies within a single cluster. The number of clusters that can be seen at a given redshift depends on the cosmological model and the mass of the cluster, since dark matter promotes cluster formation through gravitational attraction. Dark energy inhibits cluster formation, so this helps to measure the relative strength of dark energy at earlier times. The team expects to detect over 100,000 galaxy clusters, out to redshifts of 1.5.
  4. Weak lensing – This refers to gravitational lensing, which occurs when a source galaxy lies behind an intervening galaxy cluster and the gravity of the cluster bends the light from the source galaxy in accordance with general relativity. By surveying a very large number of galaxies, a strong statistical measure of this bending, also known as cosmic shear, can be obtained. The amount of shear will be measured as a function of redshift (distance). This shear is sensitive to both the shape of the universe and the way in which structure develops over time.
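As promised above, a rough sketch of how the BAO standard ruler would appear on the sky: the apparent angle is the comoving ruler length divided by the comoving distance to the redshift in question. The cosmological parameters below are assumed round values consistent with those quoted in these posts, and the integration is a simple midpoint rule; this is purely illustrative, not the DES analysis.

```python
import math

H0 = 70.0                         # assumed Hubble constant, km/s/Mpc
OMEGA_M, OMEGA_DE = 0.26, 0.74    # assumed flat-universe density fractions
C_KM_S = 299_792.458              # speed of light, km/s
R_SOUND_MPC = 150.0               # ~500 million light year sound horizon, in megaparsecs

def comoving_distance_mpc(z, steps=10_000):
    """D_C = (c/H0) * integral dz' / sqrt(Om*(1+z')^3 + Ode), midpoint rule."""
    dz = z / steps
    total = 0.0
    for i in range(steps):
        zmid = (i + 0.5) * dz
        total += dz / math.sqrt(OMEGA_M * (1 + zmid) ** 3 + OMEGA_DE)
    return (C_KM_S / H0) * total

z = 0.5
theta_deg = math.degrees(R_SOUND_MPC / comoving_distance_mpc(z))
print(f"~{theta_deg:.1f} degrees on the sky at z = {z}")   # a few degrees
```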

More info: http://www.quantumdiaries.org/2011/02/11/des-first-light-countdown-9-months-to-go-decam-on-telescope-simulator/

http://en.wikipedia.org/wiki/Dark_Energy_Survey

http://news.medill.northwestern.edu/chicago/news.aspx?id=182835


The Big Bang model


Cosmic Microwave Background spectrum (credit: NASA)

The Big Bang theory, describing the origin and expansion of the universe from a very tiny and energetic initial state, was developed initially in the 1920s as a solution of Einstein’s equations of general relativity. It assumed, correctly, a uniform (homogeneous) density of matter and energy. While the universe around us today appears highly non-uniform, with visible matter concentrated in groups of galaxies, individual galaxies, gaseous nebulae, star clusters, stars, and planets, all the evidence indicates that matter was very uniformly distributed throughout the first million years of existence. At that time there were no stars or galaxies; rather, the universe consisted of a hot, dense, but expanding, gas and photons (light). Even today, on the largest scales of 500 million light years and beyond, the universe appears to be quite uniform on average.

The first great support for the Big Bang came from the detection of what we call the Hubble expansion, named for Edwin Hubble, who in 1929 first demonstrated that galaxy recession predominates and depends on distance from us. Galaxies on average are all moving away from each other, unless they are gravitationally bound to their neighbors. The recession velocity is simply proportional to the distance to the galaxy; this is known as Hubble’s law. Every galaxy moves away from every other galaxy regardless of its position in the universe, implying a global and uniform expansion.

How do we determine this relationship? The light from distant galaxies is shifted redder than normal, in proportion to the velocity away from our galaxy. The redshift is thus a measure of the velocity of recession, and the velocity is found to be proportional to the distance from our Milky Way to the galaxy in question. To be clear, galaxy velocity and distance follow a linear relation. If we were located in another galaxy, we would observe the same effect: most galaxies would be receding from us as well, at rates proportional to their distance. This is just what one expects for a universe which is isotropic – the same in each direction – and which is expanding uniformly. Each dimension of three-dimensional space is getting larger with time. Gravitationally bound objects, such as the galaxies themselves, are not expanding, but the space between the galaxies is stretching, and has been since the Big Bang initial event.

Since the recession velocity is proportional to distance, one can take the proportionality constant, known as Hubble’s constant, and invert it to determine an approximate age of the universe. It amounts to ‘running the movie backward.’ The age works out to about 14 billion years, which is very close to the current best estimate of 13.8 billion years, about 3 times the age of the Sun and the Earth.
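The inversion is a one-liner once the units are straightened out; a sketch assuming a round Hubble constant of 70 km/s/Mpc:

```python
# Age estimate t ~ 1/H0: convert km/s per Mpc into an inverse time.
H0 = 70.0                     # assumed Hubble constant, km/s per megaparsec
KM_PER_MPC = 3.0857e19        # kilometers in one megaparsec
SECONDS_PER_GYR = 3.156e16    # seconds in one billion years

age_gyr = (KM_PER_MPC / H0) / SECONDS_PER_GYR
print(f"{age_gyr:.1f} billion years")   # ~14.0, matching the estimate above
```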

Another great success of the Big Bang model was its prediction of the helium abundance. The same hydrogen fusion process that powers the Sun took place in the early universe during the first 20 minutes, when the temperature was millions of degrees. In the Sun, hydrogen is fused to create helium. For the early universe, this is known as primordial, or Big Bang, nucleosynthesis. There was only time enough, and the right conditions, to create helium, the second lightest element in the periodic table, along with the heavy form of hydrogen known as deuterium and just a bit of the third element, lithium. None of the heavier elements such as carbon, nitrogen, oxygen, silicon or iron were created – that would happen later, inside stellar furnaces. The final result of this cosmological nucleosynthesis turned about 25% of the initial available mass of hydrogen into helium, plus trace amounts of deuterium, lithium and beryllium. The primordial abundances of helium and deuterium observed in the oldest stars match the predictions of the Big Bang nucleosynthesis model.

The Big Bang moved from possible theory to well-established factual model of the universe when the first detection of the cosmic microwave background was published in 1965 by Arno Penzias and Robert Wilson, who received the Nobel Prize in Physics for their discovery. The cosmic microwave background is blackbody thermal radiation at millimeter wavelengths, in the radio portion of the electromagnetic spectrum. As we observe it at present, it has a temperature of a little under 3 degrees above absolute zero (see the image above, which has the characteristic thermal blackbody shape). It fills space in every direction in which one observes, and is remarkably uniform in intensity. The cosmic microwave background dates from a time when the universe was about 380,000 years old, and the radiation was originally emitted at a temperature of around 3000 degrees on the Kelvin scale. It has also been redshifted, by a factor of over 1000. Thus we detect today, as radio waves, photons that were originally emitted in the optical and infrared portions of the electromagnetic spectrum when the universe was only 380,000 years old. Unlike the hydrogen and helium atoms found in stars and on planets, these photons have been stretched out in proportion to the expansion of the universe.
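The temperature drop follows directly from that redshift, since a blackbody spectrum redshifts to a blackbody at a temperature lower by the factor (1 + z). A quick check, taking the conventional last scattering redshift of about 1100 as an assumed value:

```python
# Blackbody temperature scales as T_observed = T_emitted / (1 + z).
T_EMITTED_K = 3000.0   # emission temperature from the text, kelvins
Z_CMB = 1100.0         # assumed redshift of the last-scattering surface

print(T_EMITTED_K / (1 + Z_CMB))   # ~2.7 K, a little under 3 degrees as stated
```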