Baryonic matter
Baryonic matter is matter composed mostly of baryons (by mass), which includes atoms of any sort (and thus includes nearly all matter that may be encountered or experienced in everyday life). Non-baryonic matter, as implied by the name, is any sort of matter that is not composed primarily of baryons. This might include such ordinary matter as neutrinos or free electrons; however, it may also include exotic species of non-baryonic dark matter, such as supersymmetric particles, axions, or black holes. The distinction between baryonic and non-baryonic matter is important in cosmology, because Big Bang nucleosynthesis models set tight constraints on the amount of baryonic matter present in the early universe.
The very existence of baryons is also a significant issue in cosmology because it is assumed that the Big Bang produced a state with equal amounts of baryons and antibaryons. The process by which baryons come to outnumber their antiparticles is called baryogenesis (in contrast to a process by which leptons account for the predominance of matter over antimatter, leptogenesis).
Baryogenesis
Main article: Baryogenesis
Experiments are consistent with the number of quarks in the universe being constant and, more specifically, with the number of baryons being constant; in technical language, the total baryon number appears to be conserved. Within the prevailing Standard Model of particle physics, the number of baryons may change in multiples of three through the action of sphalerons, although this is rare and has never been observed experimentally. Some grand unified theories of particle physics also predict that a single proton can decay, changing the baryon number by one; however, this too has never been observed experimentally. The excess of baryons over antibaryons in the present universe is thought to be due to non-conservation of baryon number in the very early universe, though this is not well understood.
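The bookkeeping behind baryon-number conservation can be made concrete with a small sketch. The quark contents and the rule B = (quarks − antiquarks)/3 are standard; the helper functions and particle table below are illustrative, not a real particle-physics library.

```python
from fractions import Fraction

# (number of quarks, number of antiquarks); leptons and photons carry none.
QUARK_CONTENT = {
    "proton":       (3, 0),   # uud
    "neutron":      (3, 0),   # udd
    "electron":     (0, 0),
    "positron":     (0, 0),
    "antineutrino": (0, 0),
    "pi0":          (1, 1),   # quark-antiquark pair: a meson, B = 0
}

def baryon_number(particle):
    """B = (n_quarks - n_antiquarks) / 3, so each baryon carries B = 1."""
    q, qbar = QUARK_CONTENT[particle]
    return Fraction(q - qbar, 3)

def total_B(particles):
    return sum(baryon_number(p) for p in particles)

# Beta decay n -> p + e- + antineutrino conserves baryon number:
assert total_B(["neutron"]) == total_B(["proton", "electron", "antineutrino"]) == 1

# The (never observed) GUT proton decay p -> e+ + pi0 would change B by one:
assert total_B(["proton"]) - total_B(["positron", "pi0"]) == 1
```

The same bookkeeping applied three baryons at a time is what a sphaleron transition would change, which is why Standard Model processes alter B only in multiples of three.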
Leptogenesis
In physical cosmology, leptogenesis is the generic term for hypothetical physical processes that produced an asymmetry between leptons and antileptons in the very early universe, resulting in the dominance of leptons over antileptons. The analogous mechanism for baryons is called baryogenesis.
Note, however, that in the currently accepted model of elementary interactions, the so-called Standard Model, leptons cannot be created in isolation: such processes are bound by conservation laws such as the conservation of electric charge. For example:

γ + γ → e− + e+ (pair creation)
μ− → e− + ν̄e + νμ (muon decay)
n → p + e− + ν̄e (beta decay)
Leptogenesis theories employ sub-disciplines of physics such as quantum field theory and statistical physics to describe such possible mechanisms. The next step after leptogenesis is the much better understood Big Bang nucleosynthesis, during which light atomic nuclei began to form. Successful synthesis of the light elements requires an imbalance between the numbers of baryons and antibaryons of about one part in a billion when the universe is a few minutes old.[1] An asymmetry in the number of leptons and antileptons is not mandatory for Big Bang nucleosynthesis. However, universal charge conservation suggests that any asymmetry between the charged leptons and antileptons (electrons, muons and tau particles) should be of the same order of magnitude as the baryon asymmetry. (Observations of the primordial helium-4 abundance place an upper limit on any lepton asymmetry residing in the neutrino sector,[1] though this limit is not very stringent.)
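The "one part in a billion" figure can be checked with simple arithmetic. The value of eta below is an assumed order-of-magnitude stand-in for the observationally inferred baryon-to-photon ratio, not a precise measurement.

```python
# Order-of-magnitude sketch of "one part in a billion": for roughly every
# billion baryon-antibaryon pairs that annihilated in the early universe,
# about one unmatched baryon survived. eta ~ 6e-10 is an assumed value of
# the baryon-to-photon ratio constrained by BBN and the CMB.
eta = 6e-10                       # n_B / n_gamma (order of magnitude)

pairs = 1_000_000_000             # a notional sample of 10^9 pairs
surviving_baryons = eta * pairs   # unmatched baryons left after annihilation
print(f"{surviving_baryons:.1f} excess baryons per {pairs:,} pairs")
# i.e. an imbalance of less than one part in a billion
```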
Baryogenesis and leptogenesis are also connected by a phenomenon within the Standard Model: certain non-perturbative configurations of the gauge fields, called sphalerons, can convert leptons into baryons and vice versa. This means that the Standard Model is in principle able to provide a mechanism to create baryons and leptons, realizing a speculative possibility suggested by Andrei Sakharov in the 1960s. The simplest version of the Standard Model, however, is quantitatively unable to realize this possibility.
A simple modification of the Standard Model that is able to realize Sakharov's program is the one suggested by M. Fukugita and T. Yanagida.[2] The Standard Model is extended by adding right-handed neutrinos, permitting implementation of the see-saw mechanism and providing the neutrinos with mass. At the same time, the extended model is able to spontaneously generate leptons through the decays of the right-handed neutrinos. Finally, sphalerons convert the spontaneously generated lepton asymmetry into the observed baryon asymmetry. By extension, physicists often use the word leptogenesis to denote this specific mechanism.
Cosmological constant and dark energy
As the universe expands, the total amount of dark energy increases, because its density remains (approximately) constant while the volume grows. Dark matter and baryonic matter, by contrast, do not increase as the universe grows in size; they simply become more and more diluted. The future universe is therefore expected to become colder and colder while expanding faster and faster.
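This dilution argument can be sketched numerically. Matter density scales as a^−3 with the scale factor a, while a cosmological constant stays fixed; the present-day density fractions used below (0.3 and 0.7) are illustrative round numbers.

```python
# Why dark energy comes to dominate: matter dilutes as a^-3, while a
# cosmological constant keeps the same density as space expands.
def densities(a, rho_m0=0.3, rho_L=0.7):
    """Densities (in units of today's critical density) at scale factor a."""
    return rho_m0 / a**3, rho_L   # (matter, dark energy)

for a in (1, 2, 4, 8):
    rho_m, rho_de = densities(a)
    print(f"a={a}: matter={rho_m:.4f}, dark energy={rho_de:.1f}")
# Matter fades toward zero; the constant dark-energy density dominates ever more.
```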
A de Sitter universe is a cosmological solution to Einstein's field equations of General Relativity, named after Willem de Sitter. It models the universe as spatially flat and neglects ordinary matter, so the dynamics of the universe are dominated by the cosmological constant, thought to correspond to dark energy in our universe or the inflaton field in the early universe. According to the models of inflation and current observations of the accelerating universe, the concordance models of physical cosmology are converging on a consistent picture in which our universe is best described as a de Sitter universe both at about t = 10^{-33} seconds after the fiducial Big Bang singularity and again far into the future.
In mathematics and physics, a de Sitter space is the analog in Minkowski space, or spacetime, of a sphere in ordinary, Euclidean space. The n-dimensional de Sitter space, denoted dS_n, is the Lorentzian manifold analog of an n-sphere (with its canonical Riemannian metric); it is maximally symmetric, has constant positive curvature, and is simply-connected for n at least 3.
In the language of general relativity, de Sitter space is the maximally symmetric, vacuum solution of Einstein's field equations with a positive (repulsive) cosmological constant \Lambda (corresponding to a positive vacuum energy density and negative pressure). When n = 4 (3 space dimensions plus time), it is a cosmological model for the physical universe; see de Sitter universe.
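The characteristic behavior of such a universe is exponential expansion. The sketch below assumes the standard Friedmann-equation result H = sqrt(Λ/3) in units with c = 1; the numerical value of Λ used here is illustrative, not physical.

```python
# In a de Sitter universe, the Friedmann equation with only a cosmological
# constant gives a constant Hubble rate H = sqrt(Lambda/3) (units c = 1),
# so the scale factor grows exponentially: a(t) = a0 * exp(H t).
# Lambda = 3.0 is chosen purely so that H = 1 in these illustrative units.
import math

def de_sitter_scale_factor(t, Lambda=3.0, a0=1.0):
    H = math.sqrt(Lambda / 3.0)   # constant Hubble rate
    return a0 * math.exp(H * t)

# Hallmark of exponential expansion: equal time steps multiply a(t) by the
# same factor (here e, since H = 1).
ratios = [de_sitter_scale_factor(t + 1) / de_sitter_scale_factor(t) for t in range(3)]
assert all(abs(r - math.e) < 1e-9 for r in ratios)
```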
The simplest explanation for dark energy is that it is simply the "cost of having space": that is, a volume of space has some intrinsic, fundamental energy. This is the cosmological constant, sometimes called Lambda (hence Lambda-CDM model) after the Greek letter Λ, the symbol used to represent this quantity mathematically. Since energy and mass are related by E = mc^2, Einstein's theory of general relativity predicts that this energy will have a gravitational effect. It is sometimes called a vacuum energy because it is the energy density of empty vacuum. In fact, most theories of particle physics predict vacuum fluctuations that would give the vacuum this sort of energy. This is related to the Casimir effect, in which there is a small suction into regions where virtual particles are geometrically inhibited from forming (e.g. between plates with tiny separation). The cosmological constant is estimated by cosmologists to be on the order of 10^{-29} g/cm^3, or about 10^{-120} in reduced Planck units[citation needed]. Particle physics predicts a natural value of 1 in reduced Planck units, leading to a large discrepancy.
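The roughly 120-orders-of-magnitude discrepancy can be reproduced with a back-of-envelope conversion: express the observed density (~10^-29 g/cm^3) in units of the Planck density, where the naive particle-physics expectation is order 1. Only the order of magnitude matters here; the constants are rounded.

```python
# Back-of-envelope version of the cosmological constant problem:
# the observed dark-energy density, in Planck units, is vanishingly small
# compared to the "natural" value of 1.
import math

G    = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34     # reduced Planck constant, J s
c    = 2.998e8       # speed of light, m/s

planck_density = c**5 / (hbar * G**2)   # Planck density, ~5e96 kg/m^3
observed = 1e-29 * 1e3                  # 10^-29 g/cm^3 -> 10^-26 kg/m^3
ratio = observed / planck_density

print(f"rho_Lambda in Planck units ~ 10^{math.log10(ratio):.0f}")
# lands near 10^-123: some 120 orders of magnitude below the expectation
```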
The cosmological constant has negative pressure equal in magnitude to its energy density, and so causes the expansion of the universe to accelerate. The reason why a cosmological constant has negative pressure can be seen from classical thermodynamics: energy must be lost from inside a container to do work on the container. A change in volume dV requires work done equal to a change of energy −P dV, where P is the pressure. But the amount of energy in a container full of vacuum actually increases when the volume increases (dV is positive), because the energy is equal to ρV, where ρ (rho) is the energy density of the cosmological constant. Therefore, P is negative and, in fact, P = −ρ.
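The thermodynamic argument above can be checked numerically: hold ρ fixed, expand the volume slightly, and read the pressure off the first law. The numerical values are arbitrary.

```python
# Numerical check that a constant-density vacuum has P = -rho:
# E = rho * V with rho fixed, so dE = rho * dV; matching dE = -P dV
# forces P = -rho.
rho = 1.0                        # vacuum energy density (arbitrary units)

def energy(V, rho=rho):
    return rho * V               # total energy of a constant-density vacuum

V, dV = 10.0, 0.001              # volume and a small expansion
dE = energy(V + dV) - energy(V)  # energy *increases* as the vacuum grows
P = -dE / dV                     # pressure implied by the first law
assert abs(P - (-rho)) < 1e-6    # pressure equals minus the energy density
```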
A major outstanding problem is that most quantum field theories predict a huge cosmological constant from the energy of the quantum vacuum, more than 100 orders of magnitude too large.[6] This would need to be cancelled almost, but not exactly, by an equally large term of the opposite sign. Some supersymmetric theories require a cosmological constant that is exactly zero,[citation needed] which does not help, because supersymmetry must be broken. The present scientific consensus amounts to extrapolating the empirical evidence where it is relevant to predictions, and fine-tuning theories until a more elegant solution is found. Technically, this amounts to checking theories against macroscopic observations. Unfortunately, since the measured value of the constant constrains the fate of the universe more than its present state, many such "deeper" questions remain unanswered.
Another problem arises with the inclusion of the cosmological constant in the standard model: the appearance of solutions with regions of discontinuities (see classification of discontinuities for three examples) at low matter density.[23] Discontinuity also affects the past sign of the pressure assigned to the cosmological constant, changing from the current negative pressure to attractive as one looks back towards the early universe. A systematic, model-independent evaluation of the supernova data supporting inclusion of the cosmological constant in the standard model indicates that these data suffer from systematic error. The supernova data are not overwhelming evidence for an accelerating expansion, which may simply be coasting.[24] A numerical evaluation of WMAP and supernova data for evidence that our Local Group sits in a local void, with low matter density compared to other locations, uncovered a possible conflict in the analysis used to support the cosmological constant.[25] A recent theoretical investigation found that the cosmological time, dt, diverges for any finite interval, ds, associated with an observer approaching the cosmological horizon, representing a physical limit to observation; this is a key component required for a complete interpretation of astronomical observations, particularly pertaining to the nature of dark energy.[26] On this view, the identification of dark energy with a cosmological constant does not appear to be consistent with the data. These findings should be considered shortcomings of the standard model, but only when a term for vacuum energy is included.
In spite of its problems, the cosmological constant is in many respects the most economical solution to the problem of cosmic acceleration. One number successfully explains a multitude of observations. Thus, the current standard model of cosmology, the Lambda-CDM model, includes the cosmological constant as an essential feature.
Quintessence
Main article: Quintessence (physics)
In quintessence models of dark energy, the observed acceleration of the scale factor is caused by the potential energy of a dynamical field, referred to as the quintessence field. Quintessence differs from the cosmological constant in that it can vary in space and time. In order for it not to clump and form structure like matter, the field must be very light, so that it has a large Compton wavelength.
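The distinction from a cosmological constant can be made concrete with the standard equation of state for a homogeneous scalar field, w = (kinetic − potential)/(kinetic + potential). This parameterization is a textbook assumption brought in here for illustration, not something stated in the text above.

```python
# Why a slowly rolling quintessence field mimics a cosmological constant:
# for a homogeneous scalar field, w = (K - V) / (K + V) with kinetic
# energy K = phi_dot**2 / 2. When the field barely moves (K ~ 0), w -> -1,
# the cosmological-constant value. Field values below are illustrative.
def equation_of_state(phi_dot, V):
    kinetic = 0.5 * phi_dot**2
    return (kinetic - V) / (kinetic + V)

assert abs(equation_of_state(0.0, 1.0) - (-1.0)) < 1e-12  # frozen field: w = -1
assert equation_of_state(0.1, 1.0) > -1.0                 # slow roll: w just above -1
assert abs(equation_of_state(2.0, 0.0) - 1.0) < 1e-12     # kinetic-dominated: w = +1
```

The middle assertion reflects the remark below that quintessence generally predicts slightly slower acceleration than a pure cosmological constant: a rolling field sits at w slightly above −1.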
No evidence of quintessence is yet available, but it has not been ruled out either. It generally predicts a slightly slower acceleration of the expansion of the universe than the cosmological constant. Some scientists think that the best evidence for quintessence would come from violations of Einstein's equivalence principle and variation of the fundamental constants in space or time.[citation needed] Scalar fields are predicted by the standard model and string theory, but an analogous problem to the cosmological constant problem (or the problem of constructing models of cosmic inflation) occurs: renormalization theory predicts that scalar fields should acquire large masses.
The cosmic coincidence problem asks why the cosmic acceleration began when it did. If cosmic acceleration began earlier in the universe, structures such as galaxies would never have had time to form and life, at least as we know it, would never have had a chance to exist. Proponents of the anthropic principle view this as support for their arguments. However, many models of quintessence have a so-called tracker behavior, which solves this problem. In these models, the quintessence field has a density which closely tracks (but is less than) the radiation density until matter-radiation equality, which triggers quintessence to start behaving as dark energy, eventually dominating the universe. This naturally sets the low energy scale of the dark energy.[citation needed]
In 2004, when scientists fit the evolution of dark energy to the cosmological data, they found that the equation of state had possibly crossed the cosmological constant boundary (w = −1) from above to below. A no-go theorem has been proved showing that this scenario requires dark energy models with at least two degrees of freedom. The scenario is known as the Quintom scenario.
Some special cases of quintessence are phantom energy, in which the energy density of quintessence actually increases with time, and k-essence (short for kinetic quintessence) which has a non-standard form of kinetic energy. They can have unusual properties: phantom energy, for example, can cause a Big Rip.
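The contrast between these cases follows from the standard scaling law for a constant equation of state, ρ ∝ a^−3(1+w), assumed here for illustration: matter (w = 0) dilutes, a cosmological constant (w = −1) stays fixed, and phantom energy (w < −1) grows with expansion, which is the ingredient behind a Big Rip.

```python
# How the equation of state w controls the fate of the energy density:
# for constant w, rho scales as a^(-3 * (1 + w)) with scale factor a.
def rho_of_a(a, w, rho0=1.0):
    return rho0 * a ** (-3.0 * (1.0 + w))

a = 2.0  # universe doubled in size
assert rho_of_a(a, w=0.0)  == 0.125   # matter: diluted by a^3 = 8
assert rho_of_a(a, w=-1.0) == 1.0     # cosmological constant: unchanged
assert rho_of_a(a, w=-1.2) > 1.0      # phantom energy: grows as space expands
```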