We recall a few basics from a typical undergraduate level cosmology course, such as the expressions for the FLRW metric and the Einstein field equations:
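In standard notation (a reconstruction of the stripped equations; conventions vary slightly between references), these are:

```latex
% FLRW metric, with scale factor a(t) and spatial curvature k = -1, 0, +1
ds^2 = -dt^2 + a^2(t)\left[ \frac{dr^2}{1 - k r^2} + r^2 \left( d\theta^2 + \sin^2\theta \, d\phi^2 \right) \right]

% Einstein field equations (with cosmological constant)
G_{\mu\nu} + \Lambda g_{\mu\nu} = 8\pi G \, T_{\mu\nu}
```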
from which we can derive the Friedmann and continuity equations:
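In standard form (a reconstruction; here $H = \dot{a}/a$ and an overdot denotes a derivative with respect to cosmic time):

```latex
% Friedmann equation
H^2 = \left( \frac{\dot{a}}{a} \right)^2 = \frac{8\pi G}{3} \rho - \frac{k}{a^2} + \frac{\Lambda}{3}

% Continuity equation
\dot{\rho} + 3H \left( \rho + p \right) = 0
```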
Cosmological linear perturbation theory (mostly done in Fourier space) tells us about linear deviations from a smooth Universe, and is valid on large scales. For these reasons, non-linear effects and small scale behaviours are obviously beyond the scope of this talk.
We work in the conformal Newtonian gauge, setting:
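In a common convention (a reconstruction; some references swap the roles of $\Phi$ and $\Psi$, or flip signs), the perturbed metric reads:

```latex
ds^2 = a^2(\tau) \left[ -(1 + 2\Psi) \, d\tau^2 + (1 - 2\Phi) \, \delta_{ij} \, dx^i dx^j \right]
```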
and we consider the perturbed energy-momentum tensor, from which we get four quantities of interest, namely: the density perturbation $\delta\rho$, the velocity divergence $\theta$, the pressure perturbation $\delta p$ and the anisotropic stress $\sigma$. These correspond respectively to the $00$, $0i$, trace $ii$ and traceless $ij$ components of the tensor. The perturbed Einstein equation yields 4 equations; in particular:
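In Fourier space, the two equations used below can be sketched as (a reconstruction in the conventions above; prefactors vary between references):

```latex
% Poisson equation, in terms of the comoving density contrast \Delta
k^2 \Phi = -4\pi G a^2 \bar{\rho} \, \Delta

% Anisotropy equation, sourced by the anisotropic stress \sigma
k^2 \left( \Phi - \Psi \right) = 12\pi G a^2 \left( \bar{\rho} + \bar{p} \right) \sigma
```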
In the first equation (Poisson), we’ve defined the comoving density contrast $\Delta = \delta + 3\mathcal{H}(1+w)\theta/k^2$ (where, as usual, $\delta = \delta\rho/\bar{\rho}$). In the second equation, when $w = 0$ (that is, baryonic matter and cold dark matter, both taken to be pressureless) there is no anisotropic stress ($\sigma = 0$) and so the formula reduces to:
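That is:

```latex
\Phi = \Psi
```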
This is a unique feature of general relativity, which is lost in many theories of modified gravity. As we will see later, assigning an observable to this difference allows us to quantify the divergence of observations from standard GR. Another relation of interest is derived by expanding the conservation equation $\nabla_\mu T^{\mu\nu} = 0$:
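For pressureless matter on sub-horizon scales, combining the resulting continuity and Euler equations gives (a sketch; primes denote conformal-time derivatives and $\mathcal{H} = a'/a$):

```latex
\delta'' + \mathcal{H} \delta' = -k^2 \Psi
```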
This tells us that the rate of growth of a perturbation is related to $\Psi$ (and thus, through the previous relation, to $\Phi$). Finally, note that these results are only valid when $\Phi$, $\Psi$ and $\delta$ are all $\ll 1$.
There are a number of cosmic probes we can use to constrain the parameters of our theory. For instance, the accelerated expansion of the Universe was discovered in 1998 (see the original paper) using supernovae, while the measurement of the power spectrum of the CMB (below) tells us about the geometry of the Universe (the total density $\Omega_{\rm tot}$ is derived from the angular position of the first peak, and turns out to correspond to a flat Euclidean geometry) and its matter content (the second and third peaks are sensitive to the baryon density $\Omega_b$ and the matter density $\Omega_m$; there is in fact not enough ordinary matter to reach the critical density for a flat geometry, and so one has to invoke Dark Energy).
We can distinguish between observations about the background parameters ($H$, $\Omega_i$): CMB, supernovae, Baryon Acoustic Oscillations (BAO), local measurements; and those about the perturbations ($\delta$, $\Phi$, $\Psi$): **growth rate**, CMB lensing and polarisation, integrated Sachs-Wolfe effect, **galaxy weak lensing**, intensity mapping, etc. We now turn our attention to a couple of those measurements (in bold).
The growth rate $f$ can be defined as a function of the density contrast $\delta$ and the scale factor $a$ as:
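Explicitly (a reconstruction; the second relation is a standard GR approximation, not an exact result):

```latex
f \equiv \frac{d \ln \delta}{d \ln a} \simeq \Omega_m(a)^{\gamma}, \qquad \gamma \approx 0.55 \text{ in GR}
```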
The associated measurement in galaxy surveys is that of redshift-space distortions (RSD). Essentially, galaxy surveys provide three numbers for a vast range of galaxies: two spatial angles and the value of the redshift. Naively, this should be enough to get a clear 3D map.
Imagine a perfect sphere of galaxies (in redshift space), which could be precisely obtained with these three numbers; add to this picture that the sphere is being carried away from you (due to the expansion of the Universe). This causes an additional redshift, which is fine. Add further that the sphere is collapsing on itself, due to its gravitational pull - now galaxies have relative velocities, which is not so fine.
Indeed, the galaxies furthest away from you will be moving towards the center of collapse, that is, backwards relative to the sphere; meanwhile, the galaxies closest to you will be accelerating forwards, also towards the center of collapse. This creates an additional Doppler effect, and in this redshift space, the galaxies that are spatially closest to you appear at the back, and vice-versa.
This “squashing” of the sphere, caused by RSDs, can be quantified by the combination $f\sigma_8(z)$, where $f$ is the growth rate previously defined, $\sigma_8$ is a density normalisation (the rms amplitude of matter fluctuations in spheres of radius $8\,h^{-1}$ Mpc), and $z$ is the redshift.
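As a rough illustration (not part of the talk; function names and the parameter value $\Omega_{m,0} = 0.3$ are illustrative), the GR growth rate is well approximated by $f \simeq \Omega_m(z)^{0.55}$:

```python
# A minimal sketch of the growth-rate approximation f ~ Omega_m(z)^gamma,
# valid in GR for a flat LambdaCDM background. Parameter values are illustrative.

def omega_m(z, omega_m0=0.3):
    """Matter density parameter at redshift z in flat LambdaCDM."""
    e2 = omega_m0 * (1 + z) ** 3 + (1 - omega_m0)  # H(z)^2 / H0^2
    return omega_m0 * (1 + z) ** 3 / e2

def growth_rate(z, omega_m0=0.3, gamma=0.55):
    """GR growth-rate approximation f = Omega_m(z)^gamma."""
    return omega_m(z, omega_m0) ** gamma

# Today Omega_m ~ 0.3 gives f ~ 0.5; at high redshift the Universe is
# matter dominated, Omega_m -> 1, and so f -> 1.
print(round(growth_rate(0.0), 3))  # → 0.516
print(round(growth_rate(5.0), 3))  # → 0.994
```

A theory of modified gravity would typically predict a different effective $\gamma$, which is one way RSD data can discriminate between models.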
Moving on to galaxy lensing, we note that strong lensing (leading to halos and streaks of light observed around galaxies) is extreme and relatively rare. On the other hand, we can learn more from the more frequent galaxy weak lensing, which causes a shear in the shape of galaxies (they seem more elliptical). Of course, we don’t know what the original shape of the galaxy is, so how could we determine how much shearing actually took place? By looking at possible correlations of directional ellipticity in many galaxies.
Here I skip a bit of optics about the process of lensing itself, and focus on the observable. We can write a tensor relating the true and apparent image positions as $A_{ij} = \partial \beta_i / \partial \theta_j$ (with $\vec{\beta}$ the true and $\vec{\theta}$ the observed angular position). We’re then interested in the deviation from the case of no lensing (unit tensor):
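Schematically (a reconstruction; prefactors and distance weightings vary between references), the distortion tensor is a line-of-sight integral of the potentials:

```latex
A_{ij} = \delta_{ij} - \frac{\partial^2}{\partial \theta^i \, \partial \theta^j} \int d\chi \, g(\chi) \left( \Phi + \Psi \right)
```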
where the lensing kernel $g(\chi)$ contains the relevant cosmological distances and number densities of galaxies. Observables can then be built from this tensor by projecting out its components; in particular, we’ll be able to measure its magnification and shear.
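In the standard parametrisation, the tensor is decomposed into a convergence $\kappa$ (magnification) and a two-component shear $(\gamma_1, \gamma_2)$:

```latex
A = \begin{pmatrix} 1 - \kappa - \gamma_1 & -\gamma_2 \\ -\gamma_2 & 1 - \kappa + \gamma_1 \end{pmatrix}
```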
Precision measurements (Shapiro time delays, binary pulsars, etc.) yield very tight constraints on deviations from GR, but only on (relatively) short scales and in dense media. If we want to come up with a viable theory of modified gravity on cosmological scales, we need to ensure the existence of a screening mechanism to protect the GR regime in those environments. An important result at this point is Lovelock’s theorem, which can be stated (in a rather approximate way) as:
The only second order, local gravitational field equation derivable from an action containing solely the 4-dimensional metric (plus related tensors) is the Einstein field equation with a cosmological constant.
The formulation of this theorem rests on five elements, which can all be (separately or in various combinations) abandoned to allow for a theory of modified gravity:
- “containing solely the metric tensor” → add new field content.
- “4-dimensional” → go to higher dimensions.
- “second order” → include higher-order derivatives. This, however, leads to the Ostrogradsky instability, i.e. a Hamiltonian unbounded from below. Clever work-arounds are possible, but are usually quite restrictive.
- “local” → use non-local operators.
- “derivable from an action” → do away with the action principle. This radical solution is taken in e.g. emergent gravity.
As can be seen in the diagram below, there are hundreds of such models: we need generic observables to quantify deviations from GR, in a model-independent way.
To do so, we modify our perturbation equations. The Poisson equation picks up a prefactor $\mu(a, k)$, while, as we’ve already mentioned, the equivalence between $\Phi$ and $\Psi$ is no longer assumed to hold; their ratio becomes a free function $\eta(a, k)$:
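A common parametrisation (a sketch; several equivalent conventions exist in the literature):

```latex
% Modified Poisson equation
k^2 \Phi = -4\pi G a^2 \mu(a, k) \, \bar{\rho} \, \Delta

% Gravitational slip
\eta(a, k) = \frac{\Phi}{\Psi}
```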
We then have to rerun all our calculations with $\mu$ and $\eta$ folded in, and use the data to see if there is any deviation from the GR values $\mu = \eta = 1$.
At the moment, all data seem to indicate we live close to the $\mu = \eta = 1$ region. With more surveys coming soon (e.g. DES), we can hope to get more statistics and reduce the error bars.