4.2. Before starting
In the section The Cosmology calculator we learned how to use EPIC to set up a cosmological model and load some datasets. The next logical step is to calculate the probability density at a given point in the parameter space, given that model and according to the chosen data. This can be done as follows:
In this example I am choosing the cosmic chronometers dataset, the Hubble constant local measurement and the simplified version of the JLA dataset.
The Analysis object is created from the dictionary of datasets, the model and a dictionary of priors on the model parameters (including nuisance parameters related to the data). The probability density at any point can then be calculated with the method log_posterior, which returns the logarithm of the posterior probability density and the logarithm of the likelihood.
Setting the option chi2 to True (it is False by default) makes the likelihood be calculated as \(\log \mathcal{L} = - \chi^2/2\), dropping the usual multiplicative terms from the normalized Gaussian likelihood. When False, the results include the contribution of the factors \(1/\left(\sqrt{2\pi}\,\sigma_i\right)\) or of the factor \(1/\sqrt{2 \pi |\textbf{C}|}\). These are constant in most cases, making no difference to the analysis, but in some cases, depending on the dataset, the covariance matrix \(\textbf{C}\) can depend on nuisance parameters and thus vary at each point.
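To make the difference between the two conventions concrete, here is a standalone illustration in plain NumPy (independent of EPIC) for uncorrelated Gaussian errors; all numbers are made up:

```python
import numpy as np

# Residuals between data and model prediction, with uncorrelated errors sigma.
residuals = np.array([0.5, -1.0, 0.3])
sigma = np.array([1.0, 2.0, 0.5])

chi2 = np.sum((residuals / sigma) ** 2)

# chi2=True convention: drop the normalization factors 1/(sqrt(2*pi)*sigma_i).
loglike_chi2 = -chi2 / 2

# Full normalized Gaussian log-likelihood, keeping those factors.
loglike_full = loglike_chi2 - np.sum(np.log(np.sqrt(2 * np.pi) * sigma))

# As long as sigma does not depend on the parameters, the two differ only by
# a constant offset, which is irrelevant to the analysis.
offset = loglike_full - loglike_chi2
```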
Now that we know how to calculate the posterior probability at a given point, we can perform a Markov Chain Monte Carlo simulation to assess the confidence regions of the model parameters. The main script epic.py accomplishes this, making use of the objects and modules presented here.
The configuration of the analysis (choice of model, datasets, priors, etc.) is defined in a .ini configuration file that the program reads. The program creates a folder in the working directory with the same name as this .ini file, if it does not already exist. Another folder, named with the date and time, is created for the output of each run of the code, but you can always continue a previous run from where it stopped, just by giving the folder name instead of the .ini file. The script is stored in the EPIC source folder, where the .ini files should also be placed. The default working directory is the EPIC folder's parent directory, i.e., the epic repository folder.
Changing the default working directory
By default, the folders with the names of the .ini files are created at the repository root level. But the chains can get very long and you might want to have them stored on a different drive. In order to set a new default location for all new files, run:
$ python define_altdir.py
This will ask for the path of the folder where you want to save all the output of the program and keep this information in a file altdir.txt. If you want to revert this change, you can delete the altdir.txt file or run the command above again and leave the answer empty when prompted.
To change this directory temporarily, you can use the argument --alt-dir when running the main script.
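For example, a run might be launched like this (a hypothetical invocation: apart from epic.py, the .ini file and --alt-dir, which are described in the text, the exact command-line syntax is an assumption, so check the script's help for the actual arguments):

```
$ python epic.py my_model.ini --alt-dir /mnt/storage/chains
```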
The structure of the .ini file
Let us work with an example, with a simple flat \(\Lambda\text{CDM}\) model.
Suppose we want to constrain its parameters with \(H(z)\), supernovae data,
CMB shift parameters and BAO data.
The model parameters are the reduced Hubble constant \(h\), the present-day
values of the physical density parameters of dark matter \(\Omega_{c0} h^2\),
baryons \(\Omega_{b0} h^2\) and radiation \(\Omega_{r0} h^2\).
We will not consider perturbations; we are only constraining the parameters at the background level.
Since we are using supernovae data we must include a nuisance parameter
\(M\), which represents a shift in the absolute magnitudes of the
supernovae.
Use of the full JLA catalogue requires the inclusion of the nuisance parameters
\(\alpha\), \(\beta\) and \(\Delta M\) from the light-curve fit.
The first section of the .ini file is required to specify the type of the model, whether to use physical density parameters or not, and which species has its density parameter derived from the others (e.g., from the flatness condition):
[model]
type = lcdm
physical = yes
optional species = ['baryons', 'radiation']
derived = lambda
The lcdm model will always have the two species cdm and lambda. We are including the optional baryonic fluid and radiation, which, being a combined species, replaces photons and neutrinos. The configurations and options available for each model are registered in the EPIC/cosmology/model_recipes.ini file. This section can also receive the interaction setup dictionary to set the configuration of an interacting dark sector model. Details on this are given in the previous section Interacting Dark Energy models.
The second section defines the analysis: a label, the datasets and specifications about the prior ranges and distributions. The optional property prior distributions can receive a dictionary with either Flat or Gaussian for each parameter. When not specified, the code will assume flat priors by default and interpret the list of two numbers as the interval of the prior range. When Gaussian, these numbers are interpreted as the parameters \(\mu\) and \(\sigma\) of the Gaussian distribution.
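For example, to place a Gaussian prior on h centered at 0.7 with standard deviation 0.02 (illustrative values; the dictionary syntax here mirrors the other options in this file and may need adjusting for your EPIC version):

```ini
priors = {
    'h' : [0.7, 0.02],
    }
prior distributions = {
    'h' : 'Gaussian',
    }
```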
In the simulation section, we specify the parameters of the diagonal covariance matrix to be used with the proposal probability distribution in the sampler. Values comparable to the expected standard deviations of the parameter distributions are recommended.
[analysis]
label = $H(z)$ + $H_0$ + SNeIa + BAO + CMB
datasets = {
'Hz': 'cosmic_chronometers',
'H0': 'HST_local_H0',
'SNeIa': 'JLA_simplified',
'BAO': [
'6dF+SDSS_MGS',
'SDSS_BOSS_CMASS',
'SDSS_BOSS_LOWZ',
'SDSS_BOSS_QuasarLyman',
'SDSS_BOSS_consensus',
'SDSS_BOSS_Lyalpha-Forests',
],
'CMB': 'Planck2015_distances_LCDM',
}
priors = {
'Och2' : [0.08, 0.20],
'Obh2' : [0.02, 0.03],
'h' : [0.5, 0.9],
'M' : [-0.3, 0.3],
}
prior distributions =
fixed = {
'T_CMB' : 2.7255
}
[simulation]
proposal covariance = {
'Och2' : 1e-3,
'Obh2' : 1e-5,
'h' : 1e-3,
'M': 1e-3,
}