Description

The banana distribution is a common synthetic target used to benchmark MCMC methods. For details on this implementation, see Pagani et al. (2022).

Here we use the values:

| Parameter | Value |
|---|---|
| Number of "$y$" dimensions, $d$ | 2 |
| Scale factor | 1.0 |
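
As a rough illustration of this kind of target, a banana-type (Rosenbrock-style) density couples one $x$ coordinate to $d$ "$y$" coordinates through a quadratic mean. The sketch below uses illustrative constants; the exact parameterization of Pigeons' banana_model may differ (see Pagani et al., 2022, and the Pigeons sources).

```julia
# Sketch of a banana-type log-density with one "x" coordinate and d "y" coordinates,
# each centered at x^2. Constants are illustrative only and not necessarily those of
# Pigeons' banana_model; additive normalizing constants are omitted.
function banana_logdensity(x::Real, y::AbstractVector{<:Real}; scale = 1.0)
    lp = -x^2 / 2                             # Gaussian factor on x
    for yi in y
        lp -= (yi - x^2)^2 / (2 * scale^2)    # Gaussian factor on each y_i around x^2
    end
    return lp
end

banana_logdensity(0.7, [0.4, 0.6])   # evaluate at an arbitrary point with d = 2
```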

Stan implementation

Pair plot

Diagonal entries show estimates of the marginal densities as well as the (0.16, 0.5, 0.84) quantiles (dotted lines). Off-diagonal entries show estimates of the pairwise densities.

An accompanying movie superimposes 100 iterations of MCMC on this pair plot.


Trace plots


Intervals

Nominal coverage requested: 0.95 (change via the interval_probability option, which can be passed to report()).

The credible interval (naive_left, naive_right) is constructed using the quantiles of the posterior distribution. It is naive in the sense that it does not take into account the additional uncertainty introduced by the Monte Carlo approximation.

The radius of a Monte Carlo confidence interval with the same nominal coverage, constructed around each end point of the naive interval, is reported in mcci_radius_left and mcci_radius_right.

Finally, (fused_left, fused_right) is obtained by merging the two sources of uncertainty: statistical uncertainty, captured by the credible interval, and computational uncertainty, captured by the confidence intervals on the end points.
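
A sketch of this construction (consistent with the table below, though not necessarily InferenceReport's exact implementation): compute the naive quantile interval from the draws, then push each end point outward by its Monte Carlo confidence radius.

```julia
using Statistics

# Naive credible interval from posterior draws, widened on each side by the Monte
# Carlo confidence radius of the corresponding end point (the arguments r_left and
# r_right stand in for mcci_radius_left / mcci_radius_right).
function fused_interval(draws::AbstractVector{<:Real}, r_left::Real, r_right::Real; prob = 0.95)
    α = 1 - prob
    naive_left  = quantile(draws, α / 2)
    naive_right = quantile(draws, 1 - α / 2)
    return (naive_left - r_left, naive_right + r_right)
end

# For x in the table below: (-2.58925 - 0.0475616, 2.26585 + 0.112907) ≈ (-2.63682, 2.37876),
# matching the reported (fused_left, fused_right).
```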

| parameters | naive_left | naive_right | mcci_radius_left | mcci_radius_right | fused_left | fused_right |
|---|---|---|---|---|---|---|
| x | -2.58925 | 2.26585 | 0.0475616 | 0.112907 | -2.63682 | 2.37876 |
| y.1 | -0.302211 | 6.91974 | 0.0327571 | 0.304065 | -0.334968 | 7.22381 |
| y.2 | -0.335462 | 6.63295 | 0.0723558 | 0.264375 | -0.407818 | 6.89733 |
| log_density | -5.30357 | -1.68049 | 0.408518 | 0.0141637 | -5.71209 | -1.66632 |

Moments, MCSE, ESS, etc.

The ESS/MCSE/Rhat estimators use InferenceReport.safe_summarystats(chains), which is based on the truncated autocorrelation estimator (Geyer, 1992, Sec. 3.3) computed via FFT with no lag limit. As a result, these estimators should be safe to use in the low relative ESS regime, in contrast to the defaults used in MCMCChains, which lead to catastrophic ESS over-estimation in that regime.
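
For intuition, here is a self-contained sketch of an estimator of this type: autocovariances computed via FFT with no preset lag cutoff, truncated with Geyer's initial positive sequence rule. It illustrates the idea rather than reproducing safe_summarystats.

```julia
using FFTW, Statistics

# ESS for a single chain via FFT-based autocovariances, truncated with Geyer's
# initial positive sequence rule (Geyer, 1992, Sec. 3.3). Illustrative sketch only.
function geyer_ess(x::AbstractVector{<:Real})
    n = length(x)
    xc = x .- mean(x)
    m = nextpow(2, 2n)                          # zero-pad to avoid circular wrap-around
    f = rfft(vcat(xc, zeros(m - n)))
    acov = irfft(f .* conj.(f), m)[1:n] ./ n    # autocovariances at lags 0, 1, ..., n-1
    # Sum consecutive lag pairs Γ_k = γ_{2k} + γ_{2k+1} while they stay positive.
    asymp_var = -acov[1]
    k = 1
    while k < n
        pair = acov[k] + acov[k + 1]
        pair <= 0 && break
        asymp_var += 2 * pair
        k += 2
    end
    τ = max(asymp_var, acov[1]) / acov[1]       # integrated autocorrelation time, ≥ 1
    return n / τ
end

# Usage: ESS of a strongly autocorrelated AR(1) toy chain
ar1 = accumulate((s, e) -> 0.9s + e, randn(10_000))
geyer_ess(ar1)
```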

| parameters | mean | std | mcse | ess_bulk | ess_tail | rhat | ess_per_sec |
|---|---|---|---|---|---|---|---|
| x | -0.103588 | 1.41648 | 0.0446758 | 1022.47 | 1102.78 | 1.00701 | missing |
| y.1 | 2.01165 | 2.09199 | 0.0545212 | 1613.58 | 1301.01 | 0.999817 | missing |
| y.2 | 2.00957 | 2.09562 | 0.0535741 | 1742.69 | 1284.11 | 0.999468 | missing |
| log_density | -2.7082 | 0.981552 | 0.0229335 | 1786.87 | 1802.2 | 0.999798 | missing |

Cumulative traces

For each iteration $i$, shows the running average up to $i$, $\frac{1}{i} \sum_{n = 1}^{i} x_n$.
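
In code, this is just a cumulative sum divided by the iteration index; a minimal Julia sketch:

```julia
# Running average of a vector of draws: element i equals mean(x[1:i]).
running_mean(x::AbstractVector) = cumsum(x) ./ (1:length(x))

running_mean([1.0, 3.0, 5.0])   # returns [1.0, 2.0, 3.0]
```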


Local communication barrier

When the global communication barrier is large, many chains may be required to obtain tempered restarts.

The local communication barrier can be used to visualize the cause of a high global communication barrier. For example, if there is a sharp peak close to a reference constructed from the prior, it may be useful to switch to a variational approximation.


Local communication barrier (variational)

Local communication barrier for the variational leg (ideally smaller than the non-variational one).


GCB estimation progress

Estimate of the Global Communication Barrier (GCB) as a function of the adaptation round.

The global communication barrier can be used to set the number of chains. The theoretical framework of Syed et al. (2021) implies that, under simplifying assumptions, it is optimal to set the number of chains (the n_chains argument of pigeons()) to roughly $2\Lambda$.
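
As a quick illustration of this rule (using the last-round barrier estimate of the variational leg, reported further below; the non-variational barrier of this run is numerically zero):

```julia
# Rule of thumb from Syed et al. (2021): n_chains ≈ 2Λ, rounded up.
# Λ is the last-round GCB estimate of the variational leg of this run.
Λ = 2.2563447523276765
suggested_n_chains = max(2, ceil(Int, 2Λ))   # = 5
```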

Last round estimate: $8.881784197001252e-16$


GCB estimation progress (variational)

Estimate of the Global Communication Barrier (GCB) as a function of the adaptation round for the variational chain.

Last round estimate: $2.2563447523276765$


Evidence estimation progress

Estimate of the log normalization (computed using the stepping stone estimator) as a function of the adaptation round.
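
For reference, here is a sketch of the basic one-directional stepping stone estimator along a tempered path; Pigeons' actual estimator differs in implementation details, so treat the names and structure below as illustrative.

```julia
# Stepping-stone estimate of log(Z_target / Z_reference) along the tempered path
# π_β ∝ reference^(1-β) * target^β. `logratios[k]` holds log(target/reference)
# density ratios evaluated at samples from the chain at inverse temperature betas[k].
logsumexp(v) = (m = maximum(v); m + log(sum(exp.(v .- m))))

function stepping_stone_estimate(logratios::Vector{<:AbstractVector}, betas::AbstractVector)
    total = 0.0
    for k in 1:(length(betas) - 1)
        Δβ = betas[k + 1] - betas[k]
        w = Δβ .* logratios[k]                  # log importance weights for this rung
        total += logsumexp(w) - log(length(w))  # log of the average importance weight
    end
    return total
end
```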

Last round estimate: $-0.7750440316552649$


Round trips

Number of tempered restarts as a function of the adaptation round.

A tempered restart happens when a sample from the reference percolates to the target. When the reference supports iid sampling, tempered restarts can enable large jumps in the state space.
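
For scale, the summary table below reports 625 tempered restarts over the 1024 scans of the final round:

```julia
# Tempered restart rate in the final adaptation round (values from the summary table below).
n_restarts_last_round = 625
n_scans_last_round = 1024
restart_rate = n_restarts_last_round / n_scans_last_round   # ≈ 0.61 restarts per scan
```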


Swaps plot


Pigeons summary

| round | n_scans | n_tempered_restarts | global_barrier | global_barrier_variational | last_round_max_time | last_round_max_allocation | stepping_stone |
|---|---|---|---|---|---|---|---|
| 1 | 2 | 0 | 4.44089e-16 | 4.44089e-16 | 0.00386291 | 170144.0 | 4.44089e-16 |
| 2 | 4 | 0 | 2.22045e-16 | 2.22045e-16 | 0.00984442 | 282608.0 | -1.66533e-16 |
| 3 | 8 | 0 | 6.66134e-16 | 0.0 | 0.0176438 | 570208.0 | -2.22045e-16 |
| 4 | 16 | 8 | 3.33067e-16 | 3.33067e-16 | 0.0329283 | 1.17046e6 | 0.0 |
| 5 | 32 | 24 | 2.22045e-16 | 3.33067e-16 | 0.0671293 | 2.13784e6 | 4.44089e-16 |
| 6 | 64 | 36 | 6.66134e-16 | 1.58137 | 0.202179 | 3.1264e7 | -0.8462 |
| 7 | 128 | 73 | 6.66134e-16 | 1.7878 | 0.450581 | 6.57928e7 | -0.809928 |
| 8 | 256 | 154 | 9.99201e-16 | 1.97169 | 0.816522 | 1.30888e8 | -0.348366 |
| 9 | 512 | 311 | 7.77156e-16 | 2.1588 | 1.71487 | 2.62537e8 | -0.59099 |
| 10 | 1024 | 625 | 8.88178e-16 | 2.25634 | 3.40166 | 5.35198e8 | -0.775044 |

Pigeons inputs

| Keys | Values |
|---|---|
| extended_traces | false |
| checked_round | 0 |
| extractor | nothing |
| record | Function[Pigeons.traces, Pigeons.round_trip, Pigeons.log_sum_ratio, Pigeons.timing_extrema, Pigeons.allocation_extrema] |
| multithreaded | false |
| show_report | true |
| n_chains | 10 |
| variational | GaussianReference(Dict{Symbol, Any}(:singleton_variable => [-0.10358770214856587, 2.011651654110387, 2.0095743386153906]), Dict{Symbol, Any}(:singleton_variable => [1.4164799497692833, 2.0919854175342847, 2.0956166570083674]), 5) |
| explorer | nothing |
| n_chains_variational | 10 |
| target | StanLogPotential(banana_model) |
| n_rounds | 10 |
| exec_folder | nothing |
| reference | nothing |
| checkpoint | false |
| seed | 1 |

Reproducibility

```julia
# Clone the exact version of InferenceReport.jl used to generate this report
run(`git clone https://github.com/Julia-Tempering/InferenceReport.jl`)
cd("InferenceReport.jl")
run(`git checkout 3e33eb00e251294cff0d7580793dc023f8f1525d`)

# Instantiate the pinned project environment
using Pkg
Pkg.activate(".")
Pkg.instantiate()

# Re-run Pigeons with the same inputs as listed above
using Pigeons
inputs = Inputs(;
    target = Pigeons.stan_banana(2),
    variational = GaussianReference(first_tuning_round = 5),
    n_chains_variational = 10,
    n_rounds = 10,
    record = [traces; round_trip; record_default()])

pt = pigeons(inputs)
```