Here we use simulated data to demonstrate a fine-mapping setting where forward stagewise selection is clearly problematic, but SuSiE's Iterative Bayesian Stepwise Selection (IBSS) procedure correctly identifies the causal signals and computes posterior inclusion probabilities (posterior means from a variational approximation to the posterior distribution). We also compare SuSiE with other Bayesian sparse regression approaches, and demonstrate that SuSiE is robust to the choice of prior and provides information relevant to fine-mapping that other methods do not.
From the simulated data we pick a case where the top z-score may be driven by two other nearby signal clusters (each containing variables with non-zero effects). See here for a brief description of the simulation settings.
%revisions -s -n 10
library(susieR)
set.seed(1)
# Simulated fine-mapping data set shipped with susieR.
data("N2finemapping")
dat = N2finemapping
names(dat)
This simulated data set has two replicates. We focus on the first replicate:
r = 1
We fit SuSiE and obtain its CSs and PIPs. We also perform a per-feature simple regression and obtain (asymptotic) z-scores by setting compute_univariate_zscore to TRUE.
fitted = susie(dat$X, dat$Y[,r], L=5,
estimate_residual_variance=TRUE,
scaled_prior_variance=0.2,
tol=1e-3, track_fit=TRUE,
compute_univariate_zscore=TRUE,
coverage = 0.95, min_abs_corr = 0.1)
To show the intermediate state of the IBSS algorithm, we run the same analysis but set max_iter to 1:
fitted.one.iter = susie(dat$X, dat$Y[,r], L=5, max_iter=1,
estimate_residual_variance=TRUE,
scaled_prior_variance=0.2,
tol=1e-3,
coverage = 0.95, min_abs_corr = 0.1)
We plot the p-values from the per-feature regression alongside the PIPs from SuSiE.
# Indicator of the simulated non-zero effects, used to annotate the plots.
b = dat$true_coef[,r]
b[which(b!=0)] = 1
one_iter_label = "IBSS after 1 iteration"
susie_label = "IBSS after 10 iterations"
pdf('susie_demo.pdf', width =15, height = 5, pointsize=24)
par(mfrow=c(1,3), cex.axis = 0.9)
susie_plot(fitted, y='z', b=b, max_cs=1, main = 'Marginal associations', xlab = 'variable (SNP)', col = '#767676')
susie_plot(fitted.one.iter, y='PIP', b=b, max_cs=0.4, main = one_iter_label, add_legend = F, ylim = c(0,1), xlab = 'variable (SNP)', col = '#767676')
susie_plot(fitted, y='PIP', b=b, max_cs=0.4, main = susie_label, add_legend = F, ylim = c(0,1), xlab = 'variable (SNP)', col = '#767676')
dev.off()
%preview susie_demo.pdf -s png --dpi 150
For the middle panel (after one iteration), the CSs look like:
print(fitted.one.iter$sets)
The CSs ultimately identified are:
print(fitted$sets$cs)
L2 corresponds to the blue cluster around position 900, and L3 corresponds to the cyan cluster around position 400. In both cases the simulated non-zero effects are captured by the credible sets.
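As a quick programmatic check (a minimal sketch using the objects defined above; true_idx is a name we introduce here), we can confirm that every simulated effect variable falls inside one of the reported CSs:
# Indices of the simulated non-zero effects.
true_idx = which(dat$true_coef[, r] != 0)
# For each CS, does it contain at least one true effect?
sapply(fitted$sets$cs, function(cs) any(true_idx %in% cs))
# Are all true effects covered by some CS?
all(true_idx %in% unlist(fitted$sets$cs))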
The CS L2 contains many variables. We check how correlated they are:
fitted$sets$purity
The minimum absolute correlation for CS L2 is 0.97, so SuSiE's behavior is reasonable.
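The purity reported above can also be reproduced directly from the genotype matrix; a minimal sketch, assuming the wide CS is the element named L2 of fitted$sets$cs:
# Minimum absolute pairwise correlation among the members of CS L2,
# i.e. the "purity" statistic reported by susieR.
cs_l2 = fitted$sets$cs$L2
min(abs(cor(dat$X[, cs_l2])))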
Notice that the strongest marginal association is at this position:
which.max(abs(fitted$z))
The LD pattern in the data set can be visualized as follows:
pdf('LDheatmap.pdf')
LDheatmap::LDheatmap(cor(dat$X[,100:200]),add.map=F,title="Correlation matrix\nfor variables 100 to 200")
LDheatmap::LDheatmap(cor(dat$X[,350:450]),add.map=F,title="Correlation matrix\nfor variables 350 to 450")
LDheatmap::LDheatmap(cor(dat$X[,750:850]),add.map=F,title="Correlation matrix\nfor variables 750 to 850")
dev.off()
%preview LDheatmap.pdf -s png --dpi 40
The correlation between the two simulated effect variables is 0.31. The correlation between the variable with the top $z$-score and the first effect variable is 0.32; its correlation with the second effect variable is -0.54. Even in this setting SuSiE is able to correctly choose the two effect variables.
cor(dat$X[,c(which(b!=0), which.max(abs(fitted$z)))])
It takes 23 iterations for SuSiE to converge on this example. With track_fit=TRUE, various quantities useful for diagnostics are stored in the fitted$trace object.
fitted$niter
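We can also peek at the stored trace directly; a minimal sketch (the exact contents of fitted$trace may differ across susieR versions):
# Top-level structure of the per-iteration quantities saved by track_fit=TRUE.
str(fitted$trace, max.level = 1)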
susie_plot_iteration(fitted, 3, 'demo')
%preview demo.gif --width 80%
In the first iteration, the variable with the top z-score is picked up by the first CS. In later iterations the first CS vanishes, and the 2nd and 3rd CSs correctly pick up the two "true" signals.
L=2
We now set L=2 and see whether SuSiE can converge to the true signals:
fitted = susie(dat$X, dat$Y[,r], L=2,
estimate_residual_variance=TRUE,
scaled_prior_variance=0.2,
tol=1e-3, track_fit=TRUE,
coverage = 0.95,min_abs_corr = 0.1)
pdf('l2.pdf', width =5, height = 5, pointsize=16)
susie_plot(fitted, y='PIP', b=b, max_cs=0.4, main = paste('SuSiE', length(fitted$sets$cs), 'CS identified'))
dev.off()
%preview l2.pdf -s png --dpi 150
fitted$niter
susie_plot_iteration(fitted, 2, 'demo_l2')
%preview demo_l2.gif --width 80%
At the first iteration, the first CS picks up the induced (wrong) signal cluster, while both true signals are picked up by the 2nd CS, which is quite wide. Later iterations remove the induced cluster, and the two CSs properly converge to the two true signals as desired.
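We can confirm this programmatically for the L=2 fit with the same style of check as before:
# Each simulated effect should now fall in one of the two CSs.
sapply(fitted$sets$cs, function(cs) any(which(b != 0) %in% cs))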
Here, instead of fixing the prior, we ask SuSiE to estimate the prior variance:
fitted = susie(dat$X, dat$Y[,r], L=5,
estimate_residual_variance=TRUE,
estimate_prior_variance=TRUE,
tol=1e-3, track_fit=TRUE,
coverage = 0.95,
min_abs_corr = 0.1)
pdf('est_prior.pdf', width =5, height = 5, pointsize=16)
susie_plot(fitted, y='PIP', b=b, max_cs=0.4, main = paste('SuSiE', length(fitted$sets$cs), 'CS identified'))
dev.off()
%preview est_prior.pdf -s png --dpi 150
Now we fix the SuSiE prior to a much smaller value (scaled prior variance of 0.005):
fitted = susie(dat$X, dat$Y[,r], L=5,
               scaled_prior_variance=0.005,
               tol=1e-3, track_fit=TRUE,
               coverage = 0.95,
               min_abs_corr = 0.1)
pdf('small_prior.pdf', width =5, height = 5, pointsize=16)
susie_plot(fitted, y='PIP', b=b, max_cs=0.4, main = paste('SuSiE', length(fitted$sets$cs), 'CS identified'))
dev.off()
%preview small_prior.pdf -s png --dpi 150
The result remains largely the same.
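To put a number on "largely the same", one could refit with the two fixed scaled prior variances and correlate the resulting PIPs; a minimal sketch (fit_large and fit_small are names introduced here, and susie_get_pip() extracts the PIPs):
# Compare PIPs under a large versus a small fixed scaled prior variance.
fit_large = susie(dat$X, dat$Y[,r], L=5, scaled_prior_variance=0.2,
                  estimate_prior_variance=FALSE)
fit_small = susie(dat$X, dat$Y[,r], L=5, scaled_prior_variance=0.005,
                  estimate_prior_variance=FALSE)
cor(susie_get_pip(fit_large), susie_get_pip(fit_small))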
In this folder we provide scripts to run CAVIAR, FINEMAP (version 1.1) and DAP-G on the same data set for comparison. We generate summary statistics and run these three methods on the example.
library(abind)
mm_regression = function(X, Y, Z=NULL) {
if (!is.null(Z)) {
Z = as.matrix(Z)
}
reg = lapply(seq_len(ncol(Y)), function (i) simplify2array(susieR:::univariate_regression(X, Y[,i], Z)))
reg = do.call(abind, c(reg, list(along=0)))
# return array: out[1,,] is betahat, out[2,,] is shat
return(aperm(reg, c(3,2,1)))
}
sumstats = mm_regression(as.matrix(dat$X), as.matrix(dat$Y))
saveRDS(list(data=dat, sumstats=sumstats), 'N2.with_sumstats.rds')
export PATH=~/GIT/github/mvarbvs/dsc/modules/linux:$PATH
Rscript ~/GIT/susieR/inst/code/finemap.R input=\"N2.with_sumstats.rds\" output=\"N2finemapping.FINEMAP\" args=\"--n-causal-max\ 2\" &> /dev/null
export PATH=~/GIT/github/mvarbvs/dsc/modules/linux:$PATH
Rscript ~/GIT/susieR/inst/code/caviar.R input=\"N2.with_sumstats.rds\" output=\"N2finemapping.CAVIAR\" args=\"-c\ 2\ -g\ 0.01\" &> /dev/null
export PATH=~/GIT/github/mvarbvs/dsc/modules/linux:$PATH
python ~/GIT/susieR/inst/code/dap-g.py N2.with_sumstats.rds N2finemapping.DAP -ld_control 0.20 --all &> /dev/null
Now we look at these results.
caviar = readRDS("N2finemapping.CAVIAR.rds")
finemap = readRDS("N2finemapping.FINEMAP.rds")
dap = readRDS("N2finemapping.DAP.rds")
snp = caviar[[1]]$snp
pip = snp[order(as.numeric(snp$snp)),]$snp_prob
pdf('caviar_fm.pdf', width =5, height = 5, pointsize=16)
susie_plot(pip, y='PIP', b=b, main = 'CAVIAR')
dev.off()
%preview caviar_fm.pdf -s png --dpi 150
CAVIAR provides a single 95% confidence set:
print(caviar[[1]]$set)
any(which(b>0) %in% caviar[[1]]$set)
It is a wide CS, although it does capture the causal SNPs.
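To quantify how wide it is (a small check, assuming the set is returned as a vector of variable identifiers, as used above):
# Number of variables in CAVIAR's 95% confidence set.
length(caviar[[1]]$set)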
When we set the CAVIAR non-centrality parameter (NCP) to 10 instead of the default 5.2, we do get an improved result,
%preview /tmp/CAVIAR.ncp.hack.png
The analysis procedure is not shown because CAVIAR by default does not allow the NCP to be configured (one has to change the code and re-compile the binary).
snp = finemap[[1]]$snp
pip = snp[order(as.numeric(snp$snp)),]$snp_prob
pdf('finemap_fm.pdf', width =5, height = 5, pointsize=16)
susie_plot(pip, y='PIP', b=b, main = 'FINEMAP')
dev.off()
%preview finemap_fm.pdf -s png --dpi 150
For each configuration, FINEMAP provides the corresponding configuration probability:
head(finemap[[1]]$set, 10)
Post-processing is required to obtain a single 95% set, similar to CAVIAR's, that captures both causal variables. The result from running the provided finemap.R script has been truncated to the minimum set of configurations with cumulative probability 95%. The top 10 configurations, however, completely miss the causal signals.
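A sketch of that post-processing, under the assumption that finemap[[1]]$set has a per-configuration probability column (called config_prob here) and a config column listing the variables in each configuration, separated by commas; both column names are assumptions and may need adjusting to the actual finemap.R output:
# Hypothetical post-processing: keep the top configurations until their
# cumulative probability reaches 95%, then take the union of their variables.
cfg = finemap[[1]]$set
cfg = cfg[order(-cfg$config_prob), ]                                  # assumed column name
keep = seq_len(min(which(cumsum(cfg$config_prob) >= 0.95)))
sort(unique(unlist(strsplit(as.character(cfg$config[keep]), ","))))   # assumed column name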
The default prior for FINEMAP is 0.05 (the standard deviation of the Gaussian prior on effect sizes). Knowing the true effect sizes from the simulation parameters, we increase it to 0.1.
export PATH=~/GIT/github/mvarbvs/dsc/modules/linux:$PATH
Rscript ~/GIT/susieR/inst/code/finemap.R input=\"N2.with_sumstats.rds\" output=\"N2finemapping.FINEMAP.p1std\" args=\"--n-causal-max\ 2\ --prior-std\ 0.1\" &> /dev/null
finemap = readRDS("N2finemapping.FINEMAP.p1std.rds")
snp = finemap[[1]]$snp
pip = snp[order(as.numeric(snp$snp)),]$snp_prob
pdf('finemap_bigger_prior.pdf', width =5, height = 5, pointsize=16)
susie_plot(pip, y='PIP', b=b, main = 'FINEMAP')
dev.off()
%preview finemap_bigger_prior.pdf -s png --dpi 150
head(finemap[[1]]$set, 10)
The reported best configuration is closer to the true effects.
snp = dap[[1]]$snp
pip = snp[order(as.numeric(snp$snp)),]$snp_prob
pdf('dap_fm.pdf', width =5, height = 5, pointsize=16)
susie_plot(pip, y='PIP', b=b, main = 'DAP-G')
dev.off()
%preview dap_fm.pdf -s png --dpi 150
The default prior works well here. Similar to SuSiE, DAP-G provides per-signal 95% CSs:
dap[[1]]$set
Using an average $r^2$ filter of 0.2 ($r = \sqrt{0.2} \approx 0.45$, compared with SuSiE's min_abs_corr = 0.1 in the earlier analysis), it reports five 95% CSs; the first three contain the causal signals. Also, the first CS is wider.
Exported from manuscript_results/pedagogical_example.ipynb
committed by Gao Wang on Wed Feb 12 15:59:05 2020 revision 22, 036919f