Multivariate Sum of Single Effects (SuSiE) Regression
Source: R/mvsusie.R, R/mvsusie_rss.R, R/mvsusie_ss.R
Performs a Bayesian multiple linear regression of Y on X. That is, this function fits the regression model $$Y = \sum_l X b_l + e,$$ where the elements of \(e\) are i.i.d. normal with zero mean and variance residual_variance, and the sum \(\sum_l b_l\) is a vector of p effects to be estimated. The SuSiE assumption is that each \(b_l\) has exactly one non-zero element.
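When Y is an N by R matrix of responses, the same decomposition can be written in matrix form. As a sketch (not a formal statement of the model), each single effect \(b_l\) is then a J by R matrix with exactly one non-zero row, and the rows of the error matrix \(E\) are i.i.d. multivariate normal with covariance given by residual_variance: $$Y = \sum_{l=1}^{L} X b_l + E.$$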
Usage
mvsusie(
X,
Y,
L = 10,
prior_variance = 0.2,
residual_variance = NULL,
prior_weights = NULL,
standardize = TRUE,
intercept = TRUE,
approximate = FALSE,
estimate_residual_variance = FALSE,
estimate_prior_variance = TRUE,
estimate_prior_method = "EM",
check_null_threshold = 0,
prior_tol = 1e-09,
compute_objective = TRUE,
s_init = NULL,
coverage = 0.95,
min_abs_corr = 0.5,
compute_univariate_zscore = FALSE,
precompute_covariances = FALSE,
n_thread = 1,
max_iter = 100,
tol = 0.001,
verbosity = 2,
track_fit = FALSE
)
mvsusie_rss(
Z,
R,
N,
Bhat,
Shat,
varY,
prior_variance = 0.2,
residual_variance = NULL,
...
)
mvsusie_suff_stat(
XtX,
XtY,
YtY,
N,
L = 10,
X_colmeans = NULL,
Y_colmeans = NULL,
prior_variance = 0.2,
residual_variance = NULL,
prior_weights = NULL,
standardize = TRUE,
estimate_residual_variance = FALSE,
estimate_prior_variance = TRUE,
estimate_prior_method = "EM",
check_null_threshold = 0,
prior_tol = 1e-09,
compute_objective = TRUE,
precompute_covariances = FALSE,
s_init = NULL,
coverage = 0.95,
min_abs_corr = 0.5,
n_thread = 1,
max_iter = 100,
tol = 0.001,
verbosity = 2,
track_fit = FALSE
)
Arguments
- X
N by J matrix of covariates.
- Y
Vector of length N, or N by R matrix of response variables.
- L
Maximum number of non-zero effects.
- prior_variance
Can be either (1) a vector of length L, or a scalar, for scaled prior variance when Y is univariate (which should then be equivalent to susie); (2) a matrix for a simple multivariate regression; or (3) a mixture prior from create_mixture_prior (see the short sketch after this argument list).
- residual_variance
The residual variance.
- prior_weights
A vector of length p giving the prior probability that each element is non-zero. Note that the prior weights need to be non-negative but do not need to sum to 1; they will automatically be normalized to sum to 1 so that they represent probabilities. The default setting is that the prior weights are the same for all variables.
- standardize
Logical flag specifying whether to standardize the columns of X to unit variance prior to fitting. If you do not standardize, you may need to think more carefully about specifying the scale of the prior variance. Whatever the value of standardize, the coefficients (returned by coef) are for X on the original input scale. Note that any column of X with zero variance is not standardized, but left as is.
- intercept
Should the intercept be fitted or set to zero? Setting intercept = FALSE is generally not recommended.
- approximate
Specifies whether to use approximate computation for the intercept when there are missing values in Y. The approximation saves some computational effort. Note that when residual_variance is a diagonal matrix, running mvsusie with approximate = TRUE will give the same result as approximate = FALSE, but with less running time. This setting is only relevant when there are missing values in Y and intercept = TRUE.
- estimate_residual_variance
When estimate_residual_variance = TRUE, the residual variance is estimated; otherwise it is fixed. Currently estimate_residual_variance = TRUE only works for univariate Y.
- estimate_prior_variance
When estimate_prior_variance = TRUE, the prior variance is estimated; otherwise it is fixed. Currently estimate_prior_variance = TRUE only works for univariate Y, or for multivariate Y when the prior variance is a matrix.
- estimate_prior_method
The method used for estimating the prior variance; valid choices are "optim", "uniroot" or "em" for univariate Y, and "optim" or "simple" for multivariate Y.
- check_null_threshold
When the prior variance is estimated, the estimate is compared against the null, and the prior variance is set to zero unless the log-likelihood using the estimate is larger than that of the null by this threshold. For example, setting check_null_threshold = 0.1 will “nudge” the estimate towards zero. When used with estimate_prior_method = "EM", setting check_null_threshold = NA will skip this check.
- prior_tol
When the prior variance is estimated, compare the estimated value to this value at the end of the analysis and exclude a single effect from PIP computation if the estimated prior variance is smaller than it.
- compute_objective
If compute_objective = TRUE, the objective function (the ELBO) is computed at each iteration of the model fitting; see the elbo return value below.
- s_init
A previous model fit with which to initialize.
- coverage
Coverage of credible sets.
- min_abs_corr
Minimum absolute value of correlation allowed in a credible set. The setting min_abs_corr = 0.5 corresponds to a squared correlation of 0.25, which is a commonly used threshold for genotype data in genetics studies.
- compute_univariate_zscore
When compute_univariate_zscore = TRUE, the z-scores from the per-variable univariate regressions are returned. (Note that these z-scores are not actually used to fit the multivariate susie model.)
- precompute_covariances
If precompute_covariances = TRUE, various covariance quantities are precomputed to speed up computations at the cost of increased memory usage.
- n_thread
Maximum number of threads to use for parallel computation (only applicable when a mixture prior is used).
- max_iter
Maximum number of iterations to perform.
- tol
The model fitting will terminate when the increase in the ELBO between two successive iterations is less than tol.
- verbosity
Set verbosity = 0 for no messages; verbosity = 1 for a progress bar; and verbosity = 2 for more detailed information about the algorithm's progress at the end of each iteration.
- track_fit
Add attribute trace to the return value, which records the algorithm's progress at each iteration.
- Z
J x R matrix of z-scores.
- R
J x J LD matrix.
- N
The sample size.
- Bhat
Alternative summary data giving the estimated effects (J x R matrix). This, together with Shat, may be provided instead of Z.
- Shat
Alternative summary data giving the standard errors of the estimated effects (J x R matrix). This, together with Bhat, may be provided instead of Z.
- varY
The sample covariance of Y, defined as \(Y'Y/(N-1)\). When the sample covariance is not provided, the coefficients (returned from coef) are computed on the “standardized” X, y scale.
- ...
Additional arguments passed to mvsusie_suff_stat.
- XtX
A J x J matrix \(X^TX\) in which the columns of \(X\) are centered to have mean zero.
- XtY
A J x R matrix \(X^TY\) in which the columns of \(X\) and \(Y\) are centered to have mean zero.
- YtY
An R x R matrix \(Y^TY\) in which the columns of \(Y\) are centered to have mean zero.
- X_colmeans
A vector of length J giving the column means of \(X\). If it is provided with Y_colmeans, the intercept is estimated; otherwise, the intercept is NA.
- Y_colmeans
A vector of length R giving the column means of \(Y\). If it is provided with X_colmeans, the intercept is estimated; otherwise, the intercept is NA.
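As referenced under prior_variance above, here is a minimal sketch of the three ways the prior can be specified. The toy data and the particular prior values are illustrative assumptions only, not recommended defaults:
# Toy data for illustration (not part of the package examples).
set.seed(1)
X <- matrix(rnorm(100 * 20), 100, 20)
Y <- matrix(rnorm(100 * 3), 100, 3)
# (1) A scalar scaled prior variance, univariate response (as in susie):
fit1 <- mvsusie(X, Y[, 1], L = 5, prior_variance = 0.2)
# (2) An R x R prior covariance matrix for multivariate Y (R = 3 here):
fit2 <- mvsusie(X, Y, L = 5, prior_variance = 0.2 * diag(3))
# (3) A mixture prior constructed with create_mixture_prior:
fit3 <- mvsusie(X, Y, L = 5, prior_variance = create_mixture_prior(R = 3))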
Value
A multivariate susie fit, which is a list with some or all of the following elements:
- alpha
L by p matrix of posterior inclusion probabilities.
- b1
L by p matrix of posterior mean single-effect estimates.
- b1_rescaled
L by p matrix of posterior mean single-effect estimates on the original input scale (same as coef).
- b2
L by p matrix of posterior second moments.
- KL
Vector of single-effect KL divergences.
- lbf
Vector of single-effect log-Bayes factors.
- sigma2
Residual variance.
- V
Prior variance.
- elbo
Vector storing the evidence lower bound, or “ELBO”, achieved at each iteration of the model fitting algorithm, which attempts to maximize the ELBO.
- niter
Number of iterations performed.
- convergence
Convergence status.
- sets
Estimated credible sets.
- pip
Vector of posterior inclusion probabilities.
- walltime
Records runtime of the model fitting algorithm.
- z
Vector of univariate z-scores.
- single_effect_lfsr
Average lfsr (local false sign rate) for each CS.
- lfsr
TO DO: Explain what this output is.
- conditional_lfsr
The lfsr (local false sign rate) given that the variable is the single effect.
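The elements above can be accessed directly from the returned list. A minimal sketch, assuming fit is one of the fits from the examples below (output not shown):
head(fit$pip)    # posterior inclusion probabilities, one per variable
fit$sets         # estimated credible sets at the requested coverage
head(coef(fit))  # posterior mean coefficients on the original scale of X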
Examples
# Example with one response.
set.seed(1)
n <- 2000
p <- 1000
beta <- rep(0, p)
beta[1:4] <- 1
X <- matrix(rnorm(n * p), n, p)
Y <- X %*% beta + rnorm(n)
fit <- mvsusie(X, Y, L = 10)
#> Initializing data object...
#> Dimension of X matrix: 2000 1000
#> Dimension of Y matrix: 2000 1
#> Memory used by data object 0.015 GB
#> Memory used by prior object 0 GB
#> Running IBSS algorithm...
#> Iteration 1 delta = Inf
#> Iteration 2 delta = 56.0443447076686
#> Iteration 3 delta = 2.6739610487889
#> Iteration 4 delta = 0.427756728521217
#> Iteration 5 delta = 0.216391749058857
#> Iteration 6 delta = 0.133061693517448
#> Iteration 7 delta = 0.090321317757116
#> Iteration 8 delta = 0.0653925507026543
#> Iteration 9 delta = 0.0495585088756343
#> Iteration 10 delta = 0.038865562061801
#> Iteration 11 delta = 0.0312950725319752
#> Iteration 12 delta = 0.290996033145348
#> Iteration 13 delta = 1.7334969015792e-09
# Sufficient statistics example with one response.
X_colmeans <- colMeans(X)
Y_colmeans <- colMeans(Y)
X <- scale(X, center = TRUE, scale = FALSE)
Y <- scale(Y, center = TRUE, scale = FALSE)
XtX <- crossprod(X)
XtY <- crossprod(X, Y)
YtY <- crossprod(Y)
res <- mvsusie_suff_stat(XtX, XtY, YtY, n, L = 10, X_colmeans, Y_colmeans)
#> Memory used by data object 0.008 GB
#> Memory used by prior object 0 GB
#> Running IBSS algorithm...
#> Iteration 1 delta = Inf
#> Iteration 2 delta = 56.04434470766
#> Iteration 3 delta = 2.67396104878617
#> Iteration 4 delta = 0.427756728520308
#> Iteration 5 delta = 0.216391749058403
#> Iteration 6 delta = 0.133061693517448
#> Iteration 7 delta = 0.0903213177580255
#> Iteration 8 delta = 0.0653925507026543
#> Iteration 9 delta = 0.0495585088751795
#> Iteration 10 delta = 0.038865562061801
#> Iteration 11 delta = 0.0312950725315204
#> Iteration 12 delta = 0.290996033145348
#> Iteration 13 delta = 1.7334969015792e-09
# RSS example with one response.
R <- crossprod(X)
z <- susieR:::calc_z(X, Y)
res <- mvsusie_rss(z, R, N = n, L = 10)
#> Memory used by data object 0.008 GB
#> Memory used by prior object 0 GB
#> Running IBSS algorithm...
#> Iteration 1 delta = Inf
#> Iteration 2 delta = 26.5014235793183
#> Iteration 3 delta = 1.434312697515
#> Iteration 4 delta = 0.587887748114099
#> Iteration 5 delta = 0.322281747516172
#> Iteration 6 delta = 0.203904712742769
#> Iteration 7 delta = 0.140737203480512
#> Iteration 8 delta = 0.103023266052787
#> Iteration 9 delta = 0.0786930484464392
#> Iteration 10 delta = 0.06207851327963
#> Iteration 11 delta = 0.050226823875164
#> Iteration 12 delta = 0.476440040988564
#> Iteration 13 delta = 0
# Example with three responses.
set.seed(1)
n <- 500
p <- 1000
true_eff <- 2
X <- sample(c(0, 1, 2), size = n * p, replace = TRUE)
X <- matrix(X, n, p)
beta1 <- rep(0, p)
beta2 <- rep(0, p)
beta3 <- rep(0, p)
beta1[1:true_eff] <- runif(true_eff)
beta2[1:true_eff] <- runif(true_eff)
beta3[1:true_eff] <- runif(true_eff)
y1 <- X %*% beta1 + rnorm(n)
y2 <- X %*% beta2 + rnorm(n)
y3 <- X %*% beta3 + rnorm(n)
Y <- cbind(y1, y2, y3)
prior <- create_mixture_prior(R = 3)
fit <- mvsusie(X, Y, prior_variance = prior)
#> Initializing data object...
#> Dimension of X matrix: 500 1000
#> Dimension of Y matrix: 500 3
#> Initializing prior object ...
#> Number of components in the mixture prior: 9
#> Memory used by data object 0.004 GB
#> Memory used by prior object 0 GB
#> Running IBSS algorithm...
#> Iteration 1 delta = Inf
#> Iteration 2 delta = 23.6215924552362
#> Iteration 3 delta = 1.74273499342553
#> Iteration 4 delta = 0.739175635345873
#> Iteration 5 delta = 0.404371327747413
#> Iteration 6 delta = 0.257532660034485
#> Iteration 7 delta = 0.179687918378931
#> Iteration 8 delta = 0.133185441116439
#> Iteration 9 delta = 0.103027669467338
#> Iteration 10 delta = 0.0822700318340139
#> Iteration 11 delta = 0.0653108544502174
#> Iteration 12 delta = 0.704721320222234
#> Iteration 13 delta = 5.13637132826261e-09
# Sufficient statistics example with three responses.
X_colmeans <- colMeans(X)
Y_colmeans <- colMeans(Y)
X <- scale(X, center = TRUE, scale = FALSE)
Y <- scale(Y, center = TRUE, scale = FALSE)
XtX <- crossprod(X)
XtY <- crossprod(X, Y)
YtY <- crossprod(Y)
res <- mvsusie_suff_stat(XtX, XtY, YtY, n,
L = 10, X_colmeans, Y_colmeans,
prior_variance = prior
)
#> Initializing prior object ...
#> Number of components in the mixture prior: 9
#> Memory used by data object 0.008 GB
#> Memory used by prior object 0 GB
#> Running IBSS algorithm...
#> Iteration 1 delta = Inf
#> Iteration 2 delta = 23.6391655575067
#> Iteration 3 delta = 1.60702772142031
#> Iteration 4 delta = 0.606677083448176
#> Iteration 5 delta = 0.29126337200978
#> Iteration 6 delta = 0.166858239634621
#> Iteration 7 delta = 0.106842660456095
#> Iteration 8 delta = 0.0738354540362707
#> Iteration 9 delta = 0.0539295627427236
#> Iteration 10 delta = 0.0410707409355382
#> Iteration 11 delta = 0.0295026920216515
#> Iteration 12 delta = 0.293559222763179
#> Iteration 13 delta = 5.43442638445413e-07
# RSS example with three responses.
R <- crossprod(X)
Z <- susieR:::calc_z(X, Y)
res <- mvsusie_rss(Z, R, N = n, L = 10, prior_variance = prior)
#> Initializing prior object ...
#> Number of components in the mixture prior: 9
#> Memory used by data object 0.008 GB
#> Memory used by prior object 0 GB
#> Running IBSS algorithm...
#> Iteration 1 delta = Inf
#> Iteration 2 delta = 32.9358327861441
#> Iteration 3 delta = 2.16182308824727
#> Iteration 4 delta = 1.05000397954973
#> Iteration 5 delta = 0.618635786592677
#> Iteration 6 delta = 0.406486432756992
#> Iteration 7 delta = 0.286805038751027
#> Iteration 8 delta = 0.212827602317702
#> Iteration 9 delta = 0.163993570944967
#> Iteration 10 delta = 0.130113136023738
#> Iteration 11 delta = 0.105671162524686
#> Iteration 12 delta = 0.998276810554671
#> Iteration 13 delta = 0
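Because the individual-level, sufficient-statistic and RSS interfaces fit closely related models to the same data, their results can be compared directly. A minimal sketch, assuming fit and res are the three-response fits above (output not shown):
# Compare posterior inclusion probabilities from the full-data and RSS fits.
plot(fit$pip, res$pip, xlab = "mvsusie PIP", ylab = "mvsusie_rss PIP")
abline(0, 1, lty = 2)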