Last updated: 2020-01-11
Checks: 2 passed, 0 failed
Knit directory: reading_lists/
This reproducible R Markdown analysis was created with workflowr (version 1.6.0). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility. The version displayed above was the version of the Git repository at the time these results were generated.
Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
Ignored files:
Ignored: .Rproj.user/
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
These are the previous versions of the R Markdown and HTML files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view them.
| File | Version | Author | Date | Message |
|---|---|---|---|---|
| Rmd | b3809c1 | Matthew Stephens | 2020-01-11 | wflow_publish("analysis/empirical_bayes.Rmd") |
This document summarizes the reading you might do if you would like to know more about our work related to Empirical Bayes methods. If you want more background on the history and origins of these ideas, the monograph by Efron is the place to go. For a very simple, intuitive introduction to Empirical Bayes, see the blog post by Robinson.
The starting point for this work is the following model, sometimes known as the “normal means” model: \[X_j \sim N(\theta_j, s_j^2), \qquad j=1,\dots,n,\] where \(X_j\) is observed, \(s_j\) is assumed known, and the means \(\theta_j\) are to be estimated.
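For concreteness, here is a minimal base-R sketch that simulates data from this model. The particular prior used for the \(\theta_j\) (a point-normal, i.e. a mixture of a point mass at zero and a normal) is purely an illustrative assumption, not part of the model itself.

```r
set.seed(1)
n <- 1000

# Illustrative prior on theta: point mass at zero with probability 0.8,
# otherwise N(0, 2^2). (This particular g is chosen only for illustration.)
theta <- ifelse(runif(n) < 0.8, 0, rnorm(n, sd = 2))

# "Known" standard deviations s_j, and observations X_j ~ N(theta_j, s_j^2).
s <- sqrt(rchisq(n, df = 5) / 5)
x <- rnorm(n, mean = theta, sd = s)
```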
The Empirical Bayes (EB) approach to fitting this model assumes that the \(\theta_j\) come from some prior distribution, \(g \in \cal{G}\), where \(\cal{G}\) is a suitably-chosen family of distributions (more on this later): \[\theta_j \sim g(\cdot).\] Assuming independence across \(j\), the likelihood for \(g\) is: \[L(g):= p(X | g) = \prod_j p(X_j|g) = \prod_j \int p(X_j | \theta_j) g(\theta_j) d\theta_j.\]
The EB approach involves two steps:

1. Estimate \(g\) by maximizing the likelihood \(L(g)\) over \(g \in \cal{G}\) (this is the “empirical” part).
2. Compute the posterior distributions \(p(\theta_j | X_j, \hat{g})\), treating the estimated \(\hat{g}\) as if it were the true prior.
For suitably-chosen \(\cal{G}\) both steps can be done analytically. See, for example, Stephens, 2017.
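To get some intuition for the simplest analytic case (a standard conjugate calculation, included here only for illustration): if \(\cal{G}\) is the family of zero-mean normal distributions, so \(g = N(0, \sigma^2)\), then both steps have closed forms: \[p(X_j | g) = N(X_j; 0, \sigma^2 + s_j^2), \qquad \theta_j | X_j, \hat{g} \sim N\!\left(\frac{\hat\sigma^2}{\hat\sigma^2 + s_j^2} X_j,\; \frac{\hat\sigma^2 s_j^2}{\hat\sigma^2 + s_j^2}\right),\] where \(\hat\sigma^2\) maximizes \(\prod_j N(X_j; 0, \sigma^2 + s_j^2)\). The posterior mean shrinks each \(X_j\) towards zero, and more strongly when \(s_j\) is large relative to \(\hat\sigma\).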
I used to think that the normal means model was simply a “toy model” studied by statisticians for their own amusement. In particular, the idea that the standard deviations \(s_j\) would ever be “known” seemed unlikely. Now I see this model as extremely useful for practical applications, and I have been working with students and postdoctoral researchers to develop methods based on this model and apply them to several important applied problems, including multiple testing/FDR, sparse factor analysis, and large-scale multiple regression.
Start with Stephens, 2017, which applies these ideas to multiple testing and FDR in the simplest case, for various choices of \(\cal{G}\) consisting of distributions that are unimodal and centered on zero. The relevant software packages are ashr and ebnm.
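As a rough illustration of how this looks in practice, here is a sketch using the ashr package on the simulated data above. The function and accessor names (ash, get_pm, get_lfsr) reflect my recollection of the package interface and should be checked against the ashr documentation.

```r
# install.packages("ashr")  # if not already installed
library(ashr)

# x and s are the observed effects and their (assumed known) standard errors,
# e.g. from the simulation sketch above.
fit <- ash(betahat = x, sebetahat = s, mixcompdist = "uniform")

# Shrunken (posterior mean) estimates of theta_j and local false sign rates.
theta_hat <- get_pm(fit)
lfsr      <- get_lfsr(fit)
head(cbind(x, theta_hat, lfsr))
```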
Several papers build on this basic theme to apply the methods to other settings:
Urbut et al do multiple testing where the tests are “multivariate” (i.e., each unit, say a gene, is tested for something under several different conditions). The software is mashr (see the sketch below, after this list).
Xing et al do smoothing (“non-parametric regression”) for Poisson and Gaussian data, using wavelet denoising. The software is smashr. This is one application where similar ideas have been used before, so there is a substantial earlier literature; a classic reference is Johnstone and Silverman.
Wang and Stephens does matrix factorization (closely related to sparse PCA or sparse factor analysis). The software is flashr.
This last paper uses variational methods to fit the model. For background on these see the review by Blei.
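For the multivariate setting of Urbut et al, the sketch below follows the basic workflow described in the mashr introductory vignette as I understand it (set up the data, build a list of candidate covariance matrices, fit); the specific function names are assumptions to be checked against the package documentation, and the data here are simulated purely for illustration.

```r
# install.packages("mashr")  # if not already installed
library(mashr)

# Bhat: n x R matrix of estimated effects (n units, R conditions);
# Shat: matching matrix of standard errors.
set.seed(1)
n <- 500
R <- 5
Bhat <- matrix(rnorm(n * R), n, R)
Shat <- matrix(1, n, R)

data <- mash_set_data(Bhat, Shat)
U.c  <- cov_canonical(data)   # a list of "canonical" covariance matrices
m    <- mash(data, U.c)       # fit the mash model

head(get_lfsr(m))             # per-condition local false sign rates
```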
There are several other papers that deal with various details that might be of interest:
Gerard and Stephens deals with the (univariate) multiple testing situation when the tests are not independent, but correlated due to unmeasured confounding factors. This paper assumes that we have access to the raw data that were used to conduct the tests, which allows the correlations among tests to be estimated. The software is the mouthwash function in the vicar package.
Sun and Stephens also deals with correlated tests, but in situations where the correlation cannot be directly estimated; we assume we only have \(z\) scores from the tests. The software is cashr.
Lu and Stephens deals with shrinkage estimation of variances (rather than means). The software is vashr.
Lu and Stephens deals with the fact that in practice the standard errors \(s_j\) are not known, but estimated from data. The methods here are in ashr.
The PhD thesis by Lu (chapter 4) develops a generalized version of the ashr ideas, beyond the normal case.
Kim et al develop optimization methods for solving the non-parametric versions of the EBNM problem (and other problems). The software is mixsqp.
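To give a flavour of what mixsqp solves, the sketch below sets up the kind of input I understand it expects: an n-by-m matrix L of likelihoods, with L[j, k] = p(X_j | component k), from which it estimates the mixture proportions maximizing the likelihood. The call and the name of the returned element are assumptions to be checked against the mixsqp documentation.

```r
# install.packages("mixsqp")  # if not already installed
library(mixsqp)

# Components here are zero-centered normal priors with increasing sd, so
# marginally X_j ~ N(0, s_j^2 + sd_k^2); x and s are from the sketch above.
grid_sd <- c(0.01, 0.5, 1, 2, 4)
L <- sapply(grid_sd, function(sg) dnorm(x, mean = 0, sd = sqrt(s^2 + sg^2)))

res <- mixsqp(L)    # estimated mixture proportions are in res$x
round(res$x, 3)
```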
There are various unpublished projects for which software has been developed or is in development (warning: may be under construction!).