Create a “Structure plot” from a multinomial topic model fit. The Structure plot represents the estimated topic proportions of each sample in a stacked bar chart, with bars of different colors representing different topics. Consequently, samples that have similar topic proportions have similar amounts of each color.

structure_plot(
  fit,
  topics,
  grouping,
  loadings_order = "embed",
  n = 2000,
  colors = c("#e41a1c", "#377eb8", "#4daf4a", "#984ea3", "#ff7f00", "#ffff33", "#a65628",
    "#f781bf", "#999999"),
  gap = 1,
  embed_method = structure_plot_default_embed_method,
  ggplot_call = structure_plot_ggplot_call,
  ...
)

structure_plot_default_embed_method(fit, ...)

# S3 method for poisson_nmf_fit
plot(x, ...)

# S3 method for multinom_topic_model_fit
plot(x, ...)

structure_plot_ggplot_call(dat, colors, ticks = NULL, font.size = 9)

Arguments

fit

An object of class “poisson_nmf_fit” or “multinom_topic_model_fit”. If a Poisson NMF fit is provided as input, the corresponding multinomial topic model fit is automatically recovered using poisson2multinom.

topics

Top-to-bottom ordering of the topics in the Structure plot; topics[1] is shown at the top, topics[2] next, and so on. If the ordering is not specified, the topics are automatically ordered so that the topics with the greatest total “mass” are shown at the bottom of the plot. The topics may be specified by number or by name.

grouping

Optional categorical variable (a factor) with one entry for each row of the loadings matrix fit$L defining a grouping of the samples (rows). The samples (rows) are arranged along the horizontal axis according to this grouping, then within each group according to loadings_order. If grouping is not a factor, an attempt is made to convert it to a factor using as.factor. Note that if loadings_order is specified manually, grouping should be the groups for the rows of fit$L before reordering.

loadings_order

Ordering of the rows of the loadings matrix fit$L along the horizontal axis of the Structure plot (after they have been grouped). If loadings_order = "embed", the ordering is generated automatically from a 1-d embedding, computed separately for each group. The rows may be specified by number or by name. Note that loadings_order may include all the rows of fit$L, or only a subset.

n

The maximum number of samples (rows of the loadings matrix fit$L) to include in the plot. Due to screen resolution limits, there is typically little or no benefit to including a large number of samples in the Structure plot. Ignored if loadings_order is provided.

colors

Colors used to draw the topics in the Structure plot. The default colors are from https://colorbrewer2.org (qualitative data, “9-class Set1”).

gap

The horizontal spacing between groups. Ignored if grouping is not provided.

embed_method

The function used to compute a 1-d embedding from a loadings matrix fit$L; only used if loadings_order = "embed". The function must accept the multinomial topic model fit as its first input (“fit”); additional arguments may be passed via (...). The output should be a named numeric vector with one entry per row of fit$L, and the names of the entries should match the row names of fit$L.
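For example, a replacement for the default t-SNE-based embedding might instead order the samples by their projection onto the first principal component. This is only a sketch; the name my_embed_method is hypothetical, and it assumes pca_from_topics (used in the Examples below) is available.

```r
# A sketch of a custom embedding function ("my_embed_method" is a
# hypothetical name): it orders the samples by their projection onto
# the first principal component rather than by t-SNE.
my_embed_method <- function (fit, ...) {
  # pca_from_topics returns a matrix with one row per row of fit$L and
  # rownames matching fit$L; drop() turns the single column into a
  # named vector, as required of embed_method.
  drop(pca_from_topics(fit, dims = 1))
}
p <- structure_plot(fit, embed_method = my_embed_method)
```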

ggplot_call

The function used to create the plot. Replace structure_plot_ggplot_call with your own function to customize the appearance of the plot.
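A simple way to customize the plot is to wrap the default plotting function and adjust the result. This is a sketch only; the name my_ggplot_call is hypothetical, and it assumes ggplot2 is loaded.

```r
library(ggplot2)

# A sketch of a custom plotting function ("my_ggplot_call" is a
# hypothetical name): it reuses the default plotting function, then
# removes the legend from the returned ggplot object.
my_ggplot_call <- function (dat, colors, ticks = NULL, font.size = 9) {
  p <- structure_plot_ggplot_call(dat, colors, ticks, font.size)
  p + theme(legend.position = "none")
}
p <- structure_plot(fit, ggplot_call = my_ggplot_call)
```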

...

Additional arguments passed to structure_plot (for the plot method) or embed_method (for function structure_plot).

x

An object of class “poisson_nmf_fit” or “multinom_topic_model_fit”. If a Poisson NMF fit is provided as input, the corresponding multinomial topic model fit is automatically recovered using poisson2multinom.

dat

A data frame passed as input to ggplot, containing, at a minimum, columns “sample”, “topic” and “prop”: the “sample” column contains the positions of the samples (rows of the L matrix) along the horizontal axis; the “topic” column is a topic (a column of L); and the “prop” column is the topic proportion for the respective sample.

ticks

The placement of the group labels along the horizontal axis, and their names. For data that are not grouped, use ticks = NULL.

font.size

Font size used in plot.

Value

A ggplot object.

Details

The name “Structure plot” comes from its widespread use in population genetics to visualize the results of the Structure software (Rosenberg et al., 2002).

For most uses of the Structure plot in population genetics, there is usually some grouping of the samples (e.g., assignment to pre-defined populations) that guides arrangement of the samples along the horizontal axis in the bar chart. In other applications, such as analysis of gene expression data, a pre-defined grouping may not always be available. Therefore, a “smart” arrangement of the samples is, by default, generated automatically by performing a 1-d embedding of the samples.

References

Dey, K. K., Hsiao, C. J. and Stephens, M. (2017). Visualizing the structure of RNA-seq expression data using grade of membership models. PLoS Genetics 13, e1006599.

Rosenberg, N. A., Pritchard, J. K., Weber, J. L., Cann, H. M., Kidd, K. K., Zhivotovsky, L. A. and Feldman, M. W. (2002). Genetic structure of human populations. Science 298, 2381–2385.

Examples

# \donttest{
set.seed(1)
data(pbmc_facs)

# Get the multinomial topic model fitted to the
# PBMC data.
fit <- pbmc_facs$fit

# Create a Structure plot without labels. The samples (rows of L) are
# automatically arranged along the x-axis using t-SNE to highlight the
# structure in the data.
p1 <- structure_plot(fit)
#> Running tsne on 2000 x 6 matrix.
#> Read the 2000 x 6 data matrix successfully!
#> OpenMP is working. 1 threads.
#> Using no_dims = 1, perplexity = 100.000000, and theta = 0.100000
#> Computing input similarities...
#> Building tree...
#> Done in 0.61 seconds (sparsity = 0.184733)!
#> Learning embedding...
#> Iteration 50: error is 57.790679 (50 iterations in 0.30 seconds)
#> Iteration 100: error is 49.861306 (50 iterations in 0.29 seconds)
#> Iteration 150: error is 48.283663 (50 iterations in 0.30 seconds)
#> Iteration 200: error is 47.583935 (50 iterations in 0.27 seconds)
#> Iteration 250: error is 47.185906 (50 iterations in 0.28 seconds)
#> Iteration 300: error is 0.829310 (50 iterations in 0.28 seconds)
#> Iteration 350: error is 0.574113 (50 iterations in 0.28 seconds)
#> Iteration 400: error is 0.463630 (50 iterations in 0.28 seconds)
#> Iteration 450: error is 0.408564 (50 iterations in 0.28 seconds)
#> Iteration 500: error is 0.378602 (50 iterations in 0.28 seconds)
#> Iteration 550: error is 0.361187 (50 iterations in 0.27 seconds)
#> Iteration 600: error is 0.350349 (50 iterations in 0.28 seconds)
#> Iteration 650: error is 0.343237 (50 iterations in 0.28 seconds)
#> Iteration 700: error is 0.338447 (50 iterations in 0.28 seconds)
#> Iteration 750: error is 0.335129 (50 iterations in 0.28 seconds)
#> Iteration 800: error is 0.332753 (50 iterations in 0.28 seconds)
#> Iteration 850: error is 0.330927 (50 iterations in 0.29 seconds)
#> Iteration 900: error is 0.329686 (50 iterations in 0.28 seconds)
#> Iteration 950: error is 0.328633 (50 iterations in 0.28 seconds)
#> Iteration 1000: error is 0.327787 (50 iterations in 0.29 seconds)
#> Fitting performed in 5.64 seconds.

# Create a Structure plot with the FACS cell-type labels. Within each
# group (cell-type), the cells (rows of L) are automatically arranged
# using t-SNE.
subpop <- pbmc_facs$samples$subpop
p2 <- structure_plot(fit,grouping = subpop)
#> Running tsne on 428 x 6 matrix.
#> Using no_dims = 1, perplexity = 100.000000, and theta = 0.100000
#> Fitting performed in 1.01 seconds.
#> Running tsne on 89 x 6 matrix.
#> Using no_dims = 1, perplexity = 28.000000, and theta = 0.100000
#> Fitting performed in 0.13 seconds.
#> Running tsne on 337 x 6 matrix.
#> Using no_dims = 1, perplexity = 100.000000, and theta = 0.100000
#> Fitting performed in 0.71 seconds.
#> Running tsne on 356 x 6 matrix.
#> Using no_dims = 1, perplexity = 100.000000, and theta = 0.100000
#> Fitting performed in 0.73 seconds.
#> Running tsne on 790 x 6 matrix.
#> Using no_dims = 1, perplexity = 100.000000, and theta = 0.100000
#> Fitting performed in 1.93 seconds.

# Next, we apply some customizations to improve the plot: (1) use the
# "topics" argument to specify the order in which the topic
# proportions are stacked on top of each other; (2) use the "gap"
# argument to increase the whitespace between the groups; (3) use "n"
# to decrease the number of rows of L included in the Structure plot;
# and (4) use "colors" to change the colors used to draw the topic
# proportions.
topic_colors <- c("skyblue","forestgreen","darkmagenta",
                  "dodgerblue","gold","darkorange")
p3 <- structure_plot(fit,grouping = pbmc_facs$samples$subpop,gap = 20,
                     n = 1500,topics = c(5,6,1,4,2,3),colors = topic_colors)
#> Running tsne on 295 x 6 matrix.
#> Using no_dims = 1, perplexity = 97.000000, and theta = 0.100000
#> Fitting performed in 0.64 seconds.
#> Running tsne on 72 x 6 matrix.
#> Using no_dims = 1, perplexity = 22.000000, and theta = 0.100000
#> Fitting performed in 0.09 seconds.
#> Running tsne on 287 x 6 matrix.
#> Using no_dims = 1, perplexity = 94.000000, and theta = 0.100000
#> Fitting performed in 0.58 seconds.
#> Running tsne on 264 x 6 matrix.
#> Using no_dims = 1, perplexity = 86.000000, and theta = 0.100000
#> Fitting performed in 0.51 seconds.
#> Running tsne on 582 x 6 matrix.
#> Using no_dims = 1, perplexity = 100.000000, and theta = 0.100000
#> Fitting performed in 1.45 seconds.

# In this example, we use UMAP instead of t-SNE to arrange the
# cells in the Structure plot. Note that this can be accomplished in
# a different way by overriding the default setting of
# "embed_method".
y <- drop(umap_from_topics(fit,dims = 1))
#> 15:49:35 UMAP embedding parameters a = 1.896 b = 0.8006
#> 15:49:35 Read 3774 rows and found 6 numeric columns
#> 15:49:35 Using FNN for neighbor search, n_neighbors = 30
#> 15:49:35 Commencing smooth kNN distance calibration using 2 threads
#> 15:49:35 111 smooth knn distance failures
#> 15:49:36 Initializing from normalized Laplacian + noise
#> 15:49:37 Commencing optimization for 500 epochs, with 138012 positive edges
#> 15:49:41 Optimization finished
p4 <- structure_plot(fit,loadings_order = order(y),grouping = subpop,
                     gap = 40,colors = topic_colors)

# We can also use PCA to arrange the cells.
y <- drop(pca_from_topics(fit,dims = 1))
p5 <- structure_plot(fit,loadings_order = order(y),grouping = subpop,
                     gap = 40,colors = topic_colors)

# In this final example, we plot a random subset of 400 cells, and
# arrange the cells randomly along the horizontal axis of the
# Structure plot.
p6 <- structure_plot(fit,loadings_order = sample(3774,400),gap = 10,
                     grouping = subpop,colors = topic_colors)
# }