`vignettes/tidy-brms.Rmd`

This vignette describes how to use the `tidybayes` package to extract tidy data frames of samples of parameters, fits, and predictions from `brms::brm`. For a more general introduction to `tidybayes` and its use on general-purpose sampling languages (like Stan and JAGS), see `vignette("tidybayes")`.

The following libraries are required to run this vignette:

```
library(magrittr)
library(dplyr)
library(forcats)
library(tidyr)
library(modelr)
library(tidybayes)
library(ggplot2)
library(ggstance)
library(ggridges)
library(cowplot)
library(rstan)
library(brms)
```

These options help Stan run faster:

```
rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores())
```

To demonstrate `tidybayes`, we will use a simple dataset with 10 observations from each of 5 conditions:

```
set.seed(5)
n = 10
n_condition = 5
ABC =
  data_frame(
    condition = rep(c("A", "B", "C", "D", "E"), n),
    response = rnorm(n * 5, c(0, 1, 2, 1, -1), 0.5)
  )
```

A snapshot of the data looks like this:

`head(ABC, 10)`

```
## # A tibble: 10 x 2
## condition response
## <chr> <dbl>
## 1 A -0.420
## 2 B 1.69
## 3 C 1.37
## 4 D 1.04
## 5 E -0.144
## 6 A -0.301
## 7 B 0.764
## 8 C 1.68
## 9 D 0.857
## 10 E -0.931
```

*(10 rows of 50)*

This is a typical tidy format data frame: one observation per row. Graphically:

```
ABC %>%
  ggplot(aes(y = condition, x = response)) +
  geom_point()
```

Let’s fit a hierarchical model with shrinkage towards a global mean:

```
m = brm(response ~ (1|condition), data = ABC, control = list(adapt_delta = .99),
  prior = c(
    prior(normal(0, 1), class = Intercept),
    prior(student_t(3, 0, 1), class = sd),
    prior(student_t(3, 0, 1), class = sigma)
  ))
```

`## Compiling the C++ model`

`## Start sampling`

The results look like this:

`summary(m)`

```
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: response ~ (1 | condition)
## Data: ABC (Number of observations: 50)
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup samples = 4000
##
## Group-Level Effects:
## ~condition (Number of levels: 5)
## Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
## sd(Intercept) 1.15 0.43 0.61 2.26 936 1.01
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
## Intercept 0.51 0.47 -0.45 1.41 948 1.01
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
## sigma 0.56 0.06 0.46 0.70 1969 1.00
##
## Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample
## is a crude measure of effective sample size, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
```

## `spread_samples`

Now that we have our results, the fun begins: getting the samples out in a tidy format! First, we'll use the `parameters` function to get a list of raw parameter names so that we know what parameters we can extract from the model:

`parameters(m)`

```
## [1] "b_Intercept" "sd_condition__Intercept" "sigma" "r_condition[A,Intercept]"
## [5] "r_condition[B,Intercept]" "r_condition[C,Intercept]" "r_condition[D,Intercept]" "r_condition[E,Intercept]"
## [9] "lp__"
```

Here, `b_Intercept` is the global mean, and the `r_condition` parameters are offsets from that mean for each condition. Given these parameters:

`r_condition[A,Intercept]`

`r_condition[B,Intercept]`

`r_condition[C,Intercept]`

`r_condition[D,Intercept]`

`r_condition[E,Intercept]`

We might want a data frame where each row is a sample from either `r_condition[A,Intercept]`, `r_condition[B,Intercept]`, `...[C,...]`, `...[D,...]`, or `...[E,...]`, and where we have columns indexing which iteration of the sampler the row came from and which condition (`A` to `E`) it is for. That would allow us to easily compute quantities grouped by condition, or generate plots by condition using ggplot, or even merge samples with the original data to plot data and estimates.

The workhorse of `tidybayes` is the `spread_samples` function, which does this extraction for us. It includes a simple specification format that we can use to extract parameters and their indices into tidy-format data frames.

Given a parameter like this:

`r_condition[D,Intercept]`

We can provide `spread_samples` with a column specification like this:

`r_condition[condition,term]`

Where `condition` corresponds to `D` and `term` corresponds to `Intercept`. There is nothing too magical about what `spread_samples` does with this specification: under the hood, it splits the parameter indices by commas and spaces (you can split on other characters by changing the `sep` argument). It lets you assign columns to the resulting indices in order. So `r_condition[D,Intercept]` has indices `D` and `Intercept`, and `spread_samples` lets us extract these indices as columns in the resulting tidy data frame of samples of `r_condition`:

```
m %>%
  spread_samples(r_condition[condition,term]) %>%
  head(10)
```

```
## # A tibble: 10 x 5
## # Groups: condition, term [5]
## .chain .iteration condition term r_condition
## <int> <int> <chr> <chr> <dbl>
## 1 1 1 A Intercept -0.270
## 2 1 1 B Intercept 0.431
## 3 1 1 C Intercept 1.48
## 4 1 1 D Intercept 0.333
## 5 1 1 E Intercept -1.61
## 6 1 2 A Intercept -0.600
## 7 1 2 B Intercept 0.0536
## 8 1 2 C Intercept 0.910
## 9 1 2 D Intercept -0.260
## 10 1 2 E Intercept -1.88
```

*(10 rows of 20000)*
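To make the splitting concrete, here is a rough base R sketch of the idea (this is only an illustration of the comma/space split described above, not `tidybayes`' actual implementation):

```
# Illustration only: roughly how an index specification splits a
# parameter name like r_condition[D,Intercept] (not tidybayes' real code)
param <- "r_condition[D,Intercept]"
inside <- sub(".*\\[(.*)\\]$", "\\1", param)  # text between the brackets
indices <- strsplit(inside, "[, ]+")[[1]]     # split on commas and spaces
indices  # c("D", "Intercept")
```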

We can choose whatever names we want for the index columns; e.g.:

```
m %>%
  spread_samples(r_condition[c,t]) %>%
  head(10)
```

```
## # A tibble: 10 x 5
## # Groups: c, t [5]
## .chain .iteration c t r_condition
## <int> <int> <chr> <chr> <dbl>
## 1 1 1 A Intercept -0.270
## 2 1 1 B Intercept 0.431
## 3 1 1 C Intercept 1.48
## 4 1 1 D Intercept 0.333
## 5 1 1 E Intercept -1.61
## 6 1 2 A Intercept -0.600
## 7 1 2 B Intercept 0.0536
## 8 1 2 C Intercept 0.910
## 9 1 2 D Intercept -0.260
## 10 1 2 E Intercept -1.88
```

*(10 rows of 20000)*

But the more descriptive and less cryptic names from the previous example are probably preferable.

In this particular model, there is only one term (`Intercept`), thus we could omit that index altogether to just get each `condition` and the value of `r_condition` for that condition:

```
m %>%
  spread_samples(r_condition[condition,]) %>%
  head(10)
```

```
## # A tibble: 10 x 4
## # Groups: condition [5]
## .chain .iteration condition r_condition
## <int> <int> <chr> <dbl>
## 1 1 1 A -0.270
## 2 1 1 B 0.431
## 3 1 1 C 1.48
## 4 1 1 D 0.333
## 5 1 1 E -1.61
## 6 1 2 A -0.600
## 7 1 2 B 0.0536
## 8 1 2 C 0.910
## 9 1 2 D -0.260
## 10 1 2 E -1.88
```

*(10 rows of 20000)*

**Note:** If you have used `spread_samples` with raw samples from Stan or JAGS, you may be used to running `recover_types` before `spread_samples` to get index column values back (e.g. if the index was a factor). This is not necessary when using `spread_samples` on `brms` models, because those models already contain that information in their parameter names. For more on `recover_types`, see `vignette("tidybayes")`.

`tidybayes` provides a family of functions for generating point estimates and intervals from samples in a tidy format. These functions follow the naming scheme `[mean|median|mode]_[qi|hdi]`; for example, `mean_qi`, `median_qi`, `mode_hdi`, and so on. The first name (before the `_`) indicates the type of point estimate, and the second name indicates the type of interval: `qi` yields a quantile interval (a.k.a. equi-tailed interval, central interval, or percentile interval) and `hdi` yields a highest (posterior) density interval. Custom estimates or intervals can also be applied using the `point_interval` function.
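As a rough base R illustration of what these summaries compute (a sketch of the idea only, using simulated draws rather than the model): a mean point estimate with a 95% quantile interval is just the sample mean together with the 2.5% and 97.5% quantiles:

```
# Sketch: the "mean" and "qi" parts of mean_qi, in base R
set.seed(42)
x <- rnorm(4000, mean = 0.5, sd = 0.5)  # stand-in for posterior samples
mean(x)                       # the point estimate ("mean")
quantile(x, c(0.025, 0.975))  # the 95% quantile interval ("qi")
```

An `hdi` interval instead takes the narrowest interval containing 95% of the mass, which can differ from the quantile interval when the distribution is skewed.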

For example, we might extract the samples corresponding to the overall mean and standard deviation of observations:

```
m %>%
  spread_samples(b_Intercept, sigma) %>%
  head(10)
```

```
## # A tibble: 10 x 4
## .chain .iteration b_Intercept sigma
## <int> <int> <dbl> <dbl>
## 1 1 1 0.571 0.554
## 2 1 2 0.923 0.576
## 3 1 3 0.803 0.552
## 4 1 4 0.545 0.545
## 5 1 5 0.692 0.473
## 6 1 6 0.866 0.511
## 7 1 7 0.544 0.588
## 8 1 8 0.551 0.609
## 9 1 9 0.565 0.540
## 10 1 10 0.226 0.490
```

*(10 rows of 4000)*

As with `r_condition[condition,term]`, this gives us a tidy data frame. If we want the mean and 95% quantile interval of the parameters, we can apply `mean_qi`:

```
m %>%
  spread_samples(b_Intercept, sigma) %>%
  mean_qi(b_Intercept, sigma)
```

```
## # A tibble: 1 x 7
## b_Intercept b_Intercept.low b_Intercept.high sigma sigma.low sigma.high .prob
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0.505 -0.449 1.41 0.561 0.458 0.696 0.95
```

We can specify the columns we want to get means and intervals from, as above, or if we omit the list of columns, `mean_qi` will use every column that is not a grouping column or a special column (one that starts with `.`, like `.chain` or `.iteration`). Thus in the above example, `b_Intercept` and `sigma` are redundant arguments to `mean_qi` because they are also the only columns we gathered from the model. So we can simplify this to:

```
m %>%
  spread_samples(b_Intercept, sigma) %>%
  mean_qi()
```

```
## # A tibble: 1 x 7
## b_Intercept b_Intercept.low b_Intercept.high sigma sigma.low sigma.high .prob
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0.505 -0.449 1.41 0.561 0.458 0.696 0.95
```

If you would rather have a long-format list of intervals, use `gather_samples` instead:

```
m %>%
  gather_samples(b_Intercept, sigma) %>%
  mean_qi()
```

```
## # A tibble: 2 x 5
## # Groups: term [2]
## term estimate conf.low conf.high .prob
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 b_Intercept 0.505 -0.449 1.41 0.95
## 2 sigma 0.561 0.458 0.696 0.95
```

The `conf.low` and `conf.high` naming scheme is used when `mean_qi` summarizes a single column in order to be consistent with the output of `broom::tidy`. This makes it easier to compare output from `tidybayes` to other models supported by `broom`.

For more on `gather_samples`, see `vignette("tidybayes")`.

When we have a parameter with one or more indices, such as `r_condition`, we can apply `mean_qi` (or other functions in the `point_estimate` family) as we did before:

```
m %>%
  spread_samples(r_condition[condition,]) %>%
  mean_qi()
```

```
## # A tibble: 5 x 5
## # Groups: condition [5]
## condition r_condition conf.low conf.high .prob
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 A -0.313 -1.25 0.710 0.95
## 2 B 0.500 -0.430 1.53 0.95
## 3 C 1.33 0.395 2.34 0.95
## 4 D 0.505 -0.427 1.52 0.95
## 5 E -1.39 -2.35 -0.424 0.95
```

How did `mean_qi` know what to aggregate? Data frames returned by `spread_samples` are automatically grouped by all index variables you pass to it; in this case, that means `spread_samples` groups its results by `condition`. `mean_qi` respects those groups, and calculates the estimates and intervals within all groups. Then, because no columns were passed to `mean_qi`, it acts on the only non-special (`.`-prefixed) and non-group column, `r_condition`. So the above shortened syntax is equivalent to this more verbose call:

```
m %>%
  spread_samples(r_condition[condition,]) %>%
  group_by(condition) %>%  # this line not necessary (done by spread_samples)
  mean_qi(r_condition)     # r_condition not necessary (it is the only non-group column)
```

```
## # A tibble: 5 x 5
## # Groups: condition [5]
## condition r_condition conf.low conf.high .prob
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 A -0.313 -1.25 0.710 0.95
## 2 B 0.500 -0.430 1.53 0.95
## 3 C 1.33 0.395 2.34 0.95
## 4 D 0.505 -0.427 1.52 0.95
## 5 E -1.39 -2.35 -0.424 0.95
```

`spread_samples` and `gather_samples` support extracting variables that have different indices into the same data frame. Indices with the same name are automatically matched up, and values are duplicated as necessary to produce one row per combination of levels of all indices. For example, we might want to calculate the mean within each condition (call this `condition_mean`). In this model, that mean is the intercept (`b_Intercept`) plus the effect for a given condition (`r_condition`).

We can gather samples from `b_Intercept` and `r_condition` together in a single data frame:

```
m %>%
  spread_samples(b_Intercept, r_condition[condition,]) %>%
  head(10)
```

```
## # A tibble: 10 x 5
## # Groups: condition [5]
## .chain .iteration b_Intercept condition r_condition
## <int> <int> <dbl> <chr> <dbl>
## 1 1 1 0.571 A -0.270
## 2 1 1 0.571 B 0.431
## 3 1 1 0.571 C 1.48
## 4 1 1 0.571 D 0.333
## 5 1 1 0.571 E -1.61
## 6 1 2 0.923 A -0.600
## 7 1 2 0.923 B 0.0536
## 8 1 2 0.923 C 0.910
## 9 1 2 0.923 D -0.260
## 10 1 2 0.923 E -1.88
```

*(10 rows of 20000)*

Within each sample, `b_Intercept` is repeated as necessary to correspond to every index of `r_condition`. Thus, the `mutate` function from dplyr can be used to find their sum, `condition_mean` (which is the estimated mean for each condition):

```
m %>%
  spread_samples(`b_Intercept`, r_condition[condition,]) %>%
  mutate(condition_mean = b_Intercept + r_condition) %>%
  mean_qi(condition_mean)
```

```
## # A tibble: 5 x 5
## # Groups: condition [5]
## condition condition_mean conf.low conf.high .prob
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 A 0.192 -0.149 0.551 0.95
## 2 B 1.00 0.663 1.35 0.95
## 3 C 1.83 1.48 2.18 0.95
## 4 D 1.01 0.663 1.36 0.95
## 5 E -0.889 -1.22 -0.550 0.95
```

`mean_qi` uses tidy evaluation (see `vignette("tidy-evaluation", package = "rlang")`), so it can take column expressions, not just column names. Thus, we can simplify the above example by moving the calculation of `condition_mean` from `mutate` into `mean_qi`:

```
m %>%
  spread_samples(b_Intercept, r_condition[condition,]) %>%
  mean_qi(condition_mean = b_Intercept + r_condition)
```

```
## # A tibble: 5 x 5
## # Groups: condition [5]
## condition condition_mean conf.low conf.high .prob
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 A 0.192 -0.149 0.551 0.95
## 2 B 1.00 0.663 1.35 0.95
## 3 C 1.83 1.48 2.18 0.95
## 4 D 1.01 0.663 1.36 0.95
## 5 E -0.889 -1.22 -0.550 0.95
```

Plotting point estimates with a single interval is straightforward using the `ggplot2::geom_pointrange` or `ggstance::geom_pointrangeh` geoms:

```
m %>%
  spread_samples(b_Intercept, r_condition[condition,]) %>%
  mean_qi(condition_mean = b_Intercept + r_condition) %>%
  ggplot(aes(y = condition, x = condition_mean, xmin = conf.low, xmax = conf.high)) +
  geom_pointrangeh()
```

`mean_qi` and its sister functions can also produce an arbitrary number of probability intervals by setting the `.prob =` argument:

```
m %>%
  spread_samples(b_Intercept, r_condition[condition,]) %>%
  mean_qi(condition_mean = b_Intercept + r_condition, .prob = c(.95, .8, .5))
```

```
## # A tibble: 15 x 5
## # Groups: condition [5]
## condition condition_mean conf.low conf.high .prob
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 A 0.192 -0.149 0.551 0.95
## 2 B 1.00 0.663 1.35 0.95
## 3 C 1.83 1.48 2.18 0.95
## 4 D 1.01 0.663 1.36 0.95
## 5 E -0.889 -1.22 -0.550 0.95
## 6 A 0.192 -0.0349 0.418 0.8
## 7 B 1.00 0.785 1.22 0.8
## 8 C 1.83 1.61 2.06 0.8
## 9 D 1.01 0.789 1.24 0.8
## 10 E -0.889 -1.11 -0.670 0.8
## 11 A 0.192 0.0731 0.310 0.5
## 12 B 1.00 0.890 1.12 0.5
## 13 C 1.83 1.72 1.95 0.5
## 14 D 1.01 0.887 1.12 0.5
## 15 E -0.889 -1.01 -0.768 0.5
```

The results are in a tidy format: one row per group and probability level (`.prob`). This facilitates plotting. For example, assigning `-.prob` to the `size` aesthetic will show all intervals, making thicker lines correspond to smaller intervals. The `geom_pointintervalh` geom, provided by tidybayes, is a shorthand for a `geom_pointrangeh` with `xmin`, `xmax`, and `size` set appropriately based on the `conf.low`, `conf.high`, and `.prob` columns in the data to produce plots of estimates with multiple probability levels:

```
m %>%
  spread_samples(b_Intercept, r_condition[condition,]) %>%
  mean_qi(condition_mean = b_Intercept + r_condition, .prob = c(.95, .66)) %>%
  ggplot(aes(y = condition, x = condition_mean)) +
  geom_pointintervalh()
```

To see the density along with the intervals, we can use `geom_eyeh` (horizontal "eye plots", which combine intervals with violin plots), or `geom_halfeyeh` (horizontal interval + density plots):

```
m %>%
  spread_samples(b_Intercept, r_condition[condition,]) %>%
  mutate(condition_mean = b_Intercept + r_condition) %>%
  ggplot(aes(y = condition, x = condition_mean)) +
  geom_halfeyeh()
```

Rather than calculating conditional means manually as in the previous example, we could use `add_fitted_samples`, which is analogous to `brms::fitted.brmsfit` or `brms::posterior_linpred` (giving posterior draws from the model's linear predictor; in this case, posterior distributions of conditional means), but uses a tidy data format. We can combine it with `modelr::data_grid` to first generate a grid describing the fits we want, then transform that grid into a long-format data frame of samples of posterior fits:

```
ABC %>%
  data_grid(condition) %>%
  add_fitted_samples(m) %>%
  head(10)
```

```
## # A tibble: 10 x 5
## # Groups: condition, .row [1]
## condition .row .chain .iteration estimate
## <chr> <int> <int> <int> <dbl>
## 1 A 1 NA 1 0.301
## 2 A 1 NA 2 0.323
## 3 A 1 NA 3 0.145
## 4 A 1 NA 4 0.178
## 5 A 1 NA 5 0.166
## 6 A 1 NA 6 0.415
## 7 A 1 NA 7 0.371
## 8 A 1 NA 8 0.0657
## 9 A 1 NA 9 0.317
## 10 A 1 NA 10 0.00462
```

*(10 rows of 20000)*

To plot this example, we'll also show the use of `stat_pointintervalh` instead of `geom_pointintervalh`, which summarizes samples into estimates and intervals within ggplot:

```
ABC %>%
  data_grid(condition) %>%
  add_fitted_samples(m) %>%
  ggplot(aes(x = estimate, y = condition)) +
  stat_pointintervalh(.prob = c(.66, .95))
```

Intervals are nice if the alpha level happens to line up with whatever decision you are trying to make, but getting a shape of the posterior is better (hence eye plots, above). On the other hand, making inferences from density plots is imprecise (estimating the area of one shape as a proportion of another is a hard perceptual task). Reasoning about probability in frequency formats is easier, motivating quantile dotplots, which also allow precise estimation of arbitrary intervals (down to the dot resolution of the plot, here 100):

```
ABC %>%
  data_grid(condition) %>%
  add_fitted_samples(m) %>%
  do(data_frame(estimate = quantile(.$estimate, ppoints(100)))) %>%
  ggplot(aes(x = estimate)) +
  geom_dotplot(binwidth = .04) +
  facet_grid(fct_rev(condition) ~ .) +
  scale_y_continuous(breaks = NULL)
```

The idea is to get away from thinking about the posterior as indicating one canonical point or interval, but instead to represent it as (say) 100 approximately equally likely points.
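A small base R sketch of why this works (simulated draws standing in for the posterior): each quantile at `ppoints(100)` covers roughly 1% of the probability mass, so counting dots below a threshold estimates a cumulative probability directly:

```
set.seed(7)
draws <- rnorm(4000, mean = 1, sd = 0.5)  # stand-in for posterior samples
dots <- quantile(draws, ppoints(100))     # 100 approximately equally likely values
# counting dots below a cutoff estimates P(X < cutoff); the cutoff here is
# the true mean, so about half of the dots should fall below it
mean(dots < 1)
```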

Where `add_fitted_samples` is analogous to `brms::fitted.brmsfit` (or `brms::posterior_linpred`), `add_predicted_samples` is analogous to `brms::predict.brmsfit` (`brms::posterior_predict`), giving samples from the posterior predictive distribution.

Here is an example of posterior predictive distributions plotted using `ggridges::geom_density_ridges`:

```
ABC %>%
  data_grid(condition) %>%
  add_predicted_samples(m) %>%
  ggplot(aes(x = pred, y = condition)) +
  geom_density_ridges()
```

`## Picking joint bandwidth of 0.101`

We could also use `tidybayes::stat_intervalh` to plot predictive bands alongside the data:

```
ABC %>%
  data_grid(condition) %>%
  add_predicted_samples(m) %>%
  ggplot(aes(y = condition, x = pred)) +
  stat_intervalh() +
  geom_point(aes(x = response), data = ABC) +
  scale_color_brewer()
```

Altogether, data, posterior predictions, and estimates of the means:

```
grid = ABC %>%
  data_grid(condition)

fits = grid %>%
  add_fitted_samples(m)

preds = grid %>%
  add_predicted_samples(m)

ABC %>%
  ggplot(aes(y = condition, x = response)) +
  stat_intervalh(aes(x = pred), data = preds) +
  stat_pointintervalh(aes(x = estimate), data = fits, .prob = c(.66, .95),
    position = position_nudge(y = -0.2)) +
  geom_point() +
  scale_color_brewer()
```

To demonstrate drawing fit curves with uncertainty, let's fit a slightly naive model to part of the `mtcars` dataset:

`m_mpg = brm(mpg ~ hp * cyl, data = mtcars)`

We can draw fit curves with probability bands:

```
mtcars %>%
  group_by(cyl) %>%
  data_grid(hp = seq_range(hp, n = 101)) %>%
  add_fitted_samples(m_mpg) %>%
  ggplot(aes(x = hp, y = mpg, color = ordered(cyl))) +
  stat_lineribbon(aes(y = estimate)) +
  geom_point(data = mtcars) +
  scale_fill_brewer(palette = "Greys")
```

Or we can sample a reasonable number of fit lines (say 100) and overplot them:

```
mtcars %>%
  group_by(cyl) %>%
  data_grid(hp = seq_range(hp, n = 101)) %>%
  add_fitted_samples(m_mpg, n = 100) %>%
  ggplot(aes(x = hp, y = mpg, color = ordered(cyl))) +
  geom_line(aes(y = estimate, group = paste(cyl, .iteration)), alpha = 0.25) +
  geom_point(data = mtcars)
```

Or, for posterior predictions (instead of fits), we can go back to probability bands:

```
mtcars %>%
  group_by(cyl) %>%
  data_grid(hp = seq_range(hp, n = 101)) %>%
  add_predicted_samples(m_mpg) %>%
  ggplot(aes(x = hp, y = mpg, color = ordered(cyl))) +
  stat_lineribbon(aes(y = pred), .prob = c(.99, .95, .8, .5), alpha = 0.25) +
  geom_point(data = mtcars) +
  scale_fill_brewer(palette = "Greys")
```

This gets difficult to judge by group, so it is probably better to facet into multiple plots. Fortunately, since we are using ggplot, that functionality is built in:

```
mtcars %>%
  group_by(cyl) %>%
  data_grid(hp = seq_range(hp, n = 101)) %>%
  add_predicted_samples(m_mpg) %>%
  ggplot(aes(x = hp, y = mpg)) +
  stat_lineribbon(aes(y = pred), .prob = c(.99, .95, .8, .5)) +
  geom_point(data = mtcars) +
  scale_fill_brewer() +
  facet_grid(. ~ cyl)
```

`brm` also allows us to set up submodels for parameters of the response distribution *other than* the location (e.g., mean). For example, we can allow a variance parameter, such as the standard deviation, to also be some function of the predictors.

This approach can be helpful in cases of non-constant variance (also called *heteroskedasticity* by folks who like obfuscation via Greek). E.g., imagine two groups, each with a different mean response *and variance*:

```
set.seed(1234)
AB = data_frame(
  group = rep(c("a", "b"), each = 20),
  response = rnorm(40, mean = rep(c(1, 5), each = 20), sd = rep(c(1, 3), each = 20))
)

AB %>%
  ggplot(aes(x = response, y = group)) +
  geom_point()
```

Here is a model that lets the mean *and standard deviation* of `response` depend on `group`:

```
m_ab = brm(
  bf(
    response ~ group,
    sigma ~ group
  ),
  data = AB
)
```

`## Compiling the C++ model`

`## Start sampling`

We can plot the estimated mean of `response` alongside posterior predictive intervals and the data:

```
grid = AB %>%
  data_grid(group)

fits = grid %>%
  add_fitted_samples(m_ab)

preds = grid %>%
  add_predicted_samples(m_ab)

AB %>%
  ggplot(aes(x = response, y = group)) +
  geom_halfeyeh(aes(x = estimate), relative_scale = 0.7, position = position_nudge(y = 0.1), data = fits) +
  stat_intervalh(aes(x = pred), data = preds) +
  geom_point(data = AB) +
  scale_color_brewer()
```

This shows estimates of the mean in each group (black intervals and the density plots) and posterior predictive intervals (blue).

The predictive intervals in group `b` are larger than in group `a` because the model estimates a different standard deviation for each group. We can see how the estimate of the corresponding distributional parameter, `sigma`, changes between groups by extracting it using the `dpar` argument to `add_fitted_samples`:

```
grid %>%
  add_fitted_samples(m_ab, dpar = TRUE) %>%
  ggplot(aes(x = sigma, y = group)) +
  geom_halfeyeh() +
  geom_vline(xintercept = 0, linetype = "dashed")
```

By setting `dpar = TRUE`, all distributional parameters are added as additional columns in the result of `add_fitted_samples`; if you only want a specific parameter, you can specify it (or a list of just the parameters you want). In the above model, `dpar = TRUE` is equivalent to `dpar = list("mu", "sigma")`.

If we wish to compare the means from each condition, `compare_levels` facilitates comparisons of the value of some variable across levels of a factor. By default it computes all pairwise differences.
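What "all pairwise differences" means can be sketched in base R using hypothetical point estimates for three conditions (illustration only; `compare_levels` computes the differences sample-by-sample, not on point estimates):

```
# Hypothetical condition means, for illustration
means <- c(A = 0.19, B = 1.00, C = 1.83)
pairs <- combn(names(means), 2)  # every pair of levels
data.frame(
  comparison = paste(pairs[2, ], "-", pairs[1, ]),
  difference = unname(means[pairs[2, ]] - means[pairs[1, ]])
)
# three comparisons: B - A, C - A, and C - B
```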

Let's demonstrate `compare_levels` with another plotting geom, `geom_halfeyeh`, which gives horizontal "half-eye" plots, combining interval estimates with a density plot:

```
# N.B. the syntax for compare_levels is experimental and may change
m %>%
  spread_samples(r_condition[condition,]) %>%
  compare_levels(r_condition, by = condition) %>%
  ggplot(aes(y = condition, x = r_condition)) +
  geom_halfeyeh()
```

If you prefer “caterpillar” plots, ordered by something like the mean of the difference, you can reorder the factor before plotting:

```
# N.B. the syntax for compare_levels is experimental and may change
m %>%
  spread_samples(r_condition[condition,]) %>%
  compare_levels(r_condition, by = condition) %>%
  ungroup() %>%
  mutate(condition = reorder(condition, r_condition)) %>%
  ggplot(aes(y = condition, x = r_condition)) +
  geom_halfeyeh() +
  geom_vline(xintercept = 0, linetype = "dashed")
```

The `brms::fitted.brmsfit` function for ordinal and multinomial regression models in brms returns multiple estimates for each sample: one for each outcome category (in contrast to `rstanarm::stan_polr` models, which return samples from the latent linear predictor). The philosophy of `tidybayes` is to tidy whatever format is output by a model, so in keeping with that philosophy, when applied to ordinal and multinomial `brms` models, `add_fitted_samples` adds an additional `category` column and outputs a separate row containing the estimate for each category, for every sample and predictor.

Consider this ordinal regression model:

`m_cyl = brm(ordered(cyl) ~ mpg, data = mtcars, family = cumulative)`

`## Compiling the C++ model`

`## Start sampling`

`add_fitted_samples` will include a `category` column, and `estimate` will contain the estimated probability that the response is in that category. For example, here is the fit for the first row in the dataset:

```
data_frame(mpg = 21) %>%
  add_fitted_samples(m_cyl) %>%
  mean_qi(estimate)
```

```
## # A tibble: 3 x 7
## # Groups: mpg, .row, category [3]
## mpg .row category estimate conf.low conf.high .prob
## <dbl> <int> <fct> <dbl> <dbl> <dbl> <dbl>
## 1 21 1 1 0.299 0.0509 0.671 0.95
## 2 21 1 2 0.689 0.322 0.945 0.95
## 3 21 1 3 0.0122 0.0000199 0.0787 0.95
```
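These per-category probabilities come from the cumulative link function: the model's estimated thresholds and linear predictor define cumulative probabilities, and successive differences of those give the probability of each category. A base R sketch with made-up numbers (the actual thresholds and linear predictor are estimated by the model):

```
eta <- 0.8                         # hypothetical linear predictor for one row
thresholds <- c(-1, 1.5)           # hypothetical cutpoints for 3 categories
cum_p <- plogis(thresholds - eta)  # P(response <= k) under the logit link
probs <- diff(c(0, cum_p, 1))      # P(response = k) for each category
probs
sum(probs)  # the category probabilities sum to 1
```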

We could plot fit lines for estimated probabilities against the dataset:

```
data_plot = mtcars %>%
  ggplot(aes(x = mpg, y = cyl, color = ordered(cyl))) +
  geom_point()

fit_plot = mtcars %>%
  data_grid(mpg = seq_range(mpg, n = 101)) %>%
  add_fitted_samples(m_cyl) %>%
  ggplot(aes(x = mpg, y = estimate, color = category)) +
  stat_lineribbon(alpha = .5) +
  scale_fill_brewer(palette = "Greys")

plot_grid(ncol = 1, align = "v",
  data_plot,
  fit_plot
)
```

Here’s an ordinal model with a categorical predictor:

```
data(esoph)
m_esoph_brm = brm(tobgp ~ agegp, data = esoph, family = cumulative())
```

`## Compiling the C++ model`

`## Start sampling`

Then we can plot predicted probabilities for each outcome category within each level of the predictor:

```
esoph %>%
  data_grid(agegp) %>%
  add_fitted_samples(m_esoph_brm) %>%
  # brms does not keep the category labels,
  # but we can recover them from the original data
  within(levels(category) <- levels(esoph$tobgp)) %>%
  ggplot(aes(x = agegp, y = estimate, color = category)) +
  stat_pointinterval(position = position_dodge(width = .4), .prob = c(.66, .95), show.legend = TRUE) +
  scale_size_continuous(guide = FALSE)
```

This output should be very similar to the output from the corresponding `m_esoph_rs` model in `vignette("tidy-rstanarm")` (modulo different priors), though brms does more of the work for us to produce it than `rstanarm` does.