# Introduction to mason

#### 2020-06-04

Brief note: I created this package to make my own analyses easier, so some of the statistics that have been implemented were chosen because they’re what I’ve needed. If you would like a particular statistical method included, please file an Issue and I will try to implement it!

Most analyses follow a similar pattern to how construction/engineering projects are developed: design -> add specifications -> construction -> (optional) add to the design and specs -> cleaning, scrubbing, and polishing. The mason package tries to emulate this process to make it easier to do analyses in a consistent and ‘tidy’ format.

## Basic command flow

The general command flow for using mason is:

1. Start the design of a blueprint for the analysis by specifying which statistical technique to use in your analysis (design()).
2. Add settings/options to the blueprint for the methods of the statistics (add_settings()).
3. Add the variables you want to run the statistics on (add_variables()). These variables include the $y$ variables (outcomes), the $x$ variables (predictors), covariates, and interaction variables.
4. Using the blueprint, construct the ‘mason project’ (stats analysis) so that the results are generated (construct()).
5. Sometimes an analysis is too big for a single pass from blueprint to construction, and you need to add more to the blueprint. Use add_variables() or add_settings() after construct() to add to the existing results.
6. When you are ready, make the ‘mason project’ cleaned up by scrubbing it down and polishing it up (scrub() and polish_*() commands). The results are now ready for further presentation in a figure or table!
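The steps above can be sketched as a single pipeline. This is only a schematic: mydata, outcome, and predictor are placeholder names, and a worked example with a real dataset follows in the next section.

```r
library(mason)
library(magrittr)

results <- mydata %>%
  design('glm') %>%                        # 1. start the blueprint
  add_settings(family = gaussian()) %>%    # 2. add method settings
  add_variables('yvars', 'outcome') %>%    # 3. add outcomes...
  add_variables('xvars', 'predictor') %>%  #    ...and predictors
  construct() %>%                          # 4. generate the results
  scrub()                                  # 6. clean up for presentation
```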

### Example usage

Let’s go over an example analysis. We’ll use glm for a simple linear regression. Let’s use the built-in swiss dataset. A quick peek at it shows:

```r
head(swiss)
#>              Fertility Agriculture Examination Education Catholic
#> Courtelary        80.2        17.0          15        12     9.96
#> Delemont          83.1        45.1           6         9    84.84
#> Franches-Mnt      92.5        39.7           5         5    93.40
#> Moutier           85.8        36.5          12         7    33.77
#> Neuveville        76.9        43.5          17        15     5.16
#> Porrentruy        76.1        35.3           9         7    90.57
#>              Infant.Mortality
#> Courtelary               22.2
#> Delemont                 22.2
#> Franches-Mnt             20.2
#> Moutier                  20.3
#> Neuveville               20.6
#> Porrentruy               26.6
```

Ok, let’s say we want to run several models. We are interested in Fertility and Infant.Mortality as outcomes and Education and Agriculture as potential predictors. We also want to control for Catholic. This setup means we have four potential models to analyze. With mason this is relatively easy. Analyses in mason are essentially separated into a blueprint phase and a construction phase. Since any structure or building always needs a blueprint, let’s get that started.

```r
library(mason)
design(swiss, 'glm')
#> # Analysis for glm is still under construction.
#> # Showing data right now:
#> # A tibble: 47 x 6
#>   Fertility Agriculture Examination Education Catholic Infant.Mortality
#>       <dbl>       <dbl>       <int>     <int>    <dbl>            <dbl>
#> 1      80.2        17            15        12     9.96             22.2
#> 2      83.1        45.1           6         9    84.8              22.2
#> 3      92.5        39.7           5         5    93.4              20.2
#> 4      85.8        36.5          12         7    33.8              20.3
#> 5      76.9        43.5          17        15     5.16             20.6
#> 6      76.1        35.3           9         7    90.6              26.6
#> # … with 41 more rows
```

So far, all we’ve done is create a blueprint of the analysis, and it doesn’t contain much yet. Let’s add some settings to the blueprint. mason was designed to make use of the %>% pipe from the magrittr package (also re-exported by dplyr), so let’s load up magrittr!

```r
library(magrittr)
dp <- design(swiss, 'glm') %>%
  add_settings(family = gaussian())
```

You’ll notice that each time, the only thing that is printed to the console is the dataset. That’s because we haven’t constructed the analysis yet! We are still in the blueprint phase, so nothing new has been added. Since we have two outcomes and two predictors, we have a total of four models to analyze. Normally we would need to run each of the models separately. However, if we simply list the outcomes and the predictors in mason, it will ‘loop’ through each combination and run all four models! Let’s add the variables.

```r
dp <- dp %>%
  add_variables('yvars', c('Fertility', 'Infant.Mortality')) %>%
  add_variables('xvars', c('Education', 'Agriculture'))
```

Alright, still nothing has happened. However, we are now at the phase that we can construct the analysis using construct().

```r
dp <- construct(dp)
dp
#> # Analysis for glm constructed but has not been scrubbed.
#> # Here is a peek at the results:
#> # A tibble: 8 x 10
#>   Yterms Xterms term  estimate std.error statistic  p.value conf.low conf.high
#>   <chr>  <chr>  <chr>    <dbl>     <dbl>     <dbl>    <dbl>    <dbl>     <dbl>
#> 1 Ferti… Educa… (Int…  79.6       2.10      37.8   9.30e-36  75.5      83.7
#> 2 Ferti… Educa… Educ…  -0.862     0.145     -5.95  3.66e- 7  -1.15     -0.578
#> 3 Infan… Educa… (Int…  20.3       0.653     31.1   4.85e-32  19.0      21.6
#> 4 Infan… Educa… Educ…  -0.0301    0.0449    -0.670 5.07e- 1  -0.118     0.0580
#> 5 Ferti… Agric… (Int…  60.3       4.25      14.2   3.22e-18  52.0      68.6
#> 6 Ferti… Agric… Agri…   0.194     0.0767     2.53  1.49e- 2   0.0438    0.345
#> # … with 2 more rows, and 1 more variable: sample.size <int>
```

Cool! These are the unadjusted models, without any covariates. We said we wanted to adjust for Catholic, but let’s say we want to keep the unadjusted analysis too. Since we haven’t scrubbed the project down yet, we can still add to the blueprint and construct more results on top of the existing ones.

```r
dp2 <- dp %>%
  add_variables('covariates', 'Catholic') %>%
  construct()
dp2
#> # Analysis for glm constructed but has not been scrubbed.
#> # Here is a peek at the results:
#> # A tibble: 20 x 10
#>   Yterms Xterms term  estimate std.error statistic  p.value conf.low conf.high
#>   <chr>  <chr>  <chr>    <dbl>     <dbl>     <dbl>    <dbl>    <dbl>     <dbl>
#> 1 Ferti… Educa… (Int…  79.6       2.10      37.8   9.30e-36  75.5      83.7
#> 2 Ferti… Educa… Educ…  -0.862     0.145     -5.95  3.66e- 7  -1.15     -0.578
#> 3 Infan… Educa… (Int…  20.3       0.653     31.1   4.85e-32  19.0      21.6
#> 4 Infan… Educa… Educ…  -0.0301    0.0449    -0.670 5.07e- 1  -0.118     0.0580
#> 5 Ferti… Agric… (Int…  60.3       4.25      14.2   3.22e-18  52.0      68.6
#> 6 Ferti… Agric… Agri…   0.194     0.0767     2.53  1.49e- 2   0.0438    0.345
#> # … with 14 more rows, and 1 more variable: sample.size <int>
```

We now have both the unadjusted and the adjusted models in the results. We’re happy with them, so let’s clean them up using the scrub() function.

```r
dp_clean <- dp2 %>%
  scrub()
```

All scrub() does is remove any extra specs in the attributes and set the results as the main dataset. You can see this by looking at its details and comparing them to the unscrubbed version.

```r
colnames(dp2)
#> [1] "Fertility"        "Agriculture"      "Examination"      "Education"
#> [5] "Catholic"         "Infant.Mortality"
colnames(dp_clean)
#>  [1] "Yterms"      "Xterms"      "term"        "estimate"    "std.error"
#>  [6] "statistic"   "p.value"     "conf.low"    "conf.high"   "sample.size"
names(attributes(dp2))
#> [1] "names"     "class"     "row.names" "specs"
names(attributes(dp_clean))
#> [1] "names"     "row.names" "class"
class(dp2)
#> [1] "bp"         "glm_bp"     "data.frame"
class(dp_clean)
#> [1] "tbl_df"     "tbl"        "data.frame"
```

And all as a single pipe chain:

```r
swiss %>%
  design('glm') %>%
  add_settings(family = gaussian()) %>%
  add_variables('yvars', c('Fertility', 'Infant.Mortality')) %>%
  add_variables('xvars', c('Education', 'Agriculture')) %>%
  construct() %>%
  add_variables('covariates', 'Catholic') %>%
  construct() %>%
  scrub()
#> # A tibble: 20 x 10
#>    Yterms Xterms term  estimate std.error statistic  p.value conf.low conf.high
#>    <chr>  <chr>  <chr>    <dbl>     <dbl>     <dbl>    <dbl>    <dbl>     <dbl>
#>  1 Ferti… Educa… (Int… 79.6        2.10      37.8   9.30e-36 75.5       83.7
#>  2 Ferti… Educa… Educ… -0.862      0.145     -5.95  3.66e- 7 -1.15      -0.578
#>  3 Infan… Educa… (Int… 20.3        0.653     31.1   4.85e-32 19.0       21.6
#>  4 Infan… Educa… Educ… -0.0301     0.0449    -0.670 5.07e- 1 -0.118      0.0580
#>  5 Ferti… Agric… (Int… 60.3        4.25      14.2   3.22e-18 52.0       68.6
#>  6 Ferti… Agric… Agri…  0.194      0.0767     2.53  1.49e- 2  0.0438     0.345
#>  7 Infan… Agric… (Int… 20.3        1.06      19.2   2.46e-23 18.3       22.4
#>  8 Infan… Agric… Agri… -0.00781    0.0191    -0.409 6.84e- 1 -0.0452     0.0296
#>  9 Ferti… Educa… (Int… 74.2        2.35      31.6   7.35e-32 69.6       78.8
#> 10 Ferti… Educa… Educ… -0.788      0.129     -6.10  2.43e- 7 -1.04      -0.535
#> 11 Ferti… Educa… Cath…  0.111      0.0298     3.72  5.60e- 4  0.0525     0.169
#> 12 Infan… Educa… (Int… 19.7        0.825     23.9   7.93e-27 18.1       21.3
#> 13 Infan… Educa… Educ… -0.0224     0.0454    -0.495 6.23e- 1 -0.111      0.0665
#> 14 Infan… Educa… Cath…  0.0115     0.0105     1.10  2.79e- 1 -0.00904    0.0320
#> 15 Ferti… Agric… (Int… 59.9        3.99      15.0   6.35e-19 52.0       67.7
#> 16 Ferti… Agric… Agri…  0.110      0.0785     1.40  1.70e- 1 -0.0443     0.263
#> 17 Ferti… Agric… Cath…  0.115      0.0427     2.69  1.01e- 2  0.0312     0.199
#> 18 Infan… Agric… (Int… 20.3        1.04      19.4   3.37e-23 18.2       22.3
#> 19 Infan… Agric… Agri… -0.0201     0.0206    -0.976 3.35e- 1 -0.0604     0.0202
#> 20 Infan… Agric… Cath…  0.0166     0.0112     1.49  1.44e- 1 -0.00530    0.0386
#> # … with 1 more variable: sample.size <int>
```

There are also polish_* commands, which are more or less simple wrappers around operations that you may want to run on the results dataset, like filtering or renaming. The list of polish commands can be found in ?mason::polish.
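Because scrub() returns a plain tibble, you can also do this kind of polishing directly with dplyr instead of the polish_*() wrappers. For example, dropping the intercept rows and renaming the term columns (dp_clean is the scrubbed object from above):

```r
library(dplyr)
dp_clean %>%
  filter(term != '(Intercept)') %>%              # keep only the predictor terms
  rename(outcome = Yterms, predictor = Xterms)   # clearer column names
```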