# Using DevTreatRules

Here, we illustrate the DevTreatRules package by building and evaluating treatment rules based on the example dataset obsStudyGeneExpressions included with the package.

library(DevTreatRules)
head(obsStudyGeneExpressions)
##   no_relapse intervention prognosis clinic age gene_1 gene_2 gene_3 gene_4
## 1          1            0      Good      1  46  0.531  0.071 0.1294  0.427
## 2          0            1      Poor      3  51  0.744  0.711 1.3532  1.939
## 3          1            0      Poor      3  62  1.146  0.498 1.4707  1.456
## 4          1            0      Good      5  51  1.816  1.758 0.2226  0.688
## 5          1            1      Good      4  52  0.403  0.636 0.0933  0.288
## 6          1            1      Poor      6  51  1.797  0.642 0.2618  0.277
##   gene_5 gene_6 gene_7 gene_8 gene_9 gene_10
## 1  0.421  0.402  0.831 0.0136  1.186  1.8831
## 2  0.230  0.228  0.282 1.3413  1.106  0.6606
## 3  0.291  0.543  0.915 0.4839  1.193  0.6233
## 4  0.620  1.571  1.606 1.9986  0.785  1.7684
## 5  0.300  1.276  0.642 1.9003  0.521  0.0126
## 6  1.053  0.939  1.736 0.2348  1.480  1.3908

## Split the Dataset

First, we split obsStudyGeneExpressions into independent development/validation/evaluation partitions by calling the SplitData() function

set.seed(123)
example.split <- SplitData(data=obsStudyGeneExpressions, n.sets=3, split.proportions=c(0.5, 0.25, 0.25))
table(example.split$partition)

## 
## development  validation  evaluation 
##        2500        1250        1250

and then forming independent datasets based on the partition variable created above.

library(dplyr)
development.data <- example.split %>% filter(partition == "development")
validation.data <- example.split %>% filter(partition == "validation")
evaluation.data <- example.split %>% filter(partition == "evaluation")

## Specify Variable Roles

Suppose we believe the variables prognosis, clinic, and age may have influenced treatment assignment. We would codify this knowledge into DevTreatRules by specifying the argument

names.influencing.treatment=c("prognosis", "clinic", "age")

in functions we will call later in this vignette.

Further suppose that we don’t expect prognosis and clinic to be measured on the same scale in independent clinical settings where we would like to apply our estimated rule (so they are not sensible rule inputs). However, we do expect the gene expression measurements in gene_1, …, gene_10 to potentially influence treatment response and also to be reliably measured in future settings (so they are sensible rule inputs). We specify this knowledge in DevTreatRules with the argument

names.influencing.rule=c("age", paste0("gene_", 1:10))

## On the Development Dataset, Build the Treatment Rule

Although we could directly estimate a single treatment rule on the development dataset with BuildRule() using, for example,

one.rule <- BuildRule(development.data=development.data,
study.design="observational",
prediction.approach="split.regression",
name.outcome="no_relapse",
type.outcome="binary",
desirable.outcome=TRUE,
name.treatment="intervention",
names.influencing.treatment=c("prognosis", "clinic", "age"),
names.influencing.rule=c("age", paste0("gene_", 1:10)),
propensity.method="logistic.regression",
rule.method="glm.regression")

this has limited utility because it requires us to specify just one value for the prediction.approach argument (even if we don’t have a priori knowledge about which of the split-regression, OWL framework, and direct-interactions approaches will perform best) and just one regression method for the propensity.method and rule.method arguments (even if we are not sure whether standard logistic regression or lasso/ridge logistic regression will yield a better rule). Instead, it would be more useful to perform model selection by estimating rules for the different combinations of split-regression/OWL framework/direct-interactions and standard/lasso/ridge logistic regression (e.g. by looping over calls to BuildRule(), as sketched below). The model-selection process is automated in CompareRulesOnValidation().

## On the Evaluation Dataset, Evaluate the Selected Rules

Since the validation dataset informed our model selection (i.e. we selected these particular rules because they appeared best on the validation set), the estimates from the validation set itself are not trustworthy estimates of performance in independent settings. To obtain a trustworthy estimate of the rules’ performance in independent samples drawn from the same population, we turn to the EvaluateRule() function applied to the independent evaluation dataset.
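The evaluations below use three rule objects, rule.split, rule.OWL, and rule.DI, chosen during model selection. As a minimal sketch (not the exact model-selection workflow), they could be obtained by looping over BuildRule() on the development data; note that the prediction.approach values "OWL.framework" and "direct.interactions" are assumptions here, since only "split.regression" appears explicitly above.

# Hedged sketch: build one rule per prediction approach on the development data.
# The values "OWL.framework" and "direct.interactions" are assumed names; only
# "split.regression" is shown explicitly earlier in this vignette.
approaches <- c(split = "split.regression",
                OWL = "OWL.framework",
                DI = "direct.interactions")
rules <- lapply(approaches, function(approach) {
    BuildRule(development.data=development.data,
              study.design="observational",
              prediction.approach=approach,
              name.outcome="no_relapse",
              type.outcome="binary",
              desirable.outcome=TRUE,
              name.treatment="intervention",
              names.influencing.treatment=c("prognosis", "clinic", "age"),
              names.influencing.rule=c("age", paste0("gene_", 1:10)),
              propensity.method="logistic.regression",
              rule.method="glm.regression")
})
rule.split <- rules[["split"]]
rule.OWL <- rules[["OWL"]]
rule.DI <- rules[["DI"]]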
First, we obtain the estimated performance of the rule from split-regression with

set.seed(123)
split.eval <- EvaluateRule(evaluation.data=evaluation.data,
BuildRule.object=rule.split,
study.design="observational",
name.outcome="no_relapse",
type.outcome="binary",
desirable.outcome=TRUE,
name.treatment="intervention",
names.influencing.treatment=c("prognosis", "clinic", "age"),
names.influencing.rule=c("age", paste0("gene_", 1:10)),
propensity.method="logistic.regression",
bootstrap.CI=FALSE)
split.eval$summaries
##                n.positives ATE.positives n.negatives ATE.negatives     ABR
## estimated.rule         685        0.1529         565       -0.0540  0.1082
## treat.all             1250        0.0663           0            NA  0.0663
## treat.none               0            NA        1250        0.0663 -0.0663

Next, the rule from the OWL framework yields the following estimates

set.seed(123)
OWL.eval <- EvaluateRule(evaluation.data=evaluation.data,
BuildRule.object=rule.OWL,
study.design="observational",
name.outcome="no_relapse",
type.outcome="binary",
desirable.outcome=TRUE,
name.treatment="intervention",
names.influencing.treatment=c("prognosis", "clinic", "age"),
names.influencing.rule=c("age", paste0("gene_", 1:10)),
propensity.method="logistic.regression",
bootstrap.CI=FALSE)
OWL.eval$summaries

##                n.positives ATE.positives n.negatives ATE.negatives     ABR
## estimated.rule         691        0.1511         559       -0.0518  0.1067
## treat.all             1250        0.0663           0            NA  0.0663
## treat.none               0            NA        1250        0.0663 -0.0663

And last, the rule from direct-interactions yields the following estimates

set.seed(123)
DI.eval <- EvaluateRule(evaluation.data=evaluation.data,
BuildRule.object=rule.DI,
study.design="observational",
name.outcome="no_relapse",
type.outcome="binary",
desirable.outcome=TRUE,
name.treatment="intervention",
names.influencing.treatment=c("prognosis", "clinic", "age"),
names.influencing.rule=c("age", paste0("gene_", 1:10)),
propensity.method="logistic.regression",
bootstrap.CI=FALSE)
DI.eval$summaries
##                n.positives ATE.positives n.negatives ATE.negatives     ABR
## estimated.rule         685        0.1584         565       -0.0500  0.1094
## treat.all             1250        0.0663           0            NA  0.0663
## treat.none               0            NA        1250        0.0663 -0.0663
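To compare the three rules side by side on the evaluation set, their estimated ABR values could be collected into one vector. This is a small hedged sketch: it assumes each $summaries object can be indexed by the "estimated.rule" row and "ABR" column shown in the printed output above.

# Hedged sketch: collect the estimated rules' ABR on the evaluation set.
# Assumes each $summaries object has an "estimated.rule" row and an "ABR" column.
c(split.regression = split.eval$summaries["estimated.rule", "ABR"],
  OWL.framework = OWL.eval$summaries["estimated.rule", "ABR"],
  direct.interactions = DI.eval$summaries["estimated.rule", "ABR"])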

We could also have obtained bootstrap-based CIs for the ATE/ABR estimates (in both the validation and evaluation datasets) by specifying the following arguments to CompareRulesOnValidation() and EvaluateRule()

bootstrap.CI=TRUE
bootstrap.CI.replications=1000 # or any other number of replications

but we chose not to compute CIs in this example to minimize run-time.
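For example, the split-regression evaluation above could be re-run with bootstrap CIs. This sketch repeats the earlier EvaluateRule() call, changing only the two bootstrap arguments (the object name split.eval.CI is just for illustration):

# Same call as split.eval above, but requesting bootstrap-based CIs;
# run-time grows with the number of replications.
set.seed(123)
split.eval.CI <- EvaluateRule(evaluation.data=evaluation.data,
BuildRule.object=rule.split,
study.design="observational",
name.outcome="no_relapse",
type.outcome="binary",
desirable.outcome=TRUE,
name.treatment="intervention",
names.influencing.treatment=c("prognosis", "clinic", "age"),
names.influencing.rule=c("age", paste0("gene_", 1:10)),
propensity.method="logistic.regression",
bootstrap.CI=TRUE,
bootstrap.CI.replications=1000)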

## References

- Yingqi Zhao, Donglin Zeng, A. John Rush, and Michael R. Kosorok (2012). Estimating individualized treatment rules using outcome weighted learning. Journal of the American Statistical Association, 107(499), 1106–1118.
- Lu Tian, Ash A. Alizadeh, Andrew J. Gentles, and Robert Tibshirani (2014). A simple method for estimating interactions between a treatment and a large number of covariates. Journal of the American Statistical Association, 109(508), 1517–1532.
- Shuai Chen, Lu Tian, Tianxi Cai, and Menggang Yu (2017). A general statistical framework for subgroup identification and comparative treatment scoring. Biometrics, 73(4), 1199–1209.
- Jeremy Roth and Noah Simon (2019). Using propensity scores to develop and evaluate treatment rules with observational data. Manuscript in progress.
- Jeremy Roth and Noah Simon (2019). Elucidating outcome-weighted-learning and its comparison to split-regression: direct vs. indirect methods in practice. Manuscript in progress.