`R/qda-shrink-mean.r`

`qda_shrink_mean.Rd`

Given a set of training data, this function builds the Shrinkage-mean-based
Diagonal Quadratic Discriminant Analysis (SmDQDA) classifier from Tong, Chen,
and Zhao (2012). The SmDQDA classifier incorporates a Lindley-type shrunken
mean estimator into the DQDA classifier from Dudoit et al. (2002). For more
about the DQDA classifier, see `qda_diag()`.

The SmDQDA classifier is a modification to QDA, where the off-diagonal elements of each class sample covariance matrix are set to zero.

```r
qda_shrink_mean(x, ...)

# S3 method for default
qda_shrink_mean(x, y, prior = NULL, ...)

# S3 method for formula
qda_shrink_mean(formula, data, prior = NULL, ...)

# S3 method for qda_shrink_mean
predict(object, newdata, type = c("class", "prob", "score"), ...)
```

| Argument | Description |
|---|---|
| `x` | Matrix or data frame containing the training data. The rows are the sample observations, and the columns are the features. Only complete data are retained. |
| `...` | Additional arguments (not currently used). |
| `y` | Vector of class labels for each training observation. Only complete data are retained. |
| `prior` | Vector with prior probabilities for each class. If `NULL` (default), then equal probabilities are used. See details. |
| `formula` | A formula of the form `groups ~ x1 + x2 + ...`, where the response is the grouping factor and the right-hand side specifies the discriminators. |
| `data` | Data frame from which the variables specified in `formula` are to be taken. |
| `object` | Fitted `qda_shrink_mean` model object. |
| `newdata` | Matrix or data frame of observations to predict. Each row corresponds to a new observation. |
| `type` | Prediction type: either `"class"`, `"prob"`, or `"score"`. |

A `qda_shrink_mean` object that contains the trained SmDQDA classifier.

The DQDA classifier is a modification to the well-known QDA classifier, where the off-diagonal elements of each class covariance matrix are assumed to be zero -- the features are assumed to be uncorrelated. Under multivariate normality, the assumption of uncorrelated features is equivalent to the assumption of independent features. The feature-independence assumption is a notable attribute of the Naive Bayes classifier family. The benefit of these classifiers is that they are fast and have far fewer parameters to estimate, especially when the number of features is quite large.
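As an illustration of the diagonal-covariance idea, zeroing the off-diagonal elements of a sample covariance matrix keeps only the per-feature variances. This is a base-R sketch, not the package's internal code:

```r
# Base-R sketch of the diagonal-covariance assumption behind DQDA.
# (Illustrative only; not the internals of qda_shrink_mean.)
set.seed(42)
x <- matrix(rnorm(20 * 3), nrow = 20, ncol = 3)  # 20 observations, 3 features

full_cov <- cov(x)                # full sample covariance matrix
diag_cov <- diag(diag(full_cov))  # off-diagonal elements set to zero

# Per-feature variances are unchanged; only covariances are discarded.
all.equal(diag(full_cov), diag(diag_cov))
```

With p features, the diagonal model estimates p variances per class instead of p(p+1)/2 covariance parameters, which is the source of the speed and stability noted above.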

The matrix of training observations is given in `x`. The rows of `x` contain the sample observations, and the columns contain the features for each training observation.

The vector of class labels given in `y` is coerced to a `factor`. The length of `y` should match the number of rows in `x`.

An error is thrown if a given class has fewer than 2 observations, because the variance of each feature within a class cannot be estimated from fewer than 2 observations.
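This requirement mirrors base R's behavior: `stats::var()` cannot compute a variance from a single observation.

```r
# Variance is undefined for a single observation, so a class with fewer
# than 2 observations cannot supply per-feature variance estimates.
var(c(3.1))       # NA: no variance from one observation
var(c(3.1, 2.8))  # a finite variance once there are 2 observations
```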

The vector `prior` contains the *a priori* class membership probabilities for each class. If `prior` is `NULL` (default), the class membership probabilities are estimated as the sample proportion of observations belonging to each class. Otherwise, `prior` should be a vector with the same length as the number of classes in `y`. The `prior` probabilities should be nonnegative and sum to one.
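A minimal sketch of the sample-proportion estimate described above (illustrative only, assuming the behavior matches this description; not the package's internal code):

```r
# Sketch: estimating class priors as sample proportions (prior = NULL).
y <- factor(c("setosa", "setosa", "setosa", "versicolor", "versicolor"))

prior <- as.vector(table(y)) / length(y)  # counts per class / total n
names(prior) <- levels(y)

prior  # nonnegative and sums to one, as required
```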

Tong, T., Chen, L., and Zhao, H. (2012), "Improved Mean Estimation and Its Application to Diagonal Discriminant Analysis," Bioinformatics, 28, 4, 531-537. http://bioinformatics.oxfordjournals.org/content/28/4/531.long

Dudoit, S., Fridlyand, J., and Speed, T. P. (2002), "Comparison of Discrimination Methods for the Classification of Tumors Using Gene Expression Data," Journal of the American Statistical Association, 97, 457, 77-87.

```r
library(modeldata)
data(penguins)

pred_rows <- seq(1, 344, by = 20)
penguins <- penguins[, c("species", "body_mass_g", "flipper_length_mm")]

smdqda_out <- qda_shrink_mean(species ~ ., data = penguins[-pred_rows, ])
predicted <- predict(smdqda_out, penguins[pred_rows, -1], type = "class")

smdqda_out2 <- qda_shrink_mean(x = penguins[-pred_rows, -1],
                               y = penguins$species[-pred_rows])
predicted2 <- predict(smdqda_out2, penguins[pred_rows, -1], type = "class")

all.equal(predicted, predicted2)
#> [1] TRUE
```