Given a set of training data, this function builds the Diagonal Linear Discriminant Analysis (DLDA) classifier, which is often attributed to Dudoit et al. (2002). The DLDA classifier belongs to the family of Naive Bayes classifiers, where the distribution of each class is assumed to be multivariate normal with a common (diagonal) covariance matrix shared across classes.

The DLDA classifier is a modification of LDA in which the off-diagonal elements of the pooled sample covariance matrix are set to zero.

    lda_diag(x, ...)

    # S3 method for default
    lda_diag(x, y, prior = NULL, ...)

    # S3 method for formula
    lda_diag(formula, data, prior = NULL, ...)

    # S3 method for lda_diag
    predict(object, newdata, type = c("class", "prob", "score"), ...)

| Argument | Description |
|---|---|
| `x` | Matrix or data frame containing the training data. The rows are the sample observations, and the columns are the features. Only complete data are retained. |
| `...` | Additional arguments (not currently used). |
| `y` | Vector of class labels for each training observation. Only complete data are retained. |
| `prior` | Vector of prior probabilities for each class. If `NULL` (default), equal probabilities are used. See Details. |
| `formula` | A formula of the form `response ~ predictors`, where the response is the grouping factor (class labels) and the right-hand side specifies the features. |
| `data` | Data frame from which the variables specified in `formula` are to be taken. |
| `object` | Fitted model object. |
| `newdata` | Matrix or data frame of observations to predict. Each row corresponds to a new observation. |
| `type` | Prediction type: either `"class"`, `"prob"`, or `"score"`. |

The model fitting function returns the fitted classifier. The `predict()` method returns either a vector (`type = "class"`) or a data frame (all other `type` values).
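As a rough sketch of the difference between the `type` values (mirroring the Examples section below; the column selection and use of `na.omit()` here are illustrative, not required by the function):

    library(modeldata)
    data(penguins)

    # Keep the species label plus two numeric features; drop incomplete rows
    penguins <- na.omit(penguins[, c("species", "body_mass_g", "flipper_length_mm")])

    fit <- lda_diag(species ~ ., data = penguins)

    # A vector of predicted class labels
    head(predict(fit, penguins[, -1], type = "class"))

    # A data frame of class probabilities (one column per class is assumed here)
    head(predict(fit, penguins[, -1], type = "prob"))

    # A data frame of per-class discriminant scores
    head(predict(fit, penguins[, -1], type = "score"))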

The DLDA classifier is a modification of the well-known LDA classifier in which the off-diagonal elements of the pooled sample covariance matrix are assumed to be zero -- that is, the features are assumed to be uncorrelated. Under multivariate normality, the assumption of uncorrelated features is equivalent to the assumption of independent features. The feature-independence assumption is a notable attribute of the Naive Bayes classifier family. The benefit of these classifiers is that they are fast and have far fewer parameters to estimate, especially when the number of features is large.
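To make the parameter savings concrete, below is a minimal sketch of the class score implied by these assumptions; the function and argument names are illustrative and are not part of the package:

    # DLDA score for a single observation and one class (smaller is better):
    # squared, variance-scaled distances to the class mean, plus the prior term.
    # x_new: numeric feature vector; x_bar_k: class mean vector;
    # s2: pooled per-feature variances (the diagonal of the pooled covariance);
    # prior_k: prior probability of class k.
    dlda_score <- function(x_new, x_bar_k, s2, prior_k) {
      sum((x_new - x_bar_k)^2 / s2) - 2 * log(prior_k)
    }
    # An observation is assigned to the class with the smallest score, and only
    # the p per-feature variances are estimated rather than all p * (p + 1) / 2
    # entries of a full pooled covariance matrix.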

The matrix of training observations is given in `x`. The rows of `x` contain the sample observations, and the columns contain the features for each training observation.

The vector of class labels given in `y` is coerced to a `factor`. The length of `y` should match the number of rows in `x`.

An error is thrown if any class has fewer than 2 observations, because the variance of each feature within a class cannot be estimated from a single observation.
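A quick sanity check on inputs before fitting might look like the following (the checks are illustrative; the fitting function performs its own validation):

    # Toy inputs: 10 observations, 2 features, 2 classes of 5 observations each
    x <- matrix(rnorm(20), nrow = 10, ncol = 2)
    y <- factor(rep(c("a", "b"), each = 5))

    stopifnot(nrow(x) == length(y))   # y must have one label per row of x
    stopifnot(all(table(y) >= 2))     # each class needs at least 2 observations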

The vector `prior` contains the *a priori* class membership probabilities for each class. If `prior` is `NULL` (default), the class membership probabilities are estimated as the sample proportion of observations belonging to each class. Otherwise, `prior` should be a vector with the same length as the number of classes in `y`. The `prior` probabilities should be nonnegative and sum to one.
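For example, an explicit prior could be supplied as follows (the values are arbitrary and only for illustration; `species` has three classes, so `prior` has length three and sums to one):

    library(modeldata)
    data(penguins)
    penguins <- na.omit(penguins[, c("species", "body_mass_g", "flipper_length_mm")])

    # Down-weight the first class relative to the sample proportions
    fit_prior <- lda_diag(species ~ ., data = penguins,
                          prior = c(0.2, 0.4, 0.4))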

Dudoit, S., Fridlyand, J., & Speed, T. P. (2002). "Comparison of Discrimination Methods for the Classification of Tumors Using Gene Expression Data," Journal of the American Statistical Association, 97(457), 77-87.

    library(modeldata)
    data(penguins)

    pred_rows <- seq(1, 344, by = 20)
    penguins <- penguins[, c("species", "body_mass_g", "flipper_length_mm")]

    dlda_out <- lda_diag(species ~ ., data = penguins[-pred_rows, ])
    predicted <- predict(dlda_out, penguins[pred_rows, -1], type = "class")

    dlda_out2 <- lda_diag(x = penguins[-pred_rows, -1], y = penguins$species[-pred_rows])
    predicted2 <- predict(dlda_out2, penguins[pred_rows, -1], type = "class")

    all.equal(predicted, predicted2)
    #> [1] TRUE