Computes the maximum likelihood estimators (MLEs) for each class under the assumption that each class follows a multivariate normal distribution. Also computes the ancillary information needed to summarize the classifier, such as the sample size, the number of features, etc.

diag_estimates(x, y, prior = NULL, pool = FALSE, est_mean = c("mle", "tong"))
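A minimal usage sketch, assuming diag_estimates is available in the current session (it may be an internal helper of sparsediscrim, in which case sparsediscrim:::diag_estimates would be needed) and using the built-in iris data purely for illustration:

# Features in 'x', class labels in 'y'
x <- iris[, 1:4]
y <- iris$Species

# MLE-based estimates with the default settings
est <- diag_estimates(x = x, y = y)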

Arguments

x

Matrix or data frame containing the training data. The rows are the sample observations, and the columns are the features. Only complete data are retained.

y

Vector of class labels for each training observation. Only complete data are retained.

prior

Vector with prior probabilities for each class. If NULL (default), the sample proportion of observations in each class is used. See Details.

pool

Logical value. If TRUE, the sample variances are pooled across classes for each feature.

est_mean

The estimator used for the class means. By default, the maximum likelihood estimator (MLE) is used. To improve estimation, the shrinkage-based mean estimator proposed by Tong et al. (2012) can be used instead; see the sketch following these arguments.
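A hedged sketch of the last two arguments, continuing the iris example above (pool and the "tong" option are taken directly from the usage signature; their exact effect on the returned estimates depends on the package version):

# Pooled sample variances and the Tong et al. (2012) shrunken means
est_tong <- diag_estimates(x = x, y = y, pool = TRUE, est_mean = "tong")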

Value

A named list containing the estimators for each class along with the ancillary information needed by the classifier.
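Because the element names of the returned list are not documented here, str() is a safe way to inspect the result; the sketch below assumes the est object from the usage example above:

# Inspect the returned list; element names may vary by package version
str(est, max.level = 2)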

Details

This function computes the common estimates and ancillary information used in all of the diagonal classifiers in the sparsediscrim package.

The matrix of training observations is given in x. The rows of x contain the sample observations, and the columns contain the features for each observation.

The vector of class labels given in y is coerced to a factor. The length of y should match the number of rows in x.

An error is thrown if any class has fewer than 2 observations because the within-class variance of a feature cannot be estimated from a single observation. Features with zero variance are removed with a warning.
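For illustration, a hedged sketch of this sample-size check, continuing the iris example (the exact error message is implementation-dependent):

# Keep a single observation from the first class; this is expected to error
x_bad <- x[c(1, 51:150), ]
y_bad <- y[c(1, 51:150)]
try(diag_estimates(x = x_bad, y = y_bad))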

The vector prior contains the a priori class membership probability for each class. If prior is NULL (default), the class membership probabilities are estimated as the sample proportion of observations belonging to each class. Otherwise, prior should be a vector whose length equals the number of classes in y. The prior probabilities must be nonnegative and sum to one.
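For example, with the three classes in the iris data, a valid prior could be supplied as follows (the values here are arbitrary and only need to be nonnegative and sum to one):

# Explicit priors: one value per class, summing to one
est_prior <- diag_estimates(x = x, y = y, prior = c(0.5, 0.25, 0.25))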

References

Tong, T., Chen, L., and Zhao, H. (2012), "Improved Mean Estimation and Its Application to Diagonal Discriminant Analysis," Bioinformatics, 28(4), 531-537. http://bioinformatics.oxfordjournals.org/content/28/4/531.long