Gaussian Mixture Model (GMM) Training

>>> from mlpack import gmm_train

This program fits a parametric Gaussian mixture model (GMM) to data, using the EM algorithm to find the maximum likelihood estimate of the model parameters. The trained model may be saved and reused by other mlpack GMM tools.

The input data to train on must be specified with the 'input' parameter, and the number of Gaussians in the model must be specified with the 'gaussians' parameter. Optionally, many trials with different random initializations may be run, and the result with highest log-likelihood on the training data will be taken. The number of trials to run is specified with the 'trials' parameter. By default, only one trial is run.

The tolerance for convergence and maximum number of iterations of the EM algorithm are specified with the 'tolerance' and 'max_iterations' parameters, respectively. The GMM may be initialized for training with another model, specified with the 'input_model' parameter. Otherwise, the model is initialized by running k-means on the data. The k-means clustering initialization can be controlled with the 'refined_start', 'samplings', and 'percentage' parameters. If 'refined_start' is specified, then the Bradley-Fayyad refined start initialization will be used. This can often lead to better clustering results.
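As a sketch, a run with a tighter tolerance, a higher iteration cap, and the Bradley-Fayyad refined start might look like the following (the parameter values here are illustrative choices, not recommended defaults):

>>> output = gmm_train(input=data, gaussians=3, tolerance=1e-8,
...                    max_iterations=500, refined_start=True,
...                    samplings=100, percentage=0.02)
>>> gmm = output['output_model']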

The 'diagonal_covariance' flag will cause the learned covariances to be diagonal matrices. This significantly simplifies the model itself and causes training to be faster, but restricts the ability to fit more complex GMMs.
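The simplification is substantial: in d dimensions, a full covariance matrix has d(d+1)/2 free parameters per Gaussian, while a diagonal one has only d. A diagonal-covariance fit might be sketched as:

>>> output = gmm_train(input=data, gaussians=3, diagonal_covariance=True)
>>> gmm = output['output_model']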

If GMM training fails with an error indicating that a covariance matrix could not be inverted, make sure that the 'no_force_positive' parameter is not specified. Alternatively, adding a small amount of Gaussian noise (using the 'noise' parameter) to the entire dataset may help prevent Gaussians with zero variance in a particular dimension, which is usually the cause of non-invertible covariance matrices.
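For example, a small noise variance could be added at training time like this (the value 1e-5 is an illustrative choice; an appropriate value depends on the scale of the data):

>>> output = gmm_train(input=data, gaussians=3, noise=1e-5)
>>> gmm = output['output_model']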

The 'no_force_positive' parameter, if set, skips the checks performed after each iteration of the EM algorithm that ensure the covariance matrices are positive definite. Specifying the flag can result in a faster runtime, but may also allow non-positive definite covariance matrices, which will cause the program to crash.
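A sketch of trading that safety check for speed, with the caveat above in mind:

>>> output = gmm_train(input=data, gaussians=3, no_force_positive=True)
>>> gmm = output['output_model']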

As an example, to train a 6-Gaussian GMM on the data in 'data' with a maximum of 100 iterations of EM and 3 trials, saving the trained GMM to 'gmm', the following command can be used:

>>> output = gmm_train(input=data, gaussians=6, trials=3, max_iterations=100)
>>> gmm = output['output_model']

To re-train that GMM on another set of data 'data2', the following command may be used:

>>> output = gmm_train(input_model=gmm, input=data2, gaussians=6)
>>> new_gmm = output['output_model']

input options

output options

The return value from the binding is a dict containing the following elements: