Research Article Open Access

PARALLEL IMPLEMENTATION OF EXPECTATION-MAXIMISATION ALGORITHM FOR THE TRAINING OF GAUSSIAN MIXTURE MODELS

G. F. Araújo1, H. T. Macedo1, M. T. Chella1, C. A.E. Montesco1 and M. V.O. Medeiros1
  • 1 Brazil

Abstract

Most machine learning algorithms need to handle large data sets, which often imposes limits on processing time and memory. Expectation-Maximization (EM) is one such algorithm; it is used to train one of the most widely used parametric statistical models, the Gaussian Mixture Model (GMM). All steps of the algorithm are potentially parallelizable, since each iterates over the entire data set. In this study, we propose a parallel implementation of EM for training GMMs using CUDA. Experiments are performed with a UCI dataset and the results show a speedup of 7 compared to the sequential version. We have also modified the code to provide better global memory access and shared memory usage, reaching up to 56.4% achieved occupancy regardless of the number of Gaussians considered in the set of experiments.
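To illustrate the kind of data-parallel decomposition the abstract describes, the sketch below shows a minimal CUDA E-step kernel for a diagonal-covariance GMM, with one thread computing the responsibilities of a single data point. The kernel and variable names (em_estep_kernel, d_resp and so on) are hypothetical, and the code is not the authors' implementation; it omits the numerical-stability and memory-access optimisations discussed in the paper.

// Illustrative sketch only: one thread per data point computes the
// responsibilities r(n, k) for a GMM with diagonal covariances.
// Names and layout are assumptions, not taken from the article.
#include <cuda_runtime.h>
#include <math.h>

__global__ void em_estep_kernel(const float *d_data,   // N x D data matrix, row-major
                                const float *d_mean,   // K x D component means
                                const float *d_var,    // K x D diagonal covariances
                                const float *d_weight, // K mixture weights
                                float *d_resp,         // N x K responsibilities (output)
                                int N, int D, int K)
{
    const float TWO_PI = 6.283185307f;
    int n = blockIdx.x * blockDim.x + threadIdx.x;
    if (n >= N) return;

    float norm = 0.0f;
    for (int k = 0; k < K; ++k) {
        // Accumulate log( w_k * N(x_n | mu_k, diag(var_k)) ) in log space
        float logp = logf(d_weight[k]);
        for (int d = 0; d < D; ++d) {
            float diff = d_data[n * D + d] - d_mean[k * D + d];
            float v    = d_var[k * D + d];
            logp += -0.5f * (logf(TWO_PI * v) + diff * diff / v);
        }
        float p = expf(logp);
        d_resp[n * K + k] = p;
        norm += p;
    }
    // Normalise so the responsibilities of each point sum to one
    for (int k = 0; k < K; ++k)
        d_resp[n * K + k] /= norm;
}

Under these assumptions the kernel would be launched with roughly N / blockDim.x blocks (rounded up); the M-step reductions over d_resp, which the paper also parallelises, are not shown here.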

Journal of Computer Science
Volume 10 No. 10, 2014, 2124-2134

DOI: https://doi.org/10.3844/jcssp.2014.2124.2134

Submitted On: 18 November 2013
Published On: 9 July 2014

How to Cite: Araújo, G. F., Macedo, H. T., Chella, M. T., Montesco, C. A. & Medeiros, M. V. (2014). PARALLEL IMPLEMENTATION OF EXPECTATION-MAXIMISATION ALGORITHM FOR THE TRAINING OF GAUSSIAN MIXTURE MODELS. Journal of Computer Science, 10(10), 2124-2134. https://doi.org/10.3844/jcssp.2014.2124.2134

Keywords

  • Expectation-Maximization (EM)
  • Gaussian Mixture Models (GMM)
  • CUDA