Performance Analysis of Message Passing Interface Collective Communication on Intel Xeon Quad-Core Gigabit Ethernet and Infiniband Clusters
- 1 Universiti Pendidikan Sultan Idris, Malaysia
- 2 Universiti Putra Malaysia, Malaysia
Abstract
The performance of MPI communication operations remains a critical issue for high performance computing systems, particularly as processor technology advances. Consequently, this study concentrates on benchmarking an MPI implementation on multi-core architecture by measuring the performance of Open MPI collective communication on Intel Xeon dual quad-core Gigabit Ethernet and InfiniBand clusters using SKaMPI. It focuses on the well-known collective communication routines MPI_Bcast, MPI_Alltoall, MPI_Scatter and MPI_Gather. The results show that MPI collective communication on the InfiniBand cluster performed distinctly better in terms of both latency and throughput. The analysis indicates that the algorithms used for collective communication performed well across all message sizes, except for the MPI_Bcast and MPI_Alltoall operations in inter-node communication. Nevertheless, InfiniBand delivered the lowest latency for all operations because it provides applications with direct access to an easy-to-use messaging service, whereas Gigabit Ethernet must still request access to the server's communication resources through the operating system, adding a complex exchange between the application and the network.
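For context only, the kind of latency measurement that SKaMPI performs on these routines can be illustrated with a minimal MPI program in C. This is a simplified sketch, not the published benchmark: the message size, iteration count and use of MPI_Bcast as the example collective are illustrative assumptions.

```c
/* Illustrative sketch: timing an MPI collective (here MPI_Bcast).
 * Message size and iteration count are arbitrary example values. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int iters = 100;              /* assumed repetition count      */
    const int msg_bytes = 1 << 20;      /* assumed 1 MiB message payload */
    char *buf = malloc(msg_bytes);

    /* Warm-up call so connection setup does not skew the timing. */
    MPI_Bcast(buf, msg_bytes, MPI_CHAR, 0, MPI_COMM_WORLD);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++)
        MPI_Bcast(buf, msg_bytes, MPI_CHAR, 0, MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    /* Report the mean per-call latency from the root rank. */
    if (rank == 0)
        printf("MPI_Bcast, %d bytes, %d ranks: %.3f us per call\n",
               msg_bytes, size, (t1 - t0) / iters * 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}
```

The same pattern applies to MPI_Alltoall, MPI_Scatter and MPI_Gather by swapping the collective call; SKaMPI additionally varies the message size and synchronizes processes more carefully than the simple barrier used here.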
DOI: https://doi.org/10.3844/jcssp.2013.455.462
Copyright: © 2013 Roswan Ismail, Nor Asilah Wati Abdul Hamid, Mohamed Othman and Rohaya Latip. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Keywords
- MPI Benchmark
- Performance Analysis
- MPI Communication
- Open MPI
- Gigabit
- InfiniBand