
Performance Analysis of Message Passing Interface Collective Communication on Intel Xeon Quad-Core Gigabit Ethernet and Infiniband Clusters

Roswan Ismail¹, Nor Asilah Wati Abdul Hamid², Mohamed Othman² and Rohaya Latip²
  • ¹ Universiti Pendidikan Sultan Idris, Malaysia
  • ² Universiti Putra Malaysia, Malaysia
Journal of Computer Science
Volume 9 No. 4, 2013, 455-462

DOI: https://doi.org/10.3844/jcssp.2013.455.462

Published On: 9 May 2013

How to Cite: Ismail, R., Hamid, N. A. W. A., Othman, M. & Latip, R. (2013). Performance Analysis of Message Passing Interface Collective Communication on Intel Xeon Quad-Core Gigabit Ethernet and Infiniband Clusters. Journal of Computer Science, 9(4), 455-462. https://doi.org/10.3844/jcssp.2013.455.462

Abstract

The performance of MPI collective operations remains a critical issue for high performance computing systems, particularly as processor technology advances. This study therefore benchmarks an MPI implementation on multi-core architecture by measuring the performance of Open MPI collective communication on Intel Xeon dual quad-core Gigabit Ethernet and InfiniBand clusters using SKaMPI. It focuses on well-known collective communication routines such as MPI_Bcast, MPI_Alltoall, MPI_Scatter and MPI_Gather. Across the collected results, MPI collective communication on the InfiniBand cluster showed distinctly better performance in terms of latency and throughput. The analysis indicates that the algorithms used for collective communication performed well at all message sizes, except for the MPI_Bcast and MPI_Alltoall operations in inter-node communication. InfiniBand provided the lowest latency for all operations because it offers applications a direct, easy-to-use messaging service, whereas Gigabit Ethernet must still ask the operating system for access to the server's communication resources, adding a complex exchange between the application and the network.


Keywords

  • MPI Benchmark
  • Performance Analysis
  • MPI Communication
  • Open MPI
  • Gigabit
  • InfiniBand