Research Article Open Access

Performance Analysis of Deep Learning Libraries: TensorFlow and PyTorch

Felipe Florencio¹, Thiago Valença¹, Edward David Moreno¹ and Methanias Colaço Junior¹
  • ¹ Universidade Federal de Sergipe, Brazil
Journal of Computer Science
Volume 15 No. 6, 2019, 785-799

DOI: https://doi.org/10.3844/jcssp.2019.785.799

Submitted On: 1 January 2019 Published On: 11 April 2019

How to Cite: Florencio, F., Valença, T., Moreno, E. D. & Colaço Junior, M. (2019). Performance Analysis of Deep Learning Libraries: TensorFlow and PyTorch. Journal of Computer Science, 15(6), 785-799. https://doi.org/10.3844/jcssp.2019.785.799

Abstract

With the growth of deep learning research and adoption in recent years, specific libraries for Deep Neural Networks (DNNs) have been developed. Each of these libraries has different performance characteristics and applies different techniques to optimize algorithm implementations. Consequently, the same algorithm implemented with different libraries can show considerable variation in execution performance. For this reason, developers and scientists working with deep learning need experimental studies that examine the performance of these libraries. This paper therefore evaluates and compares two such libraries: TensorFlow and PyTorch. We considered three parameters: hardware utilization, hardware temperature and execution time, in the context of a heterogeneous platform with CPU and GPU. We used the MNIST database to train and test the LeNet Convolutional Neural Network (CNN). We performed a scientific experiment following the Goal Question Metric (GQM) methodology, and the data were validated through statistical tests. After data analysis, we show that the PyTorch library presented better performance, even though the TensorFlow library presented a higher GPU utilization rate.
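The abstract describes benchmarking the LeNet CNN on MNIST under both libraries, measuring execution time on CPU and GPU. As an illustration of the kind of setup involved, below is a minimal sketch in PyTorch, assuming the classic LeNet-5 layout adapted to 28×28 MNIST inputs. The synthetic random batch and the simple `time.perf_counter` timing harness are illustrative stand-ins, not the authors' actual benchmark procedure or measurement tooling.

```python
import time
import torch
import torch.nn as nn

class LeNet(nn.Module):
    """LeNet-5-style CNN for 28x28 single-channel (MNIST) inputs."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 28x28 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                            # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),            # -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                            # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, 10),                          # 10 MNIST classes
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Run on GPU when available, mirroring the paper's heterogeneous CPU/GPU setting.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LeNet().to(device)

batch = torch.randn(64, 1, 28, 28, device=device)  # stand-in for an MNIST mini-batch

start = time.perf_counter()
logits = model(batch)
elapsed = time.perf_counter() - start
print(logits.shape, f"forward pass: {elapsed:.4f}s")
```

In a full benchmark one would wrap the training loop (loss, optimizer, data loader) in the same timing harness and collect hardware utilization and temperature from external monitors, as the paper's three-parameter comparison requires.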


Keywords

  • TensorFlow
  • PyTorch
  • Comparison
  • Performance Evaluation
  • Benchmarking
  • Deep Learning Library