A parallelization model for performance characterization of Spark Big Data jobs on Hadoop clusters

N. Ahmed*, Andre L.C. Barczak, Mohammad A. Rashid, Teo Susnjak

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

14 Citations (Scopus)
64 Downloads (Pure)


This article proposes a new parallel performance model for different workloads of Spark Big Data applications running on Hadoop clusters. The proposed model can predict the runtime for generic workloads as a function of the number of executors, without necessarily knowing how the algorithms were implemented. For a given problem size, it is shown that a model based on serial boundaries for a 2D arrangement of executors can fit the empirical data for various workloads. The empirical data was obtained from a real Hadoop cluster, using Spark and HiBench. The workloads used in this work included WordCount, SVM, Kmeans, PageRank and Graph (Nweight). A particular runtime pattern emerged when adding more executors to run a job: for some workloads, the runtime increased as more executors were added. This phenomenon is predicted by the new parallelization model. The resulting equation explains certain performance patterns that fit neither Amdahl's law nor Gustafson's equation. The results show that the proposed model achieved the best fit for all workloads and most of the data sizes, using the R-squared metric to assess the accuracy of the fit to the empirical data. The proposed model has an advantage over machine learning models due to its simplicity, requiring fewer experiments to fit the data. This is very useful to practitioners in the area of Big Data because they can predict the runtime of specific applications by analysing the logs. In this work, the model is limited to changes in the number of executors for a fixed problem size.
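The phenomenon described in the abstract, runtime growing again once too many executors are added, can be illustrated with a minimal sketch. The functional form below is an assumption for illustration only, not the paper's actual equation: a serial term, a perfectly parallel term, plus an overhead term that grows with the boundary length of a 2D arrangement of executors (proportional to the square root of the executor count). Amdahl's law, which lacks the overhead term, is shown for comparison.

```python
import math

# Hypothetical runtime model (illustrative, NOT the paper's exact equation):
# a fixed serial part, a part that divides evenly across p executors, and an
# overhead that grows like the perimeter of a 2D grid of p executors (~sqrt(p)).
def runtime_2d(p, t_serial=10.0, t_parallel=400.0, c_overhead=4.0):
    return t_serial + t_parallel / p + c_overhead * math.sqrt(p)

# Amdahl-style prediction for the same serial/parallel split: runtime can only
# decrease (toward t_serial) as executors are added.
def runtime_amdahl(p, t_serial=10.0, t_parallel=400.0):
    return t_serial + t_parallel / p

if __name__ == "__main__":
    for p in (1, 4, 16, 64, 256):
        print(f"p={p:3d}  2D-boundary={runtime_2d(p):7.2f}  "
              f"Amdahl={runtime_amdahl(p):7.2f}")
```

With these (made-up) constants the 2D-boundary model reaches a minimum and then worsens as executors are added, while the Amdahl curve keeps decreasing monotonically, matching the qualitative pattern the abstract reports.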

Original language: English
Article number: 107
Journal: Journal of Big Data
Issue number: 1
Publication status: Published - 14 Aug 2021
Externally published: Yes


