A comprehensive performance analysis of Apache Hadoop and Apache Spark for large scale data sets using HiBench

N. Ahmed*, Andre L.C. Barczak, Teo Susnjak, Mohammed A. Rashid

*Corresponding author for this work

Research output: Contribution to journal › Article › Research › peer-review

25 Citations (Scopus)
4 Downloads (Pure)

Abstract

Big Data analytics for storing, processing, and analyzing large-scale datasets has become an essential tool for industry. The advent of distributed computing frameworks such as Hadoop and Spark offers efficient solutions to analyze vast amounts of data. Owing to its application programming interface (API) availability and its performance, Spark has become very popular, even more popular than the MapReduce framework. Both frameworks have more than 150 configuration parameters, and the combination of these parameters has a massive impact on cluster performance. The default parameters allow system administrators to deploy their applications without much effort, and they can measure their specific cluster performance with factory-set values. However, an open question remains: can a new parameter selection improve cluster performance for large datasets? In this regard, this study investigates the most impactful parameters, related to resource utilization, input splits, and shuffle, to compare the performance of Hadoop and Spark on a cluster implemented in our laboratory. We used a trial-and-error approach for tuning these parameters, based on a large number of experiments. For the comparative analysis of the frameworks, we selected two workloads: WordCount and TeraSort. Performance was evaluated using three metrics: execution time, throughput, and speedup. Our experimental results revealed that the performance of both systems depends heavily on input data size and correct parameter selection. The analysis of the results shows that Spark outperforms Hadoop when datasets are small, achieving up to two times speedup in WordCount workloads and up to 14 times in TeraSort workloads when default parameter values are reconfigured.
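The three evaluation metrics named in the abstract can be sketched in a few lines. The following is an illustrative example, not taken from the paper: the function names and all timing and size values are hypothetical, chosen only to show how execution time, throughput, and speedup relate (e.g. a 14x speedup corresponds to the tuned run finishing in 1/14th of the baseline time).

```python
# Illustrative sketch of the paper's three metrics; all numeric values
# below are made up and do not come from the study's results.

def throughput(input_size_mb: float, execution_time_s: float) -> float:
    """Data processed per unit time (MB/s)."""
    return input_size_mb / execution_time_s

def speedup(baseline_time_s: float, tuned_time_s: float) -> float:
    """Ratio of baseline execution time to tuned execution time."""
    return baseline_time_s / tuned_time_s

if __name__ == "__main__":
    # Hypothetical TeraSort run on a 10 GB input (10,240 MB).
    hadoop_time = 840.0  # seconds with default parameters (made-up)
    spark_time = 60.0    # seconds with tuned parameters (made-up)
    print(f"Hadoop throughput: {throughput(10_240, hadoop_time):.1f} MB/s")
    print(f"Speedup (tuned Spark vs. Hadoop): {speedup(hadoop_time, spark_time):.1f}x")
```

A larger speedup value always means the tuned configuration finished faster; a value below 1 would indicate the reconfiguration hurt performance.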

Original language: English
Article number: 110
Journal: Journal of Big Data
Volume: 7
Issue number: 1
DOIs
Publication status: Published - Dec 2020
Externally published: Yes
