The purpose of this paper is to assess the performance of MapReduce and Spark on a Hadoop cluster using two different configurations: one with 5 slave nodes and another with 9 slave nodes. For the experiment, the HiBench workloads WordCount and TeraSort are used, with data scales varying from 50 GB to 600 GB. We selected several configuration parameters and replaced their default values with tuned values, allowing us to analyze the effect of these changes on each job's runtime. The results show that for both the WordCount and TeraSort workloads, depending on the tuned parameters, MapReduce and Spark achieved performance improvements of 64% and around 60%, respectively, at each data point. In addition, we observed a modest further speed-up of about 1% from the extra slave nodes. These results show that cluster performance can be improved by changing the default values of a few parameters and by adding slave nodes.
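To illustrate the kind of parameter tuning the abstract refers to, the sketch below shows how a Spark job such as WordCount can override default configuration values programmatically. The specific parameters and values here (executor memory, executor cores, default parallelism) are illustrative assumptions for exposition; the paper's actual tuned parameters and values are reported in the experimental sections.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch of a WordCount job with tuned (non-default) Spark settings.
// The parameter choices below are assumptions, not the study's settings.
object TunedWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("WordCount-tuned")
      // Commonly tuned Spark parameters; values shown are illustrative only.
      .set("spark.executor.memory", "4g")        // default is 1g
      .set("spark.executor.cores", "4")          // cores per executor
      .set("spark.default.parallelism", "200")   // number of shuffle tasks
    val sc = new SparkContext(conf)

    // Standard word count: split lines into words, count occurrences.
    val counts = sc.textFile(args(0))
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.saveAsTextFile(args(1))
    sc.stop()
  }
}
```

Equivalent overrides can be supplied at submission time (e.g., via `spark-submit --conf`), so the same job binary can be rerun across tuning configurations without recompilation.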