| Authors | Hamid Saadatfar |
|---|---|
| Journal | International Journal of Computers and Applications |
| Pages | 260-269 |
| Serial number | 44 |
| Volume number | 3 |
| Article type | Full Paper |
| Publication date | 2022 |
| Journal rank | ISI |
| Journal type | Electronic |
| Country of publication | Iran |
| Journal index | Scopus |
Abstract
Hadoop is a popular framework based on the MapReduce programming model that allows distributed processing of large datasets across clusters with varying numbers of compute nodes. Like any dynamic computational environment, Hadoop has its problems, one of which is the unsuccessful execution of MapReduce jobs. Job failures can cause significant resource waste, performance deterioration, and user dissatisfaction, so a proactive, predictive management approach could be very useful in Hadoop systems. In this paper, we predict the outcome of MapReduce jobs in the OpenCloud Hadoop cluster using its log files. OpenCloud is a research cluster managed by CMU's Parallel Data Lab that uses Hadoop to process big data. We first studied the log files and analyzed the relationships between job, resource, and workload characteristics and failures in order to discover features that are effective for prediction. After recognizing the job failure patterns, we deployed several popular machine learning algorithms to predict the success/failure status of jobs before they start to execute. Finally, we compared the learning methods and showed that the C5.0 algorithm achieved the best results, with an accuracy of 91.37%, a recall of 74.43%, and a precision of 80.31%.
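The pipeline the abstract describes — extract pre-execution features from job logs, train a tree-based classifier, and evaluate accuracy/recall/precision — can be sketched as follows. This is an illustrative sketch only, not the paper's actual code or data: C5.0 is a proprietary algorithm, so scikit-learn's `DecisionTreeClassifier` with the entropy criterion is used here as an approximate stand-in, and the feature names and synthetic labels are hypothetical examples of pre-execution job attributes, not the OpenCloud features the paper identified.

```python
# Sketch: predicting MapReduce job success/failure from pre-execution features.
# Assumptions (not from the paper): the four features below and the synthetic
# labeling rule are invented for illustration; the paper used features mined
# from real OpenCloud logs and the C5.0 algorithm rather than sklearn's CART.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.default_rng(0)
n = 2000
# Hypothetical pre-execution features for each job.
X = np.column_stack([
    rng.integers(1, 500, n),   # requested map tasks
    rng.integers(1, 100, n),   # requested reduce tasks
    rng.exponential(50.0, n),  # input size in GB
    rng.random(n),             # submitting user's historical failure rate
])
# Synthetic label (1 = failure): large jobs from failure-prone users fail.
y = ((X[:, 3] > 0.7) & (X[:, 2] > 60.0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0
)
# Entropy criterion loosely mirrors the information-gain splits of C4.5/C5.0.
clf = DecisionTreeClassifier(criterion="entropy", max_depth=6, random_state=0)
clf.fit(X_tr, y_tr)
y_pred = clf.predict(X_te)

print(f"accuracy={accuracy_score(y_te, y_pred):.3f}")
print(f"recall={recall_score(y_te, y_pred, zero_division=0):.3f}")
print(f"precision={precision_score(y_te, y_pred, zero_division=0):.3f}")
```

Because the prediction uses only features available before a job starts, a scheduler could act on it proactively, e.g. by deferring or flagging jobs classified as likely failures.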
tags: Hadoop; MapReduce job; cluster workload; failure prediction; data mining; log file