| Authors | Hamid Saadatfar, Amir Mosavi, Shahaboddin Shamshirband |
|---|---|
| Journal | Mathematics |
| Pages | 1-12 |
| Serial Number | 8 |
| Volume | 2 |
| Article Type | Full Paper |
| Publication Date | 2020 |
| Journal Rank | ISI |
| Journal Type | Electronic |
| Country of Publication | Switzerland |
| Indexed In | ISI, JCR, Scopus |
Abstract
The K-nearest neighbors (KNN) machine learning algorithm is a well-known non-parametric classification method. However, like other traditional data mining methods, applying it to big data comes with computational challenges. KNN determines the class of a new sample from the classes of its nearest neighbors, but identifying those neighbors in a very large dataset imposes a computational cost so high that the task is no longer feasible on a single computing machine. One of the techniques proposed to make classification methods applicable to large datasets is pruning. LC-KNN is an improved KNN method that first clusters the data into smaller partitions using K-means clustering and then, for each new sample, applies KNN only on the partition whose center is nearest. However, because clusters differ in shape and density, selecting the appropriate cluster is a challenge. In this paper, an approach is proposed to improve the pruning phase of the LC-KNN method by taking these factors into account. The proposed approach helps choose a more appropriate cluster in which to search for neighbors, thereby increasing classification accuracy. The performance of the proposed approach is evaluated on several real datasets. The experimental results show its effectiveness, with higher classification accuracy and lower time cost than other recent relevant methods.
tags: K-nearest neighbors; KNN; classifier; machine learning; big data; clustering; cluster shape; cluster density; classification; reinforcement learning; machine learning for big data; data science; computation; artificial intelligence
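The following is a minimal sketch of the LC-KNN pruning idea summarized in the abstract: partition the training data with K-means, then classify each query with a local KNN restricted to the cluster whose center is nearest. The class name `LCKNN`, the parameter choices, and the use of scikit-learn are illustrative assumptions, not the authors' implementation, and the paper's shape- and density-aware cluster-selection rule is not reproduced here.

```python
# Illustrative sketch of the LC-KNN pruning scheme (assumed implementation).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier


class LCKNN:
    def __init__(self, n_clusters=10, n_neighbors=5, random_state=0):
        self.n_clusters = n_clusters
        self.n_neighbors = n_neighbors
        self.random_state = random_state

    def fit(self, X, y):
        # Pruning step: partition the training set into smaller clusters.
        self.kmeans_ = KMeans(n_clusters=self.n_clusters,
                              random_state=self.random_state).fit(X)
        labels = self.kmeans_.labels_
        # Train one local KNN classifier per cluster.
        self.local_knn_ = []
        for c in range(self.n_clusters):
            mask = labels == c
            knn = KNeighborsClassifier(
                n_neighbors=min(self.n_neighbors, int(mask.sum())))
            knn.fit(X[mask], y[mask])
            self.local_knn_.append(knn)
        return self

    def predict(self, X):
        # Baseline LC-KNN rule: route each query to the cluster with the
        # nearest center. The paper's refinement would additionally weigh
        # cluster shape and density when making this choice.
        dists = np.linalg.norm(
            X[:, None, :] - self.kmeans_.cluster_centers_[None, :, :], axis=2)
        nearest = dists.argmin(axis=1)
        preds = np.empty(X.shape[0], dtype=object)
        for c in np.unique(nearest):
            idx = nearest == c
            preds[idx] = self.local_knn_[c].predict(X[idx])
        return preds
```

In this baseline, pruning shrinks each neighbor search from the full dataset to a single cluster, which is what makes KNN tractable on big data; the paper's contribution lies in a smarter cluster-selection criterion than plain nearest-center distance.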