Authors | Ehsanullah Zia, Mohammad Ali Zeraatkar, Javad Hassannataj Joloudari, Ali Hoseini
---|---
Journal | Signal, Image and Video Processing
Pages | 7183-7197
Volume | 18
Issue | 10
Paper Type | Full Paper
Published | 2024
Journal Type | Print
Journal Country | Iran, Islamic Republic Of
Journal Index | ISI, JCR, Scopus
Abstract
Fire is recognized as a destructive disaster in smart environments that causes serious harm to ecosystems and humans. Early and rapid fire detection in cities and forests can prevent human, economic, and environmental damage. Wireless sensor networks have been used for fire detection, but their deployment is costly and limited to specific locations. Cameras, on the other hand, are widely deployed in smart cities and interurban areas, as they are cheaper and more pervasive than sensor networks. In this paper, an end-to-end EfficientNetB2 neural network model (3ENB2) based on transfer learning is proposed for accurate fire detection from images. The model applies an online data augmentation strategy, consisting of random rotation and horizontal flipping, during the training phase; because the augmented samples are generated on the fly, their exact number during training is not fixed in advance. The results show that the proposed model, with an accuracy of 99.04%, outperforms the base 3ENB2 model, which reaches 98.57%. The proposed model also provides better localization and representation of fire in images.
tags: Fire detection; Online data augmentation; Convolutional neural network; Transfer learning; EfficientNetB2; Grad-CAM
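The abstract describes transfer learning with EfficientNetB2 plus online augmentation (random rotation and horizontal flip) applied during training. The following is a minimal sketch of that setup in PyTorch/torchvision; the library choice, rotation range, input size, and two-class fire/no-fire head are assumptions for illustration, not the authors' exact configuration.

```python
# Sketch: EfficientNet-B2 transfer learning with online data augmentation.
# Assumptions (not from the paper): torchvision backbone, 260x260 input,
# 15-degree rotation range, binary fire / no-fire classification head.
import torch.nn as nn
from torchvision import models, transforms

# Online augmentation: transforms run on the fly every time a sample is
# loaded, so the total number of distinct augmented images seen during
# training is not fixed in advance (as noted in the abstract).
train_transform = transforms.Compose([
    transforms.Resize((260, 260)),          # assumed EfficientNet-B2 input size
    transforms.RandomRotation(degrees=15),  # random rotation (assumed range)
    transforms.RandomHorizontalFlip(p=0.5), # random horizontal flip
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Transfer learning: start from ImageNet weights and replace the classifier
# head with a two-class (fire / no-fire) output layer.
model = models.efficientnet_b2(
    weights=models.EfficientNet_B2_Weights.IMAGENET1K_V1
)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)
```

Wrapping an image dataset (e.g., `torchvision.datasets.ImageFolder`) with `train_transform` in a `DataLoader` regenerates new augmented views each epoch, which is what makes the augmentation "online" rather than a fixed, pre-expanded dataset.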