| Authors | Hassan Farsi, Sajad Mohamadzadeh |
| Journal | International Journal of Engineering |
| Pages | 2511-2526 |
| Serial number | 38 |
| Volume number | 11 |
| Paper Type | Full Paper |
| Published At | 2025 |
| Journal Grade | Scientific-Research |
| Journal Type | Print |
| Journal Country | Iran, Islamic Republic Of |
| Journal Index | JCR, ISC, Scopus |
Abstract
In computer vision, semantic segmentation has become an important problem with applications in fields such as autonomous driving and robotics. Image segmentation datasets, however, pose substantial challenges due to high intra-class variability, such as differences across car models or building designs, and low inter-class variability, which makes it difficult to distinguish objects, such as buildings, whose facades are visually similar. This study addresses these problems with an attention-enhanced ASPP module coupled with an upgraded backbone for semantic segmentation networks. To increase the adaptability of the extracted features, the proposed framework applies attention mechanisms within the multiscale ASPP module. To capture complex features efficiently, the encoder stage also uses a carefully optimized ResNet-50 backbone. In addition, data augmentation techniques are applied to increase the robustness of the model. Experimental evaluations show that the proposed method achieves state-of-the-art accuracy, with an mDice of 87.82, an mIoU of 79.05, and a mean accuracy of 85.2 on the Stanford dataset, and an mDice of 88.91, an mIoU of 80.03, and a mean accuracy of 89.84 on the Cityscapes dataset. These findings highlight the potential of integrating attention mechanisms, ASPP modules, and upgraded ResNet structures to substantially improve semantic segmentation performance.
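The attention-enhanced ASPP idea described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch implementation: the SE-style channel attention, the dilation rates (6, 12, 18), and the 2048-channel ResNet-50 encoder output are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of an attention-augmented ASPP head (assumed design, PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionASPP(nn.Module):
    def __init__(self, in_ch=2048, out_ch=256, rates=(6, 12, 18)):
        super().__init__()
        # 1x1 branch plus three dilated 3x3 branches at different rates
        self.branches = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(in_ch, out_ch, 1, bias=False),
                           nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))]
            + [nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                             nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
               for r in rates])
        # image-level (global average pooling) branch
        self.image_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        fused = out_ch * (len(rates) + 2)
        # SE-style channel attention over the concatenated multiscale features
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(fused, fused // 16, 1), nn.ReLU(inplace=True),
            nn.Conv2d(fused // 16, fused, 1), nn.Sigmoid())
        self.project = nn.Sequential(
            nn.Conv2d(fused, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [b(x) for b in self.branches]
        feats.append(F.interpolate(self.image_pool(x), size=(h, w),
                                   mode="bilinear", align_corners=False))
        fused = torch.cat(feats, dim=1)
        fused = fused * self.attention(fused)  # reweight multiscale channels
        return self.project(fused)


if __name__ == "__main__":
    # e.g. features from a ResNet-50 encoder: 2048 channels at 1/16 resolution
    model = AttentionASPP().eval()  # eval mode so BatchNorm accepts a single sample
    with torch.no_grad():
        x = torch.randn(1, 2048, 32, 64)
        print(model(x).shape)  # torch.Size([1, 256, 32, 64])
```

In this sketch the attention vector rescales the concatenated branch outputs before the 1x1 projection, which is one common way to make multiscale ASPP features adaptive; the paper's exact attention placement may differ.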
Paper URL