| Authors | Sajad Mohamadzadeh, Seyyed Mohammad Razavi |
| Journal | International Journal of Engineering |
| Pages | 2414-2425 |
| Serial number | 38 |
| Volume number | 10 |
| Paper Type | Full Paper |
| Published At | 2025 |
| Journal Grade | Scientific-Research |
| Journal Type | Electronic |
| Journal Country | Iran, Islamic Republic Of |
| Journal Index | JCR, ISC, Scopus |
Abstract
One approach that has gained attention in recent years is extracting Mel-spectrogram images from speech signals and using them in speaker recognition systems, which allows existing image recognition methods to be applied to this task. In this paper, three-second segments of speech are chosen at random and the Mel-spectrogram image of each segment is computed. These images are fed into a proposed convolutional neural network designed and optimized based on VGG-13. Compared to similar work, this optimized classifier has fewer parameters, trains faster, and achieves higher accuracy. On the VoxCeleb1 dataset with 1251 speakers, accuracies of top-1 = 84.25% and top-5 = 94.33% were achieved. In addition, various image-based data augmentation methods were employed that preserve the nature of the speech and, in most cases, improve system performance. Using data augmentation techniques such as horizontal flipping and time shifting of the images, or the ES technique, raised the top-1 accuracy to 91.17% and the top-5 accuracy to 97.32%. Moreover, by using the Dropout-layer output of the proposed neural network as a feature vector for training the GMM-UBM model, the equal error rate (EER) of the speaker verification system is decreased: these features reduce the EER from 9% with MFCC features to 3.5%.
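The abstract's pipeline of turning a three-second speech segment into a Mel-spectrogram image, plus the flip and time-shift augmentations, can be sketched with NumPy alone. The paper does not state its FFT size, hop length, number of mel bands, or sampling rate, so the values below (16 kHz, 512-point FFT, 160-sample hop, 64 mel bands) are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # Triangular filters with centers spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for j in range(left, center):
            fb[i - 1, j] = (j - left) / max(center - left, 1)
        for j in range(center, right):
            fb[i - 1, j] = (right - j) / max(right - center, 1)
    return fb

def mel_spectrogram(signal, sr=16000, n_fft=512, hop=160, n_mels=64):
    # Frame the signal, apply a Hann window, take the power spectrum,
    # then project onto the mel filterbank and convert to dB.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    mel = power @ mel_filterbank(n_mels, n_fft, sr).T
    return 10.0 * np.log10(np.maximum(mel, 1e-10))

# A random stand-in for a 3-second segment at 16 kHz, as in the paper.
segment = np.random.randn(3 * 16000)
img = mel_spectrogram(segment)          # shape: (time frames, mel bands)

# Augmentations mentioned in the abstract, applied to the image:
flipped = img[::-1, :]                  # horizontal flip (time reversal)
shifted = np.roll(img, 30, axis=0)      # circular time shift by 30 frames
```

Because the flip and shift act only along the time axis of the image, the spectral content of each frame is untouched, which is what lets these augmentations leave the speech's nature intact.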
Paper URL