Integration of the Latent Variable Knowledge into Deep Image Captioning with Bayesian Modeling

Authors: Hassan Farsi, Sajad Mohamadzadeh
Journal: IET Image Processing
Pages: 2256-2271
Volume: 17
Issue: 7
Impact Factor (IF): 1.401
Article type: Full Paper
Publication date: 2023
Publication type: Print
Country of publication: Iran
Journal indexing: JCR, Scopus

Abstract

Automatic image captioning systems assign one or more sentences to an image to describe its visual content. Most of these systems use attention-based deep convolutional neural networks and recurrent neural networks (CNN-RNN-Att). However, they do not optimally exploit latent variables and side information within image concepts. This study aims to integrate a latent variable into CNN-RNN-Att image captioning, using a Bayesian modeling framework. As an instance of a latent variable, High-Level Semantic Concepts (HLSCs) derived from image tags are used to implement the proposed model. The solution suggested by the Bayesian model is to localize the overall image description problem and break it down into sub-problems. A baseline descriptor subnet is therefore trained independently for each sub-problem, so that it becomes an expert in captioning images belonging to a single HLSC. The final output is the caption produced by the subnet whose HLSC is closest to the image content. The simulation results indicate that applying CNN-RNN-Att to data localized by HLSCs improves captioning accuracy, making the proposed method comparable to the latest and most accurate state-of-the-art captioning systems.
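
The expert-selection idea described in the abstract (one descriptor subnet per HLSC, with the final caption taken from the subnet whose HLSC is closest to the image content) can be illustrated with a minimal Python sketch. This is not the authors' implementation: the names ExpertCaptioner, route_and_caption, the cosine-similarity routing rule, and the random tag vectors are illustrative placeholders; in the paper the experts are attention-based CNN-RNN subnets and the tag/HLSC representations come from the trained model.

# Minimal sketch of per-HLSC expert routing (illustrative placeholders only,
# not the authors' code).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two tag/concept vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

class ExpertCaptioner:
    """Stand-in for a CNN-RNN-Att subnet trained only on images of one HLSC."""
    def __init__(self, hlsc_name: str):
        self.hlsc_name = hlsc_name

    def caption(self, image) -> str:
        # A real subnet would decode CNN features with an attention-based RNN.
        return f"<caption generated by the '{self.hlsc_name}' expert>"

def route_and_caption(image, tag_vector, hlsc_prototypes, experts):
    """Select the expert whose HLSC prototype is closest to the image's tag vector."""
    best_hlsc = max(hlsc_prototypes,
                    key=lambda h: cosine_similarity(tag_vector, hlsc_prototypes[h]))
    return best_hlsc, experts[best_hlsc].caption(image)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical HLSC prototypes; in practice these would be learned concept vectors.
    hlsc_prototypes = {"sports": rng.random(16), "food": rng.random(16), "animals": rng.random(16)}
    experts = {h: ExpertCaptioner(h) for h in hlsc_prototypes}
    tag_vector = rng.random(16)  # would come from a tag-prediction network
    hlsc, caption = route_and_caption(None, tag_vector, hlsc_prototypes, experts)
    print(hlsc, caption)

Replacing ExpertCaptioner.caption with a real CNN-RNN-Att decoder and the random vectors with predicted tag distributions would reproduce the inference-time expert selection under these assumptions.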

tags: Image Description · Automatic Image Captioning · Latent Variable · High-Level Semantic Concepts · Deep Neural Networks · Attention Mechanism