Integration of the Latent Variable Knowledge into Deep Image Captioning with Bayesian Modeling

Authors: Hassan Farsi, Sajad Mohamadzadeh
Journal: IET Image Processing
Pages: 2256-2271
Serial Number: 17
Volume Number: 7
IF: 1.401
Paper Type: Full Paper
Published: 2023
Journal Type: Typographic
Journal Country: Iran, Islamic Republic Of
Journal Index: JCR, Scopus

Abstract

Automatic image captioning systems assign one or more sentences to images to describe their visual content. Most of these systems use attention-based deep convolutional neural networks and recurrent neural networks (CNN-RNN-Att). However, they do not optimally exploit latent variables and side information within the image concepts. This study aims to integrate a latent variable into image captioning using CNN-RNN-Att, within a Bayesian modeling framework. As an instance of a latent variable, High-Level Semantic Concepts (HLSCs) derived from tags are used to implement the proposed model. The solution obtained from the Bayesian model is to localize the entire image description process and break it down into sub-problems. Thus, a baseline descriptor subnet is trained independently for each sub-problem, making it an expert in captioning for its given HLSC. The final output is the caption produced by the subnet whose HLSC is closest to the image content. The simulation results indicate that applying CNN-RNN-Att to data localized using HLSCs improves captioning accuracy, making the proposed method comparable to the latest and most accurate state-of-the-art captioning systems.
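The expert-selection step described in the abstract — routing an image to the descriptor subnet whose HLSC is closest to the image content — can be sketched as a nearest-concept lookup. This is a minimal illustrative sketch, not the paper's implementation: the `select_expert` function, the cosine-similarity criterion, and the toy HLSC embeddings are all assumptions for demonstration.

```python
import numpy as np

def select_expert(image_feat, hlsc_embeddings):
    """Return the index of the HLSC whose embedding is closest
    (by cosine similarity) to the image feature vector.
    The caption would then be generated by that HLSC's subnet."""
    img = image_feat / np.linalg.norm(image_feat)
    sims = [float(img @ (emb / np.linalg.norm(emb)))
            for emb in hlsc_embeddings]
    return int(np.argmax(sims))

# Toy example: three hypothetical HLSC "experts" with 4-dim embeddings.
hlscs = [np.array([1.0, 0.0, 0.0, 0.0]),   # e.g. an "animals" concept
         np.array([0.0, 1.0, 0.0, 0.0]),   # e.g. a "sports" concept
         np.array([0.0, 0.0, 1.0, 1.0])]   # e.g. an "urban scenes" concept
image = np.array([0.1, 0.9, 0.2, 0.0])     # hypothetical image feature

print(select_expert(image, hlscs))  # prints 1: routes to the second subnet
```

In the paper's framework, each subnet is trained only on images localized to its HLSC, so the router above stands in for the Bayesian model's assignment of an image to the sub-problem it belongs to.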

Paper URL

tags: Image Description · Automatic Image Captioning · Latent Variable · High-Level Semantic Concepts · Deep Neural Networks · Attention Mechanism