Research
Our research has been generously supported by ARO, NSF, AFRL, IARPA, BlueHalo, and Salesforce.
2016
Chen, Diqi; Wang, Yizhou; Wu, Tianfu; Gao, Wen
Recurrent Attentional Model for No-Reference Image Quality Assessment Miscellaneous
arXiv preprint, 2016.
@misc{Chen_IQA,
title = {Recurrent Attentional Model for No-Reference Image Quality Assessment},
author = {Diqi Chen and Yizhou Wang and Tianfu Wu and Wen Gao},
url = {https://arxiv.org/abs/1612.03530},
year = {2016},
date = {2016-01-01},
journal = {CoRR},
volume = {abs/1612.03530},
abstract = {This paper presents a recurrent attentional model (RAM) for general no-reference image quality assessment (NR-IQA), that is, predicting the perceptual quality score for an input image without using any reference image or prior knowledge of the underlying distortions. The proposed RAM is inspired by the well-known visual attention mechanism, both covert and overt, which affects many aspects of visual perception including image quality assessment. The attentional mechanism is, however, largely ignored in the NR-IQA literature. The proposed RAM hypothesizes that the attentional scanning path in an image should contain intrinsic information for IQA. The RAM thus consists of three components: a glimpse sub-network analyzing the quality at a fixation using multi-scale information, a location sub-network selecting where to look next by sampling a stochastic node, and a recurrent network aggregating information along the scanning path to compute the final prediction. The RAM is formulated under multi-task learning for the joint prediction of distortion type and image quality score, with the REINFORCE rule (Williams, 1992) used to handle the stochastic node. The RAM is trained through back-propagation. In experiments, the RAM is tested on the TID2008 dataset with promising performance obtained, which shows the effectiveness of the proposed RAM. Furthermore, the RAM is very efficient in the sense that usually only a small number of glimpses are needed at test time.},
howpublished = {arXiv preprint},
keywords = {},
pubstate = {published},
tppubtype = {misc}
}
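The three-component loop the abstract describes (glimpse sub-network, stochastic location sub-network, recurrent aggregation, then quality and distortion-type heads) can be sketched in NumPy. All layer sizes, the patch size, the number of glimpses, and the number of distortion classes below are illustrative assumptions, not values from the paper; the location is simply sampled with Gaussian noise here, whereas the paper trains that stochastic node with REINFORCE.

```python
import numpy as np

rng = np.random.default_rng(0)
PATCH, G_DIM, H_DIM, N_DIST = 8, 64, 32, 5  # assumed sizes, not from the paper

class RecurrentAttentionSketch:
    def __init__(self):
        # Random (untrained) weights; a real model would learn these.
        self.Wg = rng.standard_normal((G_DIM, PATCH * PATCH + 2)) * 0.01  # glimpse sub-network
        self.Wh = rng.standard_normal((H_DIM, H_DIM + G_DIM)) * 0.01      # recurrent core
        self.Wl = rng.standard_normal((2, H_DIM)) * 0.01                  # location sub-network
        self.Wq = rng.standard_normal((1, H_DIM)) * 0.01                  # quality-score head
        self.Wd = rng.standard_normal((N_DIST, H_DIM)) * 0.01             # distortion-type head

    def glimpse(self, image, loc):
        # Crop a patch near `loc` (coordinates in [0,1]^2) and embed it together
        # with the location -- a stand-in for the multi-scale glimpse sub-network.
        h, w = image.shape
        y = int(np.clip(loc[0] * (h - PATCH), 0, h - PATCH))
        x = int(np.clip(loc[1] * (w - PATCH), 0, w - PATCH))
        patch = image[y:y + PATCH, x:x + PATCH].ravel()
        return np.tanh(self.Wg @ np.concatenate([patch, loc]))

    def forward(self, image, n_glimpses=4):
        h = np.zeros(H_DIM)
        loc = np.array([0.5, 0.5])  # start the scanning path at the image centre
        for _ in range(n_glimpses):
            g = self.glimpse(image, loc)
            h = np.tanh(self.Wh @ np.concatenate([h, g]))      # aggregate along the path
            mean = 0.5 * (np.tanh(self.Wl @ h) + 1.0)          # proposed next fixation
            # Stochastic node: sample the next location (REINFORCE-trained in the paper).
            loc = np.clip(mean + 0.05 * rng.standard_normal(2), 0.0, 1.0)
        quality = (self.Wq @ h).item()          # predicted quality score
        distortion = int(np.argmax(self.Wd @ h))  # predicted distortion type
        return quality, distortion

model = RecurrentAttentionSketch()
image = rng.random((64, 64))  # toy grayscale image
score, dist_type = model.forward(image)
```

With untrained weights the outputs are meaningless; the sketch only shows how a handful of glimpses, rather than the full image, feed the final multi-task prediction.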