VCSNet: V-fused color saliency net
Author(s): Xu, BJ (Xu, Binjing); Wu, Q (Wu, Qiong); Qin, H (Qin, Hong); Liu, ZY (Liu, Zhiyuan); Shi, L (Shi, Lin); Li, S (Li, Shuang)
Source: CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE Article Number: e7127 DOI: 10.1002/cpe.7127 Early Access Date: JUN 2022
Abstract: Salient-region detection methods, which detect and extract regions of interest in images, have been a hot research direction in recent years. Most current salient-region detection algorithms and their training datasets originate from general visual-attention processes and primarily reflect attention to object shape in pictures. Since color vision provides much of the useful information in visual systems, the color component of visual attention must also be considered. First, we collected cue data on paintings from several observers using eye-tracking recording technology while the observers were asked to attend to the color information of various paintings. Second, we constructed a color-attention dataset, the color saliency dataset (CSD), from the cue data and pictures. Third, we designed a V-fused color saliency net (VCSNet) model comprising three modules: a color-information fusion module, a prediction module, and an optimization module, and trained the model on the CSD. Finally, we compared our method with previous algorithms on the CSD; the results show that our method outperforms them in color saliency detection, with an MAE of 0.057 and an Fmax of 0.265. We open source part of the self-created dataset: .
Accession Number: WOS:000810647100001
ISSN: 1532-0626
eISSN: 1532-0634
Full Text: https://onlinelibrary.wiley.com/doi/10.1002/cpe.7127
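The abstract reports performance as MAE and Fmax, the two metrics conventionally used in saliency benchmarks. The paper does not give its evaluation code; the sketch below shows the standard definitions (mean absolute error between the predicted map and ground truth, and the maximum F-measure over binarization thresholds with the customary weighting beta^2 = 0.3), which is an assumption about how these numbers are computed, not the authors' implementation.

```python
import numpy as np

def mae(pred, gt):
    # Mean absolute error between a predicted saliency map and the
    # ground-truth map, both assumed normalized to [0, 1].
    return float(np.mean(np.abs(pred - gt)))

def f_max(pred, gt, beta2=0.3, n_thresholds=255):
    # Maximum F-measure over binarization thresholds. beta^2 = 0.3 is the
    # weighting conventionally used in saliency-detection benchmarks
    # (it emphasizes precision over recall).
    gt_bin = gt > 0.5
    best = 0.0
    for t in np.linspace(0.0, 1.0, n_thresholds, endpoint=False):
        pred_bin = pred > t
        tp = np.logical_and(pred_bin, gt_bin).sum()
        if tp == 0:
            continue
        precision = tp / pred_bin.sum()
        recall = tp / gt_bin.sum()
        f = (1 + beta2) * precision * recall / (beta2 * precision + recall)
        best = max(best, f)
    return best
```

On a perfect prediction, `mae` returns 0 and `f_max` returns 1; in practice both are averaged over all images in the test split of the dataset.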