
 

Image Saliency Detection With Sparse Representation of Learnt Texture Atoms

Lai Jiang, Mai Xu, Zhaoting Ye, Zulin Wang
Beihang University, Beijing 100191, China

 

 

Abstract

This paper proposes a saliency detection method using a novel feature based on sparse representation of learnt texture atoms (SR-LTA), which are encoded in salient and non-salient dictionaries. For the salient dictionary, a novel formulation is proposed to learn salient texture atoms from image patches that attract extensive attention, and an online salient dictionary learning (OSDL) algorithm is provided to solve this formulation. Similarly, the non-salient dictionary is learnt from image patches that attract no attention. A new pixel-wise feature, namely SR-LTA, is then derived from the difference between the sparse representation errors under the learnt salient and non-salient dictionaries. Finally, image saliency is predicted via a linear combination of the proposed SR-LTA feature and conventional features, i.e., luminance and contrast, with the weights of the feature channels determined by least squares estimation on the training data. Experimental results show that our method outperforms several state-of-the-art saliency detection methods.



 

Motivations

It is well known that an image patch can be seen as a composition of texture patterns. In our experiments, we found that salient image patches can be represented by similar patterns. The figure below illustrates the possibility of learning hundreds of salient patterns (by applying a dictionary learning algorithm) for saliency detection. In this figure, the regions inside the red squares (enlarged in the corners) are salient patches from images of the Kienzle eye-tracking database (first row) and the DOVES database (second row). Some atoms of the dictionaries, learnt from the salient regions of other training images, are shown between the two images, together with the sparse representation coefficients of the salient patches with respect to the learnt dictionaries. It can be seen that salient patches across different images may share similar basic patterns, and that these basic patterns can be learnt from the training data. Note that the non-salient patches have the same characteristic.

Salient patches (in red squares) from the Kienzle and DOVES databases, the learnt dictionary atoms, and their sparse representation coefficients.

Algorithms

This work proposes to learn extensive positive and negative patterns from the eye-tracking data of training images, for bottom-up saliency detection. Specifically, we first propose a formulation with a novel center-surround term for learning two discriminative dictionaries. These two dictionaries contain the atoms for the basic texture patterns of salient and non-salient regions, respectively. In light of online dictionary learning, we develop an online salient dictionary learning (OSDL) algorithm to solve the proposed formulation, so that the salient and non-salient dictionaries can be learnt from the eye-tracking data of training images. Given the learnt dictionaries, a novel feature based on sparse representation of learnt texture atoms (SR-LTA) is computed; this feature is based on the errors of sparse representation with respect to the salient and non-salient dictionaries. Next, the saliency of an image is predicted by combining the SR-LTA feature with conventional luminance and contrast features. For this linear combination, the weights of the feature channels are estimated via least squares fitting on the training data. Similar to other bottom-up methods, ours works only on grayscale images of natural scenes.
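
As a rough illustration of the dictionary-learning stage, the sketch below learns two separate dictionaries from vectorized grayscale patches that have already been split into salient and non-salient sets according to the eye-tracking fixations. It uses scikit-learn's generic MiniBatchDictionaryLearning as a stand-in for OSDL, so the center-surround term of our formulation is not reflected here; names such as salient_patches are illustrative only.

```python
# Illustrative sketch only: learning a salient and a non-salient dictionary from
# image patches. scikit-learn's generic MiniBatchDictionaryLearning is used here
# as a stand-in for the OSDL algorithm; the center-surround term of the actual
# formulation is not reproduced. Patch extraction and the eye-tracking-based split
# into salient / non-salient patches are assumed to be done beforehand.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def learn_dictionary(patches, n_atoms=256, sparsity=5):
    """patches: (n_patches, patch_dim) array of vectorized grayscale patches."""
    # Zero-mean, unit-norm patches are a common preprocessing choice for texture atoms.
    patches = patches - patches.mean(axis=1, keepdims=True)
    patches = patches / (np.linalg.norm(patches, axis=1, keepdims=True) + 1e-8)
    learner = MiniBatchDictionaryLearning(
        n_components=n_atoms,                 # number of texture atoms to learn
        transform_algorithm="omp",            # sparse coding via orthogonal matching pursuit
        transform_n_nonzero_coefs=sparsity,
        random_state=0,
    )
    learner.fit(patches)
    return learner.components_.T              # (patch_dim, n_atoms): one column per learnt atom

# Hypothetical inputs: patches taken around fixated vs. non-fixated pixels of the
# training images.
# D_salient     = learn_dictionary(salient_patches)
# D_non_salient = learn_dictionary(non_salient_patches)
```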


       Summary of the online salient dictionary learning (OSDL) algorithm.

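For readers unfamiliar with the online dictionary learning family that OSDL belongs to, the following minimal sketch shows the basic alternation between a sparse-coding step and a dictionary-update step on a stream of patches. The dictionary step is a simplified stochastic-gradient update rather than the block-coordinate update of standard online dictionary learning, and the salient/non-salient coupling and center-surround term of OSDL are not shown.

```python
# Minimal, simplified sketch of an online dictionary-learning loop (the general
# pattern OSDL builds on). It alternates sparse coding of one incoming patch with a
# small gradient step on the dictionary atoms. This is NOT the OSDL update itself.
import numpy as np
from sklearn.linear_model import orthogonal_mp

def online_dictionary_update(patch_stream, patch_dim, n_atoms=256,
                             sparsity=5, n_steps=1000, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((patch_dim, n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)          # unit-norm atoms
    for _ in range(n_steps):
        x = next(patch_stream)                              # one vectorized patch
        # Sparse-coding step: code the patch with the current atoms.
        alpha = orthogonal_mp(D, x, n_nonzero_coefs=sparsity)
        # Dictionary step: reduce the reconstruction error, then renormalize atoms.
        residual = x - D @ alpha
        D += lr * np.outer(residual, alpha)
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-8
    return D                                                # (patch_dim, n_atoms)
```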


The procedure of our saliency detection method.

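The sketch below illustrates the remaining two stages described above under the same assumptions: the SR-LTA value of a patch is taken as the difference between its sparse-reconstruction errors under the non-salient and salient dictionaries, and the weights combining the SR-LTA, luminance and contrast channels are obtained by least squares on the training data. All names are illustrative; the exact error definition and normalization used in the paper may differ.

```python
# Sketch, not the paper's exact implementation: SR-LTA as a difference of
# sparse-reconstruction errors, plus least-squares weights for the linear
# combination of feature channels.
import numpy as np
from sklearn.linear_model import orthogonal_mp

def reconstruction_error(D, x, sparsity=5):
    """Sparse-coding reconstruction error of patch x under dictionary D (patch_dim x n_atoms)."""
    alpha = orthogonal_mp(D, x, n_nonzero_coefs=sparsity)
    return float(np.sum((x - D @ alpha) ** 2))

def sr_lta_feature(D_salient, D_non_salient, patch, sparsity=5):
    # A patch that the salient atoms explain well but the non-salient atoms do not
    # receives a large value, and vice versa.
    return (reconstruction_error(D_non_salient, patch, sparsity)
            - reconstruction_error(D_salient, patch, sparsity))

def fit_channel_weights(features, fixation_density):
    """Least-squares weights for combining feature channels.

    features:          (n_pixels, n_channels) matrix, e.g. [SR-LTA, luminance, contrast]
    fixation_density:  (n_pixels,) ground-truth saliency from the eye-tracking data
    """
    w, *_ = np.linalg.lstsq(features, fixation_density, rcond=None)
    return w

# Prediction for new pixels is then simply:  saliency = new_features @ w
```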

 



 

Results

The average accuracy of saliency detection on the test images of the two databases.


 

 

The ROC curves of saliency detection.

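For reference, ROC curves of this kind are commonly obtained by thresholding the predicted saliency map at many levels and treating fixated pixels as positives; the sketch below shows that generic procedure with scikit-learn. The exact evaluation protocol behind the curves above is not specified on this page, so this is illustrative only.

```python
# Generic ROC/AUC evaluation of a saliency map against recorded human fixations
# (illustrative; not necessarily the exact protocol used for the results above).
import numpy as np
from sklearn.metrics import roc_curve, auc

def saliency_roc(saliency_map, fixation_map):
    """saliency_map: 2-D float array of predicted saliency.
    fixation_map:  2-D array, nonzero where human fixations landed."""
    scores = saliency_map.ravel()
    labels = (fixation_map.ravel() > 0).astype(int)       # fixated pixels are positives
    fpr, tpr, _ = roc_curve(labels, scores)
    return fpr, tpr, auc(fpr, tpr)
```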

 


Saliency maps of four test images from the two databases.

From left to right: input images, human fixation maps, and the results of our method, followed by the BMS, Itti, Duan, GAFFE, Hou, Zhao, Judd, AWS and SALICON methods.


 



 

References

1. Lai Jiang, Mai Xu, Zhaoting Ye, Zulin Wang. Image Saliency Detection with Sparse Representation of Learnt Texture Atoms. IEEE International Conference on Computer Vision (ICCV) Workshops, 2015, pp. 54-62.

2. (Submitted) Mai Xu, Lai Jiang, Zhaoting Ye, Zulin Wang. Bottom-up Saliency Detection with Sparse Representation of Learnt Texture Atoms. Pattern Recognition, 2016.




