Visual Saliency based on Lossy Coding

 
       In the real world, the human visual system demonstrates a remarkable ability to locate objects of interest against cluttered backgrounds: attention provides a mechanism to quickly identify the subsets of a scene that contain important information. Beyond the scientific goal of understanding this behavior, computational models of visual attention also contribute to many applications in computer vision, such as scene understanding, image/video compression, and object recognition. The saliency mechanism is therefore considered crucial to the human visual system and helpful for object detection and recognition.
 
       Inspired by biological vision, we define visual saliency in a strictly local manner. Given its surrounding area, the saliency of a local region is defined as its minimum uncertainty, namely its minimum conditional entropy, once perceptual distortion is taken into account. To simplify the problem, we approximate this conditional entropy in two different ways: the first approach uses the lossy coding length of a sparse representation, and the second uses the lossy coding length of multivariate Gaussian data. The final saliency map is accumulated over pixels and further segmented to detect proto-objects.
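The Gaussian approximation can be sketched as follows. The lossy coding length of n d-dimensional samples up to distortion eps is bounded by a rate-distortion style expression on the sample covariance, and the saliency of a center region is the extra bits needed to encode it given its surround. This is a minimal illustrative sketch, not the authors' implementation; the function names, the distortion parameter, and the center/surround sample extraction are assumptions.

```python
import numpy as np

def coding_length(X, eps=0.1):
    """Lossy coding length (in bits) of the rows of X (n samples, d dims),
    modeled as a single multivariate Gaussian, up to distortion eps."""
    n, d = X.shape
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / n
    # bits for the centered data: ((n + d) / 2) * log2 det(I + (d / eps^2) * cov)
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / eps**2) * cov)
    data_bits = 0.5 * (n + d) * logdet / np.log(2.0)
    # bits for the mean vector itself
    mean_bits = 0.5 * d * np.log2(1.0 + mu @ mu / eps**2)
    return data_bits + mean_bits

def saliency(center, surround, eps=0.1):
    """Saliency of a center region given its surround: the incremental
    coding cost of appending the center samples to the surround's code.
    A center well predicted by the surround costs few extra bits."""
    joint = np.vstack([surround, center])
    return coding_length(joint, eps) - coding_length(surround, eps)
```

A patch that pops out of its surround (for example, a different color or texture) yields a larger incremental coding length and hence higher saliency, while a patch drawn from the same distribution as the surround costs almost nothing extra.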
 
 
 

 

Overview of our approaches 

 

Comparison of the two different approaches 

 
 

Comparison of the Results (from Left to Right): Original Image, Human-Labeled Salient Region,
Itti PAMI98, Hou CVPR07, Bruce NIPS04, Gaussian Conditional Entropy, Incremental Sparse Coding
 

 
       We also address a global feature-based model for visual saliency detection. It consists of two steps: first, representing image patches with learned overcomplete sparse bases; and then, estimating saliency via a direct low-rank and sparse matrix decomposition.
 
 

Over-Complete Dictionary by Random Sampling of Natural Images

 

 Visual Saliency by Low-Rank Sparsity Decomposition

 

Visual Comparison of Our Results (from Left to Right):
Original Image, Human Fixation Map, Our Results, Hou NIPS08, Bruce NIPS04, Itti PAMI98

    Related Publications

    1. Yin Li, Yue Zhou, Lei Xu and Xiaochao Yang. Incremental Sparse Saliency Detection, IEEE International Conference on Image Processing (ICIP) 2009 (oral presentation)

    2. Yin Li, Junchi Yan and Yue Zhou. Visual Saliency Based on Conditional Entropy, The Asian Conference on Computer Vision (ACCV) 2009 (oral presentation)

    3. Junchi Yan, Jian Liu, Yin Li and Yuncai Liu. Visual Saliency via Sparsity Rank Decomposition, accepted by IEEE International Conference on Image Processing (ICIP) 2010
