Voting-based object boundary reconstruction

Qi Tian, Like Zhang, Jingsheng Ma

    Research output: Contribution to journal › Article › peer-review

    Abstract

    A voting-based object boundary reconstruction approach is proposed in this paper. Morphological techniques have been adopted in many video object extraction applications to reconstruct missing pixels. However, when the missing areas become large, morphological processing no longer produces good results. Recently, tensor voting has attracted attention and can be used for boundary estimation on curves or irregular trajectories, but the complexity of saliency tensor creation limits its application in real-time systems. An alternative approach based on tensor voting is introduced in this paper. Rather than creating saliency tensors, we use a "2-pass" method for orientation estimation. In the first pass, a Sobel detector is applied to a coarse boundary image to obtain the gradient map. In the second pass, each pixel casts weights that decrease with distance, based on its gradient information, and the direction with the maximum weight sum is selected as the pixel's orientation. Once the orientation map is obtained, pixels are linked to edges or intersections along their estimated directions. The approach is applied to various video surveillance clips under different conditions, and the experimental results demonstrate a significant improvement in the accuracy of the final extracted objects.
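
    The sketch below is not taken from the paper; it is a minimal illustration of how the described 2-pass orientation estimation could be implemented, assuming a decay-weighted directional vote over Sobel gradient magnitudes. The function name estimate_orientations, the window radius, the decay constant, and the candidate-direction set are illustrative assumptions.

# Minimal sketch of a "2-pass" orientation estimation, assuming a
# decay-weighted directional vote. Radius, decay, and the candidate
# directions are illustrative choices, not parameters from the paper.
import numpy as np
from scipy import ndimage

def estimate_orientations(coarse_boundary, num_directions=8, radius=5, decay=2.0):
    """For each boundary pixel, pick the candidate direction whose
    decay-weighted sum of neighbouring gradient magnitudes is largest."""
    img = coarse_boundary.astype(float)

    # Pass 1: Sobel detector on the coarse boundary image -> gradient map.
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    grad_mag = np.hypot(gx, gy)

    h, w = img.shape
    angles = np.linspace(0.0, np.pi, num_directions, endpoint=False)
    orientation = np.full((h, w), np.nan)

    # Pass 2: each boundary pixel accumulates weights along each candidate
    # direction; the weights fall off with distance from the pixel.
    ys, xs = np.nonzero(coarse_boundary)
    for y, x in zip(ys, xs):
        best_sum, best_angle = -1.0, 0.0
        for theta in angles:
            total = 0.0
            for step in range(1, radius + 1):
                weight = np.exp(-step / decay)   # decreasing weight with distance
                for sign in (1, -1):             # vote both ways along the line
                    yy = int(round(y + sign * step * np.sin(theta)))
                    xx = int(round(x + sign * step * np.cos(theta)))
                    if 0 <= yy < h and 0 <= xx < w:
                        total += weight * grad_mag[yy, xx]
            if total > best_sum:
                best_sum, best_angle = total, theta
        orientation[y, x] = best_angle
    return orientation

    The resulting orientation map would then drive the linking step described above, with each boundary pixel followed along its estimated direction until it meets another edge or an intersection.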

    Original language: English
    Pages (from-to): 2163-2171
    Number of pages: 9
    Journal: Proceedings of SPIE - The International Society for Optical Engineering
    Volume: 5960
    Issue number: 4
    DOIs
    Publication status: Published - 2005
    Event: Visual Communications and Image Processing 2005 - Beijing, China
    Duration: 12 Jul 2005 - 15 Jul 2005

    Keywords

    • Boundary extraction
    • Edge linking
    • Video object
    • Voting
