Low memory visual saliency architecture for data reduction in wireless sensor networks

C.W.H. Ngau, L.-M. Ang, K.P. Seng

    Research output: Contribution to journal › Article › peer-review

    7 Citations (Scopus)

    Abstract

    Traditionally, to reduce the communication overhead caused by bandwidth limitations in wireless sensor networks (WSNs), image compression techniques are applied to high-resolution captures. Higher data reduction rates can be achieved by first removing redundant parts of the capture before applying image compression. To locate these redundant parts, biologically plausible visual saliency processing is used to isolate the parts that seem important to visual perception. Although visual saliency is effective at providing a distinctive difference between important and unimportant regions, its computational complexity and memory requirements often hinder implementation. This study presents a low-memory visual saliency architecture with reduced computational complexity for data reduction in WSNs through salient patch transmission. A custom softcore microprocessor-based hardware implementation on a field programmable gate array is then used to verify the architecture. Real-time processing demonstrated that data reductions of more than 50% are achievable for simple-to-medium scenes without the application of image compression techniques. © 2012 The Institution of Engineering and Technology.
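
    The abstract describes transmitting only salient patches of a capture rather than the whole image. As a minimal sketch of that idea (not the paper's architecture), the Python code below assumes a saliency map has already been produced by some low-memory saliency model; the patch size, the keep_fraction parameter, and the function names are illustrative assumptions, not values or interfaces taken from the paper.

        import numpy as np

        def select_salient_patches(saliency, patch=16, keep_fraction=0.3):
            # Split the saliency map into non-overlapping patches and keep
            # the most salient ones. Patch size and keep fraction are
            # illustrative assumptions, not figures from the paper.
            h, w = saliency.shape
            rows, cols = h // patch, w // patch
            scores = (saliency[:rows * patch, :cols * patch]
                      .reshape(rows, patch, cols, patch)
                      .mean(axis=(1, 3)))          # mean saliency per patch
            n_keep = max(1, int(keep_fraction * rows * cols))
            threshold = np.sort(scores, axis=None)[-n_keep]
            return scores >= threshold             # True = patch transmitted

        def data_reduction(mask):
            # Fraction of the capture discarded before any compression step.
            return 1.0 - mask.mean()

        # Example with a random stand-in saliency map for a 128x128 capture.
        rng = np.random.default_rng(0)
        saliency = rng.random((128, 128))
        mask = select_salient_patches(saliency)
        print(f"data reduction: {data_reduction(mask):.0%}")

    In this sketch the data reduction follows directly from the fraction of patches kept; the reported reductions of more than 50% would correspond to transmitting fewer than half of the patches for simple-to-medium scenes.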
    Original language: English
    Pages (from-to): 115-127
    Number of pages: 13
    Journal: IET Wireless Sensor Systems
    Volume: 2
    Issue number: 2
    DOIs
    Publication status: Published - Jun 2012
