An efficient framework of Bregman divergence optimization for co-ranking images and tags in a heterogeneous network

Lin Wu, Xiaodi Huang, Chengyuan Zhang, John Shepherd, Yang Wang

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

Graph-based ranking is an effective way of ranking images by exploiting graph structure. However, its applications are usually limited to individual image graphs, which are derived from self-contained image features. Nowadays, many images on social web sites are associated with semantic information (i.e., tags). Ranking these orderless tags helps in understanding and retrieving images, and can improve the overall ranking performance if their mutual reinforcement is considered. Unlike previous work that focuses only on individual image or tag graphs, in this paper we investigate the problem of co-ranking images and tags in a heterogeneous network. Observing that ranking on images and tags can be conducted simultaneously, we present a novel co-ranking method with random walks that significantly improves ranking effectiveness on both images and tags. We further improve our algorithm with respect to computational complexity and the out-of-sample problem. This is achieved by casting the co-ranking as a Bregman divergence optimization, under which we transform the original random walks into an equivalent optimal kernel matrix learning problem. Extensive experiments conducted on three benchmarks show that our approach outperforms state-of-the-art local ranking approaches and scales to large-scale databases.
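The mutual reinforcement idea described in the abstract can be illustrated with a minimal sketch: a bipartite image-tag graph on which image scores and tag scores update each other via a damped random walk. The toy affinity matrix, the damping factor, and the uniform restarts below are all illustrative assumptions, not the paper's actual formulation (which further recasts the walk as Bregman divergence / kernel matrix learning).

```python
import numpy as np

# Hypothetical toy data: 3 images, 4 tags.
# W_it[i, t] = 1 if image i is annotated with tag t (assumed affinities).
W_it = np.array([[1, 1, 0, 0],
                 [0, 1, 1, 0],
                 [0, 0, 1, 1]], dtype=float)

# Row-stochastic transition matrices for the two walk directions.
P_img_to_tag = W_it / W_it.sum(axis=1, keepdims=True)        # image -> tag
P_tag_to_img = (W_it / W_it.sum(axis=0, keepdims=True)).T    # tag -> image

alpha = 0.85                 # damping factor (PageRank-style, assumed value)
r_img = np.full(3, 1 / 3)    # uniform initial image ranking
r_tag = np.full(4, 1 / 4)    # uniform initial tag ranking

# Mutually reinforcing updates: image scores propagate to tags and back,
# each step mixed with a uniform restart distribution.
for _ in range(100):
    r_tag = alpha * (P_img_to_tag.T @ r_img) + (1 - alpha) / 4
    r_img = alpha * (P_tag_to_img.T @ r_tag) + (1 - alpha) / 3

# r_img and r_tag now hold the stationary co-ranking scores;
# each side remains a probability distribution (sums to 1).
```

Ranking either side in decreasing score order then yields a tag-aware image ranking and an image-aware tag ranking simultaneously, which is the qualitative behaviour the co-ranking framework exploits.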
Original language: English
Pages (from-to): 5635-5660
Number of pages: 26
Journal: Multimedia Tools and Applications
Volume: 74
Issue number: 15
Early online date: 2014
DOIs
Publication status: Published - Aug 2015

