Abstract
Transfer learning aims to learn classifiers for a target domain by transferring knowledge from a source domain. However, transfer learning can be very difficult in practice due to two main issues: feature discrepancy and distribution divergence. In this paper, we present a framework called TLF that builds a classifier for a target domain with only a few labeled training records by transferring knowledge from a source domain with many labeled records. While existing methods often focus on one issue and leave the other for future work, TLF is capable of handling both issues simultaneously. In TLF, we alleviate feature discrepancy by identifying shared label distributions that act as pivots to bridge the domains. We handle distribution divergence by simultaneously optimizing the structural risk functional, the joint distributions between domains, and the manifold consistency underlying the marginal distributions. Moreover, for the manifold consistency we exploit its intrinsic properties by identifying the $k$ nearest neighbors of a record, where $k$ is determined automatically. We evaluate TLF on seven publicly available datasets and compare its performance with that of fourteen state-of-the-art techniques. Our experimental results, including statistical sign test and Nemenyi test analyses, indicate a clear superiority of TLF over the state-of-the-art techniques.
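The manifold-consistency term mentioned in the abstract is, in spirit, a graph-Laplacian smoothness penalty computed over a $k$-nearest-neighbour graph of the records. The sketch below illustrates that general idea only: it is not the authors' TLF implementation, the automatic selection of $k$ is not reproduced, and the names (`manifold_consistency`, `X`, `predictions`, `k`) are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def manifold_consistency(X, predictions, k=5):
    """Graph-Laplacian smoothness of `predictions` over a kNN graph of X.

    Computes sum_ij W_ij * (f_i - f_j)^2 = 2 * f^T L f, where W is the
    symmetrised kNN adjacency and L the unnormalised graph Laplacian.
    Smaller values mean predictions vary smoothly along the data manifold.
    (Sketch only; k is fixed here rather than chosen automatically.)
    """
    W = kneighbors_graph(X, n_neighbors=k, mode="connectivity", include_self=False)
    W = 0.5 * (W + W.T)                      # symmetrise the adjacency matrix
    d = np.asarray(W.sum(axis=1)).ravel()    # node degrees
    L = np.diag(d) - W.toarray()             # unnormalised graph Laplacian
    f = np.asarray(predictions, dtype=float)
    return float(f @ L @ f)

# Usage example: predictions that follow the cluster structure score lower
# (smoother on the manifold) than predictions that cut across it.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(20, 2)), rng.normal(size=(20, 2)) + 5.0])
f_smooth = np.array([0.0] * 20 + [1.0] * 20)
f_noisy = rng.integers(0, 2, size=40).astype(float)
print(manifold_consistency(X, f_smooth, k=3), manifold_consistency(X, f_noisy, k=3))
```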
| Original language | English |
| --- | --- |
| Pages (from-to) | 1-18 |
| Number of pages | 18 |
| Journal | IEEE Transactions on Services Computing |
| DOIs | |
| Publication status | Published - 11 Oct 2022 |