Tensorizing restricted Boltzmann machine

Fujiao Ju, Yanfeng Sun, Junbin Gao, Michael Antolovich, Junliang Dong, Baocai Yin

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)


The restricted Boltzmann machine (RBM) is a well-known model for feature extraction and can be used as an initializer for neural networks. When applying the classic RBM to multidimensional data such as 2D/3D tensors, one needs to vectorize such high-order data. Vectorizing leads to the curse of dimensionality and the loss of valuable spatial information. Because the RBM is a model with fully connected layers, it requires a large amount of memory, making it difficult to apply to high-order data on low-end devices. In this article, to use the classic RBM on tensorial data directly, we propose a new tensorial RBM model parameterized in the tensor train format (TTRBM). In this model, both visible and hidden variables are in tensorial form and are connected by a parameter matrix in tensor train format. The biggest advantage of the proposed model is that TTRBM achieves performance comparable to the classic RBM with far fewer model parameters and a faster training process. To demonstrate the advantages of TTRBM, we conduct experiments on three real-world applications: face reconstruction, handwritten digit recognition, and image super-resolution.
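The parameter saving described in the abstract comes from storing the visible-to-hidden weight matrix as a chain of small tensor-train cores instead of one dense matrix. The following is a minimal NumPy sketch of that idea (not the authors' implementation; the mode sizes, TT-ranks, and variable names are illustrative). It builds a TT-format weight acting on a 4x4x4 visible tensor, applies it to produce a 3x3x3 hidden tensor, and compares the parameter counts.

```python
import numpy as np

# Illustrative shapes: visible tensor (4,4,4), hidden tensor (3,3,3).
vis_modes = [4, 4, 4]   # product = 64 visible units
hid_modes = [3, 3, 3]   # product = 27 hidden units
ranks = [1, 2, 2, 1]    # TT-ranks; boundary ranks are 1 by convention

# Each core G_k has shape (r_{k-1}, hid_mode_k, vis_mode_k, r_k).
rng = np.random.default_rng(0)
G1, G2, G3 = (
    rng.standard_normal((ranks[k], hid_modes[k], vis_modes[k], ranks[k + 1]))
    for k in range(3)
)

v = rng.standard_normal(vis_modes)  # a visible tensor

# TT matrix-vector product: contract the cores with the visible tensor.
# Output h[i,j,k] = sum over visible modes (b,c,d) and TT-rank indices.
h = np.einsum('pibq,qjcr,rkds,bcd->ijk', G1, G2, G3, v)

# Sanity check against the equivalent dense weight matrix (27 x 64).
W_dense = np.einsum('pibq,qjcr,rkds->ijkbcd', G1, G2, G3).reshape(27, 64)
assert np.allclose(h.reshape(-1), W_dense @ v.reshape(-1))

# Parameter counts: the TT cores hold far fewer numbers than dense W.
tt_params = G1.size + G2.size + G3.size   # 24 + 48 + 24 = 96
dense_params = W_dense.size               # 27 * 64 = 1728
print(tt_params, dense_params)
```

Even at this toy scale the TT parameterization stores 96 numbers in place of 1728; for realistic image tensors the gap grows multiplicatively with the number of modes, which is the memory advantage the abstract refers to.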

Original language: English
Article number: 30
Pages (from-to): 1-16
Number of pages: 16
Journal: ACM Transactions on Knowledge Discovery from Data
Issue number: 3
Publication status: Published - Jul 2019
