The appropriateness of k-Sparse autoencoders in sparse coding

Pushkar Bhatkoti, Manoranjan Paul

Research output: Book chapter/Published conference paper › Conference paper › peer-review

Abstract

Representations can be learned in several ways. Some approaches encourage sparsity, which improves performance on classification tasks. Sparse representations can be obtained with learning algorithms based on sparse coding, or by training neural networks with sparsity penalties. The k-sparse autoencoder (KSA) is a linear model, and its suitability for sparse coding forms the foundation of this paper. Most importantly, the model is fast to encode with and easy to train; these advantages make it well suited to large-scale problems. We validate this hypothesis on the publicly available Modified National Institute of Standards and Technology (MNIST) and NYU Object Recognition Benchmark (NORB) datasets, in both supervised and unsupervised learning tasks. The results show that traditional algorithms cannot resolve large-scale sparse-coding problems as well as the k-sparse autoencoder model.

Keywords: k-sparse autoencoder (KSA), sparsity, algorithms, sparse coding
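To make the mechanism behind the abstract concrete, the following NumPy sketch (an illustration only, not the authors' implementation; all names and shapes are hypothetical) shows one forward pass of a k-sparse autoencoder: a linear encoding whose activations are truncated to the k largest, followed by a linear decode from the resulting sparse code.

```python
import numpy as np

def k_sparse_forward(x, W, b_enc, b_dec, k):
    """One forward pass of a k-sparse autoencoder:
    linearly encode x, keep only the k largest hidden
    activations, and reconstruct from the sparse code
    (tied weights, i.e. the decoder uses W.T)."""
    h = W @ x + b_enc                    # linear encoder
    idx = np.argpartition(h, -k)[-k:]    # indices of the k largest activations
    z = np.zeros_like(h)
    z[idx] = h[idx]                      # sparse code: at most k nonzeros
    x_hat = W.T @ z + b_dec              # linear decoder
    return z, x_hat

# Toy usage with MNIST-sized inputs (784 pixels, 100 hidden units, k = 10)
rng = np.random.default_rng(0)
W = rng.standard_normal((100, 784)) * 0.01
x = rng.standard_normal(784)
z, x_hat = k_sparse_forward(x, W, np.zeros(100), np.zeros(784), k=10)
```

Because sparsity is enforced by the hard top-k selection rather than a penalty term, encoding is a single matrix multiply plus a selection, which is what makes the model fast at scale.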
Original language: English
Title of host publication: 2018 International Conference on Electrical, Electronics, Computers, Communication, Mechanical and Computing (EECCMC)
Publisher: IEEE Xplore
Number of pages: 8
Publication status: Published - 2018
Event: International Conference on Electrical, Electronics, Computers, Communication, Mechanical and Computing 2018: EECCMC 2018 - Priyadarshini Engineering College, Tamil Nadu, India
Duration: 28 Jan 2018 - 29 Jan 2018
Internet address: http://eeccmc.org/index.php

Conference

Conference: International Conference on Electrical, Electronics, Computers, Communication, Mechanical and Computing 2018
Country/Territory: India
City: Tamil Nadu
Period: 28/01/18 - 29/01/18
