Abstract
Representations can be learnt in different ways. When representations are learnt in a way that encourages sparsity, improved performance is often obtained on classification tasks. Sparse representations are typically produced either by sparse-coding algorithms or by training neural networks with sparsity penalties. The k-sparse autoencoder (KSA) is a linear model, and its suitability for sparse coding forms the foundation of this paper. Most importantly, the model is fast to encode and easy to train, which makes it well suited to large-scale problems. To validate this hypothesis, we used the publicly available Modified National Institute of Standards and Technology (MNIST) and NYU Object Recognition Benchmark (NORB) datasets in both supervised and unsupervised learning tasks. Our results show that, unlike the k-sparse autoencoder, traditional sparse-coding algorithms cannot scale to large problems.
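To make the mechanism concrete, the following is a minimal NumPy sketch of a k-sparse autoencoder in the spirit of the model described above: a linear encoder whose code is restricted to its k largest activations, a tied-weight linear decoder, and gradient steps on reconstruction error that touch only the active units. The dimensions, learning rate, and toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def k_sparse_forward(x, W, b, c, k):
    """Linear encoding, top-k sparsification, then tied-weight linear decoding."""
    z = W @ x + b                          # linear encoder
    keep = np.argsort(z)[-k:]              # indices of the k largest activations
    z_sparse = np.zeros_like(z)
    z_sparse[keep] = z[keep]               # the "k-sparse" step: zero all other units
    x_hat = W.T @ z_sparse + c             # tied-weight linear decoder
    return z_sparse, x_hat

def train_step(x, W, b, c, k, lr=0.01):
    """One gradient step on 0.5 * ||x_hat - x||^2.

    Gradients flow only through the k active code units, which is why
    each step is cheap compared with classical sparse-coding solvers.
    """
    z_sparse, x_hat = k_sparse_forward(x, W, b, c, k)
    err = x_hat - x                        # reconstruction residual
    active = z_sparse != 0                 # support of the sparse code
    dz = (W @ err) * active                # backprop through the encoder, active units only
    W -= lr * (np.outer(z_sparse, err) + np.outer(dz, x))  # decoder + encoder paths (tied W)
    b -= lr * dz
    c -= lr * err
    return 0.5 * float(err @ err)

# Toy usage (hypothetical sizes): 64-d inputs, 256 hidden units, code limited to k = 10.
n_in, n_hidden, k = 64, 256, 10
W = 0.01 * rng.standard_normal((n_hidden, n_in))
b, c = np.zeros(n_hidden), np.zeros(n_in)
for _ in range(100):
    x = rng.standard_normal(n_in)
    loss = train_step(x, W, b, c, k)
```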
Original language | English |
---|---|
Title of host publication | 2018 International Conference on Electrical, Electronics, Computers, Communication, Mechanical and Computing (EECCMC) |
Publisher | IEEE Xplore |
Number of pages | 8 |
Publication status | Published - 2018 |
Event | International Conference on Electrical, Electronics, Computers, Communication, Mechanical and Computing 2018: EECCMC 2018 - Priyadarshini Engineering College, Tamil Nadu, India |
Duration | 28 Jan 2018 → 29 Jan 2018 |
Conference
Conference | International Conference on Electrical, Electronics, Computers, Communication, Mechanical and Computing 2018 |
---|---|
Country/Territory | India |
City | Tamil Nadu |
Period | 28/01/18 → 29/01/18 |
Internet address | https://web.archive.org/web/20171231114532/http://eeccmc.org/index.php (Conference website), https://priyadarshini.net.in/pec_new/extra-images/conference/EECCMC%20REPORT.pdf (Conference report) |