
dc.contributor.author Gupta, Kavya
dc.contributor.advisor Majumdar, Angshul
dc.date.accessioned 2016-11-09T09:28:45Z
dc.date.available 2016-11-09T09:28:45Z
dc.date.issued 2016-11-09T09:28:45Z
dc.identifier.uri https://repository.iiitd.edu.in/jspui/handle/123456789/493
dc.description.abstract Autoencoders are neural networks trained to map their input to their output, i.e., to copy the input to the output as closely as possible. In doing so, the network learns useful properties of the data and a representation of it, which can help in classification, image recovery, or any application where a good feature set is essential. Autoencoders have been used for dimensionality reduction and feature learning, and lately they have also been explored for generative modeling. Copying the input to the output might seem trivial, but we are typically concerned not with the output of the decoder but with the intrinsic representation of the data, which should capture its salient features. To achieve this, different constraints can be enforced while the autoencoder learns. In this work we are interested in adding regularization terms to the basic Euclidean-distance data-fidelity term in order to obtain a more defined and structured feature set that further helps the task at hand. In other words, regularized autoencoders use a cost function that not only copies the input to the output but also encourages the network to have other properties, such as sparsity or rank deficiency, and thereby helps regulate the network's capacity to learn (see the sketch after this record). Feature learning has become the norm for most applications, and extracting good learned features has always been the ultimate aim of machine learning techniques. The main contribution of this work is to provide simple yet effective machine learning networks that can be widely used across a variety of applications and datasets. This work explores regularization constraints on undercomplete autoencoders in two parts. The first part models redundancy in the network through sparsity and rank deficiency: sparsity keeps the important connections while trimming the irrelevant ones, whereas rank deficiency encourages linear dependency among the connections; the ensuing formulations are solved with the Majorization-Minimization technique. The second part models similarity in features through rank deficiency within classes, which encourages linear dependency among features belonging to the same class; here the Split Bregman technique is employed to solve the proposed formulation. The performance of our methods is tested on two tasks: classification and denoising. Thorough experiments show that our proposed methods yield considerably better results for Gaussian denoising. For classification, we take a deep learning approach and form stacked autoencoders, which significantly improve the classification results, surpassing even highly tuned existing deep learning tools such as SDAEs and DBNs; the versatility of the network is tested on different datasets. Our methods are also computationally simpler than existing state-of-the-art tools, which have enormous training times and require huge amounts of data. en_US
dc.language.iso en_US en_US
dc.subject Autoencoders en_US
dc.subject Neural networks en_US
dc.subject Gaussian denoising en_US
dc.subject Deep Learning en_US
dc.title Regularized autoencoders en_US
dc.type Other en_US
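
Sketch of the formulations. The abstract describes cost functions that add regularizers to a Euclidean data-fidelity term; the exact formulations are given in the thesis itself. As a rough, non-authoritative reconstruction from the abstract alone, the two parts plausibly take forms like the following, where X is the data matrix, W and W' are the encoder and decoder weights, \phi is the activation, \lambda is a regularization weight, and X_c collects the samples of class c (all of these symbols are assumptions, not quoted from the thesis):

    \min_{W, W'} \|X - W'\,\phi(W X)\|_F^2 + \lambda \|W\|_1                      (sparsity, part one)
    \min_{W, W'} \|X - W'\,\phi(W X)\|_F^2 + \lambda \|W\|_*                      (rank deficiency, part one)
    \min_{W, W'} \|X - W'\,\phi(W X)\|_F^2 + \lambda \sum_c \|\phi(W X_c)\|_*     (within-class similarity, part two)

Here \|\cdot\|_1 is the entrywise l1 norm promoting sparsity, and \|\cdot\|_* is the nuclear norm promoting low rank; per the abstract, the first-part formulations are solved with Majorization-Minimization and the second-part formulation with Split Bregman.

As a minimal runnable illustration of the first formulation only, the Python sketch below trains a tiny undercomplete autoencoder with an l1 penalty on the encoder weights using plain gradient descent. The sigmoid activation, the hyperparameters, and the naive solver are all illustrative assumptions; the thesis solves its formulations with Majorization-Minimization, not gradient descent.

    import numpy as np

    # Minimal sketch: undercomplete autoencoder with an l1 (sparsity)
    # penalty on the encoder weights, trained by plain gradient descent.
    # All hyperparameters below are illustrative, not from the thesis.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((256, 20))      # 256 samples, 20-dim inputs
    n_hidden, lam, lr = 8, 1e-3, 1e-2       # undercomplete: 8 < 20

    W = 0.1 * rng.standard_normal((20, n_hidden))   # encoder weights
    Wd = 0.1 * rng.standard_normal((n_hidden, 20))  # decoder weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(2000):
        H = sigmoid(X @ W)                  # hidden representation
        Xhat = H @ Wd                       # linear reconstruction
        R = Xhat - X                        # residual of the fidelity term
        # Gradients of ||X - Xhat||_F^2, plus the l1 subgradient on W.
        gWd = H.T @ R / len(X)
        gW = X.T @ ((R @ Wd.T) * H * (1.0 - H)) / len(X) + lam * np.sign(W)
        W -= lr * gW
        Wd -= lr * gWd

    print("reconstruction MSE:", float(np.mean(R ** 2)))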

