Denoising autoencoders
I have been interested in autoencoders as a way to initialize deep neural networks. They seem to be an easier way to do that (compared to RBMs). One way of allowing the hidden layer to be larger than the input (an overcomplete representation) without the network simply learning the identity function is to use denoising autoencoders. Basically, a denoising autoencoder tries to reconstruct the clean input from a corrupted version of it. In the GitHub repository below I wrote a Python notebook to do a simple test.
https://github.com/erochassa/basicautoencoder
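To give a rough idea of the training loop (this is a minimal NumPy sketch of the technique, not the code from the notebook above; the data, noise level, layer sizes, and learning rate are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 200 binary vectors of dimension 20.
X = (rng.random((200, 20)) > 0.5).astype(float)

n_visible, n_hidden = X.shape[1], 50   # overcomplete: hidden > visible
W = rng.normal(0, 0.1, (n_visible, n_hidden))   # tied weights
b_h = np.zeros(n_hidden)
b_v = np.zeros(n_visible)
lr, noise = 0.1, 0.3                   # learning rate, corruption level

for epoch in range(50):
    for x in X:
        # Corrupt: randomly zero out a fraction `noise` of the inputs.
        x_tilde = x * (rng.random(n_visible) > noise)
        # Encode the corrupted input, decode with the tied weights.
        h = sigmoid(x_tilde @ W + b_h)
        x_hat = sigmoid(h @ W.T + b_v)
        # Backprop the squared error against the *clean* input x.
        d_out = (x_hat - x) * x_hat * (1 - x_hat)
        d_hid = (d_out @ W) * h * (1 - h)
        W -= lr * (np.outer(x_tilde, d_hid) + np.outer(d_out, h))
        b_v -= lr * d_out
        b_h -= lr * d_hid
```

The key detail is that the reconstruction error is measured against the clean input even though the network only ever sees the corrupted one, which forces the hidden layer to capture structure in the data rather than copying it.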
Further reading:
http://machinelearning.org/archive/icml2008/papers/592.pdf
and algorithm 3 in:
http://www.iro.umontreal.ca/~lisa/pointeurs/BengioNips2006All.pdf