Stacked Denoising Autoencoding System
A Stacked Denoising Autoencoding System is a feedforward network training system that implements a stacked denoising autoencoding algorithm to solve a stacked denoising autoencoding task.
- Context:
- It can (typically) be based on an (unstacked) Denoising Autoencoding System (a minimal sketch of this building block follows the list below).
- Example(s):
- Counter-Example(s):
- See: Learned Neural Network.
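The following is a minimal sketch of the (unstacked) denoising autoencoder building block, written in PyTorch rather than the library quoted in the References; the class name, layer sizes, corruption level, and sigmoid activations are illustrative assumptions, not a definitive implementation:

import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """One (unstacked) denoising autoencoder: corrupt the input,
    then reconstruct the clean input from the corrupted version."""
    def __init__(self, n_visible, n_hidden, corruption=0.3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_visible, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_visible), nn.Sigmoid())
        self.corruption = corruption

    def forward(self, x):
        # Masking corruption: randomly zero out a fraction of the input units.
        mask = (torch.rand_like(x) > self.corruption).float()
        hidden = self.encoder(x * mask)
        return self.decoder(hidden)

Training minimizes a reconstruction loss (e.g. nn.BCELoss) between forward(x) and the clean input x; a stacked system chains several such layers, as sketched under the reference below.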
References
2016
- http://deep-learning-tensorflow.readthedocs.io/en/latest/#stacked-denoising-autoencoder
- QUOTE: Stack of Denoising Autoencoders used to build a Deep Network for supervised learning.
Cmd example usage:
python command_line/run_stacked_autoencoder_supervised.py --dae_layers 1024,784,512,256 --dae_batch_size 25 --dae_num_epochs 5 --verbose 1 --dae_corr_type masking --dae_corr_frac 0.0 --finetune_num_epochs 25 --finetune_batch_size 32 --finetune_opt momentum --momentum 0.9 --finetune_learning_rate 0.05 --dae_enc_act_func sigmoid --dae_dec_act_func sigmoid --dae_loss_func cross_entropy --finetune_act_func relu --finetune_loss_func softmax_cross_entropy --dropout 0.7
- This command trains a Stack of Denoising Autoencoders 784 <-> 1024, 1024 <-> 784, 784 <-> 512, 512 <-> 256, and then performs supervised finetuning with ReLU units. This basic command trains the model on the training set (MNIST in this case) and prints the accuracy on the test set. If, in addition to the accuracy, you also want the predicted labels on the test set, just add the option --save_predictions /path/to/file.npy. You can also get the output of each layer on the test set. This can be useful to analyze the learned model and to visualize the learned features. This can be done by adding the --save_layers_output /path/to/file option. The files will be saved in the form file-layer-1.npy, ..., file-layer-n.npy.
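- The quoted command describes greedy layer-wise pretraining followed by supervised fine-tuning. Below is a minimal PyTorch sketch of that pipeline (not the quoted library's code), reusing the DenoisingAutoencoder class sketched above; the layer sizes, epoch counts, learning rate, and momentum mirror the flags in the example command, while the data loader and all other details are illustrative assumptions (inputs are assumed scaled to [0, 1] for the cross-entropy reconstruction loss):

import torch
import torch.nn as nn
import torch.optim as optim

def pretrain_layer(dae, loader, transform, epochs=5, lr=0.05):
    # Train one denoising autoencoder to reconstruct its own (clean) input,
    # where that input is the data pushed through the encoders trained so far.
    opt = optim.SGD(dae.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.BCELoss()  # cross-entropy reconstruction loss, as in the quoted command
    for _ in range(epochs):  # --dae_num_epochs 5
        for x, _ in loader:
            x = x.view(x.size(0), -1)
            with torch.no_grad():
                x = transform(x)  # frozen, already-pretrained encoders
            loss = loss_fn(dae(x), x)
            opt.zero_grad()
            loss.backward()
            opt.step()

def build_and_finetune(loader, sizes=(784, 1024, 784, 512, 256), n_classes=10):
    # 1) Greedy layer-wise pretraining: one denoising autoencoder per
    #    consecutive pair of layer sizes (784<->1024, 1024<->784, ...).
    encoders = []

    def transform(x):
        for enc in encoders:
            x = enc(x)
        return x

    for n_in, n_out in zip(sizes, sizes[1:]):
        dae = DenoisingAutoencoder(n_in, n_out, corruption=0.3)
        pretrain_layer(dae, loader, transform)
        encoders.append(dae.encoder)  # keep only the encoder half

    # 2) Supervised fine-tuning: stack the pretrained encoders, add a
    #    classifier on top, and train the whole network end to end.
    #    (The quoted command switches to ReLU units for this phase; this
    #    sketch keeps the pretrained sigmoid encoders for simplicity.)
    model = nn.Sequential(*encoders, nn.Linear(sizes[-1], n_classes))
    opt = optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()  # softmax cross-entropy, as in the quoted command
    for _ in range(25):  # --finetune_num_epochs 25
        for x, y in loader:
            loss = loss_fn(model(x.view(x.size(0), -1)), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

Note the design choice in transform: because the encoders list grows as each layer finishes pretraining, every new autoencoder is automatically trained on the representation produced by the stack built so far, which is what makes the procedure greedy and layer-wise.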