
Tutorial: Your Second Model (Combining Layers)

Here you will learn to stack existing models and layers to form a stacked denoising autoencoder for MNIST classification.

If you followed [Tutorial: Flexibility and Modularity](doc:tutorial-making-your-model-more-flexible), you know how to create modular and flexible models that handle hooks between their inputs, hidden representations, and parameters.

In this tutorial, you will put those hooks into action: you will combine denoising autoencoders to create a stacked denoising autoencoder (sDA), and then add a classification layer on top to build a supervised MNIST classifier. (I gave in; this is the closest to a multilayer perceptron you will get in these tutorials.) Along the way, you will also learn how unsupervised pre-training followed by supervised fine-tuning can produce a better model than the plain MLP! Let's dig in.

#Stacking layers without creating a new model

So you just want to run an experiment by combining some layers? No problem. This example covers the basics of using hooks to chain layers together in sequence.

For each additional model or layer you add, you can give it an `inputs_hook` built from the outputs of the previous model (by calling `get_outputs()`), or a `hiddens_hook` built from the hidden representation computed by the previous model (by calling `get_hiddens()`). Our optimization algorithm takes care of the rest: it constructs the fully-connected computation graph, so that training the final layer trains all of the model's parameters. Here is some code to walk through stacking denoising autoencoders:

[ THIS TUTORIAL IS UNDER CONSTRUCTION ]
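Until the full walkthrough is finished, here is a minimal NumPy sketch of the hook idea described above. The class and method names (`inputs_hook`, `get_outputs`) mirror the tutorial's terminology, but this is an illustrative toy, not the library's actual API: each layer either creates its own input or is chained to the previous layer's outputs through a hook.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyLayer:
    """Toy dense layer standing in for a denoising autoencoder's encoder.

    Illustrative only -- shows the hook-chaining pattern, not the real API.
    """
    def __init__(self, input_size, hidden_size, inputs_hook=None):
        # inputs_hook is a (size, outputs) pair from a previous layer,
        # which chains this layer onto the previous one's computation.
        if inputs_hook is not None:
            input_size, self.inputs = inputs_hook
        else:
            self.inputs = None
        self.W = rng.standard_normal((input_size, hidden_size)) * 0.01
        self.b = np.zeros(hidden_size)
        self.hidden_size = hidden_size

    def get_outputs(self, x=None):
        # Use hooked inputs when no fresh input is supplied.
        x = self.inputs if x is None else x
        return np.tanh(x @ self.W + self.b)

# Stack three layers: each consumes the previous layer's outputs via a hook.
x = rng.standard_normal((5, 784))  # a fake batch of 5 flattened MNIST images
layer1 = ToyLayer(784, 512)
layer2 = ToyLayer(None, 256, inputs_hook=(512, layer1.get_outputs(x)))
layer3 = ToyLayer(None, 128, inputs_hook=(256, layer2.get_outputs()))
print(layer3.get_outputs().shape)  # (5, 128)
```

The key design point is that each layer only needs to know the size and symbolic source of its input; by threading `get_outputs()` from one layer into the next layer's `inputs_hook`, the whole stack becomes one connected computation.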