{"_id":"552ada773f29c30d00619cbc","category":{"_id":"550556a4728deb23005ec0f0","__v":6,"pages":["550556cdc16b21170080c646","55055707b9a7a0190036697c","5507638ffa89210d00c8c987","5507ebef6ac1620d001b9405","5508995e0f146f3500b031ae","552ada773f29c30d00619cbc"],"version":"55053eeb84ad8c0d005b0a62","project":"5503ea178c5e913700362c70","sync":{"url":"","isSync":false},"reference":false,"createdAt":"2015-03-15T09:53:40.258Z","from_sync":false,"order":1,"slug":"tutorials","title":"Tutorials"},"githubsync":"","project":"5503ea178c5e913700362c70","user":"5503e897e508a017002013bd","version":{"_id":"55053eeb84ad8c0d005b0a62","__v":2,"forked_from":"5503ea188c5e913700362c73","project":"5503ea178c5e913700362c70","createdAt":"2015-03-15T08:12:27.786Z","releaseDate":"2015-03-15T08:12:27.786Z","categories":["55053eec84ad8c0d005b0a63","550556a4728deb23005ec0f0"],"is_deprecated":false,"is_hidden":false,"is_beta":true,"is_stable":false,"codename":"","version_clean":"0.0.5","version":"0.0.5"},"__v":4,"metadata":{"title":"","description":"","image":[]},"updates":[],"next":{"pages":[],"description":""},"createdAt":"2015-04-12T20:49:59.292Z","link_external":false,"link_url":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"auth":"required","params":[],"url":""},"isReference":false,"order":999,"body":"[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"MNIST: The \\\"hello world\\\" of deep learning\"\n}\n[/block]\nMNIST is a standard academic dataset of binary images of handwritten digits. In this tutorial, we will see how to use a few methods to quickly set up models to classify the images into their 0-9 digit labels:\n* The Prototype container model to quickly create a feedforward [multilayer perceptron model](http://deeplearning.net/tutorial/mlp.html) from basic layers.\n* Transform this Prototype into a Model of our own.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/DhKHDNzS0OJhH2uiMOkl_mnist.png\",\n        \"mnist.png\",\n        \"240\",\n        \"240\",\n        \"#bcbcbc\",\n        \"\"\n      ],\n      \"caption\": \"Some MNIST images.\"\n    }\n  ]\n}\n[/block]\n\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Prototype: Quickly create models by adding layers (similar to Torch)\"\n}\n[/block]\nThe `opendeep.models.container.Prototype` class is a container for quickly assembling multiple layers together into a model. It is essentially a flexible list of Model objects, where you can add a single layer (model) at a time, or lists of models linked in complex ways.\n\nTo classify MNIST images with a multilayer perceptron, you only need the inputs, a hidden layer, and the output classification layer. Let's dive in and create a Prototype with these layers!","excerpt":"","slug":"tutorial-classifying-handwritten-mnist-images","type":"basic","title":"Tutorial: Classifying Handwritten MNIST Images"}

Tutorial: Classifying Handwritten MNIST Images


[block:api-header] { "type": "basic", "title": "MNIST: The \"hello world\" of deep learning" } [/block] MNIST is a standard academic dataset of binary images of handwritten digits. In this tutorial, we will see how to use a few methods to quickly set up models to classify the images into their 0-9 digit labels: * The Prototype container model to quickly create a feedforward [multilayer perceptron model](http://deeplearning.net/tutorial/mlp.html) from basic layers. * Transform this Prototype into a Model of our own. [block:image] { "images": [ { "image": [ "https://files.readme.io/DhKHDNzS0OJhH2uiMOkl_mnist.png", "mnist.png", "240", "240", "#bcbcbc", "" ], "caption": "Some MNIST images." } ] } [/block] [block:api-header] { "type": "basic", "title": "Prototype: Quickly create models by adding layers (similar to Torch)" } [/block] The `opendeep.models.container.Prototype` class is a container for quickly assembling multiple layers together into a model. It is essentially a flexible list of Model objects, where you can add a single layer (model) at a time, or lists of models linked in complex ways. To classify MNIST images with a multilayer perceptron, you only need the inputs, a hidden layer, and the output classification layer. Let's dive in and create a Prototype with these layers!