{"_id":"552ca79468b9a0210071848f","version":{"_id":"552ca79368b9a02100718485","project":"5503ea178c5e913700362c70","__v":2,"forked_from":"55053eeb84ad8c0d005b0a62","createdAt":"2015-04-14T05:37:23.319Z","releaseDate":"2015-04-14T05:37:23.319Z","categories":["552ca79368b9a02100718486","552ca79368b9a02100718487","55313cdbc68f493900aebb90"],"is_deprecated":false,"is_hidden":false,"is_beta":true,"is_stable":false,"codename":"","version_clean":"0.0.6","version":"0.0.6"},"__v":3,"githubsync":"","project":"5503ea178c5e913700362c70","user":"5503e897e508a017002013bd","category":{"_id":"552ca79368b9a02100718487","version":"552ca79368b9a02100718485","pages":["552ca79468b9a0210071848b","552ca79468b9a0210071848c","552ca79468b9a0210071848d","552ca79468b9a0210071848e","552ca79468b9a0210071848f"],"project":"5503ea178c5e913700362c70","__v":1,"sync":{"url":"","isSync":false},"reference":false,"createdAt":"2015-03-15T09:53:40.258Z","from_sync":false,"order":1,"slug":"tutorials","title":"Tutorials"},"metadata":{"title":"","description":"","image":[]},"updates":[],"next":{"pages":[],"description":""},"createdAt":"2015-04-12T20:49:59.292Z","link_external":false,"link_url":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"auth":"required","params":[],"url":""},"isReference":false,"order":4,"body":"[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"MNIST: The \\\"hello world\\\" of deep learning\"\n}\n[/block]\nMNIST is a standard academic dataset of binary images of handwritten digits. 
In this tutorial, we will see how to use a few methods to quickly set up models to classify the images into their 0-9 digit labels:\n* The Prototype container model to quickly create a feedforward [multilayer perceptron model](http://deeplearning.net/tutorial/mlp.html) from basic layers.\n* Transform this Prototype into a Model of our own.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/DhKHDNzS0OJhH2uiMOkl_mnist.png\",\n        \"mnist.png\",\n        \"240\",\n        \"240\",\n        \"#bcbcbc\",\n        \"\"\n      ],\n      \"caption\": \"Some MNIST images.\"\n    }\n  ]\n}\n[/block]\n\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Prototype: Quickly create models by adding layers (similar to Torch)\"\n}\n[/block]\nThe `opendeep.models.container.Prototype` class is a container for quickly assembling multiple layers together into a model. It is essentially a flexible list of Model objects, where you can add a single layer (model) at a time, or lists of models linked in complex ways.\n\nTo classify MNIST images with a multilayer perceptron, you only need the inputs, a hidden layer, and the output classification layer. 
Let's dive in and create a Prototype with these layers!\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"# imports\\nfrom opendeep.models.container import Prototype\\nfrom opendeep.models.single_layer.basic import BasicLayer, SoftmaxLayer\\nfrom opendeep.optimization.adadelta import AdaDelta\\nfrom opendeep.data.standard_datasets.image.mnist import MNIST\\n\\n# create the MLP\\nmlp = Prototype()\\nmlp.add(BasicLayer(input_size=28*28, output_size=1000, activation='rectifier', noise='dropout'))\\nmlp.add(SoftmaxLayer(output_size=10))\\n\\n# train the model with AdaDelta\\ntrainer = AdaDelta(model=mlp, dataset=MNIST())\\ntrainer.train()\",\n      \"language\": \"python\"\n    }\n  ]\n}\n[/block]","excerpt":"Using the Prototype container to quickly create models.","slug":"tutorial-classifying-handwritten-mnist-images","type":"basic","title":"Classifying Handwritten MNIST Images"}

# Classifying Handwritten MNIST Images

Using the Prototype container to quickly create models.

## MNIST: The "hello world" of deep learning

MNIST is a standard academic dataset of binary images of handwritten digits. In this tutorial, we will use a few quick methods to set up models that classify the images into their 0-9 digit labels:

* Use the Prototype container model to quickly create a feedforward [multilayer perceptron model](http://deeplearning.net/tutorial/mlp.html) from basic layers.
* Transform this Prototype into a Model of our own.

![Some MNIST images.](https://files.readme.io/DhKHDNzS0OJhH2uiMOkl_mnist.png)

*Some MNIST images.*

## Prototype: Quickly create models by adding layers (similar to Torch)

The `opendeep.models.container.Prototype` class is a container for quickly assembling multiple layers into a single model. It is essentially a flexible list of Model objects: you can add one layer (model) at a time, or lists of models linked in complex ways.

To classify MNIST images with a multilayer perceptron, you only need the inputs, one hidden layer, and an output classification layer. Let's dive in and create a Prototype with these layers!

```python
# imports
from opendeep.models.container import Prototype
from opendeep.models.single_layer.basic import BasicLayer, SoftmaxLayer
from opendeep.optimization.adadelta import AdaDelta
from opendeep.data.standard_datasets.image.mnist import MNIST

# create the MLP: a 1000-unit rectifier hidden layer with dropout,
# followed by a 10-way softmax output layer
mlp = Prototype()
mlp.add(BasicLayer(input_size=28*28, output_size=1000, activation='rectifier', noise='dropout'))
mlp.add(SoftmaxLayer(output_size=10))

# train the model on MNIST with the AdaDelta optimizer
trainer = AdaDelta(model=mlp, dataset=MNIST())
trainer.train()
```
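To make the architecture concrete, here is a minimal NumPy sketch (not the OpenDeep API) of the forward pass this two-layer MLP computes: a 784-input rectifier hidden layer followed by a 10-way softmax. The weights below are random placeholders standing in for learned parameters, and dropout is omitted since it only applies during training.

```python
import numpy as np

rng = np.random.default_rng(0)

# placeholder parameters with the same shapes as the layers above
W1 = rng.normal(scale=0.01, size=(28 * 28, 1000))  # hidden weights
b1 = np.zeros(1000)                                # hidden bias
W2 = rng.normal(scale=0.01, size=(1000, 10))       # output weights
b2 = np.zeros(10)                                  # output bias

def forward(x):
    """Compute class probabilities for a batch of flattened 28x28 images."""
    h = np.maximum(0, x @ W1 + b1)                  # rectifier (ReLU) hidden layer
    logits = h @ W2 + b2                            # linear output layer
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=1, keepdims=True)

batch = rng.random((5, 28 * 28))                    # five fake "images"
probs = forward(batch)
print(probs.shape)                                  # (5, 10): one probability row per image
```

Each row of `probs` sums to 1, so the predicted digit for an image is simply the `argmax` of its row.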