{"_id":"5513142a08fb5d17006dbd57","tags":[],"user":{"_id":"5503e897e508a017002013bd","username":"","name":"Markus Beissinger"},"__v":0,"initVersion":{"_id":"55053eeb84ad8c0d005b0a62","version":"0.0.5"},"project":"5503ea178c5e913700362c70","createdAt":"2015-03-25T20:01:46.059Z","changelog":[],"body":"Lots of progress has been made since the initial alpha. Today, I would like to announce the new Prototype class (opendeep.models.container.Prototype) - a container to quickly assemble layers into a model. This is similar to Sequential in Torch, but provides all of the power that the Model class gives for modularity!\n\nYou can create a Prototype in two main ways:\n\n1) Adding layers (or lists of layers) with the .add() method\n\n2) Initializing the Prototype with a list of its layers\n\nThese two methods are demonstrated below:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"# Add one layer at a time\\nfrom opendeep.models.container import Prototype\\nfrom opendeep.models.single_layer.basic import BasicLayer, SoftmaxLayer\\n\\n# create the Prototype\\nmlp = Prototype()\\n# add a fully-connected relu layer\\nmlp.add(BasicLayer(input_size=784, output_size=1000, activation='rectifier'))\\n# add a softmax classification layer\\nmlp.add(SoftmaxLayer(inputs_hook=(1000, mlp[-1].get_outputs()), output_size=10))\\n# that's it!\",\n      \"language\": \"python\"\n    }\n  ]\n}\n[/block]\n\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"# Add all the layers during initialization\\nfrom opendeep.models.container import Prototype\\nfrom opendeep.models.single_layer.basic import BasicLayer, SoftmaxLayer\\n\\n# define the layers\\nrelu = BasicLayer(input_size=784, output_size=1000, activation='rectifier')\\nsoftmax = SoftmaxLayer(inputs_hook=(1000, relu.get_outputs()), output_size=10)\\n# create the Prototype\\nmlp = Prototype(layers=[relu, softmax])\\n# that's it!\",\n      \"language\": \"python\"\n    }\n  ]\n}\n[/block]\n\nFinally, here is a quick 'hello world' example of assembling a basic multilayer perceptron (MLP) over the standard MNIST handwritten digit dataset (you can see more advanced uses in the Tutorials section of the developer hub):\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"# standard libraries\\nimport logging\\n# third party libraries\\nfrom opendeep.log.logger import config_root_logger\\nfrom opendeep.models.container import Prototype\\nfrom opendeep.models.single_layer.basic import BasicLayer, SoftmaxLayer\\nfrom opendeep.optimization.adadelta import AdaDelta\\nfrom opendeep.data.standard_datasets.image.mnist import MNIST\\nfrom opendeep.data.dataset import TEST\\n\\n# grab a log to output useful info, and configure our environment\\nlog = logging.getLogger(__name__)\\nconfig_root_logger()\\n\\n# define the model layers for the MNIST dataset\\n# create a fully-connected rectifier layer from the input to 1000 outputs\\nrelu_layer1 = BasicLayer(input_size=784, output_size=1000, activation='rectifier')\\n# create a second fully-connected rectifier layer from the 1000 units to another 1000.\\nrelu_layer2 = BasicLayer(inputs_hook=(1000, relu_layer1.get_outputs()), output_size=1000, activation='rectifier')\\n# create our output classification layer (softmax)\\nclass_layer3 = SoftmaxLayer(inputs_hook=(1000, relu_layer2.get_outputs()), output_size=10)\\n# add the layers as a Prototype\\nmlp = Prototype(layers=[relu_layer1, relu_layer2, class_layer3])\\n\\n# grab the dataset\\nmnist = MNIST()\\n# define the optimizer (adadelta is a good default)\\noptimizer = 
AdaDelta(model=mlp, dataset=mnist, n_epoch=10)\\noptimizer.train()\\n\\ntest_data = mnist.getDataByIndices(indices=range(25), subset=TEST)\\n# use the predict function!\\nyhat = mlp.predict(test_data)\\nlog.info('-------')\\nlog.info(str(yhat))\\nlog.info(str(mnist.getLabelsByIndices(indices=range(25), subset=TEST)))\",\n      \"language\": \"python\"\n    }\n  ]\n}\n[/block]","slug":"announcing-quick-experimentation-with-prototype","title":"Announcing quick experimentation with Prototype!"}

Announcing quick experimentation with Prototype!


Lots of progress has been made since the initial alpha. Today, I would like to announce the new Prototype class (`opendeep.models.container.Prototype`): a container for quickly assembling layers into a model. It is similar to the Sequential container in Torch, but keeps all of the power for modularity that the Model class provides!

You can create a Prototype in two main ways:

1) Adding layers (or lists of layers) with the `.add()` method

2) Initializing the Prototype with a list of its layers

These two methods are demonstrated below:

```python
# Add one layer at a time
from opendeep.models.container import Prototype
from opendeep.models.single_layer.basic import BasicLayer, SoftmaxLayer

# create the Prototype container
mlp = Prototype()
# add a fully-connected relu layer taking the 784-dimensional input to 1000 hidden units
mlp.add(BasicLayer(input_size=784, output_size=1000, activation='rectifier'))
# add a softmax classification layer; mlp[-1] indexes the last layer added, so we hook its outputs
mlp.add(SoftmaxLayer(inputs_hook=(1000, mlp[-1].get_outputs()), output_size=10))
# that's it!
```

```python
# Add all the layers during initialization
from opendeep.models.container import Prototype
from opendeep.models.single_layer.basic import BasicLayer, SoftmaxLayer

# define the layers, hooking each layer's inputs to the previous layer's outputs
relu = BasicLayer(input_size=784, output_size=1000, activation='rectifier')
softmax = SoftmaxLayer(inputs_hook=(1000, relu.get_outputs()), output_size=10)
# create the Prototype from the list of layers
mlp = Prototype(layers=[relu, softmax])
# that's it!
```

Finally, here is a quick "hello world" example of assembling a basic multilayer perceptron (MLP) over the standard MNIST handwritten digit dataset (you can see more advanced uses in the Tutorials section of the developer hub):

```python
# standard libraries
import logging
# third party libraries
from opendeep.log.logger import config_root_logger
from opendeep.models.container import Prototype
from opendeep.models.single_layer.basic import BasicLayer, SoftmaxLayer
from opendeep.optimization.adadelta import AdaDelta
from opendeep.data.standard_datasets.image.mnist import MNIST
from opendeep.data.dataset import TEST

# grab a log to output useful info, and configure our environment
log = logging.getLogger(__name__)
config_root_logger()

# define the model layers for the MNIST dataset
# create a fully-connected rectifier layer from the 784-dimensional input to 1000 hidden units
relu_layer1 = BasicLayer(input_size=784, output_size=1000, activation='rectifier')
# create a second fully-connected rectifier layer from those 1000 units to another 1000
relu_layer2 = BasicLayer(inputs_hook=(1000, relu_layer1.get_outputs()), output_size=1000, activation='rectifier')
# create our output classification layer (softmax over the 10 digit classes)
class_layer3 = SoftmaxLayer(inputs_hook=(1000, relu_layer2.get_outputs()), output_size=10)
# wrap the layers in a Prototype
mlp = Prototype(layers=[relu_layer1, relu_layer2, class_layer3])

# grab the dataset
mnist = MNIST()
# define the optimizer (adadelta is a good default) and train the model
optimizer = AdaDelta(model=mlp, dataset=mnist, n_epoch=10)
optimizer.train()

# grab the first 25 examples from the test set
test_data = mnist.getDataByIndices(indices=range(25), subset=TEST)
# use the predict function!
yhat = mlp.predict(test_data)
log.info('-------')
log.info(str(yhat))
log.info(str(mnist.getLabelsByIndices(indices=range(25), subset=TEST)))
```
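If you want a quick sanity check beyond eyeballing the logged predictions, a few lines of numpy will score them against the true labels. This is a minimal sketch continuing from the example above, not part of the OpenDeep API: it assumes `yhat` and the labels come back as integer class indices per example (if `predict()` returns per-class probabilities instead, take the argmax along axis 1 first).

```python
# Minimal scoring sketch (assumption: yhat and labels are integer class indices;
# if yhat holds per-class probabilities, use np.argmax(yhat, axis=1) first)
import numpy as np

labels = np.asarray(mnist.getLabelsByIndices(indices=range(25), subset=TEST))
# flatten both sides so the elementwise comparison lines up per example
accuracy = np.mean(np.asarray(yhat).flatten() == labels.flatten())
log.info('accuracy on the first 25 test examples: %s' % accuracy)
```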