{"__v":8,"_id":"5654d9907b89070d00f96354","category":{"project":"5503ea178c5e913700362c70","version":"563fc7631594380d009c1a5c","_id":"5654ff257b89070d00f96386","pages":[],"__v":0,"sync":{"url":"","isSync":false},"reference":false,"createdAt":"2015-11-25T00:21:57.507Z","from_sync":false,"order":1,"slug":"core-concepts","title":"Core Concepts"},"project":"5503ea178c5e913700362c70","user":"5503e897e508a017002013bd","version":{"__v":2,"_id":"563fc7631594380d009c1a5c","project":"5503ea178c5e913700362c70","createdAt":"2015-11-08T22:06:27.279Z","releaseDate":"2015-11-08T22:06:27.278Z","categories":["563fc7641594380d009c1a5d","563fc7641594380d009c1a5e","563fc7641594380d009c1a5f","5654ff257b89070d00f96386"],"is_deprecated":false,"is_hidden":false,"is_beta":true,"is_stable":true,"codename":"","version_clean":"0.0.9","version":"0.0.9"},"updates":[],"next":{"pages":[],"description":""},"createdAt":"2015-11-24T21:41:36.184Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":0,"body":"The first component necessary for building a deep learning model. I recommend exploring your input data and targets first to get an idea for what model architecture would work best.\n\nDatasets are built in a streaming, functional manner. Data needs to be in an iterable format so minibatching can work with the optimizer. We provide simple functional streaming capabilities to modify data in realtime in the [opendeep.data.stream](http://opendeep.readthedocs.org/en/latest/opendeep.data.stream.html) package (while this functional approach is good for quick experimentation, it would be faster to preprocess your whole dataset first to avoid doing these calculations on the fly).\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"opendeep.data\"\n}\n[/block]\nIn this package you will find the classes to hold your datasets:\n\n## Dataset\nThe Dataset object is the superclass object for all other built-in dataset types. You can use the Dataset class to wrap your own iterable streams of data into something usable by the optimizer. See the [dataset module documentation](http://opendeep.readthedocs.org/en/latest/opendeep.data.html#module-opendeep.data.dataset) for the initialization parameters and attributes.\n\nIn essence, there are six attributes in the Dataset object: train inputs, train targets, valid inputs, valid targets, test inputs, and test targets. Inputs are fed into the model, while targets (if applicable) are passed to the loss function. Train, valid, and test correspond to splits in the data.\n\n## Other wrappers\nSee the subclass implementations for the Dataset object to wrap text, images, or in-memory arrays. [Documentation here](http://opendeep.readthedocs.org/en/latest/opendeep.data.html).","excerpt":"","slug":"basics-dataset","type":"basic","title":"Dataset"}
## opendeep.data

In this package you will find the classes that hold your datasets:

## Dataset

The Dataset object is the superclass for all other built-in dataset types. You can use the Dataset class to wrap your own iterable streams of data into something usable by the optimizer. See the [dataset module documentation](http://opendeep.readthedocs.org/en/latest/opendeep.data.html#module-opendeep.data.dataset) for the initialization parameters and attributes.

In essence, a Dataset has six attributes: train inputs, train targets, valid inputs, valid targets, test inputs, and test targets. Inputs are fed into the model, while targets (if applicable) are passed to the loss function. Train, valid, and test correspond to splits in the data. A sketch of constructing a Dataset from in-memory arrays appears at the end of this page.

## Other wrappers

See the subclass implementations of the Dataset object to wrap text, images, or in-memory arrays. [Documentation here](http://opendeep.readthedocs.org/en/latest/opendeep.data.html).
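Putting it together, here is a minimal sketch of wrapping in-memory NumPy arrays with the base Dataset class. The keyword argument names (train_inputs, train_targets, valid_inputs, valid_targets) simply mirror the six attributes described earlier and are an assumption about the constructor; check the module documentation above for the exact signature and import path.

```python
import numpy as np

# Assumption: Dataset is importable from opendeep.data; verify the import
# path in the linked module documentation.
from opendeep.data import Dataset

# Toy data: 100 training and 20 validation examples with 784 features each.
train_x = np.random.uniform(size=(100, 784)).astype('float32')
train_y = np.random.randint(0, 10, size=(100,)).astype('int32')
valid_x = np.random.uniform(size=(20, 784)).astype('float32')
valid_y = np.random.randint(0, 10, size=(20,)).astype('int32')

# Keyword names mirror the six attributes above (test inputs/targets are
# simply omitted here); they are an assumption, not verified against the API.
data = Dataset(train_inputs=train_x, train_targets=train_y,
               valid_inputs=valid_x, valid_targets=valid_y)
```

Once wrapped, the same object can be handed to an optimizer, which iterates over the splits to produce minibatches.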