Learning of invariant object recognition in hierarchical neural networks using temporal continuity
Abstract
Much progress in the field of invariant object recognition has been made in recent years using so-called deep neural networks, which comprise several trainable layers that can learn patterns of increasing complexity. This architectural feature can already be found in older neural models such as the Neocognitron and HMAX, but also in newer ones such as Convolutional Nets. Additionally, researchers have emphasized the importance of temporal continuity in input data and devised learning rules that exploit it (e.g. the trace rule by Földiák, used by Rolls in VisNet). Finally, Jeff Hawkins collected many of these ideas about the functioning of the neocortex into a coherent framework and proposed three basic principles of neocortical computation (later implemented in HTM):
- Learning of temporal sequences for creating invariance to transformations contained in the training data.
- Learning in a hierarchical structure, in which lower-level knowledge can be reused in higher-level contexts, thereby making memory usage efficient.
- Prediction of future signals, via feedback, for disambiguating noisy input.
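The trace rule mentioned above is one concrete way of exploiting temporal continuity. As a rough, non-authoritative illustration (not the exact formulation used in this paper, in VisNet, or in HTM), the following Python sketch shows the core idea: the Hebbian weight change is gated by a decaying trace of recent postsynaptic activity, so that different views of an object presented close together in time strengthen the same unit. The learning rate, trace decay factor, and the renormalization step are assumptions made for the sketch.

```python
import numpy as np

def trace_rule_update(w, x, y, y_trace_prev, eta=0.05, delta=0.5):
    """One step of a Foldiak-style trace learning rule (illustrative sketch).

    w            -- weight vector of one output unit
    x            -- current presynaptic input vector
    y            -- current postsynaptic activity of the unit
    y_trace_prev -- trace of the unit's activity from the previous time step
    eta, delta   -- learning rate and trace mixing factor (assumed values)
    """
    # Update the activity trace: a running average over recent time steps,
    # so that inputs appearing close together in time drive the same unit.
    y_trace = (1.0 - delta) * y_trace_prev + delta * y

    # Hebbian-like weight change gated by the trace instead of the
    # instantaneous activity; this binds temporally adjacent views together.
    w = w + eta * y_trace * x

    # Keep weights bounded by renormalizing (one common choice; other
    # formulations use explicit weight decay instead).
    w = w / (np.linalg.norm(w) + 1e-12)
    return w, y_trace
```

A training loop would apply this update to consecutive frames of a transformation sequence (e.g. an object translating across the visual field), carrying y_trace from one frame to the next.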
Keywords
Computer Vision, Object Description and Recognition, Machine Learning and Data Mining, Classification and Clustering, Invariances in Recognition
Published
2015-12-21
Copyright (c) 2015 Markus Lessmann

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.