Learning of invariant object recognition in hierarchical neural networks using temporal continuity


  • Markus Lessmann, Ruhr-University Bochum, Germany


Much progress in the field of invariant object recognition has been made in recent years using so-called deep neural networks, which comprise several trainable layers that learn patterns of increasing complexity. This architectural feature can already be found in older neural models such as the Neocognitron and HMAX, but also in newer ones such as Convolutional Nets. In addition, researchers have emphasized the importance of temporal continuity in input data and devised learning rules that exploit it (e.g., the trace rule by Földiák, used by Rolls in VisNet). Finally, Jeff Hawkins collected many of these ideas about the functioning of the neocortex into a coherent framework and proposed three basic principles for neocortical computation (later implemented in HTM):
  • Learning of temporal sequences for creating invariance to transformations contained in the training data.
  • Learning in a hierarchical structure, in which lower level knowledge can be reused in higher level context and thereby makes memory usage efficient.
  • Prediction of future signals for disambiguation of noisy input by feedback.
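As a minimal illustration of the first principle, here is a sketch of a Földiák-style trace rule, in which the Hebbian update uses a leaky temporal trace of postsynaptic activity instead of the instantaneous response, so that consecutive views of the same object drive the same unit. The function name, constants, and normalization scheme here are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def trace_rule_update(w, x, y_trace, y, eta=0.2, alpha=0.05):
    """One learning step of a Földiák-style trace rule (illustrative sketch).

    w       -- weight vector of one output unit
    x       -- current input pattern
    y_trace -- running trace of the unit's past activity
    y       -- the unit's current activity
    """
    # Leaky trace: blends past activity with the current response,
    # carrying information across successive frames of a transformation.
    y_trace = (1.0 - eta) * y_trace + eta * y
    # Hebbian term uses the trace rather than the instantaneous activity,
    # so temporally adjacent inputs are bound to the same unit.
    w = w + alpha * y_trace * x
    # Normalize to keep the weights bounded (one common stabilization choice).
    w = w / np.linalg.norm(w)
    return w, y_trace
```

Presenting several transformed views of one object in sequence then pulls the unit's weight vector toward all of them, which is the mechanism by which temporal continuity yields transformation invariance.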
In my thesis I developed two related systems: the Temporal Correlation Graph (TCG) and the Temporal Correlation Net (TCN). Both make use of these principles and implement them efficiently. The main aim was to create systems that learn mostly unsupervised (both do) and can be trained online (which the TCN supports). Both achieve very good performance on several standard object-recognition datasets.


Keywords: Computer Vision, Object Description and Recognition, Machine Learning and Data Mining, Classification and Clustering, Invariances in Recognition



