So far in our artificial neural network series, we have covered only networks that use supervised learning. To be more precise, we have explored only networks that have both input and output data available to them during the learning process. Based on this information, these networks adjust their weights and learn how to solve a certain problem. However, there are other types of learning, and we are going to explore neural networks that use these other approaches as well.

Namely, we are going to get familiar with unsupervised learning. Neural networks that use this type of learning receive only input data and, based on that, generate some form of output. The correct answers are not known during the learning process, so these networks try to figure out patterns in the data on their own. The result of this approach is usually some form of clustering or grouping of the data. Self-Organizing Maps, or SOMs for short, use this approach.

Even though the early concepts for this type of network can be traced back to 1981, they were developed and formalized in 1982 by Teuvo Kohonen, a professor of the Academy of Finland. In essence, these networks use vector quantization to detect patterns in multidimensional data and represent it in a much lower-dimensional space – usually one or two dimensions – but we will get into more detail later. However, it is important to take a fresh perspective on these networks and set aside the standard concepts of neurons, connections, and weights. These networks use the same terms, but the terms have a different meaning in their world.
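To make the idea of vector quantization onto a low-dimensional grid more concrete, below is a minimal training sketch in Python with NumPy. This is not the implementation used later in this series; the grid size, decay schedules, and toy dataset are illustrative assumptions chosen just to show the mechanics.

```python
import numpy as np

# A minimal Self-Organizing Map sketch: a 10x10 grid of weight vectors
# ("nodes") is fitted to 3-dimensional input data. The grid size,
# learning rate, and radius below are illustrative assumptions.

rng = np.random.default_rng(42)

grid_rows, grid_cols, input_dim = 10, 10, 3
weights = rng.random((grid_rows, grid_cols, input_dim))

# Grid coordinates of every node, used to measure distances
# between nodes on the map itself.
coords = np.stack(np.meshgrid(np.arange(grid_rows),
                              np.arange(grid_cols),
                              indexing="ij"), axis=-1)

data = rng.random((500, input_dim))  # toy dataset: 500 random 3-D points

n_iterations = 1000
initial_lr = 0.5
initial_radius = max(grid_rows, grid_cols) / 2
time_constant = n_iterations / np.log(initial_radius)

for t in range(n_iterations):
    sample = data[rng.integers(len(data))]

    # 1. Find the Best Matching Unit (BMU): the node whose weight
    #    vector is closest to the input sample.
    distances = np.linalg.norm(weights - sample, axis=-1)
    bmu = np.unravel_index(np.argmin(distances), distances.shape)

    # 2. Decay the learning rate and neighborhood radius over time.
    lr = initial_lr * np.exp(-t / n_iterations)
    radius = initial_radius * np.exp(-t / time_constant)

    # 3. Pull the BMU and its grid neighbors toward the sample,
    #    with influence falling off with distance from the BMU.
    grid_dist_sq = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
    influence = np.exp(-grid_dist_sq / (2 * radius ** 2))
    weights += lr * influence[..., None] * (sample - weights)

# After training, each data point can be mapped to the grid position
# of its BMU, compressing 3-D inputs onto a 2-D map.
```

Each input pulls its Best Matching Unit and that unit's grid neighbors toward it, which is why the trained map tends to preserve the topology of the input data.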

Learn all about this type of neural network in this series of articles:

Thank you for reading!


This article is a part of the Artificial Neural Networks Series, which you can check out here.


Read more posts from the author at Rubik’s Code.

