The building blocks of Deep Learning

What are the building blocks? 

Deep Learning is based on Neural Networks, but what are the underlying building blocks of a Neural Network?


Perceptrons

The most basic part of a Neural Network is called a Perceptron. But how does it work?

If you go back to your high school math class, you may remember the equation for a line, y = ax + b. The perceptron does little more than that: it's an adder. It takes x1 and multiplies it by a weight w1, takes x2 and multiplies it by w2, and sums all the pieces together with a bias b, giving w1*x1 + w2*x2 + b. That final result is the output of the perceptron.
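As a quick sketch, here is that weighted sum in Python (the weights, bias, and inputs are made up just for illustration):

```python
import numpy as np

def perceptron(x, w, b):
    """A perceptron is just a weighted sum plus a bias: w1*x1 + w2*x2 + ... + b."""
    return np.dot(w, x) + b

# Example: inputs x1=3, x2=1 with weights w1=0.5, w2=-1.0 and bias b=2.0
print(perceptron(np.array([3.0, 1.0]), np.array([0.5, -1.0]), 2.0))  # 0.5*3 - 1*1 + 2 = 2.5
```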

Activation functions

The output of the perceptron can be any value. If we want to work with probabilities of events happening, we need to map the outputs to values between 0 and 1. This is where activation functions come in: we can use the sigmoid function, whose result is always between 0 and 1, or a step function, which outputs exactly 0 or 1.
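Here is a minimal sketch of both activation functions mentioned above:

```python
import numpy as np

def sigmoid(z):
    """Squashes any real number into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def step(z):
    """Hard threshold: 1 if z >= 0, else 0."""
    return 1.0 if z >= 0 else 0.0

print(sigmoid(2.5))  # ~0.924, usable as a probability
print(step(2.5))     # 1.0
```

The sigmoid is the usual choice when we want a probability, since it's smooth and never quite reaches 0 or 1.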

Summary:

  • Perceptrons are the building blocks of Neural Networks.
  • They are linear classifiers.
  • We can use activation functions to adjust the outputs.

What are the pitfalls of perceptrons? 

As Perceptrons are linear classifiers, by themselves they can't solve non-linear problems; XOR is the classic example. We group them in layers so they can tackle much more complex problems, and this is how Neural Networks are born! A small sketch of that idea follows below.
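Here is a toy example: two perceptrons in a hidden layer (one acting as OR, one as AND) plus a third combining them can compute XOR, which no single perceptron can. The weights below are hand-picked for illustration, not learned:

```python
import numpy as np

def step(z):
    """Hard threshold activation: 1 if z >= 0, else 0."""
    return 1 if z >= 0 else 0

def perceptron(x, w, b):
    """A single perceptron: weighted sum plus bias, passed through a step activation."""
    return step(np.dot(w, x) + b)

def xor(x1, x2):
    """Two-layer network: OR and AND in the hidden layer, combined in the output."""
    h_or  = perceptron([x1, x2], [1.0, 1.0], -0.5)      # fires if x1 OR x2
    h_and = perceptron([x1, x2], [1.0, 1.0], -1.5)      # fires if x1 AND x2
    return perceptron([h_or, h_and], [1.0, -1.0], -0.5)  # OR but not AND = XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor(a, b))  # prints 0, 1, 1, 0
```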

If you want more content like this, I have a YouTube Channel.