Animated Explanation of Feed Forward Neural Network Architecture

Introduction

The feed forward neural network is the most popular and simplest member of the neural network family in Deep Learning. It is so common that when people say artificial neural networks, they generally mean feed forward neural networks.

In this post, we will start with the basics of the artificial neuron architecture and build a step by step understanding of what a feed forward neural network is and how to build one.

Artificial Neuron

An artificial neuron is the most basic and primitive unit of any neural network. It is a computational unit which performs the following steps –

  1. It takes certain inputs and weights.
  2. It computes the dot product of the inputs and their respective weights, i.e. it multiplies each input by its weight and sums up the results.
  3. It applies a transformation to this summation using an activation function.
  4. It fires the output.

The below illustration, and the short code sketch after it, should help you visualize these steps.

Artificial Neuron
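
To make these steps concrete, here is a minimal sketch of a single neuron in Python with NumPy. The function name and the input, weight, and bias values are purely illustrative, and a sigmoid activation is assumed –

```python
import numpy as np

# A minimal sketch of a single artificial neuron (sigmoid activation assumed)
def neuron(inputs, weights, bias):
    # Steps 1-2: take inputs and weights, apply the dot product and summation
    z = np.dot(inputs, weights) + bias
    # Step 3: transform the summation with an activation function
    a = 1.0 / (1.0 + np.exp(-z))
    # Step 4: fire the output
    return a

# Illustrative values: three inputs, three weights, and a bias
print(neuron(np.array([0.5, 0.1, 0.9]), np.array([0.4, -0.2, 0.7]), bias=0.1))
```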

Artificial neurons are the essential building blocks of neural networks, and we have covered them in detail in the following post. I do recommend checking it out –

In fact, this is the final part of a complete series dedicated to the evolution of various kinds of artificial neurons since the 1940s, and you should not miss those posts either –

There are many activation functions that apply different types of transformations to the incoming signal of a neuron. Activation functions are necessary to bring non-linearity into the neural network. Do check out this detailed post on why we need activation functions and their different types.
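
As a quick illustration, here is what a few widely used activation functions look like in NumPy. These are the standard textbook formulas, shown here only to make the idea of a transformation concrete –

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # squashes any value into (0, 1)

def tanh(z):
    return np.tanh(z)                # squashes any value into (-1, 1)

def relu(z):
    return np.maximum(0.0, z)        # passes positives through, zeroes out negatives

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z), tanh(z), relu(z))
```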

Feed Forward Neural Network

So after this quick look at how an artificial neuron works, let us now see how we can stack multiple such neurons together to form a feed forward neural network.

Let us first get a visual sense of what a typical feed forward neural network looks like.

Feed Forward Neural Network

Feed Forward Neural Network Architecture

The feed forward neural network architecture consists of the following main parts –

Input Layer 

  • This layer consists of the input data that is being given to the neural network.
  • This layer is depicted with neurons as well, but they are not the actual artificial neurons with computational capabilities that we discussed above.
  • Each neuron represents a feature of the data. This means that if we have a data set with three attributes Age, Salary, and City, then we will have 3 neurons in the input layer to represent each of them. If we are working with an image of dimension 1024×768 pixels, then we will have 1024*768 = 786432 neurons in the input layer to represent each of the pixels (see the sketch below).
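
Here is a small sketch of how these two examples map to input vectors, assuming the City attribute has already been encoded as a number and the image is grayscale –

```python
import numpy as np

# Tabular data: 3 features (Age, Salary, City encoded as a number) -> 3 input neurons
x_tabular = np.array([35.0, 50000.0, 2.0])
print(x_tabular.shape)  # (3,)

# Image data: a 1024x768 grayscale image flattened -> 786432 input neurons
image = np.zeros((1024, 768))
x_image = image.flatten()
print(x_image.shape)    # (786432,)
```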

Hidden Layer

  • This is the layer that consists of the actual artificial neurons.
  • If there is one hidden layer, the network is known as a shallow neural network.
  • If there is more than one hidden layer, the network is known as a deep neural network.
  • In a deep neural network, the output of the neurons in one hidden layer is the input to the next hidden layer.
  • There is no rule of thumb for how many hidden layers a network should have, or how many neurons each hidden layer should have. In fact, practitioners will tell you that arriving at a good number of hidden layers and neurons is an art and mostly depends on the data at hand.
  • In most cases, every neuron in one layer is connected to every neuron in the next layer; such a network is known as a fully connected neural network.
  • In a convolutional neural network, however, not all neurons are connected to each other.

Output Layer

  • This layer represents the output of the neural network.
  • The number of output neurons depends on the number of outputs we expect for the problem at hand.

Weights and Bias

  • The neurons in the neural network are connected to each other by weights.
  • Apart from the weights, each neuron also has its own bias (the sketch below shows both in action).
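
To tie the layers, weights, and biases together, here is a minimal NumPy sketch of a fully connected feed forward network with two hidden layers. The layer sizes and the choice of ReLU activation are arbitrary illustrative choices –

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# A fully connected network: 3 inputs, two hidden layers of 4 neurons
# each (making it a "deep" network), and 1 output neuron
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)  # input layer -> hidden layer 1
W2, b2 = rng.normal(size=(4, 4)), np.zeros(4)  # hidden 1    -> hidden layer 2
W3, b3 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden 2    -> output layer

def forward(x):
    h1 = relu(x @ W1 + b1)   # each neuron: weighted sum + bias, then activation
    h2 = relu(h1 @ W2 + b2)  # the output of one hidden layer feeds the next
    return h2 @ W3 + b3      # output layer (no activation, for simplicity)

print(forward(np.array([0.5, 0.1, 0.9])))
```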

One more key point to highlight here is that the information flows in one forward direction only. Hence it is known as a feed forward neural network.

If the information does not flow in one direction only, and the output of a neuron is fed back into a previous neuron in a cycle, then the network is known as a recurrent neural network, the counterpart of the feed forward neural network.


Backpropagation (Don’t get confused!)

We just stated that information flows in the forward direction in a feed forward neural network. But beginners sometimes get confused when backpropagation comes up in the context of feed forward neural networks, as the two sound contradictory at first. Don’t worry, the distinction is quite simple. Let us clear this confusion.

Backpropagation is actually a technique that is used only during the training phase of a neural network, which works as follows (a minimal code sketch of these steps appears after the list) –

  1. During the training phase, the neural network is initialized with random weight values.
  2. Training data is fed to the network, and the network calculates the output. This is known as a forward pass.
  3. The calculated output is compared with the actual output with the help of a loss/cost function, and the error is determined.
  4. Now comes the backpropagation part, where the network determines how to adjust all the weights in the network so that the loss can be minimized.
  5. This weight adjustment starts at the rear end of the network. The error is propagated backward, layer by layer, toward the front of the network, and the neurons adjust their weights along the way. Hence the name backpropagation.
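
The five steps above map almost line for line onto a minimal NumPy training loop. The sketch below trains a tiny network on the classic XOR problem with a mean squared error loss; the layer sizes, learning rate, and epoch count are arbitrary illustrative choices, and with these settings the loss typically shrinks toward zero –

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: the XOR problem, a classic task that needs a hidden layer
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Step 1: initialize the network with random weight values
rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

lr = 0.5
for epoch in range(10000):
    # Step 2: forward pass - the network calculates its output
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Step 3: compare with the actual output using a loss function (MSE here)
    loss = np.mean((out - y) ** 2)

    # Steps 4-5: backpropagation - push the error backward, layer by layer,
    # and adjust the weights to reduce the loss
    d_out = 2 * (out - y) / len(X) * out * (1 - out)  # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)                # error propagated back to the hidden layer
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(loss, out.round(2).ravel())  # outputs should approach [0, 1, 1, 0]
```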

The below animation tries to visualize what backpropagation looks like in a deep neural network with multiple hidden layers.

Backpropagation

Backpropagation is a very important tool that has made it possible to train huge deep neural networks with many hidden layers. You should check out this video by 3Blue1Brown to learn more about backpropagation.

Coming back to our original discussion: as you can see, backpropagation is used only to pass the output error back through the hidden layers so that their weights can be adjusted during training. The term “feed forward” emphasizes that the network does not pass back any information while actually calculating the output.

If it passes information backward during the calculation itself, then it is termed a recurrent neural network and not a feed forward neural network!

In The End…

We started with a very basic understanding of how an artificial neuron works, then we saw how we can stack up these neurons to form a network. We also saw what shallow and deep neural networks look like, and finally concluded by touching upon backpropagation.

So I hope this post helps you understand the architecture of the feed forward neural network, especially if you are a beginner.

Do share your feedback about this post in the comments section below. If you found this post informative, then please do share it and subscribe to us by clicking on the bell icon for quick notifications of new upcoming posts. And yes, don’t forget to join our new community MLK Hub and Make AI Simple together.
