As mentioned in my previous blog, Artificial Intelligence is a superset of Machine Learning, which in turn is a superset of Deep Learning. Deep Learning is concerned with algorithms inspired by the structure and function of the human brain. So, let's dive deep into this field and try to understand how this powerful class of algorithms actually works.
What is a Neural Network?
A neural network is a series of algorithms that works to recognize underlying relationships in a given set of data through a process that mimics the way the human brain operates. In the brain analogy, this refers to neurons, whether organic or artificial in nature. These networks help us cluster and classify the given data.
As previously mentioned, this algorithm is inspired by the working of the human brain, which operates through networks of cells called neurons. For example, if a cat or a dog suddenly appears in front of you, how do you recognize which animal it is? There are many breeds of both dogs and cats, so the prediction is not trivial. Your brain identifies the animal on the basis of different features, such as the number of legs, the type of claws, and the facial features. The network of neurons in the brain works hierarchically, learning these features to come to a conclusion. An artificial neural network works in much the same way; here the word "neural" is derived from a single neuron cell in our brain.
Neural Network Elements
Building a network that functions similarly to our brain is not easy, so it requires a particular set of essential elements, and these elements are part of each neuron in your network.
The layers are made of nodes. A node is just a place where computation happens, similar to a neuron in our brain, which fires when it encounters sufficient stimuli. A node combines input from the data with a set of coefficients, or weights, that either amplify or dampen that input, thereby assigning significance to inputs with regard to the task the algorithm is trying to learn. These input-weight products are summed, and the sum is then passed through the node's so-called activation function, which determines whether, and to what extent, that signal should progress further through the network to affect the ultimate outcome, say, an act of classification. If the signal passes through, the neuron has been "activated."
So basically there are three important elements apart from the inputs and outputs: weights, the sum, and non-linearity. Pairing the model's adjustable weights with input features is how we assign significance to those features with regard to how the neural network classifies and clusters input. In other words, weights are values assigned to each input feature (X1, X2, ..., Xm) according to its importance in predicting the output. The sum element is responsible for the summation of all the weighted inputs. The non-linearity, or activation function, is an important part of an artificial neural network: a neural network without an activation function is essentially just a linear regression model. The activation function applies a non-linear transformation to the input, making the network capable of learning and performing more complex tasks.
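As an illustration of the non-linearity element, here is a minimal sketch (not tied to any particular framework) of two widely used activation functions in plain Python:

```python
import math

def sigmoid(z):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    # Passes positive values through unchanged, zeroes out negatives.
    return max(0.0, z)

print(sigmoid(0.0))   # 0.5
print(relu(-3.0))     # 0.0
print(relu(2.5))      # 2.5
```

Both functions bend the otherwise linear weighted sum, which is exactly what lets a stack of layers model curves and decision boundaries that a linear regression cannot.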
To build intuition, the output Y is formed by passing the input X through all of these elements. This process is called Forward Propagation, which will also be discussed in detail in coming blogs.

Y = activation_function(∑(weights × inputs) + bias)
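The formula above can be sketched in a few lines of plain Python. The weights, inputs, and bias below are made-up illustrative values, and sigmoid is used as the example activation:

```python
import math

def neuron_output(inputs, weights, bias):
    # Weighted sum of inputs plus bias: z = sum(w_i * x_i) + b
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Non-linear activation (sigmoid chosen here as an example)
    return 1.0 / (1.0 + math.exp(-z))

x = [1.0, 0.5]      # input features X1, X2
w = [0.4, -0.2]     # weights for each feature
b = 0.1             # bias
y = neuron_output(x, w, b)
print(round(y, 3))  # 0.599
```

This is exactly one neuron's worth of forward propagation; a full network just chains many of these computations together, layer by layer.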
Types of Neural Networks
There are many types of neural networks, with more still in development. They can be classified by their structure, data flow, the neurons used and their density, the number of layers and their depth, activation functions, and so on.
1. Perceptron
This is the simplest and oldest model of a neuron. It accepts weighted inputs and applies an activation function to obtain the final output. The perceptron is a supervised learning algorithm that classifies data into two categories, making it a binary classifier.
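A minimal sketch of the classic perceptron learning rule, trained here on the logical AND function. The learning rate and epoch count are arbitrary choices for illustration:

```python
def train_perceptron(data, epochs=10, lr=0.1):
    # data: list of ((x1, x2), label) pairs with labels 0 or 1
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            # Step activation: fire only if weighted sum + bias is positive
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            # Perceptron update rule: nudge weights toward the correct answer
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
         for (x1, x2), _ in and_data]
print(preds)  # [0, 0, 0, 1]
```

AND is linearly separable, so a single perceptron can learn it; a famous limitation is that no single perceptron can learn XOR, which is one motivation for the multi-layer networks below.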
2. Feed Forward Neural Networks
This is again a simple form of neural network in which data flows in only one direction: input enters through the input nodes and exits through the output nodes. Hidden layers may or may not be present, but input and output layers always are. Based on this, they can be further classified as single-layered or multi-layered feed-forward neural networks.
3. Multilayer Perceptron
Here, input data travels through several layers of artificial neurons. Every node is connected to all neurons in the next layer, which makes it a fully connected neural network. Input and output layers are present, along with multiple hidden layers. It uses bi-directional propagation, i.e. forward propagation and backward propagation, which will be explained in detail later.
One major advantage of the multilayer perceptron is that its deep structure makes it well suited to deep learning problems and yields great results.
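To make "fully connected" concrete, here is a minimal sketch of forward propagation through a network with one hidden layer, in plain Python. The layer sizes and weight values are made up for illustration; a real network would learn them through backpropagation:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dense_layer(inputs, weights, biases):
    # Each output neuron takes a weighted sum of ALL inputs (fully connected),
    # then applies the activation function.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -1.0]                                    # 2 input features
hidden_w = [[0.1, 0.4], [-0.3, 0.2], [0.5, -0.6]]  # 3 hidden neurons
hidden_b = [0.0, 0.1, -0.1]
out_w = [[0.3, -0.2, 0.7]]                         # 1 output neuron
out_b = [0.05]

h = dense_layer(x, hidden_w, hidden_b)  # hidden layer activations
y = dense_layer(h, out_w, out_b)        # final network output
print(len(h), len(y))                   # 3 1
```

Note how the output of one layer simply becomes the input of the next; stacking more `dense_layer` calls is what makes the network "deep."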
4. Convolutional Neural Network
Now comes the heart of neural networks and the most fascinating type of network. A convolutional neural network deals with input data such as images and video, and has massive applications in Computer Vision, Image Processing, Speech Recognition, and more.
This is a 3-D arrangement of neurons rather than a 2-D one, and its first layer is a convolutional layer. Each neuron in the convolutional layer processes information only from a small part of the visual field. Input features are taken in batches, as if passed through a filter. The network understands an image in parts and develops its understanding hierarchically through the layers added to the model, until the full image has been processed. Convolutional neural networks show very effective results in image and video recognition, semantic parsing, and paraphrase detection.
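The core operation is the convolution itself: a small filter (kernel) slides over the image, and each output value summarizes one small patch. A minimal sketch in plain Python, where the 4×4 "image" and 2×2 kernel are made-up values:

```python
def convolve2d(image, kernel):
    # "Valid" convolution: slide the kernel over the image with no padding.
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Each output value sees only a small kh x kw patch of the image.
            patch_sum = sum(image[i + a][j + b] * kernel[a][b]
                            for a in range(kh) for b in range(kw))
            row.append(patch_sum)
        out.append(row)
    return out

image = [[1, 2, 0, 1],
         [0, 1, 3, 1],
         [2, 0, 1, 2],
         [1, 1, 0, 0]]
kernel = [[1, 0],
          [0, -1]]  # a simple diagonal-difference filter
print(convolve2d(image, kernel))
```

This "each neuron sees only a patch" property is exactly the local receptive field described above; stacking such layers is how the network builds from edges to shapes to whole objects.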
5. Recurrent Neural Networks
These types of networks are very good at modelling sequential data (e.g. text or audio). The output is fed back into the input to help predict the outcome of the layer. The first layer here is a feed-forward layer that passes information in a single direction; the data then passes through an RNN layer, where information from the previous time step is remembered by a memory function. An RNN works on the concept of sequential memory, in which the previous state's output is an essential factor in predicting the present state's output.
Further, LSTMs and GRUs are modified models used for the same applications, but more effectively, as they rectify errors faced by plain RNNs.
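The recurrent idea can be sketched in a few lines: at each time step, the hidden state is updated from both the current input and the previous hidden state. The scalar weights below are made-up values for illustration; real RNNs use learned weight matrices:

```python
import math

def rnn_forward(sequence, w_x=0.5, w_h=0.8, b=0.0):
    # h_t = tanh(w_x * x_t + w_h * h_{t-1} + b)
    # The hidden state h carries memory of previous time steps.
    h = 0.0
    states = []
    for x in sequence:
        h = math.tanh(w_x * x + w_h * h + b)
        states.append(h)
    return states

states = rnn_forward([1.0, 0.0, 0.0])
print([round(s, 3) for s in states])
```

Notice that the state stays non-zero after the first input even though the later inputs are zero: that lingering, gradually decaying value is the network's "memory" of the earlier input.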
Conclusion
Here we discussed what a neural network actually is, how it mimics the human brain, and what its elements are. We summed up by discussing some important types of neural networks based on their function and applications. A neural network has three basic phases of operation: Forward Propagation, Loss Calculation, and Backward Propagation, which will be discussed in detail in the next blog. After that we will move on to implementing neural networks in Python using frameworks like TensorFlow, Keras, and PyTorch.
Hope you guys now have a clear vision of neural networks. Please leave your feedback, and Happy Learning!