Introduction To Deep Learning
History :
Deep learning is an emerging area of machine learning: deep learning is a subset of machine learning, and machine learning is a subset of artificial intelligence. The term 'Deep Learning' was popularized in 2006 through the work of Geoffrey Hinton and Ruslan Salakhutdinov.
Explanation :
Currently, neural network learning algorithms are divided into shallow neural networks and deep neural networks. A 'shallow neural network' has one input layer, one hidden layer, and one output layer. A 'deep neural network' has one input layer, more than one hidden layer, and one output layer.
Neural Network:
A neural network is a collection of neurons. The artificial neural network used in deep learning is modeled on the biological neural network, and its smallest unit is the artificial neuron. Both act in a similar way. Here is a table which elaborates it well.
In an artificial neural network, the input layer corresponds to the dendrites of a biological neuron, a node corresponds to the cell nucleus, a weight corresponds to a synapse, and the output layer corresponds to the axon.
It can be better illustrated with this image.
In this figure, the summing function is the sum of the products of the inputs and their weights. Training a network is essentially the process of adjusting these weights and biases. An artificial neural network has three parts: an input layer, a hidden layer, and an output layer.
· The input layer receives the input.
· The hidden layer performs the processing.
· The output layer produces the output.
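To make the summing function and layer structure concrete, here is a minimal sketch of a single artificial neuron in Python. The function name `neuron` and the choice of sigmoid activation are illustrative assumptions, not something from the figure.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: summing function followed by an activation."""
    # Summing function: sum of products of inputs and weights, plus the bias
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Activation function (sigmoid here) squashes the sum into (0, 1)
    return 1 / (1 + math.exp(-z))

# A neuron with two inputs
output = neuron([1.0, 2.0], weights=[0.5, -0.25], bias=0.1)
```

A hidden layer is just many such neurons, each with its own weights and bias, fed the same inputs.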
Activation Functions:
Input arrives through the input layer. In the first step, the summing function is applied, i.e. the sum of the products of the weights and the inputs. In the second step, an activation function is applied; it is required to bring in non-linearity. There are many activation functions, but the most common are discussed below.
Sigmoid function :
It maps the output to values between '0' and '1'. Its equation is also given in the figure below. This function changes gradually rather than abruptly. Its graph is this.
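In code, the sigmoid is a one-liner; this sketch assumes the standard form 1 / (1 + e^(-x)):

```python
import math

def sigmoid(x):
    # Sigmoid: output lies strictly between 0 and 1
    return 1 / (1 + math.exp(-x))

sigmoid(0)    # exactly 0.5, the midpoint of the curve
sigmoid(10)   # close to 1 for large positive inputs
sigmoid(-10)  # close to 0 for large negative inputs
```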
Threshold function :
It gives output '1' when the input is '0' or some positive value, and output '0' when the input is a negative value. It is called a step function because the change is abrupt. Its equation is also given in the figure below. Its graph is this.
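The threshold function described above can be sketched as a simple comparison (the threshold at zero follows the convention in the text):

```python
def step(x):
    # Threshold (step) function: 1 for input >= 0, otherwise 0
    return 1 if x >= 0 else 0

step(0)   # 1
step(2)   # 1
step(-1)  # 0
```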
ReLU function :
If the input is zero or less than zero, the output is zero. If the input is greater than '0', the output is equal to the input. Its equation is also given in the figure below. Unlike the sigmoid, its value does not stop at '1'; there is no upper limit. It is one of the most efficient and widely used activation functions. Its graph is this.
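ReLU is even simpler to write down; this sketch uses the standard max(0, x) form:

```python
def relu(x):
    # ReLU: zero for non-positive input, otherwise the input itself (no upper limit)
    return max(0, x)

relu(-2)   # 0
relu(0)    # 0
relu(3.5)  # 3.5
```

Its simplicity is part of why it is so efficient: the comparison is cheap, and the gradient is either 0 or 1.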
Hyperbolic Tangent function :
It is similar to the sigmoid function, but its value ranges from -1 to 1. The difference between the two is that the sigmoid's output varies from 0 to 1, while the hyperbolic tangent's output varies from -1 to 1. Its equation is given in the figure below.
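Written out from its definition (e^x - e^(-x)) / (e^x + e^(-x)), the hyperbolic tangent looks like this; Python's `math.tanh` computes the same thing:

```python
import math

def tanh(x):
    # Hyperbolic tangent: output lies strictly between -1 and 1
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

tanh(0)    # 0.0, the midpoint of the curve
tanh(10)   # close to 1 for large positive inputs
tanh(-10)  # close to -1 for large negative inputs
```

Note that tanh is just a rescaled sigmoid shifted to be centered at zero, which is why the two curves have the same S shape.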
So readers, this is a brief introduction to deep learning. If you like my article, don't forget to hit 'clap' and follow me.
Thank You.