We often hear the people around us using the terms “Artificial Intelligence,” “Machine Learning,” and “Deep Learning” interchangeably. However, despite their conceptual similarities, each of these technologies is distinct.

**What is Deep Learning?**

Deep Learning, also known as Hierarchical Learning, is a subset of Machine Learning in Artificial Intelligence that imitates the data processing function of the human brain and creates patterns similar to those the brain uses for decision making. Unlike task-specific algorithms, Deep Learning systems learn from data representations – they can learn from unstructured or unlabeled data.

Deep Learning architectures such as deep neural networks, deep belief networks, recurrent neural networks, and convolutional neural networks have found applications in computer vision, audio and speech recognition, machine translation, social network filtering, bioinformatics, drug design, and much more.

**What is a Neural Network?**

A Neural Network is an assortment of algorithms modeled on the human brain. These algorithms interpret sensory data via machine perception and label or cluster the raw data. They are designed to recognize numerical patterns contained in vectors, into which all real-world data (images, sound, text, time series, etc.) must be translated.

Essentially, the primary task of a Neural Network is to cluster and classify raw data: it groups unlabeled data based on similarities found in the input, and it classifies data based on a labeled training dataset. Neural Networks can automatically adapt to changing input, so you need not redesign the output criteria each time the input changes in order to generate the best possible result.

To fully grasp the difference between Deep Learning and Neural Networks, we need to examine the basic architecture.

A neural network consists of several layers: an input layer, one or more hidden layers, and an output layer. Every layer consists of *nodes*, loosely modeled on neurons in the brain. The input layer is where the predictors enter the model. As seen in the picture, every node in the hidden layer is connected to each input through weights (the lines in the picture). A weight is augmented or reduced depending on its relevance to the output: a strong correlation between a specific input and the output variable enlarges the weight, and vice versa. In each node in the hidden layer, the weighted inputs are summed and passed through an activation function, which determines to what degree the signal from the node progresses further – i.e., to what extent it influences the output.
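The computation in a single hidden node can be sketched in a few lines of plain Python. This is a minimal illustration, not any particular library's API; the sigmoid activation and the specific numbers are assumptions chosen for the example.

```python
import math

def sigmoid(x):
    # Activation function: squashes the weighted sum into the (0, 1) range.
    return 1.0 / (1.0 + math.exp(-x))

def hidden_node(inputs, weights, bias):
    # Each hidden node sums its weighted inputs, adds a bias,
    # and passes the result through the activation function.
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# Illustrative numbers only: two predictors feeding one hidden node.
# A larger weight means the corresponding input influences the output more.
inputs = [0.5, -1.2]
weights = [0.8, 0.3]
print(hidden_node(inputs, weights, bias=0.1))
```

The activation output always lies between 0 and 1 here, which is how the node expresses "to what degree" its signal should progress further.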

The difference between neural networks and deep learning lies in the depth of the model. Deep learning is a phrase used for complex neural networks, and the complexity comes from the elaborate patterns in which information can flow through the model. The figure below presents an example of a deep neural network. The architecture has become more complex, but the concept is still the same: there is now an increased number of hidden layers and nodes that combine to estimate the output(s).

**The architecture of a Deep Learning model includes:**

- **Unsupervised Pre-trained Networks** – As the name suggests, this architecture needs no formal training, since it is pre-trained on past experience. Examples include Autoencoders, Deep Belief Networks, and Generative Adversarial Networks.
- **Convolutional Neural Networks** – A Deep Learning algorithm that takes in an input image, assigns importance (learnable weights and biases) to different objects in the image, and differentiates between those objects.
- **Recurrent Neural Networks** – A specific kind of artificial neural network that adds additional weights to create cycles in the network graph, so as to maintain an internal state.
- **Recursive Neural Networks** – A type of Deep Neural Network created by applying the same set of weights recursively over a structured input, producing a structured prediction over variable-size input structures, or a scalar prediction on them, by traversing the structure in topological order.
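The "internal state" of a Recurrent Neural Network can be illustrated with a single scalar recurrence. This is a toy sketch with assumed weights, not a trained model: the new hidden state mixes the current input with the previous state, giving the network a memory of the sequence.

```python
import math

def rnn_step(x, h_prev, w_x, w_h, b):
    # One recurrent step: combine the current input with the previous
    # hidden state, then squash with tanh (a common RNN activation).
    return math.tanh(w_x * x + w_h * h_prev + b)

h = 0.0  # initial internal state
for x in [0.5, -0.2, 0.9]:  # a toy input sequence
    h = rnn_step(x, h, w_x=0.7, w_h=0.4, b=0.0)
print(h)
```

Because `h` feeds back into the next step, the weight `w_h` creates the cycle in the network graph that the definition above describes.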

While Neural Networks use neurons to transmit data in the form of input and output values through connections, Deep Learning is associated with the transformation and extraction of features, attempting to establish a relationship between stimuli and the associated neural responses in the brain.

Note: This article puts together concepts and writings from different articles published by various authors on the world wide web. Its purpose is to give an overall idea; the technical facts are not authenticated and are left to the reader to judge. Anyone who finds the content inappropriate, or who owns the words, sentences, pictures, or concepts used and wants them removed from this blog, can let us know and we will oblige. The main purpose of this blog is to share technical information so readers can learn more about the topics.

**Note: The above article is a collection from various web sites, manufacturers, and distributors/integrators of Neural Networks and Deep Learning. If any company or individual finds ownership of the contents, they can let us know at [email protected] to change or remove them. The article is provided for information purposes only, and the facts can be cross-verified by the readers.**