In one of the last posts, we discussed the fundamentals of deep learning. In this post, we go one step further and talk about the backbone of deep learning: the neural network. This article is aimed at beginners who want to understand the basics of neural networks and their applications.
A neural network is a computing technique from the machine learning world. Neural networks are often perceived as difficult to learn and use, but they are actually built from simple processing nodes connected together in the shape of a network.
They are inspired by the way biological systems such as the human brain function, although today's networks are many orders of magnitude less complex. No neural network developed so far can compete with the human brain; the ultimate objective of strong AI is to build neural network systems, for example in AI-based robots, that can match or exceed an average human brain's capacity.
At heart they are pattern-recognition techniques, and they tend to be most valuable for tasks that can be framed in terms of pattern recognition. They are 'trained' by feeding them datasets with known outputs or class labels.
The training here is similar to human teaching or tutoring: the system learns from known samples of data and forms rules and heuristics from them. In fact, this is the general principle behind any machine-learning-based system.
Imagine that you are trying to train a network to output 1 when it is fed an image of a tomcat and 0 when it is fed an image that is not a tomcat. You would train the network by running many images of tomcats (and non-tomcats) through it and applying a technique that tweaks the network's parameters until it gives the right answers. The parameters are typically a weight on each connection and a bias on each node, along with the actual organization of the network (the number of nodes, the number of layers, and the type of interconnections).
Recognizing tomcat images is actually a fairly complex problem and would need a complex neural network (perhaps with one input node per pixel). A common starting point for experimenting with neural networks is to try to build simple logic gates, such as AND, OR, and NOT, as neural nets.
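As a first taste of that idea, here is a minimal sketch of a single artificial neuron computing an AND gate. The weights and bias are hand-picked for illustration, not learned:

```python
# A single neuron ("perceptron") computing logical AND.
# The weights and bias are hand-picked for illustration, not learned.
def and_gate(x1, x2):
    w1, w2 = 1.0, 1.0   # one weight per input
    bias = -1.5         # acts as a threshold: fire only when both inputs are 1
    total = x1 * w1 + x2 * w2 + bias
    return 1 if total > 0 else 0   # step activation
```

The weighted sum exceeds zero only when both inputs are 1, so the neuron fires exactly for the AND case.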
Once trained, neural networks can be a very fast way of computing a complex result. They are also fascinating for AI research because they serve as a model of human and animal intelligence.
Possible drawbacks of neural networks:
One of the major drawbacks of neural networks is that they are very hard to reverse engineer. If your network decides that one particular picture of a dog is actually a tomcat, you can't really determine 'why' in any meaningful sense. All you can do is keep training or altering the network.
Because of this, neural networks tend to be used for well-bounded jobs such as currency/note recognition in vending machines or fault detection on construction sites.
Where to use neural networks?
Apply neural network techniques in situations where there is no known function for mapping the given features (inputs) to their outputs (for classification or regression).
One example of such a problem domain is weather forecasting.
There are plenty of variables in the weather domain: temperature, wind movement, cloud cover, past events, and so on. However, no one can write down an exact formula that estimates what the weather will be two days from now. A neural network is a function built in a way that makes it easy to adjust its parameters to approximate weather forecasts from those features and variables.
In practice, a neural network is just a mathematical function. You give it a vector of values as input, those values are multiplied by other values (the weights), and finally a single value or a vector of values comes out. That is neural networks in a nutshell.
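To make that concrete, here is a minimal sketch of one layer of such a function; the input values, weights, and biases below are made up purely for illustration:

```python
def layer(x, weights, biases):
    # One layer of a network: each output is a weighted sum of the
    # inputs plus a bias -- nothing more than a mathematical function.
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, biases)]

# A vector of values goes in, a vector of values comes out.
y = layer([1.0, 2.0], weights=[[0.5, 0.5], [1.0, 0.0]], biases=[0.0, 1.0])
```

Stacking several such layers, with a nonlinear activation between them, is all a neural network is.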
General structure of a feedforward neural network
- The figure shows the structure of a feedforward neural network.
- A neural network is formed from different layers of nodes connected by links. The first layer is the input layer, the second is a hidden layer, and the last is the output layer.
- The input layer takes the input signals and passes them on to the hidden layer, and so on through to the output layer. The input layer should represent the situation for which the neural network is being trained: every input neuron should represent some independent feature that has an effect on the result of the network.
- The layers between the input and output layers are called hidden layers. In the figure only one hidden layer is present; in practice, the number of hidden layers varies with the complexity of the problem being solved.
- A hidden layer is a collection of neurons with an activation function applied.
- Its job is to process the inputs received from the previous layer, and it is responsible for extracting the essential features from the given input data.
- No approach devised so far delivers an exact formula for choosing the number of hidden layers or the number of neurons in each hidden layer. You need to decide these by experimenting on your specific dataset, use case, and problem domain.
- The output layer is the final layer. It is responsible for delivering the outcome: it gathers the signals from the last hidden layer and presents the result in the form the network was designed to provide.
- If you want to understand neural networks, perhaps the best concept to begin with is the 'perceptron', the term for one of the earliest neural network building blocks.
- The perceptron is the most fundamental unit of an artificial neural network: a single node that applies a step activation function to a weighted sum of its inputs.
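To make the perceptron idea concrete, here is a sketch of the classic perceptron learning rule, trained on the AND truth table. The learning rate and epoch count are chosen arbitrarily for this toy example:

```python
def predict(w, b, x1, x2):
    # step activation over a weighted sum of the inputs
    return 1 if x1 * w[0] + x2 * w[1] + b > 0 else 0

def train_perceptron(samples, epochs=10, lr=0.1):
    # samples: list of ((x1, x2), label) pairs
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            error = label - predict(w, b, x1, x2)
            # nudge the weights and bias toward the correct answer
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
```

Unlike the hand-set weights earlier, here the weights are found automatically from labelled examples, which is exactly what 'training' means.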
Development and code:
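As a starting point for experimenting, here is a minimal feedforward network with one hidden layer, written in plain Python. The weights below are hand-set (not trained) so that the network computes XOR, purely to show how signals flow from the input layer through the hidden layer to the output layer:

```python
import math

def sigmoid(z):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def layer(x, weights, biases):
    # each row of `weights` holds one neuron's input weights
    return [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

def feedforward(x):
    # hand-set weights (not learned): hidden neuron 1 acts like OR,
    # hidden neuron 2 like NAND, and the output neuron ANDs them,
    # which together compute XOR
    hidden = layer(x, weights=[[10, 10], [-10, -10]], biases=[-5, 15])
    output = layer(hidden, weights=[[10, 10]], biases=[-15])
    return output[0]
```

Rounding the output recovers the XOR truth table; in a real application these weights would be found by a training algorithm such as backpropagation rather than set by hand.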
Popular types of neural networks:
- Recurrent neural network (RNN)
- Radial basis function network (RBF)
- Convolutional neural network (CNN)
- Kohonen self-organizing map (SOM)
Read about convolutional neural networks (CNNs) in the next post.