What is Transfer Learning? Its uses in Deep Learning and Machine Learning
The term transfer learning is largely self-explanatory: it is a technique that reuses the weights of a pre-trained model to train a new model. It is very effective because the new model starts from weights learned on a large dataset, rather than from random initialization, which helps it reach good accuracy.
This technique is extremely helpful when you have a small dataset. A small dataset usually leads to poor accuracy; by transferring pre-trained weights into our model, we can overcome this and achieve good accuracy even with limited data.
How does transfer learning work?
Transfer learning is applicable across deep learning, CNNs, and NLP. To apply it, first identify which kind of model you are training, such as a CNN, another deep learning architecture, or a classical machine learning algorithm. Based on the characteristics of that model, you then transfer the pre-trained weights and embed them in your own model.
To illustrate, let us assume we are dealing with images, and hence a CNN problem. To summarize the methodology of a CNN: it trains a model through a stack of layers, each with its own function, such as a convolutional layer, a max pooling layer, or a flatten layer. To use transfer learning in a CNN, we transfer pre-trained weights layer by layer, up to whatever depth our task requires. We then train the layers not affected by the transfer and leave out (freeze) the layers that hold the pre-trained weights. This leads to high accuracy on the training dataset and, in turn, good accuracy on the test split. It is particularly helpful for data scientists working in the healthcare domain, who often have to deal with small datasets due to government and private data policies.
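As a minimal sketch of the layer-to-layer transfer described above (the model names and shapes here are illustrative, not from the article; the Keras API is assumed):

```python
import numpy as np
from tensorflow.keras import layers, models

# A small "pre-trained" CNN, standing in for a model already trained on a large dataset.
pretrained = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(8, 3, activation="relu", name="conv1"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])

# New model: the same feature-extraction layers, but a fresh 2-class head
# (e.g. for a small medical-imaging dataset).
new_model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(8, 3, activation="relu", name="conv1"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(2, activation="softmax"),
])

# Transfer the pre-trained convolutional weights layer to layer, then freeze
# the transferred layer so that only the remaining layers are trained.
new_model.get_layer("conv1").set_weights(pretrained.get_layer("conv1").get_weights())
new_model.get_layer("conv1").trainable = False
new_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Calling `new_model.fit(...)` would then update only the unfrozen layers, which is exactly the "train some layers, leave out the pre-trained ones" approach described above.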
VGG16
VGG16 is one of the best-known pre-trained models; it was trained on ImageNet, a dataset of more than 1 million images. Many data scientists transfer pre-trained weights from VGG16 into their own models.
In fig 1, we declared VGG16 with its full set of 1000 ImageNet classes.
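The original figure is not reproduced here, but declaring VGG16 with 1000 classes might look like the following sketch (assuming the Keras implementation; `weights=None` is used here to build the architecture without downloading the pre-trained file, while `weights="imagenet"` would load the ImageNet weights):

```python
from tensorflow.keras.applications import VGG16

# Build the full VGG16 architecture with its 1000-class ImageNet head.
# Pass weights="imagenet" instead of None to load the pre-trained weights.
model = VGG16(weights=None, classes=1000)
model.summary()  # prints the layer stack: 13 conv layers, pooling, and dense head
```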
In fig 2 above, we first declared the input shape of our layers. We then passed the parameter `weights='imagenet'` to tell the framework that the weights being transferred come from VGG16's ImageNet training. Lastly, in the final few lines, we marked 4 layers as non-trainable, because transfer learning supplied their weights. When the model executes, it excludes those 4 layers from training and trains the other layers. The difference in accuracy is drastic: in this particular model, I was getting an accuracy of 40% without transfer learning, but with transfer learning it went up to 65%.
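A hedged reconstruction of the steps just described might look like this (the Keras API and the 224x224x3 input shape are assumptions, as is the 2-class head; in practice you would pass `weights="imagenet"`, which is replaced by `None` here only to avoid downloading the weight file):

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

def build_transfer_model(num_classes, weights=None):
    # Declare the input shape and load VGG16 without its ImageNet head.
    # weights="imagenet" transfers the pre-trained ImageNet weights.
    base = VGG16(weights=weights, include_top=False, input_shape=(224, 224, 3))

    # Mark the first 4 layers as non-trainable: their (transferred)
    # weights are excluded from training when the model executes.
    for layer in base.layers[:4]:
        layer.trainable = False

    # Attach a fresh classification head for our own dataset.
    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_transfer_model(num_classes=2)  # weights="imagenet" in real use
```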
Good Resources:
Difference between AI vs Machine learning vs Deep learning -> https://youtu.be/4fGx08QKymQ