Background

In today's world, where Artificial Intelligence is entering every domain, a key requirement for training machines is data. A major portion of the data generated today comes from social media and sources such as virtual assistants, blogs, news, videos, audio, images, and papers (research papers, white papers), and most of it is unstructured. To put the scale in perspective, there are currently around 8.5 billion Google searches per day, or approximately 2 trillion searches per year; Bing handles around 27 billion web searches per month, or roughly 37.5 million per hour. Yet, by industry estimates, less than 25% of today's data is available in structured or tabular form. This raises the question: when data is available only in human language, how can we use it to train machines and build accurate AI systems? For textual data, the answer is Natural Language Processing.

What is Natural Language Processing