What is Natural Language Processing?
Text has been the most efficient and reliable way to create, aggregate, and store information for the past few thousand years. And it will continue to be so for the foreseeable future; we are still only at the very beginning of the Big Data explosion.
Much of this text will come from social media, where language use is productive, informal, and multilingual, with little respect for grammar rules and lexical conventions. This is a difficult environment for traditional approaches to text analytics because they are based on an NLP processing pipeline that presumes stability and consistency.
Why is Natural Language Processing important?
For any business, a wealth of interactions and information is recorded in text or speech. Customers may mention your products, services, or brand indirectly in other conversations, either as a central topic or in passing, in shout-outs or comments in the news or on social media. Customer feedback may also be solicited directly, through market research, surveys, questionnaires, or other data you collect yourself.
Being able to interpret and analyse information expressed in these forms is therefore of great importance.
A Traditional Approach to NLP
Traditional Natural Language Processing is based on a step-wise process where words in the text are annotated with various linguistic properties and relations. Each step adds properties or relations of increasing refinement. Typical processing steps include part-of-speech tagging, syntactic parsing, and named entity recognition. The result is a fine-grained account of the structural properties of the text and its component words.
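To make the step-wise pipeline concrete, here is a toy sketch in Python. Real systems use trained statistical models for each stage; the tokenizer, tagger, and entity heuristics below are illustrative stand-ins only, not how any production pipeline actually works.

```python
# Toy sketch of a traditional NLP pipeline: each stage adds annotations
# of increasing refinement (tokens -> part-of-speech tags -> named entities).
# The rules below are deliberately crude placeholders for trained models.

def tokenize(text):
    # Split on whitespace, separating trailing punctuation.
    return text.replace(",", " ,").replace(".", " .").split()

def pos_tag(tokens):
    # Dictionary/suffix heuristics standing in for a trained tagger.
    determiners = {"the", "a", "an"}
    tagged = []
    for tok in tokens:
        if tok.lower() in determiners:
            tagged.append((tok, "DET"))
        elif tok in {",", "."}:
            tagged.append((tok, "PUNCT"))
        elif tok.endswith("ed") or tok.endswith("s"):
            tagged.append((tok, "VERB"))
        else:
            tagged.append((tok, "NOUN"))
    return tagged

def named_entities(tagged):
    # Sentence-internal capitalised tokens as a stand-in for trained NER.
    return [tok for i, (tok, tag) in enumerate(tagged)
            if tok[0].isupper() and i > 0]

tokens = tokenize("The analyst visited Stockholm.")
tagged = pos_tag(tokens)
entities = named_entities(tagged)
```

Each function consumes the previous stage's output, which is the essence of the pipeline design: a fixed sequence of increasingly refined structural annotations.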
There are several issues with such an approach. One is that there is no account of meaning in such an analysis, and meaning is the quintessential property of language. Another is that it does not work particularly well on noisy data, such as survey responses or social media. Furthermore, it is not very actionable; what business decisions can we take after having seen such an analysis?
Gavagai’s Approach to NLP
Unlike traditional approaches to text analytics, we begin by modelling meaning instead of structure. Instead of relying on the standard processing pipeline, we rely on a live semantic memory model that continuously learns to understand text based on how words are actually used in real-life contexts.
Our semantic memories are inspired by how the human brain understands text. We humans can learn what words mean simply from their usage and context; we do this effortlessly and seamlessly, and it happens to each of us more often than we realise. Gavagai’s technology works in a similar way. It learns the meanings of words by observing their usage in context, and it never stops learning: language evolves constantly, and so do our semantic memories.
Our technology is built for Big Data; our semantic memories learn from online text and are always listening to streams of live data. If you invent a new word and start using it on social media, our models will have learnt it in a matter of minutes. The same goes for new languages: as long as there are texts available, we can learn a semantic memory for that language. All this is made possible by clever engineering and the use of hyperdimensional representations.
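The general idea behind learning word meaning from context with hyperdimensional representations can be sketched with a minimal random-indexing-style model: each word gets a sparse random index vector, and a word's meaning vector is the sum of the index vectors of the words it co-occurs with. This is a simplified illustration of the distributional technique, not Gavagai's actual implementation; the dimensionality and corpus here are far smaller than anything used in practice.

```python
import random
from math import sqrt

DIM, NONZERO = 1000, 8  # tiny for illustration; real systems use much larger

def index_vector(rng):
    # Sparse ternary random vector: a few +1/-1 entries, the rest zero.
    v = [0.0] * DIM
    for pos in rng.sample(range(DIM), NONZERO):
        v[pos] = rng.choice([1.0, -1.0])
    return v

def train(corpus, window=2, seed=0):
    # Accumulate each word's context vector from its neighbours' index vectors.
    rng = random.Random(seed)
    index, context = {}, {}
    for sentence in corpus:
        toks = sentence.lower().split()
        for tok in toks:
            if tok not in index:
                index[tok] = index_vector(rng)
                context[tok] = [0.0] * DIM
        for i, tok in enumerate(toks):
            for j in range(max(0, i - window), min(len(toks), i + window + 1)):
                if i != j:
                    for d in range(DIM):
                        context[tok][d] += index[toks[j]][d]
    return context

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0
```

Trained on even a toy corpus such as `["the cat drinks milk", "the dog drinks milk", "stocks fell sharply today"]`, the model places "cat" and "dog" closer together than "cat" and "stocks", because the first two share contexts. Since learning is just vector addition, new words and new usages can be absorbed incrementally from a live stream.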
Earlier advances in Machine Learning, Deep Learning, and Natural Language Processing were held back by a lack of training data. On the internet, however, we have access to the largest training corpus imaginable, with billions of real-world language interactions. We have seized this opportunity to finally elevate semantic technology to its full potential.
Head over to Gavagai’s Living Lexicon, where you can look up words to see their current left-side neighbours, right-side neighbours, n-grams, semantically similar words, and related topics, in 47 languages.
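The kinds of statistics the lexicon exposes are straightforward to compute from a corpus. The hypothetical helper below (not Gavagai's code) counts a target word's left and right neighbours and the n-grams containing it:

```python
from collections import Counter

def neighbours_and_ngrams(corpus, target, n=2):
    # Illustrative helper: count the words appearing immediately to the
    # left and right of `target`, and the n-grams that contain it.
    left, right, ngrams = Counter(), Counter(), Counter()
    for sentence in corpus:
        toks = sentence.lower().split()
        for i, tok in enumerate(toks):
            if tok == target:
                if i > 0:
                    left[toks[i - 1]] += 1
                if i + 1 < len(toks):
                    right[toks[i + 1]] += 1
        for i in range(len(toks) - n + 1):
            gram = tuple(toks[i:i + n])
            if target in gram:
                ngrams[gram] += 1
    return left, right, ngrams
```

For example, with the corpus `["the red car", "a red apple", "red is a colour"]` and the target "red", the left neighbours are "the" and "a", and the right neighbours are "car", "apple", and "is". The lexicon maintains such statistics continuously over live data rather than over a fixed corpus.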
We leverage these lexicon resources to gain unprecedented insights into our clients’ text. Applying our learned semantic knowledge, we are able to extract and model topics, concepts, and sentiments. Collected data is inherently imperfect, and because our word space models learn from online data, they can make sense of noise (misspellings, regional variations, corruptions) in a way traditional representations cannot.
Our underlying word space technology, topic modelling, and proprietary sentiment analysis are a winning combination for understanding your customers, clients, or even your employees like never before. Upload your text to the Gavagai Explorer and start understanding your data.