
Natural Language Processing: The Love Child of Machine Learning and Linguistics

Published 09 Sep 2020

By Aileen Wang



Languages are, at their core, systems for organizing smaller, less meaningful components into larger, more meaningful ones. A string of sounds is a word, a string of words makes a sentence, a series of sentences becomes a paragraph, and enough paragraphs make a book, an essay, a speech, or whatever other form one might envision. Language – text, speech, tweets and so on – is one of the most common forms of data we are exposed to and produce. It’s one of the primary forms of human communication – but while we understand each other perfectly well, can the computer understand, and even participate, as well?

What is Natural Language Processing?

Broadly defined, natural language processing is the use of software and AI to interpret and manipulate human language. Rising from the intersection of machine learning, computer science and computational linguistics, it has its roots in structuralist thinking, and is an older field than you probably think. In fact, in 1950, Alan Turing was already anticipating a machine that could imitate, respond to and communicate in human-understandable language.

However, it comes as little surprise that natural language processing is difficult. The precision that programming languages demand for computer comprehension is not something we exhibit in our daily speech. While languages are systems, they are extremely messy ones. For almost every rule, there is an exception. Ambiguity is never fully excised, the rules and the words themselves are constantly changing, and figures of speech that are incomprehensible without their cultural context pepper our lexicon. Getting a computer to consistently understand and respond in kind to human language is a monumental task.

That being said, incredible achievements have already been made in the field. Autocomplete, Google Translate, and virtual assistants such as Siri and Alexa are all examples of natural language processing at work. However, as much as they embody the progress of natural language processing, they equally show that there is still a long way to go and much to be improved. For every ten words autocomplete anticipates, there are three that it gets wrong. To this day, language teachers will roll their eyes at Google-Translated assignments (for good reason!), and Siri, as efficient as she is, can still be struck dumb by an out-of-the-blue query.

How does it work?

Humans use language by constructing from the ground up, building meaningless symbols into complex structures, and machines understand it in much the same way. Without going into the technical details, the process can be roughly summarized as deconstructing, then reconstructing, the text in order to understand it.

The text is broken down into components, and analysed against the syntax rules of the language being used. The ‘dictionary meaning’ of individual words is identified, as well as combinations of individual words that form a distinct meaning. These are then contextualized: the word ‘bear’ can mean very different things, after all, when I say ‘I bear the weight of the world’ compared to ‘there’s a bear in my house!’.
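
To make the deconstruction step concrete, here is a minimal sketch using the NLTK library (an assumed choice; the article does not prescribe any particular toolkit). It tokenizes the two ‘bear’ sentences above and tags each word with a part of speech, which is how a program can begin to tell the verb from the animal.

```python
# Minimal sketch of the deconstruction step described above, using NLTK
# (an assumed toolkit -- any tokenizer/tagger would illustrate the same idea).
import nltk

# One-time downloads of the tokenizer and part-of-speech tagger data.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

sentences = [
    "I bear the weight of the world.",
    "There's a bear in my house!",
]

for sentence in sentences:
    tokens = nltk.word_tokenize(sentence)   # break the sentence into word tokens
    tagged = nltk.pos_tag(tokens)           # label each token with a part of speech
    print(tagged)

# The surrounding words let the tagger label the first 'bear' as a verb and
# the second as a noun -- a small case of resolving ambiguity from context.
```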

[Figure: A map of the steps in natural language processing]

Natural language processing currently uses deep learning neural networks to train AI in human languages. However, this model requires a massive amount of data to ‘teach’ the AI – in much the same way a child requires constant spoken interaction to begin to grasp the usage of language. The quantity of data required is one of the challenges facing this model, but it proves far more flexible and intuitive than the earlier algorithms that followed a more rules-based approach.
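
As a rough illustration of what ‘learning from data’ looks like in practice, the sketch below trains a tiny neural text classifier with PyTorch (an assumed library, and a toy corpus of four sentences rather than the millions of examples real systems need). It only shows the shape of the pipeline: words become vectors, vectors are averaged, and a classifier learns to separate the labels.

```python
# Toy neural text classifier, sketched with PyTorch (an assumed library).
# Real deep-learning NLP models train on vastly more data and use far
# richer architectures; this only shows the shape of the pipeline.
import torch
import torch.nn as nn

# Tiny labelled corpus: 0 = weather, 1 = sports.
corpus = [
    ("it is raining heavily today", 0),
    ("the forecast says sunny skies ahead", 0),
    ("the team won the match last night", 1),
    ("she scored the winning goal", 1),
]

# Map every word in the corpus to an integer id.
vocab = sorted({word for sentence, _ in corpus for word in sentence.split()})
word_to_id = {word: i for i, word in enumerate(vocab)}

def encode(sentence):
    return torch.tensor([word_to_id[w] for w in sentence.split()], dtype=torch.long)

embedding = nn.Embedding(len(word_to_id), 16)  # 16-dimensional word vectors
classifier = nn.Linear(16, 2)                  # two output classes

optimizer = torch.optim.Adam(
    list(embedding.parameters()) + list(classifier.parameters()), lr=0.05
)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):
    for sentence, label in corpus:
        sentence_vector = embedding(encode(sentence)).mean(dim=0)  # average word vectors
        logits = classifier(sentence_vector).unsqueeze(0)          # shape (1, 2)
        loss = loss_fn(logits, torch.tensor([label]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# The trained model can now label a sentence it has not seen verbatim.
test_vector = embedding(encode("is it sunny today")).mean(dim=0)
print("weather" if classifier(test_vector).argmax().item() == 0 else "sports")
```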

Current Applications

It might be surprising to find that natural language processing is almost everywhere! From the humble spellcheck and grammar check to social media monitoring and customer service chatbots, we interact with the fruits of natural language processing a lot more than we think. As techniques and procedures improve, we achieve new levels in the sophistication of our models, unlocking new potential.
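
As one small, concrete example, the humble spellcheck can be approximated in a few lines with Python’s standard library: suggest dictionary words that look most similar to a misspelling. Real spellcheckers add much larger dictionaries and context-aware language models, so this is only a sketch of the basic idea.

```python
# Toy spelling-suggestion function, using only Python's standard library.
# Production spellcheckers combine larger dictionaries with models of which
# word is likely given the surrounding context.
import difflib

DICTIONARY = ["natural", "language", "processing", "machine", "learning", "model"]

def suggest(word, n=3):
    """Return up to n dictionary words most similar to the (possibly misspelt) word."""
    return difflib.get_close_matches(word.lower(), DICTIONARY, n=n, cutoff=0.6)

print(suggest("langauge"))    # ['language']
print(suggest("procesing"))   # ['processing']
```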

Deep learning models provide improvements in sentiment analysis and in parsing abstract language – things traditional approaches to natural language processing have struggled with – allowing analysis not just of the meaning of a text, but also inference about the emotions behind it. In business and government, this manifests as targeted advertising, survey analysis and social media monitoring. Text is the primary medium we use to convey our opinions online – and this text is increasingly becoming an accessible form of data, to be mass-analysed by AI rather than painstakingly parsed by human means.
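
To give a sense of what sentiment analysis looks like in code, here is a hedged sketch using NLTK’s VADER scorer (a lexicon-based tool chosen purely for brevity; the deep learning models discussed above learn their judgments from data instead).

```python
# Sentiment analysis sketch using NLTK's VADER lexicon-based scorer.
# Chosen for brevity; deep-learning sentiment models are trained rather
# than rule-based, but the input/output shape is similar.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time download of the lexicon

analyzer = SentimentIntensityAnalyzer()
reviews = [
    "Absolutely loved the product, the support team was wonderful!",
    "The update broke everything and nobody ever replied to my ticket.",
]

for text in reviews:
    scores = analyzer.polarity_scores(text)  # dict with neg / neu / pos / compound
    print(f"{scores['compound']:+.2f}  {text}")

# The 'compound' score runs from -1 (strongly negative) to +1 (strongly
# positive), which is how opinions in surveys or social posts get aggregated.
```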

Where to next?

Despite these advances, natural language processing still has a lot to tackle. It continues to struggle with the ambiguity of language, the synonymy of words, and the parsing of abstract language, as well as with identifying opinion. An AI cannot reliably identify sarcasm. Handling larger contexts and analysing longer documents, or multiple documents at once, is still inefficient. Evaluating language technology proves to be its own challenge, especially since standardized measures are hard to define for data as unstructured as text.

However, problems such as the lack of resources for creating natural language processing models in less widely spoken languages are far more human in nature. While we hardly lack for textual data in language juggernauts such as English and Chinese, the same cannot be said for all languages. Deep learning neural networks are far harder to train on low-resource languages than on high-resource ones – ironic, given that low-resource languages are often where natural language processing could make its biggest impact.

As natural language processing continues to improve, we can only imagine the advancements it will make in the future. At the forefront of technology, the field fuses human and machine through the welding agent of language. Its impacts will only continue to grow as its methodology grows more sophisticated.

In 1950, Turing set a test for the hypothetical machine. A machine, he argued, that could imitate human speech, down to emotional response, was in effect not just imitating human thought, but thinking itself. What we have long relegated to science fiction and futuristic forecasting may be closer at hand than we think. The machine, long dismissed as too dumb to speak as humans do, could soon be holding the pen of tomorrow.


Tags: machine learning data science natural language processing